With the continual increase in the volumes invested in financial markets, as well as their up- and downswings, market capitalization remains volatile.
Consider the current levels of investment in US stocks, which alone amount to roughly $18 trillion according to the World Bank. And what about the recent turmoil in China that chopped off over $2 trillion in six days – just like that?
The alarm bells are ringing, not only for the lack of proper methods suitable for wealth preservation, but also, and even more urgently, for the need for adequate risk assessment. But don't we already have all that? Let's look at what economic theory suggests here.
The foundations of modern-day mathematical finance were laid over one hundred years ago by Louis Bachelier, who described the motion of a financial asset price as a random walk with future levels that are unpredictable given today's levels. This idea was later reiterated by the efficient-market hypothesis, which claims that all available information has already been incorporated into current prices.
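Bachelier's idea can be made concrete with a minimal sketch – all numbers below are illustrative, not taken from the article:

```python
import numpy as np

# Sketch of the random-walk view of a price path: each day's move is an
# independent, unpredictable shock added to the current level.
rng = np.random.default_rng(42)
shocks = rng.normal(0.0, 1.0, 250)   # i.i.d. daily shocks (illustrative scale)
path = 100 + np.cumsum(shocks)       # price path starting at 100

# The key property: today's level carries no information about tomorrow's move,
# i.e. the best forecast of tomorrow's price is simply today's price.
print(round(path[-1], 2))
```

Knowing the entire history of `path` up to day t does not improve the forecast of the day t+1 increment – which is precisely why the efficient-market hypothesis leaves so little room for systematic prediction.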
Unfortunately, this theory does little to address the challenge every portfolio manager has to face: handling a mix of several different securities in such a way that it best meets one target, or even a collection of targets. The Capital Asset Pricing Model (CAPM), a major milestone introduced in the 1960s, responds to this by relating the performance of a financial asset to the entire market – commonly represented by an index such as the S&P 500 – plus an individual expected value. The performance of a collection of assets in a portfolio is then measured in exactly the same way.
However, the expected return by itself is insufficient to express the exposure to risk – that is, to unpredictable deviations from a targeted value – so the variance, as a measure of portfolio volatility, is the second quantity gauged by the portfolio manager.
The higher the risk for a given expected return, the less appealing the portfolio is compared to others that achieve the same return with less risk. To gauge the ratio between return and risk, the so-called Sharpe ratio is most commonly used.
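As a minimal sketch of the Sharpe ratio – the function and the two synthetic portfolios below are purely illustrative, not part of the article:

```python
import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio: mean excess return per unit of volatility."""
    excess = np.asarray(returns) - risk_free_rate / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Two hypothetical portfolios with the same expected daily return but
# different volatility (same underlying shocks, scaled differently).
rng = np.random.default_rng(0)
shocks = rng.normal(size=252)
low_vol  = 0.0005 + 0.01 * shocks
high_vol = 0.0005 + 0.02 * shocks
print(sharpe_ratio(low_vol), sharpe_ratio(high_vol))
```

Since both portfolios target the same return, the lower-volatility one scores the higher Sharpe ratio – exactly the "same return with less risk" preference described above.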
The limitation of the CAPM is that the only reward in terms of return is granted for following the market, generally referred to as beta; the individual ability of the manager is ignored. To remedy that shortfall, the alpha quantity was introduced to measure just that.
This rationale suggests that two parameters suffice to bring a portfolio under control:
- adjusting the portfolio alongside the market, i.e., setting its beta, and
- meeting the overall expectation by achieving a high alpha that is uncorrelated with, and independent of, the market, thereby satisfying the expectations of investors.
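In practice, alpha and beta are typically estimated by regressing portfolio excess returns on market excess returns. A hedged sketch, using ordinary least squares on synthetic data (all parameter values below are made up for illustration):

```python
import numpy as np

# Synthetic market excess returns and a portfolio generated from a
# known (hypothetical) alpha and beta, plus idiosyncratic noise.
rng = np.random.default_rng(1)
market = rng.normal(0.0004, 0.01, 500)
true_alpha, true_beta = 0.0002, 1.2
portfolio = true_alpha + true_beta * market + rng.normal(0.0, 0.005, 500)

# OLS estimates: beta is cov(r_p, r_m) / var(r_m); alpha is the intercept.
beta = np.cov(portfolio, market, ddof=1)[0, 1] / np.var(market, ddof=1)
alpha = portfolio.mean() - beta * market.mean()
print(f"beta ~ {beta:.2f}, alpha ~ {alpha:.5f}")
```

The recovered beta measures how the portfolio moves with the market; whatever return is left over after that co-movement is attributed to the manager's alpha.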
So why do we still incur losses? Wouldn't it be enough to balance the trade-off between risk and return and watch the Sharpe ratio closely? The answer is no. But why?
There are two reasons:
- Distributional assumptions, i.e., the way probability distributions of asset prices and returns are modeled. Most commonly, theory and practice resort to a powerful and tractable, yet overly optimistic, distributional class: the normal distribution, which is characterized by just two parameters, (i) the mean and (ii) the variance.
Risk measured by the magnitude of variance fails to account for asymmetries in return behavior, and variance is not a determining factor when attempting to foresee the likelihood of extreme events, such as the $2 trillion loss mentioned at the beginning of this article. The most recent financial crisis – initially dubbed the mortgage crisis – saw a similar rate of wealth destruction. Because extreme events are indeed rare, one can easily be tempted to speak of exceptions when they happen.
- Even though models in finance have become quite sophisticated when it comes to accounting for the so-called stylized facts, they are still too static when observing a financial time series over a given time frame. So not only do the many parameters need to be recalibrated regularly, which is often a tedious task, but the entire model itself needs to be overhauled on a regular basis. The open question remains: when? Even worse, much of state-of-the-art portfolio theory ignores the interlock between a portfolio manager's own actions and the responses of others, which in turn should be anticipated – and usually are in practice. It is at this precise point that current theory abandons us … and our portfolios.
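The first point above can be made concrete. Under the normal assumption, the probability of a large daily move collapses extremely fast with its size – a short sketch of the exact normal tail probability (the horizon arithmetic is illustrative, assuming 252 trading days per year):

```python
import math

def normal_tail(k):
    """P(X < -k) for a standard normal X, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

# How rare the normal model says multi-sigma daily losses should be:
for k in (3, 4, 6):
    p = normal_tail(k)
    print(f"P(move < -{k} sigma) = {p:.2e}, about one day in {1 / p / 252:.0f} years")
```

A six-sigma daily loss comes out as a once-in-millions-of-years event under normality, yet moves of that order recur every few decades in real markets – which is why variance-based risk control keeps being surprised by crashes.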
Two Stone
We offer an approach that retains the cherished alpha and beta, yet goes in a totally different direction when it comes to bringing risk under control.
Our rationale is essentially twofold.
First, we introduce reflection into the set-up, considering many different views and weighting them according to the most plausible feedback.
Second, we depart from the traditional probabilistic setting of strict distributional assumptions and resort to methods from data mining and machine learning to subdivide information sets, each corresponding to a particular trading decision.
Proven by our research and enabled by our technology, this allows us to achieve a high positive alpha with a positive beta in times of upward market movements, and with a zero beta – meaning complete independence from the market and a secure position – in times of falling markets.
1 Selected Articles (Refereed Journals)
- International Stock Market Comovement and News, with Stephan Meyer, R. Riordan, and Andreas Storkenmaier, 2014 (Journal of Financial Research)
- Multi-Tail Elliptical Distributions, with S. Kring, S. T. Rachev, F. Fabozzi (The Econometrics Journal)
- Price Calibration and Hedging of Correlation Dependent Credit Derivatives Using a Structural Model with alpha-Stable Distributions, with J. Papenbrock, S. T. Rachev, F. Fabozzi (Applied Financial Economics)
- Estimation of Alpha-Stable Sub-Gaussian Distributions for Asset Returns, with S. Kring, S. T. Rachev, F. Fabozzi, 2009, in Risk Assessment, Bol, G. et al., Physica-Verlag
- Distributional Analysis of the Stocks Comprising the DAX 30, with S. Rachev, F. J. Fabozzi, 2005 (Probability and Mathematical Statistics)
- Analysis of Loss Given Default, with A. Nazemi (Investment Management and Financial Innovations)
Technical Reports
- Anwendung der Dempster-Shafer Evidenztheorie auf die Bonitätsprüfung (Application of Dempster-Shafer Evidence Theory to Credit Assessment)
- The Normal Tempered Stable Distribution for Synthetic CDO Pricing, with Philipp Ehrler
- Spreading of US Mortgage Default, with Lowell Mason and Henning Wechsung
- International Comovement of Equity Markets and Foreign Exchange
- Reflection on Recovery and Loss Given Default: What Is and What Is Amiss, with A. Nazemi and S. T. Rachev
- Change Point Analysis and Regime Switching Models, with Paul Weskamp
- Composed and Factor Composed Multivariate GARCH Models, with S. Kring, Svetlozar T. Rachev, Frank J. Fabozzi, 2007
- CDO Correlation Smile/Skew in One-Factor Copula Models: An Extension with Smoothly Truncated alpha-Stable Distributions, with Michael Schmitz, Svetlozar T. Rachev