
Capital Asset Pricing Model with Size Factor and Normalizing by Volatility Index

Abraham Atsiwo (aatwiso@unr.edu) and Andrey Sarantsev (asarantsev@unr.edu), Department of Mathematics & Statistics, University of Nevada, Reno
Abstract.

The Capital Asset Pricing Model (CAPM) relates a well-diversified stock portfolio to a benchmark portfolio. We insert size effect in CAPM, capturing the observation that small stocks have higher risk and return than large stocks, on average. Dividing stock index returns by the Volatility Index makes them independent and normal. In this article, we combine these ideas to create a new discrete-time model, which includes volatility, relative size, and CAPM. We fit this model using real-world data, prove the long-term stability, and connect this research to Stochastic Portfolio Theory. We fill important gaps in our previous article on CAPM with the size factor.

1. Introduction

Here, we briefly describe the parts of the model analyzed in this article. We remind the readers that price returns for a stock or a portfolio measure only price changes, ignoring dividends, while total returns include both price changes and dividends paid. Also, equity premium is computed as total returns minus risk-free returns (usually measured by short-term Treasury bills). We combine three main ideas in this article.

Idea 1: Capital Asset Pricing Model. We model a target stock portfolio (well-diversified) using a simple linear regression versus the benchmark stock portfolio. A typical example is the Standard & Poor 500 (S&P 500), a well-known index of large USA stocks. The classic setting uses equity premia (for the target and the benchmark), but the same regression can also be applied to price returns. The slope and intercept of this regression are called beta and alpha.

Idea 2: Size Effect. Small stocks, on average, have higher volatility but also higher returns than large stocks. This feature implies long-term stability: Well-diversified stock portfolios stay together and do not split into several clouds. We combined these two ideas in the previous article by the second author [13]. This article is a sequel to that work. Such long-term stability is of interest in Stochastic Portfolio Theory, which constructs portfolios as functions of market weights, utilizing the observation that small stocks have higher risk and return.

Idea 3: The Volatility Index. In another article [22] by the second author, we observe that total monthly returns of the benchmark are not IID (independent identically distributed) Gaussian. However, dividing them by volatility makes them IID Gaussian. Here, volatility is measured by the Volatility Index (VIX) monthly average. The volatility itself is modeled as an autoregression of order 1 on the logarithmic scale. A similar observation holds for monthly price returns of large stocks, and for small stocks.

In this article, we continue the research of [13] by inserting Idea 3 into this setting, thus combining all three ideas. We create a new discrete-time model and fit it using real market data from Kenneth French's data library and the Federal Reserve Economic Data web site. We then state and prove long-term stability of this market model, and interpret it for Stochastic Portfolio Theory. Unlike [13], we consider only discrete-time models in this article. We fill a couple of important gaps left in the previous research [13].

In Section 2, we provide a comprehensive motivation of the proposed models and a historical review. Section 3 is devoted to data description and statistical fit of these models. In Section 4, we state and prove a long-term stability (ergodicity) result for this model: Theorem 1, and analyze sufficient conditions for Theorem 1 to hold. We reduce the capital distribution curve to the order statistics (sorted values) of a standard normal sample. The last part of this section contains a short discussion of stability conditions for the case where volatility is constant. We have already discussed this in our previous article [13], but only for continuous time. Thus we fill a gap in our research.

In Section 5, we interpret our results in terms of Stochastic Portfolio Theory. We simulate the capital distribution curve (ranked market weights vs ranks). We prove stability of this curve in Theorem 2. Next, Theorem 3 contains results about its shape: We reduce it to normal order statistics. This includes the case of constant volatility, which was studied in [13]. There, we did not have rigorous results; here, we fill this gap. Finally, the Appendix contains a discussion of the capital distribution curve based on a ranked standard normal sample, continuing the research of Section 5.

2. Background and Motivation

2.1. Capital Asset Pricing Model

This celebrated model, abbreviated as CAPM, compares returns of a stock portfolio with returns of the benchmark. This model was proposed by [16, 19, 25]. The benchmark is usually taken to be the Standard & Poor 500 for the American stock market. The model states that the only factor which matters for a well-diversified portfolio is market exposure, known by the standard term beta and denoted by $\beta$. The case $\beta=0$ corresponds to the risk-free portfolio, with guaranteed deterministic return. This is usually measured using a benchmark short-term rate $r$, for example the 1-month Treasury rate. The case $\beta=1$ corresponds to the market portfolio (the benchmark). When $\beta\in(0,1)$, the stock portfolio can be replicated by investing in a portfolio of risk-free bonds and the stock market benchmark, in proportions $1-\beta$ and $\beta$, respectively. In other words, total returns $Q$ (including price changes and dividends) of this portfolio are related to total returns $Q_0$ of the benchmark:

(1) $Q=(1-\beta)r+\beta Q_0.$

Equivalently, we can rewrite (1) in terms of equity premia $P=Q-r$ of the portfolio and $P_0=Q_0-r$ of the benchmark:

(2) $P=\beta P_0.$

If $\beta>1$, then (2) still holds, and can be interpreted as shorting bonds and investing everything in the benchmark. We can treat $\beta$ as a risk measure: $\beta>1$ means that the portfolio is riskier than the benchmark, and $\beta\in(0,1)$ means the opposite. The case $\beta<0$ does not usually happen in practice; see [1, Chapter 7].

It is not considered a big achievement if a money manager improves returns by increasing $\beta$. In fact, these managers are often expected to maximize excess return: Total returns of a portfolio adjusted for market exposure. This quantity is denoted by $\alpha$ and, accordingly, is called alpha. These two Greek letters $\alpha$ and $\beta$ are standard notation in Finance. This methodology of market exposure and excess return is well-accepted by both finance academics and practitioners. One can include $\alpha$ into (2) as

(3) $P=\alpha+\beta P_0+\varepsilon,\quad\mathbb{E}\varepsilon=0.$

We also include an error term $\varepsilon$, since the model might not hold almost surely. This makes (3) a simple linear regression of $P$ upon $P_0$.
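As an illustration of fitting the regression (3), here is a minimal sketch in Python. All numbers are made up for the example; this is not the article's code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly equity premia for the benchmark (P0) and a target (P),
# generated so that the true alpha = 0.001 and beta = 1.2 in model (3).
P0 = rng.normal(0.005, 0.04, size=600)
P = 0.001 + 1.2 * P0 + rng.normal(0, 0.01, size=600)

# Ordinary least squares for P = alpha + beta * P0 + eps.
X = np.column_stack([np.ones_like(P0), P0])
alpha_hat, beta_hat = np.linalg.lstsq(X, P, rcond=None)[0]
print(alpha_hat, beta_hat)
```

The estimates recover the generating values up to sampling noise.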

Subsequent research disproved the strong claims of CAPM that market exposure $\beta$ is the only risk measure and quantity of interest for a diversified stock portfolio; see [8, 9] and [1, Chapter 7]. For one, there are systematic ways to generate excess return $\alpha$ by using several factors. Also, $\beta$ might be unstable in the long run. Still, $\beta$ is an established risk measure, accepted by financial theorists, analysts, and managers. The CAPM is useful as a benchmark model, a starting point for more complicated and real-life models. Calculating $\beta$ for mutual funds and exchange-traded funds is common practice.

2.2. Size and value

The most well-known and accepted factors are size (average market capitalization of portfolio stocks) and value (fundamentals such as earnings, dividends, or book value, compared to price). These are well-accepted by the financial academic community and are considered useful by industry practitioners, to the extent that multiple size- and value-based funds are traded alongside the S&P 500 funds. See [3, 8, 10].

Including a few factors (for example, size and value) would enrich (3). In particular, the size factor is related to $\beta$ as follows: Well-diversified portfolios of small stocks have equity premia $P$ closely correlated with the equity premia $P_0$ of the large-stock benchmark S&P 500. The simple linear regression of $P$ vs $P_0$ has a very large $R^2$, but a $\beta$ slightly larger than 1. This can be checked, for example, with exchange-traded funds tracking small, mid, and large (= benchmark) stock indices; see [13, Appendix]. The $\beta$ for the mid-cap index is 1.15, and for the small-cap index it is 1.27. A natural idea then is to model $\beta$ as a function of portfolio size relative to the benchmark. We can try the same for $\alpha$, although in [13, Appendix] $\alpha$ is not significantly different from zero for both small-cap and mid-cap indices. See more on the size effect in [24], and the discussion in [1, Chapter 8].

2.3. CAPM-based model with size as factor

Therefore, we developed the following model in [13]: Let $S$ be the market capitalization of the target portfolio, and $S_0$ the market capitalization of the benchmark portfolio. Then the relative size (on the log scale) is defined as

(4) $C=\ln(S/S_0).$

For $C=0$, the relative size is 0 on the log scale, or 1 on the absolute scale. This corresponds to the target portfolio having the same properties as the benchmark portfolio, with $\alpha=0$ and $\beta=1$. The simplest model is linear: For some coefficients $a, b$,

(5) $\alpha=aC,\quad\beta=bC+1.$

In fact, in [13] we had more general conditions: $\alpha$ and $\beta$ are general functions of $C$. Here, we focus only on linear functions; see [13, Example 1]. The analysis of small-cap and mid-cap funds above shows that $a\approx 0$ but $b<0$. Rewrite (3) by plugging in (5), denoting the coefficients for equity premia by capital letters $A, B$:

(6) $P=AC+(1+BC)P_0+\delta,$

where $\delta$ are IID regression residuals with mean zero. A good way to generalize (6) is to make it dynamic: a time series. To this end, we make the model complete by writing equations for $S$, $S_0$, $P_0$. For the benchmark equity premia $P_0$ and market cap $S_0$, we write these equations in the next subsection. Unlike total returns, which include dividends, price returns are computed only using price movements. The main idea is to take the CAPM linear regression and replace equity premia with price returns. We get

(7) $R=aC+(1+bC)R_0+\varepsilon,$

where $a, b$ are some coefficients (not necessarily the same as $A, B$ from (6)), and $\varepsilon$ are IID regression residuals with mean zero (not necessarily the same as $\delta$, but possibly correlated with them). We can interpret changes in logarithms as price returns:

(8) $R(t)=\ln S(t+1)-\ln S(t)=\ln\frac{S(t+1)}{S(t)},\qquad R_0(t)=\ln S_0(t+1)-\ln S_0(t)=\ln\frac{S_0(t+1)}{S_0(t)}.$
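A minimal sketch of these definitions, using hypothetical market-cap series (not the article's data):

```python
import numpy as np

# Hypothetical market capitalizations of the target (S) and benchmark (S0).
S = np.array([100.0, 104.0, 101.0, 108.0])
S0 = np.array([1000.0, 1030.0, 1015.0, 1060.0])

R = np.diff(np.log(S))     # target log price returns, as in (8)
R0 = np.diff(np.log(S0))   # benchmark log price returns, as in (8)
C = np.log(S / S0)         # relative size (4) on the log scale

# The change in relative size equals the difference of log returns.
print(np.allclose(np.diff(C), R - R0))  # True
```

This identity is what later makes the relative size process tractable.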

In real-life finance, a stock market is stable in the long run: Small stocks grow, on average, faster than large stocks. Thus formerly small stocks might become large, and formerly large stocks might become small. The relative size of a stock portfolio exhibits mean reversion. In our article [13], we proved this in continuous time, for systems more general than (6), (7), under the assumptions that the benchmark follows a lognormal Samuelson (Black-Scholes) model of geometric random walk with growth rate $g$ and total returns $G$, and

(9) $a+bg<0.$

This is consistent with the observation made above that $a\approx 0$ and $b<0$, since $g>0$. Analysis of real-world financial data in [13] showed that $A\approx a$ and $B\approx b$.

2.4. Stochastic volatility

However, there is a drawback in our modeling from [13]. We fit regressions (6) and (7) using real-world monthly data from 1926, taken from Kenneth French's Dartmouth College Financial Data Library. Residuals $\varepsilon, \delta$ are not IID: their absolute values are autocorrelated. This feature is also true for S&P 500 monthly returns themselves; see our recent article [22]. Also, these returns are not Gaussian. We wish to improve the fit of regressions (6) and (7) to make residuals closer to IID Gaussian. For S&P 500 returns, we did this in [22] as follows: We divided these returns by the monthly average VIX: the S&P 500 Volatility Index $V$, computed daily by the Chicago Board Options Exchange. The main idea of that article [22] is (in our notation; see [22, Subsection 2.2]):

(10) $\frac{R_0(t)}{V(t)}\sim\text{IID Gaussian}.$

Similarly, it is reasonable to model normalized equity premia as IID Gaussian:

(11) $\frac{P_0(t)}{V(t)}\sim\text{IID Gaussian}.$

We did not do this in [22]; we complete this work in the present article. For the original equity premia of the benchmark $P_0$, we plot in Figure 1 the quantile-quantile plot and the autocorrelation functions for $P_0$ and $|P_0|$. For the normalized equity premia $P_0/V$, we plot these in Figure 2. We see that division by VIX is needed to model equity premia as in (11). Data for $P_0$ is taken from Kenneth French's data library: top 10% decile, January 1986–October 2024. For the short-term rate $r$, we use the 3-month Treasury rate from Federal Reserve Economic Data: start-of-month data. Data and Python code are available in the GitHub repository: asarantsev/size-capm-vix
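The effect of the normalization (10)-(11) can be illustrated with simulated data. In the sketch below, $\ln V$ follows a Gaussian AR(1) and only the normalized premia are Gaussian; all parameter values are illustrative (only the AR(1) slope 0.88 is taken from the article), and scipy supplies the kurtosis and Jarque-Bera statistics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T = 466  # the number of months in January 1986 - October 2024

# Simulated stand-ins, not the actual FRED / Kenneth French data.
# ln V follows the AR(1) model (12); premia are Gaussian only after
# dividing by V, as in (11). Parameter values are illustrative.
alpha, beta, sW = 0.35, 0.88, 0.2
lnV = np.empty(T)
lnV[0] = alpha / (1 - beta)            # start at the stationary mean
for t in range(1, T):
    lnV[t] = alpha + beta * lnV[t - 1] + rng.normal(0, sW)
V = np.exp(lnV)
P0 = V * rng.normal(0.1, 1.0, size=T)  # so that P0 / V is IID Gaussian

# Raw premia are heavy-tailed; normalized premia are much closer to Gaussian.
print(stats.kurtosis(P0), stats.kurtosis(P0 / V))              # excess kurtosis
print(stats.jarque_bera(P0)[1], stats.jarque_bera(P0 / V)[1])  # p-values
```

The Jarque-Bera test rejects normality for the raw series but not (typically) for the normalized one, mirroring Figures 1 and 2.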

Figure 1. The quantile-quantile (QQ) plot for equity premia $P_0$, and the empirical autocorrelation function (ACF) for $P_0$ and for $|P_0|$. (Panels: (a) QQ, (b) ACF, (c) ACF.)
Figure 2. The quantile-quantile (QQ) plot for equity premia $P_0/V$, and the empirical autocorrelation function (ACF) for $P_0/V$ and for $|P_0/V|$. (Panels: (a) QQ, (b) ACF, (c) ACF.)

The VIX itself is modeled by an autoregression of order 1 on the log scale, see [22]:

(12) $\ln V(t)=\alpha+\beta\ln V(t-1)+W(t).$

Slightly abusing notation, but following [22], in the rest of the article we use $\alpha$ and $\beta$ for the intercept and the slope of the autoregression (12), instead of the excess return and market exposure from the CAPM. The model (12) has a good fit, as shown in [22, Section 3]. Innovations $W$ are IID but not Gaussian, with some finite exponential moments. The point estimate is $\hat\beta=0.88$, and we reject the unit root hypothesis $\beta=1$.
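A minimal sketch of fitting the autoregression (12) by ordinary least squares, on data simulated with illustrative parameters (the slope 0.88 matches the article's point estimate; the intercept and innovation scale are made up, and innovations are taken Gaussian for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate ln V from the AR(1) model (12).
alpha, beta, sW, T = 0.35, 0.88, 0.15, 400
lnV = np.empty(T)
lnV[0] = alpha / (1 - beta)                  # start at the stationary mean
for t in range(1, T):
    lnV[t] = alpha + beta * lnV[t - 1] + rng.normal(0, sW)

# Fit the AR(1) by regressing ln V(t) on ln V(t-1) with an intercept.
X = np.column_stack([np.ones(T - 1), lnV[:-1]])
alpha_hat, beta_hat = np.linalg.lstsq(X, lnV[1:], rcond=None)[0]
print(alpha_hat, beta_hat)
```

The fitted slope is close to the generating value 0.88 (with a small well-known downward bias of OLS for autoregressions).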

The methodology of [22] stands in contrast to classic stochastic volatility models [4], where the volatility $V$ is not observed directly and must be inferred from the returns data $R_0$ or $P_0$.

2.5. Capital distribution curve

Stochastic Portfolio Theory is a newly developed framework for portfolio management; see [11]. One question it concerns is model stability: In a model of $N$ stocks, do they move together in the long run, or do they split into several subsets, moving away from each other? A related question is the analysis of market weights: The market weight $\mu$ of a stock is its market capitalization (size) divided by the sum of all market capitalizations. Rank market weights at each time from top to bottom:

$\mu_{(1)}(t)\geq\ldots\geq\mu_{(N)}(t).$

The plot of the $N$ points

$(\ln n,\ \ln\mu_{(n)}(t)),\quad n=1,\ldots,N$

is called the capital distribution curve. For real-world markets, this curve is concave, and straight at the upper left end; see [11, 12]. See also our own plots in Figure 3. Python code and data are available in the GitHub repository asarantsev/size-capm-vix. The data is from Kenneth French's Data Library.
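A minimal sketch of building such a curve from hypothetical market capitalizations (lognormal sizes here, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical market capitalizations for N stocks.
N = 500
S = np.exp(rng.normal(0.0, 1.0, size=N))

mu = S / S.sum()                  # market weights
mu_ranked = np.sort(mu)[::-1]     # mu_(1) >= ... >= mu_(N)

# Points of the capital distribution curve: (ln n, ln mu_(n)).
n = np.arange(1, N + 1)
curve = np.column_stack([np.log(n), np.log(mu_ranked)])
print(curve[:3])
```

Plotting the two columns of `curve` against each other gives a curve of the same type as Figure 3.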

Figure 3. Split the US stock market into 20 equal parts based on market size: top 5%, next 5%, etc., up to the bottom 5%. Take the 19 breakpoints between these parts and use them as a substitute for market size $S_k$. We compute market weights $\mu_k$ from these quantities $S_k$, for $k=1,\ldots,19$. We plot only 5 curves here for clarity, but other months and years give very similar curves.

If the model is stable, the capital distribution curve is also stable in the long run: see Theorem 2. Previous research analyzed long-term stability and capital distribution curve for various continuous-time models: [7, 20]. These models were designed to capture the observations that, on average, small stocks have higher volatility and growth rates than large stocks. In [7], competing Brownian particle models were analyzed, where drift and diffusion coefficients depend on the rank of the stock, and linearity of the capital distribution curve was reproduced. In [20], the drift and diffusion depend on the market weight of a stock, but the capital distribution curve is not linear. These models do not use CAPM or VIX.

In the previous article by the second author [13], devoted to the CAPM with size but without VIX (in other words, with constant volatility), we also proved model stability for its continuous-time version [13, Theorem 2]. Using simulation, we reproduced the linearity of the capital distribution curve. But we did not state and prove rigorous results on this. A natural question is to reproduce these results for our model here. We accomplish both tasks in this article: (a) we simulate the capital distribution curve in Subsection 5.2, which reproduces its linearity; (b) we prove results on convergence to a Poisson point process. This fills a gap in [13].

2.6. Our contributions

In this article, we combine the ideas of our articles [13] on CAPM and [22] on VIX to state a reasonable generalization of [13]. This model can be truncated: Include only price changes for the benchmark $R_0$ and the target $R$ (or, equivalently, market sizes $S_0$ and $S$), and volatility $V$. Alternatively, it can be completed: Include also equity premia $P_0, P$ for the benchmark and the target. A truncated model contains 3 time series, and a completed model contains 5 time series.

In [13], we modeled (7) and (6) in continuous time, with $P_0$ or $S_0$ IID Gaussian. The article [13] does not include the VIX $V$. The present article incorporates the idea from [22] that division by VIX normalizes returns and premia. We replace $P$ and $P_0$ with $P/V$ and $P_0/V$ in (6). Also, we replace $R$ with $R/V$ and $R_0$ with $R_0/V$ in (7). We model $(R_0/V, P_0/V)$ as IID (but maybe not Gaussian), following (10) and (11). As mentioned in the Introduction, in Section 3 we perform statistical data analysis.

We fill two lacunas left in our research [13]. The first lacuna is stability results for discrete time. In Section 4 of this article, we state and prove these for the case of stochastic VIX. Our results work for constant volatility as well, which is the setting of [13]; see (9). Next, we check the stability condition numerically. The second lacuna is rigorous results for the capital distribution curve, which are lacking in [13]: there, we presented only the simulation for constant volatility. In Section 5 of this article, we state and prove a convergence result for the capital distribution curve: Theorem 2. Next, in Theorem 3, we show a remarkable result: the curve is

$(\ln n,\ X_{(n)}),\quad n=1,\ldots,N,$

where $X_{(1)}\geq\ldots\geq X_{(N)}$ are the order statistics of a conditionally normal sample:

$X_1,\ldots,X_N\mid\mu,\sigma\sim\mathcal{N}(\mu,\sigma^2)$

for random $\mu$ and $\sigma$. We perform the simulation in Section 5 for stochastic VIX. In the Appendix, we further analyze this curve for $\mu=0$ and $\sigma=1$, and show that its upper left and lower right ends replicate real-world behavior.

3. Financial Data and Statistical Analysis

3.1. Data description

We take monthly data, January 1990 – September 2022: in total, $T=405$ data points. As discussed in the Introduction and Background sections, we measure volatility $V$ by the Chicago Board Options Exchange VIX: the Volatility Index for the S&P 500, monthly average data. This data is taken from the Federal Reserve Economic Data (FRED) web site. For equity premia computations, we need short-term Treasury rates, which are also taken from the FRED web site.

The rest of the data is taken from Kenneth French's Dartmouth College Financial Data Library. It contains equally weighted portfolios of stocks split into 10 deciles by size: Decile 1 has the smallest 10% of stocks by size (market capitalization), Decile 2 has the next 10% smallest stocks, etc., up to Decile 10, which contains the top 10% largest stocks. In practice, during this time span the Decile 10 portfolio closely corresponds to the S&P 500, the classic benchmark for American stocks (although the S&P 500 index is size-weighted, not equally weighted). We use Decile 10 as the benchmark. These portfolios are reconstituted at the end of June of each year. For each decile and each month, the data contains average market size, price returns (excluding dividends), and total returns (including dividends).

3.2. Price returns results

Our goal is to fit (7). We rewrite it as

$\frac{R(t)-R_0(t)}{V(t)}=aC(t)+bC(t)\frac{R_0(t)}{V(t)}+\varepsilon(t).$

This linear regression does not have an intercept. To make the model complete, we add an intercept $m$ and get:

(13) $\frac{R(t)-R_0(t)}{V(t)}=aC(t)+bC(t)\frac{R_0(t)}{V(t)}+m+\varepsilon(t).$

For the benchmark with price returns $R_0$, we take Decile 10. For the target with price returns $R$, we use Deciles 1, ..., 9. We fit these 9 linear regressions (13) separately.

Decile | $\hat{m}$ | $\hat{a}$ | $\hat{b}$ | $s^2$ | Ljung-Box $p$ | Jarque-Bera $p$
1 -.0567 -.0116 -.1179 .9860 0 0
2 -.3286 -.0371 -.0686 .9941 .75 0
3 -.5481 -.0506 -.1110 .9855 .6 .17
4 -.5446 -.0447 -.1092 .9866 .18 .08
5 -.5646 -.0433 -.1708 .9695 .35 .05
6 -.0486 .0074 -.1446 .9790 .56 .99
7 -.9318 -.0836 -.1661 .9673 .55 .06
8 .7978 -.0761 -.1745 .9651 .11 .09
9 -.6832 -.0688 -.0437 .9937 .69 .07

Table 1. Results of the ordinary least squares fit for regression (13): point estimates $\hat{m}$, $\hat{a}$, $\hat{b}$ of parameters $m, a, b$; the empirical variance $s^2$ of residuals $\varepsilon$; and the $p$-values for the Ljung-Box white noise test (the $L^1$ norm version) and the Jarque-Bera normality test for residuals $\varepsilon$.

For Deciles 3–9, it is reasonable to model residuals as IID normal. The Student $t$-test gives $p$-values greater than 5% for $m$ and $a$ for each decile. However, the Student $t$-test gives $p$-values less than 5% for $b$ for all deciles except Deciles 2 and 9. Thus we see that for most deciles, we can assume $m=a=0$ but $b\neq 0$. Finally, the 95% confidence interval for $s^2$ for each decile contains 1. Thus, for Deciles 3–9, we could model

$\frac{R(t)}{V(t)}=(1+bC(t))\frac{R_0(t)}{V(t)}+\varepsilon(t),\quad\varepsilon(t)\sim\mathcal{N}(0,1)\ \text{IID},$

and $b$ is between $-0.2$ and $-0.1$ (except Decile 9, where we fail to reject $b=0$). This model is an improvement over [13], where residuals were neither IID nor Gaussian.
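To illustrate the fitting procedure for (13), the sketch below simulates data from the reduced model with a known $b$ and recovers it by ordinary least squares. All scales and parameter values are illustrative stand-ins, not the fitted ones from Table 1.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 405  # number of monthly data points, as in Subsection 3.1

# Simulated stand-ins for the data (all scales illustrative):
V = np.exp(rng.normal(2.9, 0.3, size=T))    # VIX-like volatility levels
C = rng.normal(-4.0, 0.5, size=T)           # log relative size of a small decile
R0 = V * (0.02 + rng.normal(0, 1, size=T))  # benchmark returns, as in (16)
b_true = -0.15
R = (1 + b_true * C) * R0 + V * rng.normal(0, 1, size=T)  # model (7), m = a = 0

# Fit regression (13): (R - R0)/V = m + a C + b (C R0/V) + eps.
y = (R - R0) / V
X = np.column_stack([np.ones(T), C, C * R0 / V])
m_hat, a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(m_hat, a_hat, b_hat)
```

The estimate of $b$ lands near the generating value $-0.15$, in the same range as Table 1.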

3.3. Equity premia results

Our goal is to fit (6). Similarly to (7), we rewrite it as

$\frac{P(t)-P_0(t)}{V(t)}=AC(t)+BC(t)\frac{P_0(t)}{V(t)}+\delta(t).$

This linear regression does not have an intercept. To make the model complete, we add an intercept $M$ and get:

(14) $\frac{P(t)-P_0(t)}{V(t)}=AC(t)+BC(t)\frac{P_0(t)}{V(t)}+M+\delta(t).$

For the benchmark with equity premia $P_0$, we use Decile 10. For the target with equity premia $P$, we use Deciles 1, ..., 9. We fit these 9 linear regressions (14) separately.

Decile | $\hat{M}$ | $\hat{A}$ | $\hat{B}$ | $s^2$ | Ljung-Box $p$ | Jarque-Bera $p$
1 -.1215 -.0178 .1151 .9887 0 0
2 -.2832 -.0330 -.0714 .9937 .73 0
3 -.4680 -.0446 -.1113 .9854 .59 .15
4 -.4739 -.0402 -.1128 .9858 .17 .06
5 -.4215 -.0339 -.1703 .9696 .35 .03
6 .1443 .0137 -.1464 .9784 .54 1
7 -.8078 -.0746 -.1559 .9605 .51 .04
8 -.6793 -.0671 -.1784 .9643 .12 .05
9 -.6596 -.0672 -.0432 .9937 .69 .06

Table 2. Results of the ordinary least squares fit for regression (14): point estimates $\hat{M}$, $\hat{A}$, $\hat{B}$ of parameters $M, A, B$; the empirical variance $s^2$ of residuals $\delta$; and the $p$-values for the Ljung-Box white noise test (the $L^1$ norm version) and the Jarque-Bera normality test for residuals $\delta$.

Similarly to regression (13), for Deciles 3–9 it is reasonable to model residuals as IID, but the evidence for normality is much weaker than for (13). The Student $t$-test gives $p$-values greater than 5% for $M$ and $A$ for each decile. However, the Student $t$-test gives $p$-values less than 5% for $B$ for all deciles except Deciles 2 and 9. Thus we see that for most deciles, we can assume $M=A=0$ but $B\neq 0$. Finally, the 95% confidence interval for $s^2$ for each decile contains 1. Thus, for Deciles 3–9, we could model

$\frac{P(t)}{V(t)}=(1+BC(t))\frac{P_0(t)}{V(t)}+\delta(t),\quad\delta(t)\sim\text{IID},\quad\mathbb{E}[\delta(t)]=0,\quad\mathbb{E}[\delta^2(t)]=1,$

and $B$ is between $-0.2$ and $-0.1$ (except Decile 9, where we fail to reject $B=0$). Similarly to (13), this model is an improvement over [13], where residuals were not IID.

4. Long-Term Stability

4.1. Formal construction of the model

Take a sequence of five-dimensional random vectors:

(15) $\mathbf{Y}(t):=(W(t),Z(t),U(t),\delta(t),\varepsilon(t)),\quad t=1,2,\ldots$
Assumption 1.

Vectors $\mathbf{Y}(t)$ are IID with mean zero and finite second moments, with a joint Lebesgue density on $\mathbb{R}^5$ which is everywhere strictly positive.

The five components of each vector may be correlated with each other. First, we model $V$ using (12) with innovations $W$. Next, we model $R_0$ following (10), for some constant $g\in\mathbb{R}$:

(16) $\frac{R_0(t)}{V(t)}=g+U(t),$

but we do not necessarily assume $U$ is Gaussian. We subtract $g$ because Assumption 1 states that $\mathbb{E}[U(t)]=0$, while the left-hand side of (16) has nonzero mean. Similarly, we model $P_0$ following (11), for another constant $G\in\mathbb{R}$:

(17) $\frac{P_0(t)}{V(t)}=G+Z(t),$

and $Z$ might not be Gaussian. These two equations (16) and (17) model the benchmark. Next, we combine (6) and (7) with these new ideas, following the outline in Section 2. We replace $P_0$ and $P$ with $P_0/V$ and $P/V$ in (6):

(18) $\frac{P(t)}{V(t)}=M+AC(t)+(1+BC(t))\frac{P_0(t)}{V(t)}+\delta(t).$

Similarly, we replace $R_0$ and $R$ with $R_0/V$ and $R/V$ in (7):

(19) $\frac{R(t)}{V(t)}=m+aC(t)+(1+bC(t))\frac{R_0(t)}{V(t)}+\varepsilon(t).$

Recall the definition of the relative size process

(20) $C(t)=\ln\frac{S(t)}{S_0(t)}.$

4.2. Stability results

Consider a discrete-time process $\mathbf{X}=(\mathbf{X}(0),\mathbf{X}(1),\ldots)$ in $\mathbb{R}^d$.

Definition 1.

The process $\mathbf{X}$ is called time-homogeneous Markov if there exists a transition function $\mathcal{Q}:\mathbb{R}^d\times\mathcal{B}\to[0,1]$ (where $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}^d$) such that for all $t=1,2,\ldots$, all $\mathbf{x}_0,\mathbf{x}_1,\ldots,\mathbf{x}_{t-1}\in\mathbb{R}^d$, and all $A\in\mathcal{B}$, we have:

$\mathbb{P}(\mathbf{X}(t)\in A\mid\mathbf{X}(0)=\mathbf{x}_0,\ \mathbf{X}(1)=\mathbf{x}_1,\ \ldots,\ \mathbf{X}(t-1)=\mathbf{x}_{t-1})=\mathcal{Q}(\mathbf{x}_{t-1},A).$
Lemma 1.

Under Assumption 1, the process $C$ is Markov. Also, the truncated model $(V,R_0,R)$ is Markov. Finally, the completed model $(V,R_0,R,P_0,P)$ is Markov.

Proof.

It is clear from (12) that the process $\ln V$ (and therefore $V$) is Markov. Together with (16), this shows that $(V,R_0)$ is Markov. From definition (20), we write

(21) $C(t+1)-C(t)=\ln\frac{S(t+1)}{S_0(t+1)}-\ln\frac{S(t)}{S_0(t)}=\ln\frac{S(t+1)}{S(t)}-\ln\frac{S_0(t+1)}{S_0(t)}=R(t)-R_0(t).$

Using (21), we rearrange (19) as follows:

(22) $C(t+1)=(1+aV(t)+bR_0(t))C(t)+V(t)(m+\varepsilon(t)).$

By Assumption 1, the process $C$ from (22) is Markov, too. From (21) and (16), equation (22) shows that $(V,R_0,C)$, or, equivalently, $(V,R_0,R)$, is Markov. Finally, from (18) and (17), we get that the completed model is Markov. ∎
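The dynamics (12), (16), (22) can be simulated directly. The sketch below uses Gaussian innovations for simplicity and illustrative parameter values (chosen so that the stability condition of the next subsection holds); none of these numbers are the fitted ones.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate the relative size process C under (12), (16), (22).
# All parameters are illustrative; innovations are Gaussian for simplicity.
alpha, beta, sW = -0.36, 0.88, 0.15   # AR(1) for ln V, stationary mean -3
g, m, a, b = 0.02, 0.0, 0.0, -0.15
T = 5000

lnV = np.empty(T)
C = np.empty(T)
lnV[0] = alpha / (1 - beta)
C[0] = -4.0                           # target starts much smaller than benchmark
for t in range(T - 1):
    V = np.exp(lnV[t])
    R0 = V * (g + rng.normal(0, 1))                                      # (16)
    C[t + 1] = (1 + a * V + b * R0) * C[t] + V * (m + rng.normal(0, 1))  # (22)
    lnV[t + 1] = alpha + beta * lnV[t] + rng.normal(0, sW)               # (12)

print(C.min(), C.max())   # in this run, C stays in a bounded range
```

The simulated path of $C$ wanders but does not explode, consistent with the stability results below.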

Definition 2.

A time-homogeneous Markov process $\mathbf{X}$ has a stationary distribution $\pi$ if $\pi$ is a probability measure on $\mathbb{R}^d$, and $\mathbf{X}(0)\sim\pi$ implies $\mathbf{X}(1)\sim\pi$ (and therefore $\mathbf{X}(t)\sim\pi$ for all $t$). Equivalently, in terms of the transition function $\mathcal{Q}$: For every $A\in\mathcal{B}$,

$\int_{\mathbb{R}^d}\mathcal{Q}(x,A)\,\pi(\mathrm{d}x)=\pi(A).$
Definition 3.

A time-homogeneous Markov process $\mathbf{X}$ is ergodic if it has a unique stationary distribution $\pi$, and for every $x\in\mathbb{R}^d$, we have:

$\sup_{A\subseteq\mathbb{R}^d}|\mathbb{P}(\mathbf{X}(t)\in A\mid\mathbf{X}(0)=x)-\pi(A)|\to 0,\quad t\to\infty.$
Assumption 2.

We have $\beta\in(0,1)$, and for the stationary version of $(V(t),R_0(t))$,

(23) $\mathbb{E}\ln|1+aV(t)+bR_0(t)|<0.$

Below, we show that if $\beta\in(0,1)$, these stationary versions $(V(t),R_0(t))$ exist, so condition (23) in Assumption 2 is well-defined. The following is the main result of this article.
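Condition (23) can be checked by Monte Carlo, sampling $(V, R_0)$ from their stationary distribution. The sketch below assumes Gaussian innovations for simplicity (the article allows non-Gaussian ones) and uses illustrative parameter values, not the fitted ones.

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo check of condition (23) under illustrative parameters.
alphaV, betaV, sW = -0.36, 0.88, 0.15   # AR(1) parameters for ln V in (12)
muV = alphaV / (1 - betaV)              # stationary mean of ln V
sV = sW / np.sqrt(1 - betaV**2)         # stationary st. deviation of ln V
a, b, g = 0.0, -0.15, 0.02

n = 10**6
V = np.exp(rng.normal(muV, sV, size=n))   # stationary ln V ~ N(muV, sV^2)
R0 = V * (g + rng.normal(0, 1, size=n))   # stationary R0, as in (16)
lhs = np.mean(np.log(np.abs(1 + a * V + b * R0)))
print(lhs)   # negative here, so (23) holds for these parameters
```

For these illustrative values the Monte Carlo estimate of the left-hand side of (23) is negative, so the stability condition holds.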

Theorem 1.

Consider the relative size process $C$, the truncated model $(V,R_0,R)$, and the completed model $(V,R_0,R,P_0,P)$. Under Assumptions 1 and 2, each of these models is ergodic.

Proof.

Step 1. Under Assumption 1 and $\beta\in(0,1)$, it is a well-known result that the process $V$ has a unique stationary distribution. Under (16) and (17), $(P_0,R_0,V)$ has a unique stationary distribution. Condition (23) in Assumption 2 is taken for this stationary distribution.

Step 2. Let us show that $C$ from (22) is stationary. We apply the main result of [5]. In the notation of [5], we have $A_n:=1+aV(n)+bR_0(n)$ and $B_n:=V(n)(m+\varepsilon(n))$. Condition (23) in Assumption 2 ensures that $\mathbb{E}\ln|A_n|<0$. We need only show that

(24) $\mathbb{E}\max(\ln|B_n|,0)=\mathbb{E}\max(\ln|V(t)(m+\varepsilon(t))|,0)<\infty.$

For any real number $c\neq 0$, we have:

(25) $\ln|c|\leq|c|.$

Applying (25) to the left-hand side of (24), we get:

(26) $\ln|V(t)(m+\varepsilon(t))|=\ln|m+\varepsilon(t)|+\ln V(t)\leq|m+\varepsilon(t)|+\ln V(t).$

Next, for any real numbers $c_1, c_2$,

(27) $\max(c_1+c_2,0)\leq\max(c_1,0)+\max(c_2,0)\leq\max(c_1,0)+|c_2|.$

From (27) and (26), the left-hand side of (24) is dominated by

(28) $\mathbb{E}\max(|m+\varepsilon(t)|,0)+\mathbb{E}|\ln V(t)|.$

The innovations $\varepsilon$ have finite second moments (and therefore finite first moments) by Assumption 1. Therefore,

(29) $\mathbb{E}\max(|m+\varepsilon(t)|,0)<\infty.$

Next, lnV\ln V is governed by an autoregression of order 1 with innovations WW having finite second moment. Thus the stationary distribution of lnV\ln V also has a finite second moment, and therefore a finite first moment. Together with (29), this shows that (28) is finite. This proves (24), and with it the stationarity of CC.

Step 3. Further, (R0,P0)=V(g+Z,G+U)(R_{0},P_{0})=V(g+Z,G+U) is stationary, and therefore R=eCR0R=e^{C}R_{0} is stationary. All this proves stationarity of the truncated and the completed models:

(30) (lnV,R0,R)and(lnV,R0,R,P0,P).(\ln V,R_{0},R)\quad\mbox{and}\quad(\ln V,R_{0},R,P_{0},P).

Step 4. Finally, let us show ergodicity for the completed model. The transition function 𝒬\mathcal{Q} of this five-dimensional Markov chain is strictly positive: For any set E5E\subseteq\mathbb{R}^{5} of positive Lebesgue measure, the transition probability 𝒬(𝐱,E)>0\mathcal{Q}(\mathbf{x},E)>0 for every 𝐱5\mathbf{x}\in\mathbb{R}^{5}. Indeed, this transition probability is a push-forward of the distribution of the innovations in 5\mathbb{R}^{5} under a certain smooth bijection 55\mathbb{R}^{5}\to\mathbb{R}^{5}. And for every 𝐱5\mathbf{x}\in\mathbb{R}^{5}, this probability measure 𝒬(𝐱,)\mathcal{Q}(\mathbf{x},\cdot) and the Lebesgue measure on 5\mathbb{R}^{5} are mutually absolutely continuous. For the rest of this proof, we refer the reader to the classic book [17] for terminology. It is straightforward to show that this Markov chain is irreducible and aperiodic. We have already shown it has a stationary distribution. Therefore, this Markov chain is positive Harris recurrent, and by [17, Theorem 13.0.1], it is ergodic.

Exactly the same argument works for the truncated three-dimensional version, or for the relative size process in one dimension. Since we proved these stability and ergodicity results for (30), they are all true if we replace lnV\ln V with VV in (30). ∎

4.3. First-order approximation

Condition (23) is hard to check directly. But for the real-world values of the coefficients found in Section 3,

(31) x:=aV(t)+bR0(t)is small.x:=aV(t)+bR_{0}(t)\quad\mbox{is small.}

Indeed, from the data analysis we can assume a=0a=0, because the point estimates a^\hat{a} are not significantly different from zero. Next, b(0.2,0.1)b\in(-0.2,-0.1), and R0(t)R_{0}(t) is the growth rate of the benchmark. A well-known fact (found, for example, in [1]) is that this growth rate is of order 10% per annum; per month, it is on average even smaller. Thus bR0(t)bR_{0}(t) is small. This completes the explanation of (31).

For x0x\approx 0, we use the first-order approximation: ln|1+x|=ln(1+x)x\ln|1+x|=\ln(1+x)\approx x. This simplifies (23) to:

(32) 𝔼[aV(t)+bR0(t)]=a𝔼[V(t)]+b𝔼[R0(t)]<0.\mathbb{E}[aV(t)+bR_{0}(t)]=a\mathbb{E}[V(t)]+b\mathbb{E}[R_{0}(t)]<0.

This is analogous to the result (9) from [13]. Let us quickly show that condition (32) holds for real-world market data. As mentioned above, we can assume a=0a=0. Next, we have b^<0\hat{b}<0 for each of the Deciles 3–9. Finally, 𝔼[R0(t)]>0\mathbb{E}[R_{0}(t)]>0, because R0R_{0} is the growth rate of the benchmark; together with the US economy, stock market indices in the USA grow in the long run. To show when the left-hand side of (32) is well-defined, we state the following result. The assumptions are reasonable in the light of discussion from Subsection 2.4.

Lemma 2.

Under Assumptions 1 and 2, if 𝔼[e2W(t)]<\mathbb{E}[e^{2W(t)}]<\infty, then the stationary version of V(t)V(t) has a finite second moment, and the stationary version of R0(t)R_{0}(t) has a finite first moment.

Proof.

The statement 𝔼[Vu(t)]<\mathbb{E}[V^{u}(t)]<\infty for all u(0,2]u\in(0,2] follows from the proof of [22, Lemma 1]. There, it is shown that, with MW(u):=𝔼[euW(t)]M_{W}(u):=\mathbb{E}[e^{uW(t)}] the moment generating function of WW, we have:

𝔼[Vu(t)]=exp[αu1β]k=0MW(βku).\mathbb{E}[V^{u}(t)]=\exp\left[\frac{\alpha u}{1-\beta}\right]\cdot\prod\limits_{k=0}^{\infty}M_{W}(\beta^{k}u).

The finite first moment of R0(t)=V(t)(g+Z(t))R_{0}(t)=V(t)(g+Z(t)) follows from the finite second moments of V(t)V(t) and Z(t)Z(t) (the latter from Assumption 1) and the Cauchy–Schwarz inequality. ∎
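Since βku0\beta^{k}u\to 0 geometrically, the infinite product above converges quickly and is easy to evaluate numerically with a truncated product. Below is a minimal sketch; for simplicity it assumes Gaussian innovations W𝒩(0,σW2)W\sim\mathcal{N}(0,\sigma_{W}^{2}) (the fitted model actually uses variance-gamma WW), with the point estimates α=0.346\alpha=0.346, β=0.882\beta=0.882 quoted in Subsection 5.2, and a hypothetical σW=0.15\sigma_{W}=0.15.

```python
import math

# Evaluate E[V^u] = exp(alpha*u/(1-beta)) * prod_{k>=0} M_W(beta^k * u),
# truncating the infinite product. For Gaussian W, ln M_W(x) = sigma_w^2 x^2 / 2.
# alpha, beta are the fitted values from Subsection 5.2; sigma_w is hypothetical.
def moment_V(u, alpha=0.346, beta=0.882, sigma_w=0.15, terms=200):
    log_moment = alpha * u / (1.0 - beta)
    for k in range(terms):
        x = beta ** k * u
        log_moment += 0.5 * (sigma_w * x) ** 2
    return math.exp(log_moment)

m1, m2 = moment_V(1.0), moment_V(2.0)
print(m1, m2)        # E[V] and E[V^2] for the stationary volatility
print(m2 - m1 ** 2)  # stationary variance of V, positive by Jensen's inequality
```

The truncation level is harmless: the omitted factors differ from 1 by O(β2k)O(\beta^{2k}).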

4.4. The case of constant volatility

We stated and proved in [13] results on stationarity and ergodicity of a continuous-time analogue of our model with constant volatility. However, we did not state and prove a stability result in discrete time. Here, we fill this gap. In the notation of this article, we can assume V(t)=1V(t)=1 is constant. This corresponds to W(t)0W(t)\equiv 0.

Figure 4. The set of (μ,σ)(\mu,\sigma) such that ξ𝒩(μ,σ2)\xi\sim\mathcal{N}(\mu,\sigma^{2}) satisfies 𝔼[ln|ξ|]<0\mathbb{E}\left[\ln|\xi|\right]<0. This domain cannot be described in closed form.

Assume Z(t)𝒩(0,σ2)Z(t)\sim\mathcal{N}(0,\sigma^{2}) IID, and V(t)=1V(t)=1. This violates Assumption 1 (positive density), but it does not affect the stationarity proof. Let us check condition (23). We have R0(t)=g+Z(t)R_{0}(t)=g+Z(t). Then we can rewrite the left-hand side of (23):

(33) 𝔼[ln|ξ|]<0,ξ:=1+a+bR0(t)𝒩(μ,ρ2),μ:=1+a+bg,ρ:=|b|σ.\displaystyle\begin{split}\mathbb{E}&\left[\ln|\xi|\right]<0,\quad\xi:=1+a+bR_{0}(t)\sim\mathcal{N}(\mu,\rho^{2}),\\ \mu&:=1+a+bg,\quad\rho:=|b|\sigma.\end{split}

This logarithmic moment cannot be computed in closed form, but it can be estimated numerically by Monte Carlo simulation. The set

{(μ,ρ)[0,)2:𝔼[ln|ξ|]<0,ξ𝒩(μ,ρ2)}\left\{(\mu,\rho)\in[0,\infty)^{2}:\quad\mathbb{E}\left[\ln|\xi|\right]<0,\quad\xi\sim\mathcal{N}(\mu,\rho^{2})\right\}

is shown in Figure 4. The Python code is in the GitHub repository asarantsev/size-capm-vix, file gaussian-simulation.py.
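A minimal sketch of such a Monte Carlo estimate (in the spirit of gaussian-simulation.py, though not the actual repository code) follows. The closed-form value 𝔼[ln|ξ|]=(γ+ln2)/20.635\mathbb{E}[\ln|\xi|]=-(\gamma+\ln 2)/2\approx-0.635 for the standard normal case serves as a sanity check.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_moment(mu, rho, n=10**6):
    """Monte Carlo estimate of E[ln|xi|] for xi ~ N(mu, rho^2)."""
    xi = mu + rho * rng.standard_normal(n)
    return float(np.mean(np.log(np.abs(xi))))

# For mu = 0, rho = 1 the exact value is -(gamma + ln 2)/2 ~ -0.635,
# so this point lies inside the stability region of Figure 4.
print(log_moment(0.0, 1.0))
# For mu = 3, rho = 0.1 we get roughly ln 3 > 0: outside the region.
print(log_moment(3.0, 0.1))
```

Scanning a grid of (μ,ρ)(\mu,\rho) values with this estimator reproduces the domain of Figure 4.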

5. Stochastic Portfolio Theory

5.1. Capital distribution curve

Consider the benchmark and NN portfolios 1,,N1,\ldots,N. We model each pair (benchmark, portfolio kk) with this model; the model is the same for all kk. We let Sk(t)S_{k}(t) be the market capitalization (size) of the kkth portfolio, and S0(t)S_{0}(t) the market size of the benchmark. We define market weights as

(34) μk(t)=Sk(t)S0(t)+S1(t)++SN(t),k=0,1,,N.\mu_{k}(t)=\frac{S_{k}(t)}{S_{0}(t)+S_{1}(t)+\ldots+S_{N}(t)},\quad k=0,1,\ldots,N.

We can also rank them from top to bottom:

(35) μ(0)(t)μ(N)(t).\mu_{(0)}(t)\geq\ldots\geq\mu_{(N)}(t).

Market weights and portfolios based on them are the main topic of Stochastic Portfolio Theory in both discrete time, see [6, 21, 28] and continuous time, see [11, 12].

Denote by δk\delta_{k} and εk\varepsilon_{k} the sequences of innovations for equity premium, as in (18), and price returns, as in (19), for the kkth portfolio, k=1,,Nk=1,\ldots,N:

(36) Pk(t)V(t)=M+ACk(t)+(1+BCk(t))P0(t)V(t)+δk(t);Rk(t)V(t)=m+aCk(t)+(1+bCk(t))R0(t)V(t)+εk(t);Rk(t)=lnSk(t)Sk(t1),Ck(t)=lnSk(t)S0(t).\displaystyle\begin{split}\frac{P_{k}(t)}{V(t)}&=M+AC_{k}(t)+(1+BC_{k}(t))\frac{P_{0}(t)}{V(t)}+\delta_{k}(t);\\ \frac{R_{k}(t)}{V(t)}&=m+aC_{k}(t)+(1+bC_{k}(t))\frac{R_{0}(t)}{V(t)}+\varepsilon_{k}(t);\\ R_{k}(t)&=\ln\frac{S_{k}(t)}{S_{k}(t-1)},\quad C_{k}(t)=\ln\frac{S_{k}(t)}{S_{0}(t)}.\end{split}

Together with (12), (16), (17), this is a time series Markov model for 2N+32N+3 series.

Assumption 3.

The (2N+3)(2N+3)-dimensional vectors

(W(t),Z(t),U(t),δ1(t),,δN(t),ε1(t),,εN(t))(W(t),Z(t),U(t),\delta_{1}(t),\ldots,\delta_{N}(t),\varepsilon_{1}(t),\ldots,\varepsilon_{N}(t))

are independent and identically distributed with mean zero, finite second moments, and a strictly positive Lebesgue density on 2N+3\mathbb{R}^{2N+3}.

Theorem 2.

Under Assumptions 2 and 3, the process of market weights (μ0,,μN)(\mu_{0},\ldots,\mu_{N}) from (34) is ergodic.

This is the main stability result. It means that if we have several portfolios, they stay together as one cloud in the long run and do not split into several separate clouds.

Proof.

The relative size processes are time-homogeneous Markov and ergodic:

(37) Ck(t)=lnSk(t)S0(t),k=1,,N.C_{k}(t)=\ln\frac{S_{k}(t)}{S_{0}(t)},\quad k=1,\ldots,N.

Their vector (C1,,CN)(C_{1},\ldots,C_{N}) is also time-homogeneous Markov and ergodic: This follows from Assumption 3 in the same way as in the proof of Theorem 1. Next, there exists a one-to-one continuous mapping (C1,,CN)(μ0,μ1,,μN)(C_{1},\ldots,C_{N})\mapsto(\mu_{0},\mu_{1},\ldots,\mu_{N}) between N\mathbb{R}^{N} and the NN-dimensional simplex

{(m0,,mN)[0,)N+1m0++mN=1}.\{(m_{0},\ldots,m_{N})\in[0,\infty)^{N+1}\mid m_{0}+\ldots+m_{N}=1\}.

Thus the process of market weights from (34) is ergodic. ∎
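The bijection used in this proof can be written explicitly: since Ck=ln(Sk/S0)C_{k}=\ln(S_{k}/S_{0}), we get μ0=1/(1+jeCj)\mu_{0}=1/(1+\sum_{j}e^{C_{j}}) and μk=eCkμ0\mu_{k}=e^{C_{k}}\mu_{0}. A minimal sketch (the function name is ours, for illustration):

```python
import numpy as np

def weights_from_relative_sizes(C):
    """Map relative sizes C_k = ln(S_k/S_0), k = 1..N,
    to market weights (mu_0, mu_1, ..., mu_N) from (34)."""
    e = np.exp(np.asarray(C, dtype=float))
    denom = 1.0 + e.sum()                 # (S_0 + S_1 + ... + S_N)/S_0
    return np.concatenate(([1.0], e)) / denom

mu = weights_from_relative_sizes([0.5, -1.2, 0.0])
print(mu.sum())                           # weights sum to 1
print(np.log(mu[1:] / mu[0]))             # recovers C: the map is one-to-one
```

The inverse map Ck=ln(μk/μ0)C_{k}=\ln(\mu_{k}/\mu_{0}) is also continuous, so ergodicity transfers between the two processes.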

Thus the ranked market weights process also has a stationary distribution and converges to this distribution in the long run, regardless of the initial conditions. In this stationary distribution, we can plot these ranked market weights versus their ranks on the log scale:

((ln(n+1),lnμ(n)(t)),n=0,,N)\left((\ln(n+1),\ln\mu_{(n)}(t)),\quad n=0,\ldots,N\right)

This plot is called the capital distribution curve. For real-world markets, this curve is linear over most of its span and concave overall. Moreover, it shows remarkable long-term stability. See the famous picture in [11, Chapter 4] of eight capital distribution curves at the ends of years 1929, 1939, …, 1999; the same picture appears as [12, Figure 13.4].

In our previous article [13], we captured the observation that well-diversified portfolios of small stocks have higher risk but also higher return than those of large stocks. We reproduced this shape of the capital distribution curve in [13] using simulation.

From (37) and (34), we get a simple expression of lnμk(t)\ln\mu_{k}(t) in terms of Ck(t)C_{k}(t):

lnμk(t)=Ck(t)+lnS0(t)ln(S0(t)++SN(t)),k=1,,N.\ln\mu_{k}(t)=C_{k}(t)+\ln S_{0}(t)-\ln(S_{0}(t)+\ldots+S_{N}(t)),\,k=1,\ldots,N.

Therefore, ranking market weights μk(t),k=1,,N\mu_{k}(t),k=1,\ldots,N from top to bottom at any fixed time tt is equivalent to ranking relative size terms Ck(t),k=1,,NC_{k}(t),k=1,\ldots,N from top to bottom: C(1)(t)C(N)(t)C_{(1)}(t)\geq\ldots\geq C_{(N)}(t). Thus instead of plotting the (slightly modified) capital distribution curve (lnk,lnμ(k)(t)),k=1,,N(\ln k,\ln\mu_{(k)}(t)),k=1,\ldots,N, we can plot (lnk,C(k)(t)),k=1,,N(\ln k,C_{(k)}(t)),k=1,\ldots,N.

(a) Simulation 1
(b) Simulation 2
Figure 5. Two simulations of ranked relative size terms (lnk,C(k)),k=1,,N(\ln k,C_{(k)}),k=1,\ldots,N. We pick (c,ρ)(c,\rho) to be (0.1,0)(0.1,0), (0.2,0)(0.2,0), (0.1,0.5)(0.1,-0.5), (0.2,0.5)(0.2,-0.5)

The linearity of the capital distribution curve was rigorously proved in [7] for competing Brownian particles and disproved in [20] for volatility-stabilized models. These two types of continuous-time models both capture the property that small stocks have higher risk but higher return than large stocks. But these models do not use CAPM.

5.2. Simulation study

Instead of ranking all market weights, we can rank only the weights of the NN portfolios, excluding the benchmark. This omits one point from the capital distribution curve, but does not influence the overall behavior of this curve. Consider the combined model:

  • Volatility VV governed by (12), with innovations WW having the variance-gamma distribution as in [22, subsection 3.3]

  • Benchmark price returns R0R_{0} governed by (16) with Gaussian ZZ having mean and standard deviation as in [22, Table 2, Large Price]

  • NN portfolios with relative size CkC_{k} as in (37), k=1,,Nk=1,\ldots,N, each governed by (22) for a=m=0a=m=0 and negative bb to be chosen, with innovations εk(t)𝒩(0,1)\varepsilon_{k}(t)\sim\mathcal{N}(0,1) independent for each kk

For simplicity, we write all equations with numerical coefficients, replacing b-b by cc:

lnV(t)\displaystyle\ln V(t) =0.346+0.882lnV(t1)+W(t),\displaystyle=0.346+0.882\ln V(t-1)+W(t),
W(t)\displaystyle W(t) =−0.0621+0.0621Γ(t)+0.1392Γ(t)Y(t),\displaystyle=-0.0621+0.0621\Gamma(t)+0.1392\sqrt{\Gamma(t)}Y(t),
Y(t)\displaystyle Y(t) 𝒩(0,1),Γ(t)is Gamma with shape 1/0.6573,𝔼[Γ(t)]=1,\displaystyle\sim\mathcal{N}(0,1),\quad\Gamma(t)\ \mbox{is Gamma with shape}\ 1/0.6573,\ \mathbb{E}[\Gamma(t)]=1,
R0(t)\displaystyle R_{0}(t) =V(t)Z(t),Z(t)𝒩(0.062,0.2022),Corr(Y(t),Z(t))=ρ,\displaystyle=V(t)Z(t),\quad Z(t)\sim\mathcal{N}(0.062,0.202^{2}),\quad\mathrm{Corr}(Y(t),Z(t))=\rho,
Ck(t+1)\displaystyle C_{k}(t+1) =Ck(t)(1cR0(t))+V(t)εk(t),εk(t)𝒩(0,1),k=1,,N,\displaystyle=C_{k}(t)(1-cR_{0}(t))+V(t)\varepsilon_{k}(t),\quad\varepsilon_{k}(t)\sim\mathcal{N}(0,1),\quad k=1,\ldots,N,
(Y(t),Z(t)),Γ(t),ε1(t),,εN(t)independent for fixedt\displaystyle(Y(t),Z(t)),\Gamma(t),\varepsilon_{1}(t),\ldots,\varepsilon_{N}(t)\ \mbox{independent for fixed}\ t
(Y(t),Z(t),Γ(t),ε1(t),,εN(t)),t=1,2,IID.\displaystyle\bigl{(}Y(t),Z(t),\Gamma(t),\varepsilon_{1}(t),\ldots,\varepsilon_{N}(t)\bigr{)},\ t=1,2,\ldots\quad\mbox{IID}.

We pick N=100N=100 and simulate these equations for T=400T=400 time steps, starting from

lnV(0)=3,Ck(0)=0,k=1,,N.\ln V(0)=3,\quad C_{k}(0)=0,\,k=1,\ldots,N.

We plot the results at time t=Tt=T, which gives enough time to converge to the stationary distribution. We see various slopes for different cases: ρ=0\rho=0 or ρ=50%\rho=-50\% (this latter correlation is found in [22, Table 2, Large Price]); and c=0.1c=0.1 or c=0.2c=0.2 (the lower and upper bounds for b-b found in Subsection 3.2). We perform two simulations for each case. Each simulation has the same values of VV and R0R_{0}. We show all four curves for each simulation in Figure 5. See the Python code in the file capital-distribution-simulation.py from the GitHub repository asarantsev/size-capm-vix.
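A self-contained sketch of one (c,ρ)(c,\rho) case of this simulation (not the actual repository code) is below. We center the variance-gamma innovation WW at zero by taking the constant term 0.0621-0.0621, which is our reading of the mean-zero requirement on innovations.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 400
c, rho = 0.1, -0.5              # one of the four (c, rho) cases in Figure 5

lnV = 3.0                       # ln V(0) = 3
C = np.zeros(N)                 # C_k(0) = 0
for t in range(T):
    # variance-gamma innovation W, centered so that E[W] = 0
    G = rng.gamma(shape=1 / 0.6573, scale=0.6573)   # E[Gamma] = 1
    Y = rng.standard_normal()
    W = -0.0621 + 0.0621 * G + 0.1392 * np.sqrt(G) * Y
    lnV = 0.346 + 0.882 * lnV + W
    V = np.exp(lnV)
    # benchmark return R0 = V*Z with Corr(Y, Z) = rho
    Z = 0.062 + 0.202 * (rho * Y + np.sqrt(1 - rho**2) * rng.standard_normal())
    R0 = V * Z
    # relative sizes: a = m = 0, b = -c, IID standard normal innovations
    C = C * (1 - c * R0) + V * rng.standard_normal(N)

ranked = np.sort(C)[::-1]       # C_(1) >= ... >= C_(N)
# plotting (ln k, ranked[k-1]) for k = 1..N gives one curve of Figure 5
```

Running this for each of the four (c,ρ)(c,\rho) pairs, with the same draws of VV and R0R_{0}, reproduces one panel of Figure 5.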

5.3. Rigorous results: reduction to normal order statistics

Fix a constant k=1,2,k=1,2,\ldots. Assume the market weights μn\mu_{n}, or, equivalently, the relative sizes CnC_{n}, are in the stationary distribution, which (by Theorem 2) is the limiting distribution. To stress this, we write t=t=\infty for the time argument. Thus we have the ranked (sorted, ordered from top to bottom) values of the relative size process in the stationary distribution:

C(1)()>>C(N)().C_{(1)}(\infty)>\ldots>C_{(N)}(\infty).

Here, NN is the overall number of portfolios (excluding the benchmark). We are interested in the joint distribution of these sorted relative size values.

Let 𝒵1,,𝒵N𝒩(0,1)\mathcal{Z}_{1},\ldots,\mathcal{Z}_{N}\sim\mathcal{N}(0,1) be an IID standard normal sample.

Theorem 3.

There exist random variables ℳ\mathcal{M} and 𝒮>0\mathcal{S}>0, which are functions of the two time series VV and ZZ and are independent of the standard normal sample (𝒵1,,𝒵N)(\mathcal{Z}_{1},\ldots,\mathcal{Z}_{N}), such that in law,

[C1()𝒮,,CN()𝒮]=(𝒵1,,𝒵N).\left[\frac{C_{1}(\infty)-\mathcal{M}}{\mathcal{S}},\ldots,\frac{C_{N}(\infty)-\mathcal{M}}{\mathcal{S}}\right]=(\mathcal{Z}_{1},\ldots,\mathcal{Z}_{N}).
Figure 6. Three simulations of the standard normal market curve (lnk,𝒵(k))(\ln k,\mathcal{Z}_{(k)}) for k=1,,Nk=1,\ldots,N if 𝒵1,,𝒵N𝒩(0,1)\mathcal{Z}_{1},\ldots,\mathcal{Z}_{N}\sim\mathcal{N}(0,1) IID sample, for N=100N=100.
Proof.

Fix a k=1,,Nk=1,\ldots,N. Apply [5, (0.6)] to express the stationary distribution of the relative size process CkC_{k} from (22): letting

(38) An:=1+aV(n)+bR0(n)=1+aV(n)+bV(n)(g+Z(n)),Bn:=V(n)(m+εk(n)),\displaystyle\begin{split}A_{n}&:=1+aV(n)+bR_{0}(n)=1+aV(n)+bV(n)(g+Z(n)),\\ B_{n}&:=V(n)(m+\varepsilon_{k}(n)),\end{split}

pick any n=1,2,n=1,2,\ldots and get:

(39) Ck():=Bn+AnBn1+AnAn1Bn2+C_{k}(\infty):=B_{n}+A_{n}B_{n-1}+A_{n}A_{n-1}B_{n-2}+\ldots

We assume V(n)V(n) and Z(n)Z(n) are defined for all nn\in\mathbb{Z}, not just n0n\geq 0. Let σε\sigma_{\varepsilon} be the standard deviation of the innovations εk\varepsilon_{k}. From (38) and (39), given the time series VV and ZZ, the stationary distribution of Ck()C_{k}(\infty) is Gaussian with mean and variance

\displaystyle\mathcal{M} :=m[V(n)+AnV(n1)+AnAn1V(n2)+],\displaystyle:=m\left[V(n)+A_{n}V(n-1)+A_{n}A_{n-1}V(n-2)+\ldots\right],
𝒮2\displaystyle\mathcal{S}^{2} :=σε2[V2(n)+An2V2(n1)+An2An12V2(n2)+].\displaystyle:=\sigma^{2}_{\varepsilon}\left[V^{2}(n)+A_{n}^{2}V^{2}(n-1)+A_{n}^{2}A_{n-1}^{2}V^{2}(n-2)+\ldots\right].

Given time series VV and ZZ, the random variables C1(),,CN()C_{1}(\infty),\ldots,C_{N}(\infty) are independent. Thus their standardized versions are (conditionally on VV and ZZ) IID standard Gaussian:

(40) 𝒵n:=Cn()𝒮,n=1,,N.\mathcal{Z}_{n}:=\frac{C_{n}(\infty)-\mathcal{M}}{\mathcal{S}},\quad n=1,\ldots,N.

This motivates the following definition.

Definition 4.

We define the standard normal market curve

(41) (lnk,𝒵(k)),k=1,,N,(\ln k,\mathcal{Z}_{(k)}),\quad k=1,\ldots,N,

for standard normal sample 𝒵1,,𝒵N\mathcal{Z}_{1},\ldots,\mathcal{Z}_{N} and its ranked version 𝒵(1)𝒵(N)\mathcal{Z}_{(1)}\geq\ldots\geq\mathcal{Z}_{(N)}.

We study this curve in the Appendix. The significance of Theorem 3 is as follows. Assume we rank 𝒵(1)>>𝒵(N)\mathcal{Z}_{(1)}>\ldots>\mathcal{Z}_{(N)}. An immediate consequence of Theorem 3 is that the (slightly modified) capital distribution curve (lnk,C(k))(\ln k,C_{(k)}) for k=1,,Nk=1,\ldots,N has almost the same shape as the standard normal market curve. The only difference is a shift and a change of slope, both random but independent of the standard normal market curve itself. Multiplication by 𝒮\mathcal{S} preserves ordering, since 𝒮>0\mathcal{S}>0. We plotted three simulations of such a curve in Figure 6. The Python code for this simulation is available in the GitHub repository asarantsev/size-capm-vix, file standard-normal-curve.py.
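A minimal sketch of such a simulation (in the spirit of standard-normal-curve.py, though not the repository code):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
Z_ranked = np.sort(rng.standard_normal(N))[::-1]   # Z_(1) >= ... >= Z_(N)
log_rank = np.log(np.arange(1, N + 1))
# plotting (log_rank, Z_ranked) gives one standard normal market curve as in
# Figure 6: nearly linear at the upper left end, concave at the lower right end
curve = np.column_stack((log_rank, Z_ranked))
print(curve[:3])
```

Repeating this with fresh samples produces the three curves of Figure 6.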

Our model reproduces the real-world shape of the curve, up to a shift and a change of slope, both random but constant within each simulation, governed by ℳ\mathcal{M} and 𝒮\mathcal{S}. This is why the differences between simulations in Figure 5 are greater than in Figure 6. Indeed, much of the difference between the two panels of Figure 5 (for example, between the two curves with b=0.1b=-0.1 and zero correlation) is accounted for by the difference in ℳ\mathcal{M} and 𝒮\mathcal{S} from simulation to simulation.

Changing the parameters of this model also affects the shape of the capital distribution curve, through the slope 𝒮\mathcal{S}. This accounts for the differences within each panel of Figure 5.

6. Conclusion and Further Research

We combined the main results of our previous articles [13, 22]. In [13], the main model (CAPM plus linear dependence of α\alpha and β\beta upon relative size) was used to capture the property that small stocks have, on average, higher risks and returns. In this article, we add to this model the normalization (division by the Volatility Index). The resulting multivariate Markov time series model fits real-world market data better: Its residuals are better described as IID Gaussian. We state and prove a simple sufficient condition (Theorem 1) for ergodicity. We also make some connections with Stochastic Portfolio Theory, adding our model to the collection of proposed models capturing this size effect. Finally, we state and prove rigorous results on the capital distribution curve.

We fill two lacunas in our previous research from [13].

First lacuna: Long-term stability results apply to the case of constant volatility from [13], see subsection 4.4 of the current article.

Second lacuna: The curve is the order statistics of a normal sample (but with random mean and standard deviation); see subsection 5.3 of the current article, continued below in the Appendix. Such a curve reproduces the real-world shape of the capital distribution curve: linear at the upper left end, and concave at the lower right end.

For future research, we could fit non-normal distributions for innovation series W,δ,εW,\delta,\varepsilon, and check conditions of Theorem 1 for the resulting distributions. Also, we could include the value effect: Stocks priced cheaply to fundamentals (earnings, dividends, book value) tend to outperform other stocks, on average. We would include, for example, dividend yield as a factor in α\alpha and β\beta from CAPM. To make the model complete, we need to model dividend yield separately (for example, as an autoregression). Our goal is to statistically fit this model and prove long-term stability.

Appendix: Standard Normal Market Curve

To continue subsection 5.3, it remains to study the behavior of the curve (41) using classic Extreme Value Theory. Most of the definitions and results of this section are well-known. We do not attempt to provide an exhaustive list of references; instead, we mention the classic monograph [23] and the classic textbook [2]. For Poisson point processes on the real line, see another classic monograph [15].

Pick a function λ:[0,)\lambda:\mathbb{R}\to[0,\infty) which is locally integrable: Λ(I):=Iλ(x)dx<\Lambda(I):=\int_{I}\lambda(x)\,\mathrm{d}x<\infty for any bounded interval II\subseteq\mathbb{R}.

Definition 5.

A Poisson point process on \mathbb{R} with intensity (or rate) λ\lambda is defined as a random countable subset 𝒩\mathcal{N}\subseteq\mathbb{R} such that #(𝒩I)Poi(Λ(I))\#(\mathcal{N}\cap I)\sim\mathrm{Poi}(\Lambda(I)). That is, the random number of points in any bounded interval II is Poisson with mean Iλ(u)du\int_{I}\lambda(u)\,\mathrm{d}u.

In particular, if λ(t)=et\lambda(t)=e^{-t}, then we can rank points from the rightmost to the left. In other words, the kkth rightmost point 𝒩k+\mathcal{N}^{+}_{k} is well-defined. This point exists, since λ\lambda is integrable on [u,)[u,\infty) for any uu\in\mathbb{R}. Denote by τk\tau_{k} the kkth jump time of the standard Poisson process on [0,)[0,\infty): τkτk1\tau_{k}-\tau_{k-1} are IID exponential with mean 1, where by convention τ0:=0\tau_{0}:=0. For fixed mm, we can also express

(42) 𝒩k+=ln(τk),k=1,,m.\mathcal{N}^{+}_{k}=-\ln(\tau_{k}),\quad k=1,\ldots,m.

Similarly, if λ(t)=et\lambda(t)=e^{t}, then we can rank points from the leftmost to the right. And the formula (42) becomes the formula for the kkth leftmost point:

(43) 𝒩k=lnτk,k=1,,m.\mathcal{N}^{-}_{k}=\ln\tau_{k},\quad k=1,\ldots,m.

See also [2, Chapter 8, Exercises 6, 7].
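Representation (42) is easy to verify numerically: 𝒩k+u\mathcal{N}^{+}_{k}\geq u exactly when τkeu\tau_{k}\leq e^{-u}, so the number of points above a level uu should be Poisson with mean eue^{-u}. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def rightmost_points(m):
    """m rightmost points of the Poisson point process with intensity e^{-t},
    via (42): N_k^+ = -ln(tau_k), where tau_k are jump times of a rate-1
    Poisson process (cumulative sums of IID mean-1 exponentials)."""
    tau = np.cumsum(rng.exponential(size=m))
    return -np.log(tau)

# count of points above u = 0.5 should be Poisson with mean e^{-0.5} ~ 0.6065
u = 0.5
counts = np.array([(rightmost_points(50) >= u).sum() for _ in range(20000)])
print(counts.mean())
```

Taking m=50m=50 points suffices here, since τ50eu\tau_{50}\leq e^{-u} is astronomically unlikely.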

Definition 6.

The (standard) Gumbel distribution GG is defined by its cumulative distribution function exp(exp(x))\exp(-\exp(-x)).

It is known, see classic references [2, Theorem 8.3.1, Example 8.3.4] or [23, Chapter 1], that the normal distribution belongs to the Gumbel domain of attraction: Consider the maximum of nn IID normal variables:

Mn=max(X1,,Xn),X1,X2,𝒩(0,1)IID.M_{n}=\max(X_{1},\ldots,X_{n}),\quad X_{1},X_{2},\ldots\sim\mathcal{N}(0,1)\ \mbox{IID.}

After scaling, this maximum converges weakly to Gumbel distribution as nn\to\infty:

(44) Mnbnan𝑑G,n,\frac{M_{n}-b_{n}}{a_{n}}\xrightarrow{d}G,\quad n\to\infty,

where an>0a_{n}>0 and bnb_{n} are suitable constants. A common suggestion is

an=12lnn,bn=2lnnln(4πlnn)22lnn.a_{n}=\frac{1}{\sqrt{2\ln n}},\quad b_{n}=\sqrt{2\ln n}-\frac{\ln(4\pi\ln n)}{2\sqrt{2\ln n}}.

But the convergence rate for this choice of constants is not the best, as shown in [2, Example 8.3.4]. There are ways to improve this, for example [14]:

nφ(bn)=bn,an=1/bn,φ(u):=12πexp[u22].n\varphi(b_{n})=b_{n},\quad a_{n}=1/b_{n},\quad\varphi(u):=\frac{1}{\sqrt{2\pi}}\exp\left[-\frac{u^{2}}{2}\right].
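Hall's equation nφ(bn)=bnn\varphi(b_{n})=b_{n} has no closed-form solution, but taking logarithms gives bn2=2lnnln(2π)2lnbnb_{n}^{2}=2\ln n-\ln(2\pi)-2\ln b_{n}, which is easy to solve by fixed-point iteration. A minimal sketch, with a Monte Carlo check of the Gumbel limit (44):

```python
import numpy as np

def hall_constants(n):
    """Solve n * phi(b_n) = b_n by fixed-point iteration; a_n = 1/b_n."""
    b = np.sqrt(2 * np.log(n))                 # textbook b_n as initial guess
    for _ in range(100):
        b = np.sqrt(2 * np.log(n) - np.log(2 * np.pi) - 2 * np.log(b))
    return 1.0 / b, b

rng = np.random.default_rng(4)
n = 1000
a_n, b_n = hall_constants(n)
M = rng.standard_normal((5000, n)).max(axis=1)   # 5000 copies of M_n
# Gumbel limit at zero: P((M_n - b_n)/a_n <= 0) -> exp(-1) ~ 0.368 as n grows
print(np.mean((M - b_n) / a_n <= 0.0))
```

At n=1000n=1000 the empirical probability is still slightly above 1/e1/e, illustrating the slow (logarithmic) convergence rate for normal extremes.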

For any such sequences (an)(a_{n}) and (bn)(b_{n}), we have convergence of top kk ranked standardized variables X(1)>>X(k)X_{(1)}>\ldots>X_{(k)} to the rightmost kk points of 𝒩\mathcal{N}, as NN\to\infty:

(45) [X(1)bNaN,,X(k)bNaN]𝑑[𝒩1+,,𝒩k+].\left[\frac{X_{(1)}-b_{N}}{a_{N}},\ldots,\frac{X_{(k)}-b_{N}}{a_{N}}\right]\xrightarrow{d}\left[\mathcal{N}^{+}_{1},\ldots,\mathcal{N}^{+}_{k}\right].

Similarly and symmetrically, as NN\to\infty,

(46) [X(N)+bNaN,,X(Nk+1)+bNaN]𝑑[𝒩1,,𝒩k].\left[\frac{X_{(N)}+b_{N}}{a_{N}},\ldots,\frac{X_{(N-k+1)}+b_{N}}{a_{N}}\right]\xrightarrow{d}\left[\mathcal{N}^{-}_{1},\ldots,\mathcal{N}^{-}_{k}\right].

This convergence (45) or (46) follows from [2, Theorem 8.4.2], or [18], or [23, Chapter 4]; see also [2, Chapter 8, Exercises 6–8] and compare with (42) or (43). Recall (42) and plot this Poisson point process with xx-axis log scale. This represents (up to shift by bNb_{N} and rescaling by aNa_{N}) the left upper end of the capital distribution curve:

(47) (lnk,𝒩k+)=(lnk,ln(τk)),k=1,2,,m.(\ln k,\mathcal{N}^{+}_{k})=(\ln k,-\ln(\tau_{k})),\,k=1,2,\ldots,m.

Similarly, recall (43) and plot this Poisson point process with xx-axis log scale. This represents (up to shift by bN-b_{N} and rescaling by aNa_{N}) the right lower end of the capital distribution curve:

(48) (ln(N+1k),𝒩k)=(ln(N+1k),ln(τk)),k=1,2,,m.(\ln(N+1-k),\mathcal{N}^{-}_{k})=(\ln(N+1-k),\ln(\tau_{k})),\,k=1,2,\ldots,m.
Lemma 3.

For each kk, the distribution of lnτk+1lnτk\ln\tau_{k+1}-\ln\tau_{k} is exponential with mean 1/k1/k.

Proof.

From the definition of τk\tau_{k}, we have: τk\tau_{k} and τk+1τk\tau_{k+1}-\tau_{k} are independent; τk\tau_{k} has a Gamma distribution (sum of kk IID exponential random variables with mean 11); τk+1τk\tau_{k+1}-\tau_{k} is another exponential random variable with mean 11. Since τk+1τk\tau_{k+1}\geq\tau_{k}, we have lnτk+1lnτk0\ln\tau_{k+1}-\ln\tau_{k}\geq 0 almost surely. What is more, the tail of the exponential distribution with mean 1 is given by:

(49) (τk+1τkv)=ev,v0.\mathbb{P}(\tau_{k+1}-\tau_{k}\geq v)=e^{-v},\quad v\geq 0.

Using (49), we get: for every u0u\geq 0,

(50) (lnτk+1lnτku)=(τk+1euτk)=(τk+1τk(eu1)τk)=𝔼((τk+1τk(eu1)τkτk)),\displaystyle\begin{split}\mathbb{P}&(\ln\tau_{k+1}-\ln\tau_{k}\geq u)=\mathbb{P}(\tau_{k+1}\geq e^{u}\tau_{k})\\ &=\mathbb{P}(\tau_{k+1}-\tau_{k}\geq(e^{u}-1)\tau_{k})=\mathbb{E}(\mathbb{P}(\tau_{k+1}-\tau_{k}\geq(e^{u}-1)\tau_{k}\mid\tau_{k})),\end{split}

Next, the Laplace transform of the Gamma random variable τk\tau_{k} with shape kk is

(51) 𝔼[exp(vτk)]=(1+v)k.\mathbb{E}\left[\exp(-v\tau_{k})\right]=(1+v)^{-k}.

Using the independence of τk+1τk\tau_{k+1}-\tau_{k} and τk\tau_{k}, together with (49), we get from (50):

(52) (lnτk+1lnτku)=𝔼[exp(τk(eu1))].\mathbb{P}(\ln\tau_{k+1}-\ln\tau_{k}\geq u)=\mathbb{E}\left[\exp(-\tau_{k}(e^{u}-1))\right].

Next, apply the Laplace transform of τk\tau_{k} to v:=eu1v:=e^{u}-1. The right-hand side of (52) is (1+(eu1))k=eku(1+(e^{u}-1))^{-k}=e^{-ku}. This is the tail of the exponential distribution with mean 1/k1/k. ∎
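Lemma 3 is easy to check by simulation: for k=5k=5, say, the gap lnτ6lnτ5\ln\tau_{6}-\ln\tau_{5} should be exponential with mean 1/51/5 and tail (gapu)=e5u\mathbb{P}(\mathrm{gap}\geq u)=e^{-5u}. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
k, reps = 5, 200000
# each row is an independent copy of (tau_1, ..., tau_{k+1})
tau = np.cumsum(rng.exponential(size=(reps, k + 1)), axis=1)
gaps = np.log(tau[:, k]) - np.log(tau[:, k - 1])   # ln tau_{k+1} - ln tau_k
print(gaps.mean())            # close to 1/k = 0.2
print(np.mean(gaps >= 0.2))   # close to e^{-k*0.2} = e^{-1} ~ 0.368
```

Both the mean and the tail probability match the exponential law of Lemma 3 to Monte Carlo accuracy.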

(a) Upper Left End
(b) Lower Right End
Figure 7. Left panel: (lnk,lnτk)(\ln k,-\ln\tau_{k}) for k=1,,100k=1,\ldots,100 and y=xy=-x for x[0,ln100]x\in[0,\ln 100]. Right panel: (ln(501k),lnτk)(\ln(501-k),\ln\tau_{k}) for k=1,,100k=1,\ldots,100 and y=ln(500ex)y=\ln(500-e^{x}) for x[ln401,ln500]x\in[\ln 401,\ln 500]. For each panel, we do three simulations (solid) and plot the deterministic function (dotted).

The next lemma is the key to our analysis of both ends of the standard normal curve (41). It is proved very similarly to our results in [27, Proposition A1], but we provide the full proof for completeness.

Lemma 4.

With probability 11, the sequence |lnτklnk|,k=1,2,|\ln\tau_{k}-\ln k|,\,k=1,2,\ldots is bounded.

Proof.

From [2, Chapter 8, Problem 8.7], we know that the differences lnτklnτk1\ln\tau_{k}-\ln\tau_{k-1} are independent. By Lemma 3, the mean of lnτk+1lnτk\ln\tau_{k+1}-\ln\tau_{k} is 1/k1/k and the variance is 1/k21/k^{2}. Therefore,

(53) 𝔼[lnτk]=∑j=1k1j=:S(k),Var[lnτk]=∑j=1k1j2=:V(k).\mathbb{E}\left[\ln\tau_{k}\right]=\sum_{j=1}^{k}\frac{1}{j}=:S(k),\quad\mathrm{Var}\left[\ln\tau_{k}\right]=\sum_{j=1}^{k}\frac{1}{j^{2}}=:V(k).

It is well-known that S(k)lnkS(k)\sim\ln k as kk\to\infty, and the sequence (S(k)lnk)(S(k)-\ln k) is bounded. Also, V(k)π2/6V(k)\to\pi^{2}/6 as kk\to\infty. By [26, Theorem 1.4.2], from (53) we complete the proof. ∎
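Lemma 4 can be seen in action along a single simulated path: |lnτklnk||\ln\tau_{k}-\ln k| fluctuates for small kk but remains bounded, and the late deviations are tiny since τk/k1\tau_{k}/k\to 1 almost surely. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
K = 10 ** 6
tau = np.cumsum(rng.exponential(size=K))           # one path tau_1, ..., tau_K
dev = np.abs(np.log(tau) - np.log(np.arange(1, K + 1)))
print(dev.max())              # bounded along the whole path
print(dev[-1000:].max())      # late deviations are already tiny
```

The largest deviation typically comes from the first few indices, where lnτk\ln\tau_{k} is most variable.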

Thus we can replace lnτk\ln\tau_{k} in (48) and (47) with lnk\ln k. This result allows us to approximate the curves (47) and (48) with continuous functions. The upper-left end of the curve (47) then becomes (lnk,lnk)(\ln k,-\ln k). This is a straight line with slope 1-1. Next, the lower-right end of the curve (48) becomes (lnk,ln(N+1k))(\ln k,\ln(N+1-k)). Let x=lnkx=\ln k and y=ln(N+1k)y=\ln(N+1-k): Then

y=ln(aex),a:=N+1.y=\ln(a-e^{x}),\quad a:=N+1.

This function is concave but obviously not linear:

y′′=((aex)aex)=(1+aexa)=aex(exa)2<0.y^{\prime\prime}=\left(\frac{(a-e^{x})^{\prime}}{a-e^{x}}\right)^{\prime}=\left(1+\frac{a}{e^{x}-a}\right)^{\prime}=-\frac{ae^{x}}{(e^{x}-a)^{2}}<0.

This reproduces the shape of the capital distribution curve. See Figure 7 for the upper left and lower right ends of the capital distribution curve. The simulation of lnτk\ln\tau_{k} for k=1,,100k=1,\ldots,100 and the plot of the comparative deterministic function (with N=500N=500) were done in Python. The code standard-curve-simulation.py is in the GitHub repository asarantsev/size-capm-vix. The shape of both ends of the curve in Figure 6 is reproduced here in Figure 7.

References

  • [1] Andrew Ang (2014). Asset Management. A Systematic Approach to Factor Investing. Oxford University Press.
  • [2] Barry C. Arnold, N. Balakrishnan, H. N. Nagaraja (2008). A First Course in Order Statistics. Society for Industrial and Applied Mathematics. Classics in Applied Mathematics 54.
  • [3] Rolf Banz (1981). The Relationship Between Return and Market Value of Common Stocks. Journal of Financial Economics 9 (1), 3–18.
  • [4] Lorenzo Bergomi (2015). Stochastic Volatility Modeling. Chapman & Hall/CRC.
  • [5] Andreas Brandt (1986). The Stochastic Equation Yn+1=AnYn+BnY_{n+1}=A_{n}Y_{n}+B_{n} with Stationary Coefficients. Advances in Applied Probability 18 (1), 211–220.
  • [6] Steven Campbell, Ting-Kam Leonard Wong (2022). Functional Portfolio Optimization in Stochastic Portfolio Theory. SIAM Journal of Financial Mathematics 13 (2), 576–618.
  • [7] Sourav Chatterjee, Soumik Pal (2010). A Phase Transition Behavior for Brownian Motions Interacting Through Their Ranks. Probability Theory and Related Fields 147 (1), 123–159.
  • [8] Eugene F. Fama, Kenneth R. French (1993). Common Risk Factors in the Returns on Stocks and Bonds. Journal of Financial Economics 33 (1), 3–56.
  • [9] Eugene F. Fama, Kenneth R. French (2004). The Capital Asset Pricing Model: Theory and Evidence. Journal of Economic Perspectives 18 (3), 25–46.
  • [10] Eugene F. Fama, Kenneth R. French (2015). A Five-Factor Asset Pricing Model. Journal of Financial Economics 116 (1), 1–22.
  • [11] E. Robert Fernholz (2002). Stochastic Portfolio Theory. Springer.
  • [12] E. Robert Fernholz, Ioannis Karatzas (2009). Stochastic Portfolio Theory: an Overview. Handbook of Numerical Analysis 15.
  • [13] Brandon Flores, Blessing Ofori-Atta, Andrey Sarantsev (2021). A Stock Market Model Based on CAPM and Market Size. Annals of Finance 17 (3), 405–424.
  • [14] Peter Hall (1979). On the Rate of Convergence of Normal Extremes. Journal of Applied Probability 16 (2), 433–439.
  • [15] John Frank Charles Kingman (1993). Poisson Processes. Oxford University Press.
  • [16] John Lintner (1965). The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets. Review of Economics and Statistics 47 (1), 13–37.
  • [17] Sean P. Meyn, Richard L. Tweedie (2009). Markov Chains and Stochastic Stability. Cambridge University Press.
  • [18] Douglas R. Miller (1976). Order Statistics, Poisson Processes, and Repairable Systems. Journal of Applied Probability 13 (3), 519–529.
  • [19] Jan Mossin (1966). Equilibrium in a Capital Asset Market. Econometrica 34 (4), 768–783.
  • [20] Soumik Pal (2011). Analysis of Market Weights Under Volatility-Stabilized Market Models. Annals of Applied Probability 21 (3), 1180–1213.
  • [21] Soumik Pal, Ting-Kam Leonard Wong (2016). The Geometry of Relative Arbitrage. Mathematics and Financial Economics 10 (3), 263–293.
  • [22] Jihyun Park, Andrey Sarantsev (2024). Log Heston Model for Monthly Average VIX. arXiv:2410.22471.
  • [23] Sidney I. Resnick (1987). Extreme Values, Regular Variation and Point Processes. Springer.
  • [24] Andrei Semenov (2015). The Small-Cap Effect in the Predictability of Individual Stock Returns. International Journal of Economics and Finance 38, 178–197.
  • [25] William F. Sharpe (1964). Capital Asset Prices: a Theory of Market Equilibrium Under Conditions of Risk. Journal of Finance 19 (3), 425–442.
  • [26] Daniel Stroock (2010). Probability Theory, An Analytic View. Cambridge University Press.
  • [27] Andrey Sarantsev, Li-Cheng Tsai (2017). Stationary Gap Distributions for Infinite Systems of Competing Brownian Particles. Electronic Journal of Probability 22 (56), 1–20.
  • [28] Ting-Kam Leonard Wong (2015). Optimization of Relative Arbitrage. Annals of Finance 11 (3–4), 345–382.