My Finance



  • Bank of America Corporate Bonds: Annual Returns

    A continuation of research in the github.com/asarantsev repository Annual-Bank-of-America-Rated-Bond-Data from my previous post. Consider total returns  Q(t) computed from the total wealth process  U(t) as the log change:  Q(t) = \ln U(t) - \ln U(t-1). The plot of the wealth process  U(t), normalized so that  U(0) = 1, is given below. We see that high-yield, low-rated bonds provide higher long-run returns, but with much more risk.

    If these were Treasury bonds with no risk of default, and if they were zero-coupon bonds (with only a principal payment at maturity), then total returns would equal the rate minus maturity times the rate change; see my manuscript arXiv:2411.03699. The equation is:  Q(t) = R(t-1) - m(R(t) - R(t-1)). If the bonds had coupons, then maturity would be replaced by duration (the average time of coupon and principal payments, weighted by payment size). But we add a noise (innovation) term and an intercept:

     Q(t) - R(t-1) = k - m(R(t) - R(t-1)) + Z(t).

    The maturity is given in the following table, together with an analysis of residuals: skewness, kurtosis, and Shapiro-Wilk and Jarque-Bera normality test  p-values. We also take the sum of absolute values of the autocorrelation function over the first five lags, separately for the residuals themselves and for their absolute values.

    Rating   m      Skewness   Kurtosis   Shapiro-Wilk p   Jarque-Bera p   ACF of Z(t)   ACF of |Z(t)|
    AAA      6.03   -2.049     5.173      0.014%           <0.001%         0.539         0.687
    AA       4.89   -0.941     1.154      2.549%           5.831%          0.944         0.874
    A        4.94   -0.894     1.307      7.877%           5.726%          0.977         0.704
    BBB      5.17   -0.573     0.027      25%              46%             0.38          0.604
    BB       3.81   -1.75      3.64       0.054%           <0.001%         0.83          0.245
    B        3.12   -2.036     4.323      0.003%           <0.001%         1.17          0.653
    CCC      2.55   -2.304     5.34       0.001%           <0.001%         0.9           0.716
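
The diagnostics in the table above are straightforward to reproduce. Below is a minimal sketch in NumPy/SciPy; the function and variable names are mine, not those of the repository code, and the arrays Q and R stand for the annual return and rate series.

```python
import numpy as np
from scipy import stats

def acf_abs_sum(x, nlags=5):
    """Sum of |ACF| over the first `nlags` lags."""
    x = np.asarray(x, float) - np.mean(x)
    denom = np.dot(x, x)
    return sum(abs(np.dot(x[:-k], x[k:]) / denom) for k in range(1, nlags + 1))

def residual_diagnostics(z):
    """Diagnostics reported in the table for residuals z."""
    return {
        "skewness": stats.skew(z),
        "kurtosis": stats.kurtosis(z),          # excess kurtosis
        "shapiro_p": stats.shapiro(z)[1],
        "jarque_bera_p": stats.jarque_bera(z)[1],
        "acf_z": acf_abs_sum(z),
        "acf_abs_z": acf_abs_sum(np.abs(z)),
    }

def fit_bond_returns(Q, R):
    """OLS fit of Q(t) - R(t-1) = k - m (R(t) - R(t-1)) + Z(t)."""
    y = Q[1:] - R[:-1]
    dR = R[1:] - R[:-1]
    X = np.column_stack([np.ones_like(dR), -dR])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    Z = y - X @ coef
    return coef[0], coef[1], Z              # k, m, residuals
```

This is a sketch under the assumption that the data comes as plain NumPy arrays; the actual data handling is in the GitHub repository.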

    The autocorrelation function plots for  Z(t) and for  |Z(t)| suggest that these are independent identically distributed. However, the quantile-quantile plots of  Z(t) versus the Gaussian distribution show that these are not normal for most ratings. See the plots below.

    This is confirmed by the results of Shapiro-Wilk and Jarque-Bera tests, shown in the table above.

    We apply the same technique as in the previous post: normalize the residuals by dividing them by the annual average VIX. We get:  Q(t) - R(t-1) = k - m(R(t) - R(t-1)) + V(t)\delta(t). Dividing this equation by  V(t) gives an ordinary least squares regression without an intercept. This is unusual, so let us add an intercept:

     Q(t) - R(t-1) = k - m(R(t) - R(t-1)) + hV(t) + V(t)\delta(t).

    Coefficient estimates and the analysis of innovations  \delta(t) are shown in the table below.

    Rating   k        m        h         Skewness   Kurtosis   Shapiro-Wilk p   Jarque-Bera p   ACF of δ(t)   ACF of |δ(t)|
    AAA      0.0661   7.0787   -0.0034   -0.737     0.576      28%              23%             0.922         0.361
    AA       0.0453   5.3226   -0.0023   0.126      -0.039     46%              96%             0.57          0.761
    A        0.0423   5.3423   -0.0022   -0.181     0.023      5%               93%             0.73          0.459
    BBB      0.0293   5.6074   -0.0016   -0.232     -0.775     67%              62%             0.875         0.498
    BB       0.0422   3.6671   -0.0024   -0.894     1.426      19%              6.3%            0.574         0.888
    B        0.0682   2.9970   -0.0050   -1.562     3.479      0.397%           <0.001%         0.998         0.354
    CCC      0.0712   2.6532   -0.0075   -1.588     2.649      0.072%           0.005%          0.835         0.723
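
Dividing the regression equation by  V(t) turns it into an ordinary least squares problem in transformed variables, with homoscedastic noise  \delta(t). A sketch of this fit, with hypothetical names (the repository code may differ):

```python
import numpy as np

def fit_normalized(Q, R, V):
    """Fit Q(t) - R(t-1) = k - m (R(t)-R(t-1)) + h V(t) + V(t) delta(t)
    by dividing through by V(t), which makes the noise homoscedastic."""
    y = (Q[1:] - R[:-1]) / V[1:]
    X = np.column_stack([
        1.0 / V[1:],                    # coefficient k
        -(R[1:] - R[:-1]) / V[1:],      # coefficient m
        np.ones(len(V) - 1),            # coefficient h
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    delta = y - X @ coef                # innovations delta(t)
    k, m, h = coef
    return k, m, h, delta
```

Note that after the division, the intercept  h of the transformed regression multiplies the constant column, while  k multiplies  1/V(t).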

    We see that the residuals can be well described as Gaussian white noise for ratings BB and higher, especially well for investment-grade bonds; for B and CCC ratings, not so much. However, judging by the ACF, the new residuals (see the second table) are comparable to the old residuals (see the first table) in being close to independent identically distributed. See also the following plots for AAA-rated bonds:

    And for the lowest-rated CCC bonds the situation is different: we see that the first lag is quite significant for both versions of the autocorrelation function.

    Combining the model above with the results of the previous post, we get the trivariate model:

     \ln V(t) = \alpha + \beta \ln V(t-1) + W(t)

     R(t) = a + bR(t-1) + cV(t) + V(t)\,\varepsilon(t)

     Q(t) - R(t-1) = k - m(R(t) - R(t-1)) + hV(t) + V(t)\,\delta(t)

    And the wealth process is given by  U(t) = \exp(Q(1)+\ldots + Q(t)). It is possible to show that this model for  (V(t), R(t), Q(t)) is long-term stable and ergodic, because for each of the seven ratings,  \beta, b \in (0, 1). We have done this for  (V, R) in our previous work; for  Q this is trivial.

    Next, for ratings BB and above, the trivariate innovation sequence  (W, \delta, \varepsilon) is modeled as independent identically distributed trivariate Gaussian. Our code finds the covariance matrix for these. We do not reproduce it here, but an interested reader can run the code.
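
A sketch of how the covariance matrix could be estimated and the trivariate model simulated; the function names and parameters are illustrative, not those of the repository code.

```python
import numpy as np

def innovation_covariance(W, delta, eps):
    """3x3 sample covariance of the stacked innovation series."""
    return np.cov(np.vstack([W, delta, eps]))

def simulate_returns(alpha, beta, a, b, c, k, m, h, Sigma, V0, R0, T, rng):
    """Simulate (V, R, Q) forward; return wealth U(t) = exp(Q(1)+...+Q(t))."""
    shocks = rng.multivariate_normal(np.zeros(3), Sigma, size=T)
    V, R, logU, U = V0, R0, 0.0, []
    for W, d, e in shocks:
        V_new = np.exp(alpha + beta * np.log(V) + W)        # log Heston step
        R_new = a + b * R + c * V_new + V_new * e           # rate step
        Q = R + k - m * (R_new - R) + h * V_new + V_new * d # return step
        logU += Q
        U.append(np.exp(logU))
        V, R = V_new, R_new
    return np.array(U)
```

With the stability condition  \beta, b \in (0, 1), long simulated paths settle around the stationary regime.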

    February 25, 2025

  • Bank of America Corporate Bonds: Annual Rates

    Here we use annual volatility to model bond rates: annual, end-of-year, 1996-2024. We take Bank of America bond portfolios with the following seven ratings: AAA, AA, A, BBB (investment-grade) and BB, B, CCC (junk, high-yield). Data and code are available in the github.com/asarantsev repository Annual-Bank-of-America-Rated-Bond-Data.

    First, consider the rates on the last day of each year 1996-2024, graphed below.

    Let  R(t) be this rate at the end of year  t. Model it as an autoregression:

     R(t) - R(t-1) = a + bR(t-1) + \delta(t).

    The results are in the table below. The last three columns are: the sum of absolute values of the autocorrelation function of the innovations over the first 5 lags (ACFO); the same for absolute values of the innovations (ACFA); and the Pearson test  p-value for  b = 0.

    Rating   b       a        Stdev of residuals   Skew    Kurtosis   Shapiro-Wilk p   Jarque-Bera p   ACFO    ACFA    Pearson test p
    AAA      -0.21   0.0081   0.008                0.994   1.241      0.029            0.041           0.629   0.627   0.051
    AA       -0.23   0.0087   0.009                0.747   0.842      0.108            0.18            0.716   0.918   0.049
    A        -0.26   0.011    0.01                 0.571   0.53       0.138            0.397           0.48    0.744   0.043
    BBB      -0.33   0.017    0.012                0.799   1.671      0.09             0.044           0.262   0.699   0.025
    BB       -0.46   0.031    0.02                 1.479   3.951      0.007            <0.001          0.482   0.814   0.009
    B        -0.57   0.048    0.026                1.612   3.629      0.003            <0.001          0.366   0.727   0.003
    CCC      -0.57   0.082    0.055                1.338   2.051      0.007            0.001           0.576   0.626   0.004

    The autocorrelation function (ACF) plots for  \delta(t) and for  |\delta(t)| show that these are well explained by independent identically distributed random variables (white noise). But they are not necessarily normal, judging by the Shapiro-Wilk and Jarque-Bera normality tests: especially for junk-rated bonds (BB, B, CCC), but sometimes also for investment-grade bonds. The random walk hypothesis can be rejected (low  p values) for all rates (even AAA is just barely above  5\% ). See also the plots below. We present only the plots for AAA; other ratings are similar. One can generate these graphs by running the code from the GitHub repository mentioned above.

    As usual, we can improve the fit and make innovations Gaussian by dividing them by annual volatility. Now we take the average annual VIX instead of monthly. This parallels the research by Angel Piotrowski mentioned in previous posts; but she computed annual realized volatility, while I use averaged VIX (implied volatility). Let us first fit the log Heston model for VIX 1996-2024:

     \ln V(t) = \alpha + \beta \ln V(t-1) + W(t).

    Here,  \alpha = 1.41 and  \beta = 1 - 0.475 = 0.525. Next,  R^2 = 23.5\% and  p = 0.9\% for the Student  t-test of  \beta = 1. The standard deviation of  W(t) is  0.247. The normality tests for innovations  W(t) give  p = 27.5\% for Shapiro-Wilk and  p = 60.8\% for Jarque-Bera. The plots of the ACF of  W(t) and of  |W(t)|, shown below, suggest these are independent identically distributed. Thus log volatility is indeed modeled by an autoregression of order 1, statistically significantly mean-reverting, with Gaussian innovations. This is similar to Angel Piotrowski’s research.
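
A sketch of the log Heston fit together with the Student  t-test of  \beta = 1 (names are illustrative; the actual code lives in the repository):

```python
import numpy as np
from scipy import stats

def fit_log_heston(V):
    """OLS fit of ln V(t) = alpha + beta ln V(t-1) + W(t), plus a
    Student t-test of the unit-root hypothesis beta = 1."""
    x, y = np.log(V[:-1]), np.log(V[1:])
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    alpha, beta = coef
    W = y - X @ coef                            # innovations
    s2 = (W @ W) / (n - 2)                      # residual variance
    se_beta = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))
    t_stat = (beta - 1.0) / se_beta             # test beta = 1
    p = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return alpha, beta, W, p
```

A small  p rejects the unit root and confirms mean reversion in log volatility.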

    Consider the autoregression with innovations  \delta(t) normalized by dividing them by volatility  V(t). We then have

     R(t) - R(t-1) = a + bR(t-1) + V(t)\,\varepsilon(t).

    Dividing by VIX, we get an ordinary least squares regression with residuals  \varepsilon(t) and no intercept. Let us add an intercept:

     R(t) - R(t-1) = a + bR(t-1) + cV(t) + V(t)\,\varepsilon(t).

    The results are in the table below: coefficients and the analysis of innovations  \varepsilon(t).

    Rating   a        b        10000*c   Stdev     Skew    Kurtosis   Shapiro-Wilk p   Jarque-Bera p   ACFO    ACFA
    AAA      0.01     -0.14    -2.41     0.00037   0.774   0.664      22%              19%             0.352   0.816
    AA       0.0099   -0.116   -2.78     0.0004    0.67    0.807      43%              24%             0.402   0.924
    A        0.0087   -0.157   -1.06     0.00044   0.342   0.2        93%              74%             0.395   0.963
    BBB      0.0099   -0.279   -2.1      0.00051   0.039   -0.4       96%              91%             0.405   0.779
    BB       0.015    -0.539   10.0      0.00076   0.303   -0.21      88%              79%             0.656   0.594
    B        0.0235   -0.772   0.3       0.00096   0.35    -0.126     46%              74%             0.576   0.722
    CCC      0.0217   -0.734   1.5       0.00198   0.58    0.25       36%              44%             0.817   0.948

    The correlations between  W(t) and  \varepsilon(t) are not statistically significant for any rating. For investment-grade ratings, we have  p higher than 5% for all coefficients. But for junk ratings,  p < 5\% for  b = 0, and for the two bottom ratings,  p < 5\% for  c = 0. Next,  R^2 for the linear regression with the  cV(t) term is, for most ratings, much higher than without it. So we need to include this term.

    Thus we arrive at a joint model: for  \beta \in (0, 1) and  b \in (-1, 0) we get:

     \ln V(t) = \alpha + \beta \ln V(t-1) + W(t)

     R(t)  - R(t-1) = a + bR(t-1) + cV(t) + V(t)\varepsilon(t)

     (W(t), \varepsilon(t)) \sim \mathcal N_2([0, 0], \Sigma) IID

    As discussed, we might take  \Sigma to be a diagonal matrix, but we might as well make it a full matrix. This model fits very well. A disadvantage is that we have only ~30 years of data. We do need to normalize the rate innovations by dividing them by VIX, and we do need the term  cV(t).
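
To illustrate the joint model, here is a minimal simulation sketch (parameter names are mine). With  \Sigma = 0, the recursion converges to its deterministic fixed point, which illustrates the stability conditions  \beta \in (0, 1),  b \in (-1, 0).

```python
import numpy as np

def simulate_rates(alpha, beta, a, b, c, Sigma, V0, R0, T, rng):
    """Simulate ln V(t) = alpha + beta ln V(t-1) + W(t) and
    R(t) - R(t-1) = a + b R(t-1) + c V(t) + V(t) eps(t)."""
    shocks = rng.multivariate_normal(np.zeros(2), Sigma, size=T)
    V, R = [V0], [R0]
    for W, eps in shocks:
        V.append(np.exp(alpha + beta * np.log(V[-1]) + W))
        R.append(R[-1] + a + b * R[-1] + c * V[-1] + V[-1] * eps)
    return np.array(V), np.array(R)
```

Without noise, V converges to  \exp(\alpha/(1-\beta)) and R to the root of  a + bR + cV = 0.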

    In our previous research, we proved long-term stability of this bivariate model. This holds not only for Gaussian innovations but also in more general cases, under certain conditions.

    We continue this research in the next post, where we model total returns.

    February 25, 2025

  • New Bubble Measure Work Replicated

    This is joint work with my undergraduate student Angel Piotrowski. She used her annual volatility data 1928-2023 to replicate my previous work, published on arXiv and discussed in my previous blog entry, about the new valuation measure for the Standard & Poor 500 and its predecessors. She used the end-of-year S&P 500 closing price instead of the averaged January daily close I used, December Consumer Price Index (CPI) data instead of January CPI, and data only for 1928-2023 instead of 1871-2023. But this, almost a century of data, is still enough to draw conclusions.

    First, adjust earnings for inflation and consider the trailing average  E(t) of earnings over the last 5 years. This is similar to the classic Campbell-Shiller approach, in which the cyclically adjusted price-earnings ratio (CAPE) uses the last 10 years of earnings; we chose 5 years to make room for more data. Next, take inflation-adjusted wealth  S(t) at the end of year  t. Consider the linear regression

     \ln S(t) - \ln E(t) - ct = a + b(\ln S(t-1) - \ln E(t-1) - c(t-1)) + \delta(t).

    This has the meaning of subtracting the trend from the relative growth of wealth over earnings. Historical earnings growth averaged 1-2% per year and wealth growth is around 6-7% per year, so the value of the trend  c must be 4-5%. This equation shows that, after detrending, this is the classic autoregression of order 1. We can rewrite it in the more standard ordinary least squares regression form:

     Q(t) = a + bc + t(c - bc) + (b-1)(\ln S(t-1) - \ln E(t-1)) + \delta(t),

    where, for short notation, we define the quantity we called the implied dividend yield:

     Q(t) := \ln S(t) - \ln E(t) - \ln S(t-1) + \ln E(t-1).

    This gives us  a + bc = 0.1351,  c - bc = 0.0101, and  b - 1 = -0.1307. All three coefficients are significantly different from zero: the Student  t-test gives  p values  0.1\%, 2.9\%, 1.9\%. But the Jarque-Bera test shows that the innovations are not Gaussian:  p = 0.1\%. This is confirmed by the quantile-quantile plot below. However, the autocorrelation function plots for  \delta(t) and for  |\delta(t)| below show that these are independent identically distributed.
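
Recovering the original parameters from the OLS form can be sketched as follows (hypothetical names; here  A = a + bc,  B = c - bc,  C = b - 1 denote the fitted intercept and slopes of the displayed regression):

```python
import numpy as np

def fit_valuation(S, E, years):
    """Fit Q(t) = A + B t + C (ln S(t-1) - ln E(t-1)) + delta(t), where
    Q(t) = (ln S(t) - ln E(t)) - (ln S(t-1) - ln E(t-1)), and recover
    the original parameters: b = C + 1, c = B / (1 - b), a = A - b c."""
    X1 = np.log(S) - np.log(E)
    Q = np.diff(X1)
    M = np.column_stack([np.ones(len(Q)), years[1:], X1[:-1]])
    (A, B, C), *_ = np.linalg.lstsq(M, Q, rcond=None)
    b = C + 1.0
    c = B / (1.0 - b)
    a = A - b * c
    return a, b, c
```

This is a sketch under the assumption that S and E are aligned annual arrays; the replication code is in the GitHub repository.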

    This is in contrast with our original research, where the residuals (innovations) are independent identically distributed and Gaussian. Further research will include extending this to the year 2024 and to 1924-1928, where we have total returns and volatility data for shifted years.

    The GitHub repository New Valuation Measure Replication contains code, data, and these graphs.

    February 20, 2025

  • My bubble measure work

    My undergraduate student Angel Piotrowski replicated my previous unpublished work A New Stock Market Valuation Measure with Applications to Retirement Planning (available at arXiv:1905.04603), using the annual volatility she computed previously. Here I try to explain my research; in a further post, I will explain Angel’s contributions.

    In this manuscript, I considered total Standard & Poor 500 returns (and those of its predecessors before 1957), taken from Robert Shiller’s data library (published on my web site), and compared them with annual earnings growth. Often, the price-earnings or price-dividend ratio is used to analyze the stock market: if such ratios are high, the market is overvalued, as at the top of the dotcom bubble. The historical average price-earnings ratio is around 15-20, so if this ratio is much higher than that, the market is overvalued. This is the classic research by John Campbell and Robert Shiller, which won the Nobel Prize in Economics. They used 10-year trailing averaged earnings to reduce noise and adjust for recessions and expansions (which take on average no more than 10 years); their ratio is called the cyclically adjusted price-earnings ratio (CAPE).

    But comparing only prices with earnings might not be enough: in the 21st century, earnings are increasingly used for buybacks, which raise prices, rather than for dividend payouts. This artificially increases prices but does not mean overvaluation, so price-earnings or price-dividend ratios will incorrectly show the market as overvalued. The true comparison must be between stock returns (total, including dividends) and earnings growth. Historically, earnings growth  G(t) is ~2% per year, and total market returns  Q(t) are ~6-7%. Here we consider only real (inflation-adjusted) returns and growth terms.

    Thus if we take the difference  \Delta(t) = Q(t) - G(t), and it is much higher than 4-5% per year for a few years, then the stock market starts to be overvalued. Formally, we can write this as an autoregression of order 1 for the cumulative sum  H(t) = \Delta(1) + \ldots + \Delta(t) after subtracting the trend  ct, where  c \approx 4-5\% . Our bubble measure is  H(t) - ct, and it is high when the market is overvalued. Let us write the linear regression:

     H(t) - ct - h = b(H(t-1) - c(t-1) - h) + \delta(t)

    After fitting this as a multiple linear regression, we get  b = 0.86, c = 4.5\%, h = 0.36. The estimate for  c is within the 4-5% range. The Student  t-test for  b = 1 gives  p = 0.1\%, so we can reject the random walk hypothesis: the model is stationary after detrending. Moreover, we may apply the Student test, since the residuals (innovations)  \delta(t) are well modeled as independent identically distributed Gaussian  \mathcal N(0, \sigma^2) with  \sigma = 0.18. This is shown by the autocorrelation function plots for  \delta(t) and for  |\delta(t)|, as well as the quantile-quantile plot of  \delta(t) versus the Gaussian distribution. See Figure 5 in the article.

    Here we used 5-year averaged earnings, similarly to Campbell-Shiller’s 10-year averaged earnings. We also did a separate analysis for simple annual earnings, simple annual dividends, and 5-year averaged dividends. The fit is best for 5-year averaged earnings, in the sense that the residuals are closest to independent identically distributed normal.

    Previously, Campbell and Shiller did not analyze whether the residuals are truly distributed as independent identically distributed normal. Unfortunately, this is common in economics research. But I did this here.

    According to this analysis, the classic cyclically adjusted price-earnings ratio in 2024 is very high compared to its historical average. Thus the classic analysis shows that the Standard & Poor 500 is overvalued, almost as at the top of the dotcom bubble. But our new stock market valuation measure  H(t) - ct is not historically high, so our measure does not show the market as overvalued: the current situation is not like the top of the dotcom bubble. See Figure 7 in the article.

    February 20, 2025

  • Updated annual volatility for 2024

    My undergraduate student Angel Piotrowski updated annual realized volatility for 2024. Previously she computed it for each year 1928-2023. She took log changes in daily closing prices of the Standard & Poor 500, or its predecessor, the Standard & Poor 90, and computed their empirical standard deviation. She then analyzed this annual volatility data  V(t). First, she computed its autocorrelation function.

    This strongly suggests using the autoregression model, which is called the Heston model in quantitative finance:

     V(t) = \alpha + \beta V(t-1) + W(t).

    Results after fitting this simple linear regression using ordinary least squares method are:

     \alpha = 0.003920,\, \beta = 0.626196

    Now let us analyze residuals (innovations)  W(t) which are supposed to be independent identically distributed mean-zero Gaussian:  W(t) \sim \mathcal N(0, \sigma^2). Angel did this by making the autocorrelation function plots for  W(t) and for  |W(t)| , as well as the quantile-quantile plot of  W(t) versus the normal distribution:

    We see that the ACF plot for  W(t) corresponds to white noise, but the ACF plot for  |W(t)| does not: the first few lags have significant autocorrelation. Less importantly, but also unfortunately, the quantile-quantile plot shows the innovations  W(t) are not Gaussian.

    Yet another problem with this Heston model: volatility can become negative under this model, which is impossible in real life. As a standard deviation of market fluctuations, volatility must stay positive.

    Next, Angel modeled the resulting series  V(t) as an autoregression of order 1 on the logarithmic scale:

     \ln V(t) = \alpha + \beta \ln V(t-1) + W(t).

    For the updated data,  \alpha = -1.776087 and  \beta = 0.620147, so there is mean reversion. She did not test for a unit root, but I am confident this hypothesis (that  \beta = 1 ) would be rejected.

    The autocorrelation function plots for the innovations  W(t) and for their absolute values  |W(t)| show that these  W(t) can be modeled as independent identically distributed, and the quantile-quantile plot versus the normal distribution shows that they are Gaussian.

    The resulting stationary distribution  \ln V(\infty) is Gaussian with mean -4.68 and variance 0.218. Using the moment generating function of the normal distribution, we can compute  \mathbb E[V(\infty)] = 0.0104 and the second moment  \mathbb E[V^2(\infty)] = 1.34\cdot 10^{-4}, so  \mathrm{Var}(V(\infty)) \approx 2.6\cdot 10^{-5}.
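
Given the fitted  \alpha, \beta and the innovation variance  \sigma^2 (not quoted in this post), the stationary moments follow from standard AR(1) and lognormal formulas; a minimal sketch:

```python
import math

def stationary_log_moments(alpha, beta, sigma2):
    """Mean and variance of the stationary Gaussian law of ln V
    for the AR(1) model ln V(t) = alpha + beta ln V(t-1) + W(t)."""
    return alpha / (1.0 - beta), sigma2 / (1.0 - beta ** 2)

def lognormal_moments(mu, s2):
    """E[V] and Var(V) when ln V ~ N(mu, s2), via the normal MGF."""
    mean = math.exp(mu + s2 / 2.0)
    var = (math.exp(s2) - 1.0) * math.exp(2.0 * mu + s2)
    return mean, var
```

For example, with mean -4.68 and variance 0.218 for the log, the first formula gives  \mathbb E[V(\infty)] \approx 0.0104.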

    Updated data for 1928-2024 volatility, nominal and real returns, and index prices can be found on my web site.

    The code and data for the current post can be found on https://github.com/asarantsev/Annual-Volatility

    February 19, 2025

  • Annual dividend growth terms do not become white noise after dividing by annual volatility

    This is the work of my undergraduate student Ian Anderson, continued from the previous post. He showed that annual earnings growth (nominal or real) is not Gaussian white noise, but that after dividing the growth terms by annual volatility (computed by my other undergraduate student Angel Piotrowski), it does become Gaussian white noise.

    Ian continued his work for dividend growth instead of earnings growth. The data is taken from Robert Shiller’s data library, as for earnings. But the results are negative in this case. The autocorrelation function plot for nominal dividend growth  G(t) is shown on the left; after dividing by annual volatility  V(t), the autocorrelation plot for  G(t)/V(t) is shown on the right. It is clear that significant autocorrelation at lag 1 remains in both.

    I think that this is because dividends are persistent: Companies are reluctant to cut dividends even in poor times. This is why there are significant autocorrelations.

    February 19, 2025

  • Make S&P Returns IID Gaussian

    My undergraduate student Angel Piotrowski continued her work, which started with annual volatility 1928-2023. First, she updated the annual realized volatility for 2024. The resulting series 1928-2024 is still well modeled by the log Heston model; see another post. The research in this post is available in a github.com/asarantsev repository.

    Then she computed annual returns of S&P 500 (and its predecessor, S&P 90) 1928-2024 in four versions:

    1. nominal (not adjusted for inflation) or real (adjusted for inflation);
    2. price (due only to price changes) or total (including dividends paid).

    We take the nominal annual dividend  D(t), the December Consumer Price Index  C(t), and the price  S(t) at the close of the last trading day of year  t. Price returns are computed as  \ln\frac{S(t)}{S(t-1)} and total returns as  \ln\frac{S(t) + D(t)}{S(t-1)} for the nominal versions; for the real versions, we subtract  \ln\frac{C(t)}{C(t-1)} from each. All returns are logarithmic (geometric), so there is no compounding problem: if wealth at the end of year  t is  \mathcal W(t), then  \mathcal W(t) = \exp(Q(1) + \ldots + Q(t))\mathcal W(0), where  Q(t) is the return during year  t.
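
The four versions of returns can be computed in a few lines; a sketch with array names of my own (not those of the repository code):

```python
import numpy as np

def annual_returns(S, D, C):
    """Four versions of log returns from year-end price S,
    annual dividend D, and December CPI C."""
    price_nom = np.log(S[1:] / S[:-1])
    total_nom = np.log((S[1:] + D[1:]) / S[:-1])
    inflation = np.log(C[1:] / C[:-1])      # subtract for real versions
    return {
        "price_nominal": price_nom,
        "total_nominal": total_nom,
        "price_real": price_nom - inflation,
        "total_real": total_nom - inflation,
    }
```

Because the returns are logarithmic, wealth is just the exponential of their cumulative sum.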

    In each of these four cases, returns are IID but not normal. However, dividing them by volatility keeps them IID and makes them normal. To illustrate, let us take real price returns  Q(t).

    The autocorrelation functions for  Q(t) (left panel) and for  |Q(t)| (right panel) are close to zero, so it is reasonable to model these as independent identically distributed random variables. However, the quantile-quantile plot below, versus the normal distribution, shows these are not normal.

    Next, we repeat this analysis for the normalized returns  Q(t)/V(t).

    We see that  Q(t)/V(t) is also well modeled as independent identically distributed. But unlike the previous example, the quantile-quantile plot shows that  Q(t)/V(t) is Gaussian:

    This is confirmed by two statistical tests for normality: Shapiro-Wilk and Jarque-Bera. Their p-values are below. One can clearly see that we reject the normality hypothesis for original but not for normalized (Normd) returns, for all four versions of returns and for each of the two tests.

    Returns        Shapiro-Wilk p (Original / Normd)   Jarque-Bera p (Original / Normd)
    Total Real     0.00164 / 20%                       0.00627 / 63%
    Total Nominal  0.00027 / 11%                       0.00005 / 60%
    Price Real     0.00087 / 28%                       0.00217 / 86%
    Price Nominal  0.00009 / 11%                       0.00000 / 60%
    February 19, 2025

  • Dividing annual earnings growth by volatility makes it Gaussian white noise

    This work was done by my undergraduate student Ian Anderson, using the volatility data computed by my other undergraduate student Angel Piotrowski, see the previous post.

    Ian took the 1927-2023 net earnings  E(t) of the Standard & Poor 500 (since 1957; before that, its predecessor, the Standard & Poor 90) and computed annual growth  G(t) = \ln(E(t)/E(t-1)). We do this first for nominal earnings, without adjustment for inflation. We analyze whether  G is Gaussian independent identically distributed. We make the quantile-quantile (QQ) plot versus the normal distribution.

    And we plot the autocorrelation function for  G and another plot for the autocorrelation function for  |G|.

    We see from the QQ plot that, unfortunately, the earnings growth terms are not Gaussian. The autocorrelation function for earnings growth corresponds to white noise: it shows that  G(t) and  G(t-k) are uncorrelated. But for the absolute values this is not true: there is a significant autocorrelation at lag 1,  \mathrm{corr}(|G(t)|, |G(t-1)|).

    Then divide the earnings growth by annual volatility to get  G(t)/V(t). Does this division make these terms closer to Gaussian independent identically distributed? In fact, yes!

    We see that now both autocorrelation plots show lack of significant autocorrelations. And the quantile-quantile plot is much closer to linear. Thus it makes sense to model  G(t)/V(t) as independent identically distributed Gaussian.

    The same happens if we consider real (inflation-adjusted) earnings instead of nominal earnings, using December data for the Consumer Price Index. Thus we have a joint model for earnings and volatility, annual 1927-2023:

     \ln V(t) = \alpha + \beta \ln V(t-1) + W(t)

     \ln\frac{E(t)}{E(t-1)} = V(t)(Z(t) + g)

    Here  (W(t), Z(t)) \sim \mathcal N_2([0, 0], \Sigma) are independent identically distributed bivariate normal, with mean zero and 2×2 covariance matrix  \Sigma. This works for both nominal and real annual earnings, but not for dividends.
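
A sketch of how  g, the autoregression parameters, and  \Sigma could be estimated (hypothetical function names). Note that the second equation says  G(t)/V(t) = g + Z(t), so  g is just the sample mean of the normalized growth.

```python
import numpy as np

def fit_joint_model(E, V):
    """Estimate g, the AR(1) parameters of ln V, and the 2x2 covariance
    of innovations (W, Z) from earnings E(t) and volatility V(t)."""
    G = np.diff(np.log(E))              # earnings growth
    ratio = G / V[1:]                   # the model says G/V = g + Z
    g = ratio.mean()
    Z = ratio - g
    x, y = np.log(V[:-1]), np.log(V[1:])
    X = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    W = y - X @ coef                    # log Heston innovations
    Sigma = np.cov(np.vstack([W, Z]))
    return g, coef[0], coef[1], Sigma
```
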

    January 24, 2025
    autoregression, volatility

  • Does dividing by volatility improve fit for bond spread?

    This is the work of my undergraduate student Ian Anderson, using annual January data for American bond rate spreads and the annual volatility data compiled by my other undergraduate student Angel Piotrowski. See the previous post.

    Consider the spread between the 10-year and 1-year Treasury rates. Usually, long-term rates are higher than short-term rates, since investors want extra compensation for committing their money for a long time. Another way to express this is that long-term bonds are more exposed to interest rate risk: if bond rates rise, then bond prices fall. The proportionality coefficient is called duration, and it is higher for long-term bonds.

    But sometimes, long-term rates are lower than short-term rates. This is usually not a good sign, and a harbinger of a recession. Expecting a recession soon, investors anticipate short-term interest rate cuts by the Federal Reserve. This influences current long-term rates, which incorporate current and expected future short-term rates.

    Denote this spread by  S(t) and apply an autoregression of order 1:  S(t) = a + bS(t-1) + Z(t). We expect mean reversion for this spread, which corresponds to  b \in (0, 1). This is different from  b = 1, where the process is a random walk and future movements are independent of the past.

    Let us now analyze the innovations, otherwise called regression residuals:  Z(t). Apply the quantile-quantile plot versus the normal distribution. Next, divide these innovations by annual volatility and make the quantile-quantile plot again.

    We do not see much difference; neither seems normal. But let us apply the autocorrelation function to these innovations  Z(t); next, divide by the volatility and plot the autocorrelation function again, now for  Z(t)/V(t). Both plots seem to be white noise: no autocorrelation.

    Finally, apply the autocorrelation function to the absolute values  |Z(t)| and see whether they are truly independent. They are not. Next, divide by volatility and apply the autocorrelation function to  |Z(t)/V(t)| to see whether this improves the result. It does not, in fact!
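
The white-noise check behind these plots compares sample autocorrelations with the approximate 95% band  \pm 1.96/\sqrt{n}; a minimal sketch (names are mine):

```python
import numpy as np

def acf(x, nlags=5):
    """Sample autocorrelations at lags 1..nlags."""
    x = np.asarray(x, float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

def significant_lags(x, nlags=5):
    """Lags whose sample ACF falls outside the approximate 95%
    white-noise band +/- 1.96 / sqrt(n)."""
    band = 1.96 / np.sqrt(len(x))
    return [k + 1 for k, r in enumerate(acf(x, nlags)) if abs(r) > band]
```

An empty list of significant lags is consistent with white noise; for  |Z(t)| here, lag 1 shows up as significant.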

    But now let us fit the same model for the bond spread between Moody’s AAA rate and the 10-year Treasury rate. AAA is the highest rating for corporate and municipal bonds, reserved for those with the lowest default risk.

    The QQ plot of innovations before normalization is close to normal, but not quite; after normalization, it is much closer to normal!

    Autocorrelation function for innovations before and after normalizing: Good!

    Autocorrelation function for absolute values of innovations before and after normalizing: Not so good, but acceptable. We need some further white noise testing.

    The AAA-1YTR spread results are the same as for 10YTR-1YTR. Here TR stands for Treasury Rate.

    Thus, the answer to the question in the title: dividing by volatility improves the order-1 autoregression for the credit risk spread AAA-10YTR, but not for the long-short term spread 10YTR-1YTR, nor for the combined spread AAA-1YTR.

    January 23, 2025
    autoregression, bond-spread, volatility

  • Annual Volatility for Standard & Poor 500

    My undergraduate student Angel Piotrowski computed annual volatility for the Standard & Poor 500 (and its predecessor, the Standard & Poor 90). For each year 1928-2023, she took the daily index values  S(t) for days  t in that year and computed log returns  \ln(S(t)/S(t-1)); then she computed the standard deviation of these log returns within each year. Let  V(s) be this standard deviation, usually called volatility, for year  s. The data is available on my web page.

    Next, Angel created a time series model for this volatility: an autoregression of order 1 on the log scale. The motivation comes from plotting the autocorrelation function of  X(t) = \ln V(t), defined as  k \mapsto \rho(X(t), X(t-k)). It looks like the autocorrelation function of an autoregression of order 1.

    Here is the equation for this autoregression:  \ln V(s) = \alpha + \beta \ln V(s-1) + W(s), with  \alpha = -1.776 and  \beta = 0.62. Let us test whether the innovations  W(s) are Gaussian by applying the quantile-quantile plot versus the normal distribution. This looks pretty close to a Gaussian law!

    Next, plot the autocorrelation function for innovations  W(s) and another plot of an autocorrelation function for their absolute values  |W(s)| to see that they correspond to white noise.

    Thus the model  \ln V(s) = \alpha + \beta \ln V(s-1) + W(s) fits well, with numerical estimates  \beta = 0.620, \alpha = -1.775. Therefore, this autoregression has a stationary distribution, or invariant probability measure,  \Pi such that  \ln V(t) \sim \Pi \Rightarrow \ln V(t+1) \sim \Pi. Angel has computed the mean and variance of this stationary distribution.

    See updates here.

    January 22, 2025


Blog at WordPress.com.
