Improved Six-Equation Model Selection

The valuation measure. Continuing the previous post, we include the valuation measure H(t) based on the one-year dividend, described in previous posts: H(t) = \ln W_1(t) - \ln D(t) - ct. The regression defining this measure is an autoregression of order 1 with a linear trend: H(t) = a + bH(t-1) + Z(t). Its innovations Z(t) do not satisfy Assumptions 1-5 from the previous post, so we reject this regression as the model. Instead, we accept the following model:

 H(t) = \alpha + \beta H(t-1) + \gamma V(t) + V(t)Z_H(t)

The fitted values are \alpha = 0.1699, \beta = 0.8262, \gamma = -0.0129. The coefficient \beta is significantly different from one, so there is no unit root.

Domestic returns. Now add the new valuation measure as a factor for domestic returns. We accept the model: Q_1(t) = a_1 - d_1(R(t) - R(t-1)) + b_1V(t) - c_1H(t-1) + V(t)Z_1(t). The fitted values are a_1 = 0.2637, d_1 = 0.0553, b_1 = -0.0129, c_1 = 0.1303. All coefficients are significantly different from zero. Moreover, R^2 = 48\%, which is very high: annual stock returns are usually not very predictable.

International returns. Next, add the new valuation measure as a factor for international returns, even though this measure is built from domestic data. The regression model is accepted, with R^2 = 39.1\%, and all coefficients are significant except the one on the valuation measure. Without the new valuation measure, the regression has R^2 = 37.8\%. So this factor adds little, and we drop it.

Covariance and correlation. We order the innovations as Z_1, Z_2, Z_0, Z_V, Z_R, Z_H and treat them as independent identically distributed multivariate Gaussian vectors with covariance matrix (times 10000):

2.026369 0.847703 -0.153965 -3.626994 0.208415 2.063568
0.847703 2.975298 0.036818 -5.399599 -0.000237 0.863375
-0.153965 0.036818 1.935904 14.068448 -0.050806 -0.844710
-3.626994 -5.399599 14.068448 1338.685234 2.754026 -5.842714
0.208415 -0.000237 -0.050806 2.754026 0.113497 0.261286
2.063568 0.863375 -0.844710 -5.842714 0.261286 3.152435

and the correlation matrix

1.000000 0.380257 -0.077736 -0.070715 0.466564 0.816463
0.380257 1.000000 0.014113 -0.083611 -0.000410 0.306855
-0.077736 0.014113 1.000000 0.275118 -0.097698 -0.341934
-0.070715 -0.083611 0.275118 1.000000 0.217726 -0.090182
0.466564 -0.000410 -0.097698 0.217726 1.000000 0.469910
0.816463 0.306855 -0.341934 -0.090182 0.469910 1.000000
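Given the six residual series, both matrices follow directly from numpy. A minimal sketch, using simulated residuals in place of the real ones from the fitted equations:

```python
import numpy as np

# Hypothetical residual series Z_1, Z_2, Z_0, Z_V, Z_R, Z_H stacked as
# rows; in the post these are the innovations of the six fitted equations.
rng = np.random.default_rng(3)
Z = rng.normal(size=(6, 100))

cov = np.cov(Z) * 10_000     # covariance matrix, scaled by 10^4 as above
corr = np.corrcoef(Z)        # correlation matrix

# Sanity checks mirroring the printed matrices: symmetry, unit diagonal.
assert np.allclose(cov, cov.T)
assert np.allclose(np.diag(corr), 1.0)
```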

See the data and code in the GitHub repository, which verify the research here.
