Friday, December 30, 2011

Monetary Policy and Credit Easing pt. 6: Empirical Estimation and Methodology

It is now appropriate to lay out our two regression models in full for empirical estimation over our two separate time periods. The first estimation runs from 4/1/71 to 7/1/97 and the second from 4/1/01 to 4/1/11. The methodology employed in estimating these two models is Generalized Least Squares with a Cochrane-Orcutt-style iterated residual procedure. For those who wish to perform the same regressions at home, I have provided the following links to my data: this is for the estimation period from 1971 to 1997 and this one is for the 2001 to 2011 estimations. The following four steps were taken for each estimation (a compact sketch of the whole pipeline in R follows the list):

1. Estimate the OLS regression

2. Fit the OLS residuals to an AR(p) process using the Yule-Walker method and find the value of p.

3. Re-estimate the model using Generalized Least Squares fit by Maximum Likelihood estimation, using the estimated p from step 2 as the order of the residual correlation term.

4. Fit the GLS residuals to an AR(p) process using the Yule-Walker method and use the resulting estimates as the final parameters of the error term.
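Before we turn to the real data, here is a minimal sketch of the whole four-step pipeline run on simulated data, just so the moving parts are visible in one place. Every name and number in it (the series x, y and u, the AR(1) coefficient of 0.5) is an illustrative assumption rather than anything from this post's dataset; it needs only base R plus the nlme package.

> library(nlme)
> set.seed(1)
> x <- rnorm(100)
> u <- as.numeric(arima.sim(list(ar = 0.5), n = 100))   # simulated AR(1) disturbances
> y <- 1 + 2*x + u
> sim.lm <- lm(y ~ x)                                   # step 1: OLS
> p <- ar.yw(sim.lm$res)$order                          # step 2: Yule-Walker picks p
> sim.gls <- gls(y ~ x, corr = corARMA(p = p, q = 0), method = "ML")   # step 3: GLS
> ar.yw(sim.gls$res)                                    # step 4: final AR fit of the GLS residuals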

The end goal of the above procedure is for our estimators to be asymptotically BLUE, the Best Linear Unbiased Estimators. This implies that they have the smallest variance among all linear unbiased estimators, provided the rest of the Gauss-Markov assumptions hold. A little background on the methodology is in order. First we perform a standard Ordinary Least Squares (OLS) regression of the dependent variable on the regressors. This gives us, at the very least, unbiased estimators. Second, we take the residuals from the first step and fit them to an AR(p) process, with the order p selected by the Yule-Walker method; this selects the lag length that best characterizes the autocorrelation in the residuals. We take this step automatically because, as is well known, most time series suffer from autocorrelation problems, and our correlograms verify as much here. Then we re-estimate the regression using Generalized Least Squares, which reweights the observations by their individual error variances while incorporating the AR(p) error structure discovered in the previous step. The final step, fitting the GLS model's residuals to an AR(p) process, is what delivers the asymptotically efficient results. We lay out our first estimation period models below.

First Estimation:  4/1/71 to 7/1/97 

1. Monetary Policy's Impact On Short-term Risk Premiums

Our first model, which seeks to answer how monetary policy impacts the risk premium on short-term commercial paper, estimated over our first time period of 4/1/71 to 7/1/97, is as follows:

SR^{premium}_{t} = β_{0} + β_{1}*FedBalance^{size}_{t} + β_{2}*Support_{t} + β_{3}*UGAP_{t} + β_{4}*FF_{t} + β_{5}*ER_{t} + β_{6}*YC_{t} + β_{7}*Default^{spread}_{t} + β_{8}*CP_{t} + β_{9}*OGAP_{t} + β_{10}*FedGDP_{t} + β_{11}*govcredit_{t} + β_{12}*ForeignDebt_{t} + µ_{t}

Where,

SR^{premium}_{t} = Short-term Risk Premium at time, t

Support_{t}= Fed's funds at depository institutions as a percentage of their main financing streams at time, t

FedBalance^{size}_{t}= The Fed's credit market asset holdings as a percentage of total credit market assets at time, t

FF_{t}= Federal Funds rate at time, t

ER_{t}= Excess Reserves of Depository Institutions at time, t

YC_{t}= Yield curve at time, t

Default^{spread}_{t}= Default Spread between BAA_{t} & AAA_{t} rated bonds at time, t

CP_{t} = Corporate Profits After Tax at time, t

FedGDP_{t}= Fed's holdings of total public debt as a percentage of GDP at time, t

govcredit_{t}= Government holdings of domestic credit market debt as a percentage of the total at time, t

ForeignDebt_{t}= Foreign holdings of federal debt as a percentage of the total at time, t

UGAP_{t} = Unemployment gap at time, t

OGAP_{t} = Output gap at time, t

µ_{t}= error term at time, t

R DATA WORK

So now it is time for the long-awaited econometrics work in R. The first thing you'll want to do is read the data into R from your data file, which in this case is the earlreg.csv file.

> earl<- read.csv("/Users/stevensabol/Desktop/R/earlreg.csv",header = TRUE, sep = ",")

Then you give your variables names so you can easily manipulate your data in R. When you open the .csv data file, take a look at the column names and pull each one into its own object using the following procedure.

> yc1 <- earl[,"yc"]
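If you'd rather not type one assignment per column, here is a hedged one-line shortcut that loops over every column in the data frame and creates a matching object with a "1" suffix; it assumes the .csv column names line up with the variable names used in the regressions below.

> for (nm in names(earl)) assign(paste(nm, "1", sep = ""), earl[, nm])   # e.g. column "yc" becomes object yc1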

After you have named everything you're free to go crazy and run regressions. Below is how you run a standard Ordinary Least Squares regression; the lm function fits linear models:

1. Estimate the OLS regression

> srp1.lm <- lm(srp1~yc1+CP1+FF1+default1+Support1+ER1+FedGDP1+FedBalance1+govcredit1+ForeignDebt1+UGAP1+OGAP1)

In order to get the output you have to use the summary function:

> summary(srp1.lm)

Call:
lm(formula = srp1 ~ yc1 + CP1 + FF1 + default1 + Support1 + ER1 + 
    FedGDP1 + FedBalance1 + govcredit1 + ForeignDebt1 + UGAP1 + 
    OGAP1)

Residuals:
     Min       1Q   Median       3Q      Max 
-1.04289 -0.20145 -0.04041  0.15230  1.21044 

Coefficients:
               Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -2.7591194  1.0359966  -2.663  0.00912 ** 
yc1           0.1320996  0.0580500   2.276  0.02516 *  
CP1          -0.0022773  0.0073773  -0.309  0.75825    
FF1           0.1699788  0.0340654   4.990 2.81e-06 ***
default1      0.4382965  0.1876685   2.335  0.02167 *  
Support1      2.2383850  0.6660140   3.361  0.00113 ** 
ER1           0.3351508  0.3017644   1.111  0.26959    
FedGDP1       0.3031938  0.2558144   1.185  0.23895    
FedBalance1   0.4014920  0.3477547   1.155  0.25124    
govcredit1   -0.0928817  0.0401603  -2.313  0.02294 *  
ForeignDebt1 -0.0068900  0.0215393  -0.320  0.74977    
UGAP1        -0.0912273  0.0520491  -1.753  0.08295 .  
OGAP1         0.0006669  0.0014895   0.448  0.65536    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Residual standard error: 0.3789 on 93 degrees of freedom
Multiple R-squared: 0.6474, Adjusted R-squared: 0.6019 
F-statistic: 14.23 on 12 and 93 DF,  p-value: 2.642e-16 
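As an aside, nothing has to be read off the printout by hand; the fitted lm object can be queried directly. A few examples:

> coef(srp1.lm)                    # the point estimates only
> confint(srp1.lm)                 # 95% confidence intervals for each coefficient
> summary(srp1.lm)$adj.r.squared   # the adjusted R-squared on its own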

Next we perform step 2, which is:

2. Fit the OLS residuals to an AR(p) process using the Yule-Walker method and find the value of p.

> srp.lmfit<-ar.yw(srp1.lm$res)
> srp.lmfit

Call:
ar.yw.default(x = srp1.lm$res)

Coefficients:
     1  
0.2535  

Order selected 1  sigma^2 estimated as  0.1201 

So the Yule-Walker method fits the residual series to an AR(1) process.
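If you are curious why order 1 was selected, the fitted ar object keeps the ingredients of that choice:

> srp.lmfit$order   # the selected p
> srp.lmfit$ar      # the estimated AR coefficient(s)
> srp.lmfit$aic     # AIC of each candidate order, expressed relative to the best one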

3. Re-estimate the model using Generalized Least Squares fit by Maximum Likelihood estimation, using the estimated p from step 2 as the order of the residual correlation term.

In order to run a GLS regression you're going to need to load the nlme package:

> library(nlme)

Then you can go crazy:

> srp1.gls <- gls(srp1~yc1+CP1+FF1+default1+Support1+ER1+FedGDP1+FedBalance1+govcredit1+ForeignDebt1+UGAP1+OGAP1, corr = corARMA(p = 1, q = 0), method = "ML")

> summary(srp1.gls)

The following output is produced:

Generalized least squares fit by maximum likelihood
  Model: srp1 ~ yc1 + CP1 + FF1 + default1 + Support1 + ER1 + FedGDP1 + FedBalance1 + govcredit1 + ForeignDebt1 + UGAP1 + OGAP1 
  Data: NULL 
       AIC      BIC    logLik
  100.7905 140.7421 -35.39526

Correlation Structure: AR(1)
 Formula: ~1 
 Parameter estimate(s):
      Phi 
0.3696665 

Coefficients:
                  Value Std.Error   t-value p-value
(Intercept)  -3.0219486 1.2595942 -2.399145  0.0184
yc1           0.1929605 0.0640627  3.012054  0.0033
CP1          -0.0060642 0.0071791 -0.844700  0.4004
FF1           0.1918066 0.0362894  5.285466  0.0000
default1      0.5292204 0.2060591  2.568293  0.0118
Support1      2.1086204 0.7405128  2.847514  0.0054
ER1           0.5651430 0.2770125  2.040135  0.0442
FedGDP1       0.1028773 0.3143122  0.327309  0.7442
FedBalance1   0.7845392 0.4130914  1.899190  0.0606
govcredit1   -0.1240196 0.0524191 -2.365922  0.0201
ForeignDebt1  0.0009822 0.0278623  0.035252  0.9720
UGAP1        -0.1266050 0.0657633 -1.925161  0.0573
OGAP1        -0.0014094 0.0014328 -0.983623  0.3279

 Correlation: 
             (Intr) yc1    CP1    FF1    deflt1 Spprt1 ER1    FdGDP1 FdBln1 gvcrd1
yc1          -0.267                                                               
CP1           0.054 -0.062                                                        
FF1          -0.308  0.726 -0.012                                                 
default1      0.081 -0.208  0.235 -0.342                                          
Support1     -0.005 -0.109 -0.107 -0.419  0.137                                   
ER1          -0.208  0.077 -0.081  0.067 -0.180  0.028                            
FedGDP1      -0.728 -0.059 -0.048  0.083 -0.057 -0.002 -0.308                     
FedBalance1   0.461  0.250  0.020  0.208  0.036 -0.081  0.445 -0.887              
govcredit1   -0.570 -0.233 -0.095 -0.291 -0.261  0.072  0.068  0.666 -0.784       
ForeignDebt1 -0.475  0.132  0.006 -0.092  0.093  0.219  0.057  0.059 -0.045  0.227
UGAP1        -0.048 -0.193 -0.062  0.085 -0.447  0.150  0.090  0.045  0.064 -0.053
OGAP1        -0.029  0.092  0.295  0.062 -0.208 -0.024  0.053 -0.021  0.013  0.056
             FrgnD1 UGAP1 
yc1                       
CP1                       
FF1                       
default1                  
Support1                  
ER1                       
FedGDP1                   
FedBalance1               
govcredit1                
ForeignDebt1              
UGAP1        -0.016       
OGAP1         0.064  0.041

Standardized residuals:
        Min          Q1         Med          Q3         Max 
-3.08026826 -0.62589269 -0.08409222  0.39781537  3.24233325 

Residual standard error: 0.3634024 
Degrees of freedom: 106 total; 93 residual
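Before refitting the residuals, it is worth lining the OLS and GLS point estimates up side by side; coef() works on both lm and gls objects. Notice, for instance, how the ER1 estimate moves once the AR(1) error structure is accounted for.

> round(cbind(OLS = coef(srp1.lm), GLS = coef(srp1.gls)), 4)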

After you perform this step you still have to refit the residuals in order to obtain serially uncorrelated error terms.

4. Fit the GLS residuals to an AR(p) process using the Yule-Walker method and use the resulting estimates as the final parameters of the error term.

> s1glsres.ar<-ar.yw(srp1.gls$res)
> s1glsres.ar

Call:
ar.yw.default(x = srp1.gls$res)

Coefficients:
     1  
0.3718  

Order selected 1  sigma^2 estimated as  0.1163 
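Putting the step-4 estimate together with the model, the final fitted error process is µ_{t} = 0.3718*µ_{t-1} + ε_{t}, where ε_{t} is white noise with an estimated variance of 0.1163.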

In order to see the results of these actions, please refer to the image below: a 3x3 panel of ACF, PACF, and Ljung-Box Q plots for the OLS, GLS, and refitted residuals.

The Ljung-Box Q is a test for autocorrelation in the error terms at each lag. Ideally, in order to meet the BLUE criteria, we need to fail to reject the null hypothesis of no autocorrelation at each lag. We can see that both our original OLS and GLS estimations fail the Ljung-Box Q test. However, when we readjust the error terms the final time, we get residuals that are serially uncorrelated with each other.

In order to produce the graph above you first have to load the package that provides the Ljung-Box Q plot:

> library(FitAR)

Then you define how many plots should appear in one picture. The image above has 9, therefore:

> par(mfrow=c(3,3))

Then you can start adding in your plots.  Below is the code for producing the plots for the fitted GLS residuals.

> acf(s1glsres.ar$res[-(1)])
> pacf(s1glsres.ar$res[-(1)])
> LBQPlot(s1glsres.ar$res[-(1)])

We include the [-(1)] to drop the first observation, whose residual is left undefined (NA) by the AR(1) fit.
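If you want a single test statistic rather than a plot, base R's Box.test runs the same Ljung-Box test at a chosen lag; the lag of 10 below is purely an illustrative choice, and fitdf = 1 accounts for the one estimated AR parameter:

> Box.test(s1glsres.ar$res[-(1)], lag = 10, type = "Ljung-Box", fitdf = 1)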

The same steps above can be applied to any time-series regression model.  In the next post we will discuss how to get some summary statistics.  Please keep dancin'

Steven J.





Comments:

  1. Hi Steven,

    Thanks for a great post! One step I didn't follow: why do you need to re-estimate the AR(p) process on the residuals? In step 3, you jointly estimated the residual correlations and the linear regression, so I would think you got optimal estimates there.


    - Chris

  2. Chris,

    I understand the weirdness of the situation. We already adjusted for them, so what's the deal? I am not 100% sure about this, but I think the GLS estimation just adjusts the independent variables by normalizing them by their respective variances, while leaving the residuals (in the above case) pretty freakin' correlated. As you can see, the original correction still causes us to fail the Ljung-Box test for white noise. The Ljung-Box works like this: H0: no autocorrelation is present in the residual values vs. Ha: autocorrelation is present in the residual values. When we have a p-value < .05 we reject H0 and accept that autocorrelation is present. This is the case for the first two estimations of the model. The re-estimation corrects for this autocorrelation, which leaves us with better-behaved residuals, i.e. ones that are at least now uncorrelated with each other. If we didn't do it, there would be a better model out there and we would truly fail to have the Best Linear Unbiased Estimate. Hope that helps. Confusing, I know.

    Steven J.

  3. Another thing you'll usually notice after the re-estimation is a lower variance!
