Friday, February 3, 2012

Employment Situation: An Inside Look

Check out the divergence between the unemployed aged 20-24 and those aged 25-34. Currently there are about 1,000,000 more people who are unemployed in the 25-34 age bracket than in the 20-24 age bracket.




These values used to coincide, but the spread between them is now widening, which suggests that younger people may have an advantage when it comes to getting hired. They're less picky and more willing to accept anything with some dollar signs attached to it than the older, more demanding unemployed. In the graph below, the blue line represents the unemployed aged 25-34 and the black line the 20-24 age range.

Check out the graph below, which shows the unemployment rate for those over 25 with a bachelor's degree, those without one, and those who didn't finish high school. Getting a bachelor's is not a sufficient condition for employment, but as the graph shows, it does come with greater job security.


Additionally, check out the spread between those with a bachelor's and those who only graduated high school, and notice how it has widened. This suggests that those with a bachelor's are less susceptible to economic fluctuations and that working through those student loans is definitely worth it.



The unemployment rate for those who graduated high school is currently about 4 percentage points higher than for those who got their bachelor's. Lesson of the day: read more books and get some education. I've got to keep dancin', and so should you!

Steven J.

Thursday, February 2, 2012

Unemployment Insurance Claims: Healing Slowly but Surely.

The following graph compares the 4-week moving average of initial claims following the 2007 peak with the same series from the 1981 recession. As you can see, jobless claims have been much more persistent this time than in previous recessions.

The blue line is from the 2007 recession and the dashed red line is from 1981. The x-axis is in weeks after the peak. As the graph suggests, we still have a ways to go before we get back to normal levels, but it also shows that much healing has already taken place. This is undoubtedly good news for Friday's Employment Situation.

Keep Dancin'

Steven J.

Friday, January 13, 2012

Just keeps falling doesn't it?

Check out the following graph, which charts how the average sales price of a new home moves in the 27 months after the peak in economic activity. As you'll notice (this covers recessions from 1980 on), the average has now fallen further than in any previous recession in that span, as indicated by the dashed black line of death.




The next graph truly depicts the demoralizing collapse in new home prices, while also capturing how over-inflated prices really were. It's a sobering picture, to say the least. Notice how prices gave a false sense of rebound and then just kept on falling. Depression economics, people: if you haven't figured it out already, just assume the worst and you're probably right. This is the type of thing that makes Roubini so freakin' popular and prophetic-sounding, although any proper student of financial crises would already know to expect such things. Readers of this blog should have definitely learned to expect such things.




Keep Dancin'

Steven J.

Monday, January 9, 2012

Consumer Sentiment: WOMP.

Consumer sentiment is a measure of how people feel about their wealth and how happy they are. What the graph below reveals is that people feel worse than average 27 months after the peak in economic activity, as defined by the National Bureau of Economic Research (NBER).


The graph below reveals that the general populace isn't feeling that their situation is all that hunky-dory. This may be due to a bunch of things: their net worth has fallen along with their home values, they are in serious debt, their spouse left them for a younger version of themselves, and maybe they're even unemployed. But whatever the reason, the bottom line is that people still feel like doo doo.


Consumer sentiment is low. That's undeniable. My guess is that people would feel significantly better if jobs were more readily available and their real disposable incomes were higher, maybe even significantly higher. As mentioned in a previous post, wage growth has been rather stagnant.

Keep dancin'

Steven J.

Saturday, January 7, 2012

Might as well buy a crib...oh wait I'm broke.

The National Association of Realtors has put out the Housing Affordability Index since the early 1980s. The higher the index value, the more easily a household earning the median income can purchase and make mortgage payments on a home.


The index measures the degree to which a typical family can afford the monthly mortgage payments on a typical home. A value of 100 means that a family with the median income has exactly enough income to qualify for a mortgage on a median-priced home, and an index above 100 signifies that a family earning the median income has more than enough income to qualify for a mortgage loan on a median-priced home, assuming a 20 percent down payment. For example, a composite housing affordability index (COMPHAI) of 120.0 means a family earning the median family income has 120% of the income necessary to qualify for a conventional loan covering 80 percent of a median-priced existing single-family home. An increase in the COMPHAI therefore shows that this family is more able to afford the median-priced home.

As you can see, the 2007 recession has clearly brought homes to their most affordable levels on record. If I had cash I might buy a home right now, but I don't, and neither do the 14 million or so unemployed. Womp.



What a value of around 195 means is that a family earning the median income has 195% of the income necessary to qualify for a conventional loan covering 80 percent of a median-priced existing single-family home. By historical standards, it has never before been both the best and the worst time to buy a home: the best opportunity, under the worst circumstances.
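
To make the arithmetic concrete, here's a minimal sketch with made-up dollar figures (not actual NAR data) showing how an index reading around 195 comes about:

# Hypothetical numbers for illustration only
median_income     <- 60000   # median family income (assumed)
qualifying_income <- 30770   # income needed to qualify for an 80% loan on the
                             # median-priced existing single-family home (assumed)

affordability_index <- 100 * median_income / qualifying_income
affordability_index          # about 195: the family has ~195% of the needed income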

Keep dancin'

Steven J.


Friday, January 6, 2012

Civilian Unemployment: Persistence

Today the civilian unemployment numbers were released, and all they did was verify one thing: recoveries in the United States since 1990 have been jobless ones. Check out the following graph, which takes every post-WW2 recession and averages their numbers from the peak to 27 months out. Notice that this one shows the greatest persistence of all of them in terms of a high unemployment rate.



Additionally, check out the next graph, which uses only the recessions from 1990 and beyond, the ones that have been characterized by jobless recoveries. The thing to notice here is not the level of the unemployment rate, but that in these recoveries the unemployment rate also failed to drop significantly 27 months or so after the peak. This recession has been deeper (a cyclical factor), which explains why unemployment is so freakin' high, but the persistence of unemployment has little to do with cyclical factors; it looks more structural, and structural reforms may be necessary. This isn't house lock, people; this is something more than that. This is the structure of unemployment benefits and the nature of profit-seeking firms that want to please shareholders. Being lean and mean is attractive for companies that face constant uncertainty, especially when growth would be a miracle occurrence.



The last graph shows that while unemployment does still remain stubbornly high, at least it is falling, although this may be because people are just plain dropping out of the labor force. A closer look into the Employment Situation would be necessary to reveal the details.

keep dancin'

Steven J.

Thursday, January 5, 2012

Real Compensation Per Hour: Poo Poo Platter Performance

The past recession brought wages back down to their 2005 levels and put a serious squeeze on personal balance sheets. As the graph below shows, real compensation per hour growth is slower now than it was three years after the start of any previous recession. By historical standards, that is undoubtedly a terrible thing.

The second graph shows that we have indeed had some wage growth; however, the first graph reminds us that by post-WW2 historical standards it has been pathetic.


Keep dancin'

Steven J.

Wednesday, January 4, 2012

Detroit Unemployment

Today's graph highlights how much this recession has impacted unemployment in Detroit versus the recessions of 2001 and 1990. As you can see, unemployment has risen and remains unholy.

Detroit has been hit particularly hard because of its already struggling automotive industry, so the recession just brought more reasons for layoffs. The last four unemployment numbers have been very positive, however.




Keep dancin'

Steven J.

Tuesday, January 3, 2012

Graphical Representations of Recessionary Woes: ISM PMI

Today, the Institute for Supply Management's Purchasing Managers Index was released. As you can see from the graph, we are finally above the average PMI for 27 months after a recession begins. That's something to be cheery about.



Overall, though, the numbers aren't that spectacular and remain, well, average. The readings for today's release are clearly above 50, which means more managers plan on expanding their operations than contracting them. Overall this number casts good news over the sputtering U.S. economy.

Please Keep Dancin'

Steven J.

Graphical Representations of Recessionary Woes: Oil Prices

Today on the Dancing Economist we will exploit a graphical tool in FRED that allows us to benchmark movements in a time series against their historical recessionary past. Today's graph is of the West Texas Intermediate spot oil price.




Notice how in this past recession oil prices didn't get as high as they historically have, and that they have also fallen more and rebounded with less vigor than usual. In fact, as the graph above shows, they are lower now than at any point in time after any previous recession. The recessionary periods I have chosen to include in the min and max calculations are all but the 1973 one, as the artificial price ceiling employed then distorts the numbers. Keep dancin' and I'll keep posting,

Steven J.

Monday, January 2, 2012

Monetary Policy & Credit Easing pt. 8: Econometrics Tests in R

Hello folks, it's time to cover some important econometrics tests you can do in R.

The Akaike information criterion (AIC) is a measure of the relative goodness of fit of a statistical model. If you have 10 models and order them by AIC, the one with the smallest AIC is your best model, ceteris paribus.
The following code computes the AIC and a similar criterion called the BIC:



> AIC(srp1.gls)
[1] 100.7905


> BIC(srp1.gls)
[1] 140.7421
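
If you have several candidate models fit to the same data, a quick way to line them up is something like the following minimal sketch. Here srp2.gls and srp3.gls are hypothetical alternative fits, not objects from the original analysis:

# Compare hypothetical candidate models by AIC and pick the smallest
candidates <- list(m1 = srp1.gls, m2 = srp2.gls, m3 = srp3.gls)
aics <- sapply(candidates, AIC)
aics                      # AIC for each candidate
names(which.min(aics))    # the model with the lowest AIC wins, ceteris paribus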

Say we wish to see if our model has an error term that follows a relatively normal distribution. For this we can perform the Jarque-Bera test, which checks kurtosis as well as skewness. This function requires that you load the FitAR package.


> library(FitAR)
> JarqueBeraTest(srp1.gls$res[-(1)])
$LM
[1] 19.2033


$pvalue
[1] 6.761719e-05

To check whether the mean of the residuals is 0, and to see their standard deviation, the following code works:


> mean(srp1.gls$res[-(1)])
[1] 0.003354243
> sd(srp1.gls$res[-(1)])
[1] 0.3666269

Other tests like the Breusch-Pagan and Goldfeld-Quandt tell us whether our residual variance is stable or not, i.e. whether heteroskedasticity is present. In order for these to work you have to load the lmtest package. Also, you can only run these on lm objects (your ordinary least squares regressions); for any generalized least squares regression you'll have to perform these tests manually, and if you know of an easier or softer way please share.


> library(lmtest)
> bptest(srp1.lm)


studentized Breusch-Pagan test


data:  srp1.lm 
BP = 48.495, df = 12, p-value = 2.563e-06


> gqtest(srp1.lm)


Goldfeld-Quandt test


data:  srp1.lm 
GQ = 0.1998, df1 = 40, df2 = 40, p-value = 1
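
Since bptest() and gqtest() only accept lm objects, a rough manual Breusch-Pagan-style check on a GLS fit might look like the sketch below. It assumes the regressors sit in a data frame called X1 (a hypothetical name) and is only a back-of-the-envelope approximation, not the exact procedure used for the results in this series:

# Auxiliary regression of squared GLS residuals on the regressors;
# the LM statistic is n * R^2, approximately chi-squared with k degrees of freedom
u2  <- residuals(srp1.gls)^2
aux <- lm(u2 ~ ., data = X1)        # X1: data frame holding the regressors (assumed)
LM  <- length(u2) * summary(aux)$r.squared
pchisq(LM, df = ncol(X1), lower.tail = FALSE)   # approximate p-value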


You can also use the Durbin-Watson test to check for first-order autocorrelation:


> dwtest(srp1.lm)


Durbin-Watson test


data:  srp1.lm 
DW = 1.4862, p-value = 0.0001955
alternative hypothesis: true autocorrelation is greater than 0 

Wish to get confidence intervals for your parameter estimates? Then use the confint() function as shown below for the Generalized Least Squares regression on long-term risk premia from 2001-2011.


> confint(p2lrp.gls)
                      2.5 %         97.5 %
yc              -0.1455727340   0.1498852728
default          0.2994818014   1.0640354237
Volatility       0.0336077958   0.0617798767
CorporateProfit -0.0010916473   0.0006628209
FF              -0.1788624533   0.0931406285
ER               0.0001539035   0.0016060804
Fedmbs          -0.0061554994   0.0085638593
Support         -0.1499342096   0.1615652273
FedComm         -0.0108567077   0.0750407328
FedGdp          -0.1347070955   0.2528217710
ForeignDebt     -0.0441198164   0.1042805549
govcredit        0.1090847204   0.6796839003
FedBalance      -2.0940925835   0.0370114069
UGAP            -0.4821566147   0.3188891550
OGAP            -0.2239749029   0.1073611677

Another nice feature is finding the log-likelihood of your estimation:

> logLik(lrp2.lm)
'log Lik.' 23.05106 (df=17)

Want to see if you have a unit root in your residual values? Then perform the augmented Dickey-Fuller test. For this you'll have to load the 'tseries' package.

> library(tseries)
> adf.test(lrp2.gls$res[-(1:4)])

                              Augmented Dickey-Fuller Test

data:  lrp2.gls$res[-(1:4)]
Dickey-Fuller = -7.4503, Lag order = 3, p-value = 0.01
alternative hypothesis: stationary

Warning message:
In adf.test(lrp2.gls$res[-(1:4)]) : p-value smaller than printed p-value
> adf.test(lrp2.lm$res)




I hope this mini-series has been informative to all who tuned in. For more info on anything you see here, please don't be shy about commenting, and keep dancin',

Steven J.

Sunday, January 1, 2012

Monetary Policy & Credit Easing pt. 7: R Econometrics Tests

In post 6 we introduced some econometrics code that will help those working with time series gain asymptotically efficient results. In this post we look at the different commands and libraries necessary for testing our assumptions.

Testing our Assumptions and Meeting the Gauss-Markov Theorem

In this section we will seek to test and verify the assumptions of the simple linear regression model.  These assumptions are laid out as follows and are extracted from Hill, Griffiths and Lim 2008:

SR1. The value of y, for each value of x, is
y= ß_{1}+ß_{2}x+µ
SR2. The expected value of the random error µ is
E(µ)=0
which is equivalent to assuming
E(y)= ß_{1}+ß_{2}x
SR3. The variance of the random error µ is
var(µ)=sigma^2 = var(y)
The random variables y and µ have the same variance because they differ only by a constant.
SR4. The covariance between any pair of random errors µ_{i} and µ_{j} is
cov(µ_{i}, µ_{j})=cov(y_{i},y_{j})=0
SR5. The variable x is not random and must take at least two different values.
SR6. The values of µ are normally distributed about their mean
µ ~ N(0, sigma^2)
if the y values are normally distributed and vice-versa 
Central to this topic's objective is meeting the conditions set forth by the Gauss-Markov theorem. The Gauss-Markov theorem states that if the error term is stationary and has no serial correlation, then the OLS parameter estimate is the Best Linear Unbiased Estimate, or BLUE, which implies that all other linear unbiased estimates will have a larger variance. An estimator that has the smallest possible variance is called an "efficient" estimator. In essence, the Gauss-Markov theorem requires that the error term have no structure: the residuals must exhibit no trend and their variance must be constant through time.
When the error term in the regression does not satisfy the Gauss-Markov assumptions, OLS is still unbiased but fails to be BLUE, as it no longer gives the most efficient parameter estimates. In this scenario, a strategy that transforms the regression's variables so that the error has no structure is in order. In time-series analysis, autocorrelation between the residual values is a common problem. There are several ways to approach the transformations needed to ensure BLUE estimates; the previous post used the following method to gain asymptotic efficiency and improve our estimates:
1. Estimate the OLS regression.

2. Fit the OLS residuals to an AR(p) process using the Yule-Walker method and find the value of p.

3. Re-estimate the model using Generalized Least Squares fit by maximum likelihood, using the p estimated in step 2 as the order of the correlated error term.

4. Fit the GLS estimated residuals to an AR(p) process and use the estimated AR coefficients as the final parameter estimates for the error term.

What have we done? First we have to find out what the error term's autocorrelation process is; what order is p? To find this out we fit the OLS residuals to an AR(p) using the Yule-Walker method. Then we take the order p of our estimated error term and run a GLS regression with an AR(p) error term. This gives us better estimates for our model: when the error structure is correctly specified, GLS estimators are asymptotically more efficient than OLS estimates. If you look at every single regression, the GLS estimator with a twice-iterated AR(p) error term consistently results in a lower standard deviation of the residuals, so the model has gained efficiency, which translates into improved confidence intervals. Additionally, by fitting the GLS residuals to an AR(p) we remove any autocorrelation (or structure) that may have been present in the residuals. A minimal code sketch of the procedure is given below.
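
The whole recipe can be sketched in R roughly as follows. This is only an illustration with hypothetical names (a data frame dat with response y and regressors x1 and x2); it is not the exact code behind the regressions reported in this series.

# Sketch of the four-step OLS -> AR(p) -> GLS procedure described above
library(nlme)   # for gls() and corARMA()

# 1. Estimate the OLS regression
ols.fit <- lm(y ~ x1 + x2, data = dat)

# 2. Fit the OLS residuals to an AR(p) process via Yule-Walker and read off p
ar.ols <- ar.yw(residuals(ols.fit))
p <- ar.ols$order          # assumes p >= 1; if p = 0 there is nothing to correct

# 3. Re-estimate by GLS (maximum likelihood) with an AR(p) error term of order p
gls.fit <- gls(y ~ x1 + x2, data = dat,
               correlation = corARMA(p = p, q = 0),
               method = "ML")

# 4. Fit the GLS residuals to an AR(p) process for the final error-term estimates
ar.gls <- ar.yw(residuals(gls.fit))
ar.gls$ar                  # estimated AR coefficients of the error term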

Testing for Model Misspecification and Omitted Variable Bias

The Ramsey RESET test (Regression Specification Error Test) is designed to detect omitted variable bias and incorrect functional form. Rejection of H_{0} implies that the original model is inadequate and can be improved. A failure to reject H_{0} conveys that the test has not been able to detect any misspecification.

Unfortunately, our models of short-term risk premia over both estimation periods reject the null hypothesis, and thus suggest that a better model is out there somewhere. Correcting for this functional misspecification or omitted variable bias will not be pursued here, but we must keep in mind that our model can be improved upon and is thus not BLUE.

In R you can run the Ramsey RESET test on standard lm objects using the lmtest library:

>library(lmtest)

> resettest(srp1.lm)

RESET test

data:  srp1.lm 
RESET = 9.7397, df1 = 2, df2 = 91, p-value = 0.0001469

For GLS objects, however, you'll need to do it manually, and that procedure will not be outlined here. Although if you really want to know, please feel free to email or leave a comment below.

Addressing Multicollinearity

In the original formulation of the model there existed an independent variable called CreditMarketSupport that was very similar to our FedBalance variable. Both variables are percentages and shared the same numerator while also having very similar denominators. As a result we suffered from near-exact collinearity, as the correlation between these two variables was nearly one.

> cor(FedBalance1,CreditMarketSupport1)

0.9994248

With such severe collinearity we were unable to obtain sensible least squares estimates of our ß coefficients, and these variables were behaving opposite to what we were expecting. This violated one of our least squares assumptions, SR5, which in its multiple-regression form states that the values of x_{ik} are not exact linear functions of the other explanatory variables. To remedy this problem, we removed CreditMarketSupport from the models, and we are then able to achieve BLUE estimates.
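
As an aside, a quick way to screen for this sort of problem more generally is to look at variance inflation factors on the OLS fit. This isn't something from the original analysis; the sketch below assumes the car package is installed.

library(car)   # provides vif(); not used in the original posts
vif(srp1.lm)   # values far above ~10 flag regressors that are close to collinear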

Suspected Endogeneity

In our estimation of long-term risk premia over the first time period we suspect endogeneity in the cyclical variable, the Output Gap. To remedy this we replace it with an instrumental variable, the percentage change in the S&P 500, and perform the Hausman test, which is laid out as follows:

H_{0}: delta = 0 (no correlation between x_{i} and µ_{i})

H_{1}: delta ≠ 0 (correlation between x_{i} and µ_{i})

When we perform the Hausman test using the S&P 500 as our instrumental variable, our delta ≠ 0 and is statistically significant. This means that our Output Gap variable is indeed endogenous and correlated with the residual term. If you want to learn more about the Hausman test and how to perform it in R, please leave a comment or email me and I'll make sure to get the code over to you; a rough sketch of the idea is also given below. When we perform the two-stage least squares regression to correct for this, not a single term is significant. This can reasonably be attributed to the problem of weak instruments; the two-stage least squares estimation is provided below. Since the percentage change in the S&P 500 had a correlation of only 0.110954 with the Output Gap, there is strong reason to suspect that weak instruments are the source of the problem. We will not go looking for a better instrument to stand in for the Output Gap; instead we will keep in mind that we have an endogenous variable when interpreting our coefficient estimates, which will now end up being slightly biased.
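
Here is a minimal sketch of the regression-based (Durbin-Wu-Hausman) version of the test, reusing the variable names from the 2SLS call below and treating the Output Gap (OGAP1) as the suspect regressor, as in the text. It illustrates the idea under those assumptions; it is not the exact code behind the reported result.

# Stage 1: regress the suspect regressor on the instrument plus the remaining
# exogenous variables, and keep the residuals
stage1 <- lm(OGAP1 ~ sp500ch + yc1 + CP1 + FF1 + default1 + Support1 + ER1 +
               FedGDP1 + FedBalance1 + govcredit1 + ForeignDebt1 + UGAP1)
vhat <- residuals(stage1)

# Stage 2: add those residuals to the original model; a significant coefficient
# on vhat (the delta above) is evidence that the regressor is endogenous
aug <- lm(lrp1 ~ yc1 + CP1 + FF1 + default1 + Support1 + ER1 + FedGDP1 +
            FedBalance1 + govcredit1 + ForeignDebt1 + UGAP1 + OGAP1 + vhat)
summary(aug)   # inspect the t-test on the vhat coefficient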

Below is how to perform a two-stage least squares regression in R when you're replacing an endogenous variable with an exogenous instrument. First you'll need to load the sem library. In the regression below, the first formula includes all the variables from the original model, and the second lists all of our exogenous variables together with the instrument, which in this case is just the percentage change in the S&P 500.

> library(sem)
> tSLRP1<-tsls(lrp1~yc1+CP1+FF1+default1+Support1+ER1+FedGDP1+FedBalance1+govcredit1+ForeignDebt1+UGAP1+OGAP1,~ yc1+CP1+FF1+default1+Support1+ER1+FedGDP1+FedBalance1+govcredit1+ForeignDebt1+sp500ch+OGAP1 )

> summary(tSLRP1)

 2SLS Estimates

Model Formula: lrp1 ~ yc1 + CP1 + FF1 + default1 + Support1 + ER1 + FedGDP1 + 
    FedBalance1 + govcredit1 + ForeignDebt1 + UGAP1 + OGAP1

Instruments: ~yc1 + CP1 + FF1 + default1 + Support1 + ER1 + FedGDP1 + FedBalance1 + 
    govcredit1 + ForeignDebt1 + sp500ch + OGAP1

Residuals:
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
 -9.030  -1.870   0.021   0.000   2.230   7.310 

             Estimate Std. Error  t value Pr(>|t|)
(Intercept)  -5.28137   44.06906 -0.11984   0.9049
yc1          -1.48564   10.60827 -0.14005   0.8889
CP1          -0.01584    0.09206 -0.17204   0.8638
FF1           0.20998    2.43849  0.08611   0.9316
default1     -7.16622   65.35728 -0.10965   0.9129
Support1      6.39893   47.72244  0.13409   0.8936
ER1           4.56290   35.91837  0.12704   0.8992
FedGDP1       1.86392    9.16081  0.20347   0.8392
FedBalance1   0.73087   12.96474  0.05637   0.9552
govcredit1    0.17051    0.89452  0.19062   0.8492
ForeignDebt1 -0.22396    1.41749 -0.15799   0.8748
UGAP1         4.55897   35.33446  0.12902   0.8976
OGAP1         0.01331    0.09347  0.14235   0.8871

Residual standard error: 3.3664 on 93 degrees of freedom

Notice that our model now doesn't have any significant terms. This is why we will choose to ignore the endogeneity of our Output Gap (and probably Unemployment Gap) variables. Correcting for endogeneity does more harm than good in this case.

Results and Concluding Thoughts

As this paper hopefully shows, the Fed's actions did directly impact the easing of broader credit conditions in the financial markets.

Over our first estimation period, from 1971 to 1997, we find that the Fed's support of depository institutions as a percentage of savings and time deposits is positively related to short-term risk premia. Specifically, we find that a 1 percentage point increase in Support leads to a 2.1 percent increase in short-term risk premia. This was as expected, because depository institutions would only borrow from the Fed if no other options existed. We also find that a 1 percentage point increase in the federal funds rate leads to a .19 percentage point increase in short-term risk premia. This is consistent with our original hypothesis, as an increased FF puts upward pressure on short-term rates like the 3-month commercial paper rate, resulting in a widened spread. With respect to long-term risk premia, we find that a 1 percentage point increase in FF leads the long-term risk premia to decrease by .66 percentage points, and a 1 percent increase in the federal funds rate leads to a .07 decrease in the long-term risk premia.

Over our second estimation period, the composition of the Fed's balance sheet is considered. We see that the CCLF did decrease short-term risk premiums, with every one percent increase translating to a decrease in short-term risk premia of .1145 percentage points. Another important result is that Fed purchases of agency debt and agency MBS had a significant, although almost negligible, effect on short-term risk premia. One surprising result from the estimation of the long-term risk premia is that our Fed balance sheet size variable has a sign opposite of what we expected, and its significance is particularly surprising. This may be expected since this period is largely characterized by both a shrinking balance sheet and narrowing risk premia, as investments were considered relatively safe. Towards the end of the period, however, risk premiums shot up, and only afterwards did the size of the balance sheet also increase; thus the sample may place too much weight on the beginning of the time period and not enough on the end. This is a reasonable assumption given that our estimate of the balance sheet size showed a large negative impact on risk premia over our longer estimation period.


Please, people, keep dancin', and we'll delve further into some additional econometrics tests next week.