tag:blogger.com,1999:blog-54278489543225933572018-06-28T02:12:03.981-07:00The Dancing EconomistThis is the official blog of Steven Sabol. The Dancing Economist.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.comBlogger129125tag:blogger.com,1999:blog-5427848954322593357.post-50545249500653612792012-02-03T09:53:00.000-08:002012-02-03T09:53:38.273-08:00Employment Situation: An Inside LookCheck out the divergence between the unemployed aged 20-24 and those aged 25-34. Currently there are about 1,000,000 more unemployed people in the 25-34 age bracket than in the 20-24 age bracket.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-gCwSUvEqZTU/TywXy1IOgLI/AAAAAAAAAM0/i96UkTi_nog/s1600/spreadunr.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://2.bp.blogspot.com/-gCwSUvEqZTU/TywXy1IOgLI/AAAAAAAAAM0/i96UkTi_nog/s640/spreadunr.png" width="640" /></a></div><br /><br /><br />These series used to coincide, but the spread between them is now widening, which suggests that younger people may have an advantage when it comes to getting hired. They're less picky and more willing to accept anything with some dollar signs attached to it than their older, more demanding unemployed counterparts. 
In the graph below, the blue line represents the unemployed aged 25-34 and the black line is for the 20-24 age range.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-wf_zrNfnyO4/TywY9uX8kNI/AAAAAAAAAM8/TuEbpPMSnR8/s1600/gogoo.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://3.bp.blogspot.com/-wf_zrNfnyO4/TywY9uX8kNI/AAAAAAAAAM8/TuEbpPMSnR8/s640/gogoo.png" width="640" /></a></div><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Check out the graph below, which shows the unemployment rate for those over 25 with a bachelor's degree, those without one, and those who didn't graduate high school. Getting a bachelor's degree is not a sufficient condition for employment, but as the graph below shows, it does bring greater job security.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-g_ctiyoKL3g/TywdKMrkUfI/AAAAAAAAANM/nWhT0rLiHPU/s1600/unrate1.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://2.bp.blogspot.com/-g_ctiyoKL3g/TywdKMrkUfI/AAAAAAAAANM/nWhT0rLiHPU/s640/unrate1.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>Additionally, check out the spread between those with a bachelor's degree and those who graduated high school. Notice how this spread has widened. 
This suggests that those with bachelor's degrees are less susceptible to economic fluctuations and that working through those student loans is worth it.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-jqOnvSsG_Bc/TyweP14UmOI/AAAAAAAAANU/e01Fron5uhI/s1600/unrate2.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://2.bp.blogspot.com/-jqOnvSsG_Bc/TyweP14UmOI/AAAAAAAAANU/e01Fron5uhI/s640/unrate2.png" width="640" /></a></div><br /><br /><br />The unemployment rate for those who graduated high school is currently about 4 percentage points higher than for those who got their bachelor's. Lesson of the day: read more books and get some education. I've got to keep dancin' and so should you!<br /><br />Steven J.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com13tag:blogger.com,1999:blog-5427848954322593357.post-44482049898393849362012-02-02T11:12:00.000-08:002012-02-02T11:12:01.125-08:00Unemployment Insurance Claims: Healing Slowly but Surely.The following graph compares the 4-week moving average of initial claims from the 2007 peak with the one from the 1981 recession. As you can see, jobless claims have been much more persistent than in previous recessions.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-VCKCrgWf7ak/TyrcnUDOSnI/AAAAAAAAAMs/XGPSK87VUbw/s1600/4wma.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://4.bp.blogspot.com/-VCKCrgWf7ak/TyrcnUDOSnI/AAAAAAAAAMs/XGPSK87VUbw/s640/4wma.png" width="640" /></a></div><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />The blue line is from 2007 and the dashed red line is from 1981. The x-axis is in weeks after the peak. 
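As an aside, a trailing 4-week moving average like the one graphed here is a one-liner in R; the weekly claims figures below are made-up placeholders, not the actual FRED series:

```r
# Trailing 4-week moving average of weekly initial claims.
# 'claims' is an illustrative placeholder, not the actual FRED data.
claims <- c(620, 640, 655, 630, 610, 600, 585, 590)  # thousands of claims

# sides = 1 averages each week with the three weeks before it,
# so the first three entries are NA until a full window exists.
ma4 <- stats::filter(claims, rep(1 / 4, 4), sides = 1)
ma4
```

The first non-NA value is simply mean(claims[1:4]); plotting two such smoothed series aligned at their respective recession peaks reproduces the comparison above.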
As the graph suggests, we still have a ways to go before we get back to normal levels, but it also shows that much healing has already taken place. This is undoubtedly good news for Friday's Employment Situation.<br /><br />Keep Dancin'<br /><br />Steven J.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-35808057515538804632012-01-13T00:24:00.000-08:002012-01-13T00:29:35.491-08:00Just keeps falling doesn't it?Check out the following graph, which charts how the average sales price of a new home moves in the 27 months after a peak in economic activity. As you will notice (this covers recessions from 1980 on), the average has now fallen further than in any previous recession over that span, as indicated by the dashed black line of death.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-PVUhVWD8E6s/Tw_of8e0GnI/AAAAAAAAAMc/1K4VWdSpHCQ/s1600/avgsp.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://1.bp.blogspot.com/-PVUhVWD8E6s/Tw_of8e0GnI/AAAAAAAAAMc/1K4VWdSpHCQ/s640/avgsp.png" width="640" /></a></div><br /><br />The next graph truly depicts the demoralizing collapse in new home prices, while also capturing how overinflated prices really were. It's a sobering picture, to say the least. Notice how prices gave a false sense of rebound and then just kept on falling. Depression economics, people: if you haven't figured it out already, just assume the worst and you're probably right. This is the type of thing that makes Roubini so freakin' popular and prophetic-sounding, although any proper student of financial crises would already know to expect such things. 
Readers of this blog should definitely have learned to expect such things.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-pHMl7GzsaqY/Tw_osr22SkI/AAAAAAAAAMk/0I4IepypACw/s1600/aspnh1.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://1.bp.blogspot.com/-pHMl7GzsaqY/Tw_osr22SkI/AAAAAAAAAMk/0I4IepypACw/s640/aspnh1.png" width="640" /></a></div><br /><br /><br />Keep Dancin'<br /><br />Steven J.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com2tag:blogger.com,1999:blog-5427848954322593357.post-45689762632930903282012-01-09T11:49:00.000-08:002012-01-09T11:49:16.114-08:00Consumer Sentiment: WOMP.Consumer Sentiment is a measure of how people feel about their wealth and how happy they are. What the below graph reveals is that people feel worse than average 27 months after the peak in economic activity as defined by the National Bureau of Economic Research (NBER).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-LV8beKwEjgI/TwpvxoJqs1I/AAAAAAAAAMM/A0Me1ksT0PY/s1600/sent.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="384" src="http://3.bp.blogspot.com/-LV8beKwEjgI/TwpvxoJqs1I/AAAAAAAAAMM/A0Me1ksT0PY/s640/sent.png" width="640" /></a></div><br />The graph below reveals that the general populace isn't feeling that its situation is all that hunky-dory. This may be due to a bunch of things: their net worth has fallen along with their home values, they are in serious debt, their spouse left them for a younger version of themselves, and maybe they're even unemployed. 
But whatever the reason, the bottom line is that people still feel like doo doo.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-zdIG7SIzVj4/Twpvx1C-NmI/AAAAAAAAAMU/oxOmw_v7hzU/s1600/sent1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="384" src="http://2.bp.blogspot.com/-zdIG7SIzVj4/Twpvx1C-NmI/AAAAAAAAAMU/oxOmw_v7hzU/s640/sent1.png" width="640" /></a></div><br />Consumer sentiment is low. That's undeniable. My guess is that people would feel significantly better if jobs were more readily available and their real disposable incomes were higher, maybe even significantly higher. As mentioned in a previous post, wage growth has been rather stagnant.<br /><br />Keep dancin'<br /><br />Steven J.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-19327964544059352732012-01-07T07:32:00.000-08:002012-01-07T07:32:05.593-08:00Might as well buy a crib...oh wait I'm broke.<div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;">The National Association of Realtors has put out the Housing Affordability Index since the early 1980's. 
The higher the index value, the more easily a household earning the median income can purchase and make mortgage payments on a home.</span></span></div><span style="background-color: white; color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif; font-size: 12px; text-align: left;"><br /></span><br /><div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;"><a href="http://research.stlouisfed.org/fred2/series/COMPHAI">FRED provides the following description of the series:</a></span></span></div><blockquote class="tr_bq"><span style="background-color: white; color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif; font-size: 12px; text-align: left;">Measures the degree to which a typical family can afford the monthly mortgage payments on a typical home. </span><br style="background-color: white; color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif; font-size: 12px; text-align: left;" /><span style="background-color: white; color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif; font-size: 12px; text-align: left;">Value of 100 means that a family with the median income has exactly enough income to qualify for a mortgage on a median-priced home. An index above 100 signifies that family earning the median income has more than enough income to qualify for a mortgage loan on a median-priced home, assuming a 20 percent down payment. For example, a composite housing affordability index (COMPHAI) of 120.0 means a family earning the median family income has 120% of the income necessary to qualify for a conventional loan covering 80 percent of a median-priced existing single-family home. 
An increase in the COMPHAI then shows that this family is more able to afford the median priced home.</span></blockquote><div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;">As you can see, the 2007 recession has clearly brought homes to their all-time most affordable levels. If I had cash I might buy a home right now, but I don't, and the 14 million or so unemployed don't either. Womp.</span></span></div><div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;"><br /></span></span></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-nTtcPmKtqcw/TwhkczfJxWI/AAAAAAAAAL8/VM3uFnoZbEo/s1600/HAI.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="384" src="http://3.bp.blogspot.com/-nTtcPmKtqcw/TwhkczfJxWI/AAAAAAAAAL8/VM3uFnoZbEo/s640/HAI.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-V3Sgu0QyRqA/TwhkdL6NixI/AAAAAAAAAME/azlaeaWq2hM/s1600/HAI1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="384" src="http://1.bp.blogspot.com/-V3Sgu0QyRqA/TwhkdL6NixI/AAAAAAAAAME/azlaeaWq2hM/s640/HAI1.png" width="640" /></a></div><div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;"><br /></span></span></div><div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;">What a value of around 195 means is that a family earning a median income has 195% of the income necessary to qualify </span></span><span style="background-color: white; color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif; font-size: 12px;">for a conventional loan covering 80 percent of a median-priced existing single-family home. By historical standards, never has it been so much both the best and the worst time to buy a home: the best opportunity, under the worst circumstances. </span></div><div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;"><br /></span></span></div><div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;">Keep dancin'</span></span></div><div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;"><br /></span></span></div><div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;">Steven J.</span></span></div><div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;"><br /></span></span></div><div style="text-align: left;"><span style="color: #333333; font-family: 'Lucida Grande', Lucida, verdana, arial, sans-serif;"><span style="font-size: 12px;"><br /></span></span></div>Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com11tag:blogger.com,1999:blog-5427848954322593357.post-80140291034468361192012-01-06T09:18:00.000-08:002012-01-06T09:18:26.058-08:00Civilian Unemployment: PersistenceToday the Civilian Unemployment numbers were released, and all they did was verify one thing: that recoveries in the United States since 1990 have been jobless ones. Check out the following graph, which takes every post-WW2 recession and averages their unemployment numbers from the peak to 27 months out. 
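The peak-and-average construction used in these graphs — line every recession up at its NBER peak, then average the months that follow — can be sketched in R. The unemployment series and peak dates below are hypothetical placeholders, not the actual FRED data:

```r
# Align a monthly series at each recession peak, then average across episodes.
# 'unrate' and 'peaks' are illustrative placeholders, not actual data.
set.seed(1)
unrate  <- cumsum(rnorm(600, 0, 0.1)) + 6  # fake monthly unemployment rate (%)
peaks   <- c(100, 250, 400, 520)           # fake NBER peak months (row indices)
horizon <- 27                              # months to track after each peak

# One row per recession: the series from its peak out to 27 months.
aligned  <- t(sapply(peaks, function(p) unrate[p:(p + horizon)]))
avg_path <- colMeans(aligned)              # the averaged post-peak path

length(avg_path)  # 28 points: the peak month plus 27 months out
```

Plotting avg_path next to the current episode's own post-peak path gives the kind of comparison shown in the graphs.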
Notice that this one shows the greatest persistence of high unemployment of them all.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-x1GgJUy1yVQ/Twcq12tVAVI/AAAAAAAAALs/4Xel8wUkIBI/s1600/unrate1.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://4.bp.blogspot.com/-x1GgJUy1yVQ/Twcq12tVAVI/AAAAAAAAALs/4Xel8wUkIBI/s640/unrate1.png" width="640" /></a></div><br /><br /><br />Additionally, check out the next graph, which uses just the recessions from 1990 and beyond. These are all the recessions that have been characterized by jobless recoveries. The thing to notice here is not the level of the unemployment rate, but that in these recoveries the unemployment rate also failed to drop significantly 27 months or so after the peak. This recession has been deeper (a cyclical factor), which explains why unemployment is so freakin' high, but the persistence of unemployment has little to do with cyclical factors; it seems more structural reforms may be necessary. This isn't house lock, people; this is something more than that. This is the structure of unemployment benefits and the nature of profit-seeking firms that want to please shareholders. Being lean and mean is attractive for companies that face constant uncertainty, especially when growth would be a miracle occurrence. <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-KnRUSx4IDco/TwcsqlngArI/AAAAAAAAAL0/Z8L5Bh3I04U/s1600/unrate2.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://3.bp.blogspot.com/-KnRUSx4IDco/TwcsqlngArI/AAAAAAAAAL0/Z8L5Bh3I04U/s640/unrate2.png" width="640" /></a></div><br /><br /><br />The last graph shows that while unemployment does still remain stubbornly high, at least it is falling. 
Admittedly, this may be because people are simply dropping out of the labor force; a closer look into the Employment Situation would be necessary to reveal the details.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-bxm410B7IK8/Twcq1giNQDI/AAAAAAAAALk/oOMUiLl3H2c/s1600/unrate.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://2.bp.blogspot.com/-bxm410B7IK8/Twcq1giNQDI/AAAAAAAAALk/oOMUiLl3H2c/s640/unrate.png" width="640" /></a></div><br />keep dancin'<br /><br />Steven J.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com20tag:blogger.com,1999:blog-5427848954322593357.post-63168309388655706122012-01-05T09:00:00.000-08:002012-01-05T09:00:18.753-08:00Real Hourly Wages Per Hour: Poo Poo Platter PerformanceThe past recession knocked real wages back to their 2005 levels and put a serious squeeze on personal balance sheets. As the graph below shows, real compensation per hour is growing more slowly now than it was three years after the start of any previous recession. That is undoubtedly, by historical standards, a terrible thing. <div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-pWOTCpsr_cU/TwXW1wboDlI/AAAAAAAAALU/wVRe8aA8BiY/s1600/RCPH.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://3.bp.blogspot.com/-pWOTCpsr_cU/TwXW1wboDlI/AAAAAAAAALU/wVRe8aA8BiY/s640/RCPH.png" width="640" /></a></div><div>The graph below shows that we have indeed had some wage growth; however, the graph above reminds us that by historical standards (post-WW2) it has been pathetic. 
</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-3wjBh4YMTcc/TwXW2NxXtQI/AAAAAAAAALc/s1YqeITY_Bk/s1600/RCPH2.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://3.bp.blogspot.com/-3wjBh4YMTcc/TwXW2NxXtQI/AAAAAAAAALc/s1YqeITY_Bk/s640/RCPH2.png" width="640" /></a></div><div><br /></div><div>Keep dancin'</div><div><br /></div><div>Steven J.</div>Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com2tag:blogger.com,1999:blog-5427848954322593357.post-34718740371250817992012-01-04T17:03:00.000-08:002012-01-04T17:04:40.552-08:00Detroit UnemploymentToday's graph highlights how much this recession has impacted unemployment in Detroit versus the recessions of 2001 and 1990. As you can see, unemployment has risen and remains unholy.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-JlMlLhbu4a8/TwT1jTtJjvI/AAAAAAAAAK8/o6dwm2jaK58/s1600/DetU.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://3.bp.blogspot.com/-JlMlLhbu4a8/TwT1jTtJjvI/AAAAAAAAAK8/o6dwm2jaK58/s640/DetU.png" width="640" /></a></div><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Detroit has been hit particularly hard due to the already struggling automotive industry, so the recession just brought about more reasons for layoffs. 
The last four unemployment numbers have been very positive, however.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-3SojQpg755g/TwT21_AWniI/AAAAAAAAALI/wM2mpi0QKjY/s1600/DetU2.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://1.bp.blogspot.com/-3SojQpg755g/TwT21_AWniI/AAAAAAAAALI/wM2mpi0QKjY/s640/DetU2.png" width="640" /></a></div><br /><br /><br />Keep dancin'<br /><br />Steven J.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-43518530816750814222012-01-03T11:22:00.000-08:002012-01-03T11:22:56.475-08:00Graphical Representations of Recessionary Woes: ISM PMIToday, the Institute for Supply Management's Purchasing Managers Index was released. As you can see from the graph, we are finally above the average PMI for 27 months after a recession begins. That's something to be cheery about.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-48jtVjgzT_E/TwNVA4gEh6I/AAAAAAAAAKk/zOUVE78yIQY/s1600/ISMPMI.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://3.bp.blogspot.com/-48jtVjgzT_E/TwNVA4gEh6I/AAAAAAAAAKk/zOUVE78yIQY/s640/ISMPMI.png" width="640" /></a></div><br /><br /><br />Overall, though, the numbers aren't that spectacular and remain, well, average. The readings for today's release are clearly above 50, which means more managers plan on expanding their operations than contracting them. Overall, this number casts good news over the sputtering U.S. economy. 
<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-I6qN17RnYuA/TwNVKU2-4NI/AAAAAAAAAKw/KQoggX366ok/s1600/iSMPMI2.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://3.bp.blogspot.com/-I6qN17RnYuA/TwNVKU2-4NI/AAAAAAAAAKw/KQoggX366ok/s640/iSMPMI2.png" width="640" /></a></div><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Please Keep Dancin'<br /><br />Steven J.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-44814913180849775102012-01-03T10:55:00.000-08:002012-01-03T10:55:00.029-08:00Graphical Representations of Recessionary Woes: Oil PricesToday on the Dancing Economist we will exploit a graphical tool in FRED that allows us to benchmark movements in a time series against its historical recessionary past. Today's graph is of the West Texas Intermediate spot oil price.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-GB-scdQJHxY/TwJlOK_X1gI/AAAAAAAAAKY/dkgqQGoClYU/s1600/OILPRICE.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="384" src="http://2.bp.blogspot.com/-GB-scdQJHxY/TwJlOK_X1gI/AAAAAAAAAKY/dkgqQGoClYU/s640/OILPRICE.png" width="640" /></a></div><br /><br /><br />Notice how in this past recession oil prices didn't climb as high as they historically have, and how they have also fallen further and rebounded with less vigor than usual. In fact, as the above graph shows, they are lower now than at any comparable point after any previous recession. The recessionary periods I have chosen to include in the min and max calculations are all but the 1973 one, as the artificial price ceiling in force then distorts the numbers. 
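The min/max benchmarking described in this post can be sketched the same way in R; the matrix of past post-peak price paths and the current path below are hypothetical placeholders, not actual WTI data:

```r
# Compare the current post-peak path of a series against the range
# (min and max at each horizon) traced out after earlier recessions.
# 'past_paths' and 'current' are illustrative, not actual WTI prices.
set.seed(2)
past_paths <- matrix(runif(5 * 28, 20, 140), nrow = 5)  # 5 recessions x 28 months
current    <- runif(28, 20, 140)                        # path after the latest peak

lo <- apply(past_paths, 2, min)  # historical minimum at each month after the peak
hi <- apply(past_paths, 2, max)  # historical maximum at each month after the peak

below_all <- current < lo        # TRUE where the current path sets a new low
sum(below_all)                   # months where prices sit below every past episode
```

Shading the band between lo and hi and overlaying the current path gives the FRED-style picture described above.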
Keep dancin' and I'll keep posting,<br /><br />Steven J.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com7tag:blogger.com,1999:blog-5427848954322593357.post-90065308947926626972012-01-02T14:16:00.000-08:002012-01-02T14:16:22.273-08:00Monetary Policy &amp; Credit Easing pt. 8: Econometrics Tests in RHello, folks, it's time to cover some important econometric tests you can do in R.<br /><br /><span style="font-family: inherit;"><span style="background-color: white; line-height: 19px;">The </span><span style="background-color: white; line-height: 19px;">Akaike information criterion</span><span style="background-color: white; line-height: 19px;"> is a measure of the relative goodness of fit of a statistical model. If you have 10 models and order them by AIC, the one with the smallest AIC is your best model, ceteris paribus.</span></span><br />The following code computes the AIC and a similar criterion called the BIC:<br /><i><br /></i><br /><br /><i>> AIC(srp1.gls)</i><br /><i>[1] 100.7905</i><br /><i><br /></i><br /><i>> BIC(srp1.gls)</i><br /><i>[1] 140.7421</i><br /><br />Say we wish to see if our model has an error term that follows a relatively normal distribution. For this we can perform the Jarque-Bera test, which checks kurtosis as well as skewness. This function requires that you load the FitAR package.<br /><br /><br /><i>> JarqueBeraTest(srp1.gls$res[-(1)])</i><br /><i>$LM</i><br /><i>[1] 19.2033</i><br /><i><br /></i><br /><i>$pvalue</i><br /><i>[1] 6.761719e-05</i><br /><br />To see whether the mean of the residuals is 0, and to find their standard deviation, the following code works:<br /><br /><br /><i>> mean(srp1.gls$res[-(1)])</i><br /><i>[1] 0.003354243</i><br /><i>> sd(srp1.gls$res[-(1)])</i><br /><i>[1] 0.3666269</i><br /><br />Other tests, like the Breusch-Pagan and Goldfeld-Quandt, tell us whether heteroskedasticity is present, that is, whether our residual variance is stable or not. 
In order for these to work you have to load the lmtest package. Also, you can only run these on lm objects, that is, your ordinary least squares regressions; for generalized least squares regressions you'll have to perform these tests manually (if you know of an easier or softer way, please share).<br /><br /><br /><i>> bptest(srp1.lm)</i><br /><i><br /></i><br /><i><span class="Apple-tab-span" style="white-space: pre;"> </span>studentized Breusch-Pagan test</i><br /><i><br /></i><br /><i>data: srp1.lm </i><br /><i>BP = 48.495, df = 12, p-value = 2.563e-06</i><br /><i><br /></i><br /><i>> gqtest(srp1.lm)</i><br /><i><br /></i><br /><i><span class="Apple-tab-span" style="white-space: pre;"> </span>Goldfeld-Quandt test</i><br /><i><br /></i><br /><i>data: srp1.lm </i><br /><i>GQ = 0.1998, df1 = 40, df2 = 40, p-value = 1</i><br /><br /><br />You can also use the Durbin-Watson test to check for first-order autocorrelation:<br /><br /><br /><i>> dwtest(srp1.lm)</i><br /><i><br /></i><br /><i><span class="Apple-tab-span" style="white-space: pre;"> </span>Durbin-Watson test</i><br /><i><br /></i><br /><i>data: srp1.lm </i><br /><i>DW = 1.4862, p-value = 0.0001955</i><br /><i>alternative hypothesis: true autocorrelation is greater than 0 </i><br /><br />Wish to get confidence intervals for your parameter estimates? 
Then use the confint() function as shown below for the Generalized Least Squares regression on long-term risk premia from 2001-2011.<br /><br /><br /><div class="MsoNormal"><span style="font-family: Didot;"><i>> confint(p2lrp.gls)<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i> 2.5 % 97.5 %<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>yc -0.1455727340 0.1498852728<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>default 0.2994818014 1.0640354237<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>Volatility 0.0336077958 0.0617798767<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>CorporateProfit -0.0010916473 0.0006628209<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>FF -0.1788624533 0.0931406285<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>ER 0.0001539035 0.0016060804<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>Fedmbs -0.0061554994 0.0085638593<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>Support -0.1499342096 0.1615652273<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>FedComm -0.0108567077 0.0750407328<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>FedGdp -0.1347070955 0.2528217710<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>ForeignDebt -0.0441198164 0.1042805549<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>govcredit 0.1090847204 0.6796839003<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>FedBalance -2.0940925835 0.0370114069<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: 
Didot;"><i>UGAP -0.4821566147 0.3188891550<o:p></o:p></i></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i>OGAP -0.2239749029 0.1073611677</i><o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Didot;"><i><br /></i></span></div><div class="MsoNormal"><span style="font-family: inherit;">Another nice feature is finding the log-</span>likelihood<span style="font-family: inherit;"> of your estimation:</span></div><div class="MsoNormal"><span style="font-family: inherit;"><br /></span></div><div class="MsoNormal"></div><div class="MsoNormal"><span style="font-family: inherit;"><i>> logLik(lrp2.lm)</i></span></div><div class="MsoNormal"><span style="font-family: inherit;"><i>'log Lik.' 23.05106 (df=17)</i></span></div><div class="MsoNormal" style="font-family: Didot;"><br /></div><div class="MsoNormal"><span style="font-family: inherit;">Want to see if you have a unit-root in your residual values? Then perform the augmented Dickey-Fuller. For this you'll have to load the 'tseries' package.</span></div><div class="MsoNormal"><span style="font-family: inherit;"><br /></span></div><div class="MsoNormal"></div><div class="MsoNormal"><i><span style="font-family: inherit;">> adf.test(lrp2.gls$res[-(1:4)])</span><span style="font-family: Didot;"><o:p></o:p></span></i></div><div class="MsoNormal" style="font-family: Didot;"><br /></div><div class="MsoNormal" style="font-family: Didot;"><i> Augmented Dickey-Fuller Test<o:p></o:p></i></div><div class="MsoNormal" style="font-family: Didot;"><br /></div><div class="MsoNormal" style="font-family: Didot;"><i>data: lrp2.gls$res[-(1:4)] <o:p></o:p></i></div><div class="MsoNormal" style="font-family: Didot;"><i>Dickey-Fuller = -7.4503, Lag order = 3, p-value = 0.01<o:p></o:p></i></div><div class="MsoNormal" style="font-family: Didot;"><i>alternative hypothesis: stationary <o:p></o:p></i></div><div class="MsoNormal" style="font-family: Didot;"><br /></div><div class="MsoNormal" 
style="font-family: Didot;"><i>Warning message:<o:p></o:p></i></div><div class="MsoNormal" style="font-family: Didot;"><i>In adf.test(lrp2.gls$res[-(1:4)]) : p-value smaller than printed p-value<o:p></o:p></i></div><span style="font-family: Didot; font-size: 12pt;"><i>> adf.test(lrp2.lm$res)</i></span><!--EndFragment--> <br /><br /><div class="MsoNormal"><span style="font-family: Didot;"><i><br /></i></span></div><!--EndFragment--><br /><br />I hope this mini-series has been informative to all that tuned in. For more info on anything you see here please don't be shy to comment and keep dancin',<br /><br />Steven J.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-76220702083825952262012-01-01T07:04:00.000-08:002012-01-01T07:04:29.471-08:00Monetary Policy & Credit Easing pt. 7: R Econometrics TestsIn post 6 we introduced some econometrics code that will help those working with time-series to gain asymptoticly efficient results. In this post we look at the different commands and libraries necessary for testing our assumptions and such. <br /><div><br /></div><div><div style="text-align: center;"><b><span style="font-size: large;">Testing our Assumptions and Meeting the Gauss-Markov Theorem</span></b></div><div><br /></div><div>In this section we will seek to test and verify the assumptions of the simple linear regression model. These assumptions are laid out as follows and are extracted from Hill, Griffiths and Lim 2008:</div><div><br /></div><blockquote>SR1. The value of y, for each value of x, is<br />y= ß_{1}+ß_{2}x+µ<br />SR2. The expected value of the random error µ is<br />E(µ)=0<br />which is equivalent to assuming<br />E(y)= ß_{1}+ß_{2}x<br />SR3. The variance of the random error µ is <br />var(µ)=sigma^2 = var(y)<br />The random variables y and µ have the same variance because they differ only by a constant.<br />SR4. 
The covariance between any pair of random errors µ_{i} and µ_{j} is<br />cov(µ_{i}, µ_{j})=cov(y_{i},y_{j})=0<br />SR5. The variable x is not random and must take at least two different values.<br />SR6. The values of µ are normally distributed about their mean<br />µ ~ N(0, sigma^2)<br />if the y values are normally distributed and vice-versa </blockquote><div><div>Central to this topic's objective is meeting the conditions set forth by the Gauss-Markov Theorem. The Gauss-Markov Theorem states that if the error term is stationary and has no serial correlation, then the OLS parameter estimate is the Best Linear Unbiased Estimate or BLUE, which implies that all other linear unbiased estimates will have a larger variance. An estimator that has the smallest possible variance is called an "efficient" estimator. In essence, the Gauss-Markov theorem states that the error term must have no structure; the residual levels must exhibit no trend and the variance must be constant through time.</div><div><span class="Apple-tab-span" style="white-space: pre;"> </span>When the error term in the regression does not satisfy the assumptions set forth by Gauss-Markov, OLS is still unbiased, but fails to be BLUE as it fails to give the most efficient parameter estimates. In this scenario, a strategy which transforms the regression's variables so that the error has no structure is in order. In time-series analysis, the problem of autocorrelation between the residual values is a common one. There are several ways to approach the transformations necessary to ensure BLUE estimates, and the previous post used the following method to gain asymptotic efficiency and improve our estimates:</div><div><span class="Apple-tab-span" style="white-space: pre;"> </span></div><div>1. Estimate the OLS regression</div><div><br /></div><div>2. Fit the OLS residuals to an AR(<i>p</i>) process using the Yule-Walker Method and find the value of <i>p</i>.</div><div><br /></div><div>3. 
Re-estimate the model using Generalized Least Squares fit by Maximum Likelihood estimation, using the estimated <i>p </i>from step 2 as the order for your correlated residual term.</div><div><br /></div><div>4. Fit the GLS estimated residuals to an AR(<i>p</i>) process and use the estimated <i>p</i>'s as the final parameter estimates for the error term. </div><div><br /></div><div>What have we done? First we have to find out what the error term autocorrelation process is. What order is <i>p</i>? In order to find this out we fit the OLS residuals to an AR(<i>p</i>) using the Yule-Walker method. Then we take the order <i>p</i> of our estimated error term and run a GLS regression with an AR(<i>p</i>) error term. This will give us better estimates for our model. When the errors are autocorrelated, GLS estimators are asymptotically more efficient than OLS estimators. Notice that in every single regression, the GLS estimator with a twice-iterated AR(<i>p</i>) error term consistently results in a lower standard deviation of the residual value. Therefore the model has gained efficiency, which translates into improved confidence intervals. Additionally, by fitting the GLS residuals to an AR(<i>p</i>) we remove any autocorrelation (or structure) that may have been present in the residual. </div><div><br /></div></div></div><div><div style="text-align: center;"><b>Testing For Model Misspecification and Omitted Variable Bias</b></div><div><br /></div><div>The Ramsey RESET test (Regression Specification Error Test) is designed to detect omitted variable bias and incorrect functional form. Rejection of H_{0} implies that the original model is inadequate and can be improved. 
A failure to reject H_{0} conveys that the test has not been able to detect any misspecification.</div><div><br /></div><div>Unfortunately our models of short-term risk premia over both estimation periods reject the null hypothesis, and thus suggest that a better model is out there somewhere. Correcting for this functional misspecification or omitted variable bias will not be pursued here, but we must keep in mind that our model can be improved upon and is thus not BLUE. </div></div><div><br /></div><div>In R you can run the Ramsey RESET test for standard lm functions using the library <i>lmtest:</i></div><div><br /></div><div><i>>library(lmtest)</i></div><div><i><br /></i></div><div><div><i>> resettest(srp1.lm)</i></div><div><i><br /></i></div><div><i><span class="Apple-tab-span" style="white-space: pre;"> </span>RESET test</i></div><div><i><br /></i></div><div><i>data: srp1.lm </i></div><div><i>RESET = 9.7397, df1 = 2, df2 = 91, p-value = 0.0001469</i></div></div><div><br /></div><div>For GLS objects however you'll need to do it manually, and that procedure will not be outlined here. Although if you really want to know please feel free to email or leave a comment below. </div><div><br /></div><div><div style="text-align: center;"><b>Addressing Multicollinearity</b></div><div><br /></div><div>In the original formulation of the model there existed an independent variable called CreditMarketSupport that was very similar to our FedBalance variable. Both variables are percentages and shared the same numerator while also having very similar denominators. 
As a result we had suffered from a condition of near-exact collinearity, as the correlation between these two variables was nearly one.</div><div><i><br /></i></div><div><i>> cor(FedBalance1,CreditMarketSupport1)</i></div><div><i><br /></i></div><div><i>0.9994248</i></div><div><br /></div><div>With such severe collinearity we were unable to obtain a least squares estimate of our ß coefficients and these variables were behaving opposite to what we expected. This violated least squares assumption SR5, which states that values of x_{ik} are not exact linear functions of the other explanatory variables. To remedy this problem, we removed CreditMarketSupport from the models and were able to achieve BLUE estimates.</div></div><div><br /></div><div style="text-align: center;"><b> Suspected Endogeneity</b></div><div><br /></div><div>In our estimation of long-term risk premia over the first time period we suspect endogeneity in the cyclical variable Output Gap. In order to remedy this situation we replace it with an instrumental variable - the percentage change in the S&P 500 - and perform the Hausman Test, which is laid out as follows:</div><div><br /></div><div>H_{0}: delta = 0 (no correlation between x_{i} and µ_{i})</div><div><br /></div><div>H_{1}: delta ≠ 0 (correlation between x_{i} and µ_{i})</div><div><br /></div><div>When we perform the Hausman Test using the S&P 500 as our instrumental variable our delta ≠ 0 and is statistically significant. This means that our Output Gap variable is indeed endogenous and correlated with the residual term. If you want to learn more about the Hausman Test and how to perform it in R please leave a comment or email me and I'll make sure to get the code over to you. When we perform the Two Stage Least Squares Regression to correct for this, not a single term is significant. This can reasonably be attributed to the problem of weak instruments. The 2 Stage Least Squares Estimation is provided below. 
Since the correlation between the percentage change in the S&P 500 and the Output Gap was only 0.110954, there is strong reason to suspect that weak instruments are the source of the problem. We will not pursue a proper instrumental variable to emulate the Output Gap; instead we will keep in mind that we have an endogenous variable when interpreting our coefficient estimates, which will now be slightly biased. </div><div><br /></div><div>Below is how to perform a two-stage least squares regression in R when you're replacing an endogenous variable with an exogenous one. First you'll need to load the library <i>sem</i> into R. In the below regression the first part includes all the variables from the original model and the second part lists all of our exogenous and instrumental variables, which in this case is just the percentage change in the S&P 500.</div><div><br /></div><div><div><i>> tSLRP1<-tsls(lrp1~yc1+CP1+FF1+default1+Support1+ER1+FedGDP1+FedBalance1+govcredit1+ForeignDebt1+UGAP1+OGAP1,~ yc1+CP1+FF1+default1+Support1+ER1+FedGDP1+FedBalance1+govcredit1+ForeignDebt1+sp500ch+OGAP1 )</i></div><div><i><br /></i></div><div><i>> summary(tSLRP1)</i></div><div><i><br /></i></div><div><i> 2SLS Estimates</i></div><div><i><br /></i></div><div><i>Model Formula: lrp1 ~ yc1 + CP1 + FF1 + default1 + Support1 + ER1 + FedGDP1 + </i></div><div><i> FedBalance1 + govcredit1 + ForeignDebt1 + UGAP1 + OGAP1</i></div><div><i><br /></i></div><div><i>Instruments: ~yc1 + CP1 + FF1 + default1 + Support1 + ER1 + FedGDP1 + FedBalance1 + </i></div><div><i> govcredit1 + ForeignDebt1 + sp500ch + OGAP1</i></div><div><i><br /></i></div><div><i>Residuals:</i></div><div><i> Min. 1st Qu. Median Mean 3rd Qu. Max. </i></div><div><i> -9.030 -1.870 0.021 0.000 2.230 7.310 </i></div><div><i><br /></i></div><div><i> Estimate Std. 
Error t value Pr(>|t|)</i></div><div><i>(Intercept) -5.28137 44.06906 -0.11984 0.9049</i></div><div><i>yc1 -1.48564 10.60827 -0.14005 0.8889</i></div><div><i>CP1 -0.01584 0.09206 -0.17204 0.8638</i></div><div><i>FF1 0.20998 2.43849 0.08611 0.9316</i></div><div><i>default1 -7.16622 65.35728 -0.10965 0.9129</i></div><div><i>Support1 6.39893 47.72244 0.13409 0.8936</i></div><div><i>ER1 4.56290 35.91837 0.12704 0.8992</i></div><div><i>FedGDP1 1.86392 9.16081 0.20347 0.8392</i></div><div><i>FedBalance1 0.73087 12.96474 0.05637 0.9552</i></div><div><i>govcredit1 0.17051 0.89452 0.19062 0.8492</i></div><div><i>ForeignDebt1 -0.22396 1.41749 -0.15799 0.8748</i></div><div><i>UGAP1 4.55897 35.33446 0.12902 0.8976</i></div><div><i>OGAP1 0.01331 0.09347 0.14235 0.8871</i></div><div><i><br /></i></div><div><i>Residual standard error: 3.3664 on 93 degrees of freedom</i></div></div><div><br /></div><div>Notice that our model now doesn't have any significant terms. This is why we will choose to ignore the endogeneity of our Output Gap and probably Unemployment Gap variables. Correcting for endogeneity does more harm than good in this case.</div><div><br /></div><div><div style="text-align: center;"><b><span style="font-size: large;">Results and Concluding Thoughts</span></b></div><div><br /></div><div>As this paper hopefully shows, the Fed's actions did directly impact the easing of broader credit conditions in the financial markets. </div><div><br /></div><div>Over our first estimation period from 1971 to 1997 we find that the Fed's support of Depository Institutions as a percentage of savings and time deposits is positively related to the short-term risk premia. Specifically we find that a 1 percentage point increase in Support leads to a 2.1 percentage point increase in short-term risk premia. This was as expected because Depository Institutions would only borrow from the Fed if no other options existed. 
We also find that a 1 percentage point increase in the federal funds rate leads to a .19 percentage point increase in short-term risk premia. This is consistent with our original hypothesis as an increased FF puts positive pressure on short-term rates like the 3-month commercial paper rate, thus resulting in a widened spread. With respect to long-term risk premia, we find that a 1 percentage point increase in FF leads the long-term risk premia to decrease by .66 percentage points, while a 1 percent increase in the federal funds rate leads to a .07 decrease in the long-term risk premia.</div><div><br /></div><div>Over our second estimation period the composition of the Fed's balance sheet is considered. We see that the CCLF did decrease short-term risk premiums, with every one percent increase translating into a .1145 percentage point decrease in short-term risk premia. Another important result is that Fed purchases of Agency Debt and Agency MBS did have a significant, though almost negligible, effect on short-term risk premia. One surprising result with the estimation of the long-term risk premia is that our Fed balance sheet size variable has a sign that is opposite of what we expected and its significance is particularly surprising. This may be expected since this period is largely characterized by both a shrinking balance sheet and narrowing risk premia as investments were considered relatively safe. However, towards the end of the period risk premiums shot up and only after did the size of the balance sheet also increase, thus the sample period may place too much weight towards the beginning of the time period and not enough towards the end. This is a reasonable assumption given that our estimate of the balance sheet size showed a large negative impact on risk premia over our longer estimation period. 
</div></div><div><br /></div><div>A <a href="https://files.me.com/stevensabol/4mhsas">complete version of the paper can be located here</a>.</div><div><br /></div><div>Please people keep dancing and we'll delve further into some additional econometrics tests next week. </div><div><br /></div><div><br /></div><div><br /></div>Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-38882433752948080582011-12-30T09:08:00.000-08:002011-12-30T09:08:54.277-08:00Monetary Policy and Credit Easing pt. 6: Empirical Estimation and Methodology<b><span style="font-size: large;">IT</span></b> is now appropriate to lay out our two regression models in full for empirical estimation over our two separate time periods. The first estimation is from 4/1/71 to 7/1/97 and the second is from 4/1/01 to 4/1/11. The methodology employed in the estimation of these two models is a procedure using Generalized Least Squares with a Cochrane-Orcutt-style iterated residual. For those that wish to perform the same regressions at home I have provided the following links to my data. This is for the <a href="https://files.me.com/stevensabol/txy7b7">estimation period from 1971 to 1997</a> and this <a href="https://files.me.com/stevensabol/rnk7cw">one is for the 2001 to 2011 estimations</a>. The following four steps were taken for each estimation:<br /><br />1. Estimate the OLS regression<br /><br />2. Fit the OLS residuals to an AR(<i>p</i>) process using the Yule-Walker Method and find the value of <i>p.</i><br /><br />3. Re-estimate the model using Generalized Least Squares fit by Maximum Likelihood estimation, using the estimated <i>p</i> from step 2 as the order for your correlated residual term.<br /><br />4. Fit the GLS estimated residuals to an AR(<i>p</i>) process using the Yule-Walker Method and use the estimated <i>p</i>'s as the final parameter estimates for the error term. 
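The four steps above can be sketched in R as follows. This is a minimal sketch, not the post's actual estimation: it assumes the nlme package is installed, and y, x1 and x2 are simulated placeholder series standing in for the real data.

```r
library(nlme)  # provides gls() and corARMA()

# Placeholder data with AR(1) errors, purely for illustration:
set.seed(1)
n  <- 106
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 0.5 * x1 - 0.3 * x2 + arima.sim(list(ar = 0.4), n)

fit.ols <- lm(y ~ x1 + x2)             # 1. estimate the OLS regression
p <- ar.yw(resid(fit.ols))$order       # 2. AR(p) order of the OLS residuals (Yule-Walker)
fit.gls <- gls(y ~ x1 + x2,            # 3. GLS with AR(p) errors, fit by ML
               correlation = corARMA(p = max(p, 1), q = 0),
               method = "ML")          #    (max(p, 1) guards the p = 0 case)
ar.yw(resid(fit.gls))                  # 4. refit the GLS residuals to an AR(p)
```

The final `ar.yw` call supplies the error-term parameter estimates, mirroring step 4.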
<br /><br />The end goal of the above procedure is to have our models be asymptotically BLUE or the Best Linear Unbiased Estimators. This implies that they have the smallest variance out of all the models that meet the rest of the Gauss-Markov assumptions. A little background behind the methodology is in order. First we will perform the standard Ordinary Least Squares (OLS) regression on our dependent variables. This will give us at the very least unbiased estimators. Second we take the residuals from the first step and fit them to an AR(<i>p</i>) process as selected by the Yule-Walker Method. This method selects the optimal lag that characterizes the autocorrelation in the residuals. We automatically take this step, due to the well-known fact that most time-series suffer from autocorrelation problems as verified by our correlograms. Then we re-estimate the regression using Generalized Least Squares which adjusts each term and divides them by their individual variances, while also incorporating the AR(<i>p</i>) lag that we discovered during the previous step. The final step, which fits our GLS model's residuals to an AR(<i>p</i>) process, is what leads to our asymptotically efficient results. We lay out our first estimation period models below.<br /><br /><b>First Estimation: 4/1/71 to 7/1/97 </b><br /><br /><b>1. 
Monetary Policy's Impact On Short-term Risk Premiums</b><br /><br />Our first model, which seeks to answer how monetary policy impacts the risk premia on short-term commercial paper as estimated over our first time period 4/1/71 to 7/1/97, is as follows:<br /><br />SR^{premium}_{t}=ß_{0}+ß_{1}*FedBalance^{size}_{t}+ß_{2}*Support_{t} + ß_{3}*UGAP_{t}+ ß_{4}*FF_{t}+ ß_{5}*ER_{t}+ ß_{6}*YC_{t}+ ß_{7}*Default^{spread}_{t}+ ß_{8}*CP_{t}+ ß_{9}*OGAP_{t} + ß_{10}*FedGDP_{t}+ ß_{11}*govcredit_{t}+ ß_{12}*ForeignDebt_{t} + µ_{t}<br /><br />Where,<br /><br />SR^{premium}_{t} = Short-term Risk Premium at time, t<br /><br />Support_{t}= Fed's funds at depository institutions as a percentage of their main financing streams at time, t<br /><br />FedBalance^{size}_{t}= The Fed's credit market asset holdings as percentage of the total credit market assets at time, t<br /><br />FF_{t}= Federal Funds rate at time, t<br /><br />ER_{t}= Excess Reserves of Depository Institutions at time, t<br /><br />YC_{t}= Yield curve at time, t<br /><br />Default^{spread}_{t}= Default Spread between BAA_{t} & AAA_{t} rated bonds at time, t<br /><br />CP_{t} = Corporate Profits After Tax at time, t<br /><br />FedGDP_{t}= Fed's holdings of total public debt as a percentage of GDP at time, t<br /><br />govcredit_{t}= Government Holdings Of Domestic Credit Market Debt As A Percentage Of The Total at time, t<br /><br />ForeignDebt_{t}= Foreign Holdings of Federal Debt As A Percentage Of The Total at time, t<br /><br />UGAP_{t} = Unemployment gap at time, t<br /><br />OGAP_{t} = Output gap at time, t<br /><br />µ_{t}= error term at time, t<br /><div><br /></div><div style="text-align: center;"><b>R DATA WORK</b></div><div><br /></div><div>So now it is time for the long-awaited econometrics work in R. 
The first thing you'll want to do is read the data into R from your data file, which in this case is the Earlreg file.</div><div><br /></div><div><i>> earl<- read.csv("/Users/stevensabol/Desktop/R/earlreg.csv",header = TRUE, sep = ",")</i></div><div><i><br /></i></div><div>Then you define your variable names so you can easily manipulate your data in R. So when you open the official .csv data file take a look at the variable names and rename them using the following procedure. </div><div><br /></div><div><i>>yc1<-earl[,"yc"]</i></div><div><i><br /></i></div><div><div>After you define what you call everything you're then free to go crazy and run regressions. Below is how you run the standard Ordinary Least Squares regression. The lm function enables you to run linear regressions:</div><div><br /></div><div><b>1. Estimate the OLS regression</b></div><div><br /></div><div><i>>srp1.lm=lm(srp1~yc1+CP1+FF1+default1+Support1+ER1+FedGDP1+FedBalance1+govcredit1+ForeignDebt1+UGAP1+OGAP1)</i></div><div><br /></div><div>In order to get the output you have to use the summary function:</div><div><br /></div><div><i>> summary(srp1.lm)</i></div><div><i><br /></i></div><div><i>Call:</i></div><div><i>lm(formula = srp1 ~ yc1 + CP1 + FF1 + default1 + Support1 + ER1 + </i></div><div><i> FedGDP1 + FedBalance1 + govcredit1 + ForeignDebt1 + UGAP1 + </i></div><div><i> OGAP1)</i></div><div><i><br /></i></div><div><i>Residuals:</i></div><div><i> Min 1Q Median 3Q Max </i></div><div><i>-1.04289 -0.20145 -0.04041 0.15230 1.21044 </i></div><div><i><br /></i></div><div><i>Coefficients:</i></div><div><i> Estimate Std. 
Error t value Pr(>|t|) </i></div><div><i>(Intercept) -2.7591194 1.0359966 -2.663 0.00912 ** </i></div><div><i>yc1 0.1320996 0.0580500 2.276 0.02516 * </i></div><div><i>CP1 -0.0022773 0.0073773 -0.309 0.75825 </i></div><div><i>FF1 0.1699788 0.0340654 4.990 2.81e-06 ***</i></div><div><i>default1 0.4382965 0.1876685 2.335 0.02167 * </i></div><div><i>Support1 2.2383850 0.6660140 3.361 0.00113 ** </i></div><div><i>ER1 0.3351508 0.3017644 1.111 0.26959 </i></div><div><i>FedGDP1 0.3031938 0.2558144 1.185 0.23895 </i></div><div><i>FedBalance1 0.4014920 0.3477547 1.155 0.25124 </i></div><div><i>govcredit1 -0.0928817 0.0401603 -2.313 0.02294 * </i></div><div><i>ForeignDebt1 -0.0068900 0.0215393 -0.320 0.74977 </i></div><div><i>UGAP1 -0.0912273 0.0520491 -1.753 0.08295 . </i></div><div><i>OGAP1 0.0006669 0.0014895 0.448 0.65536 </i></div><div><i>---</i></div><div><i>Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 </i></div><div><i><br /></i></div><div><i>Residual standard error: 0.3789 on 93 degrees of freedom</i></div><div><i>Multiple R-squared: 0.6474,<span class="Apple-tab-span" style="white-space: pre;"> </span>Adjusted R-squared: 0.6019 </i></div><div><i>F-statistic: 14.23 on 12 and 93 DF, p-value: 2.642e-16 </i></div></div><div><i><br /></i></div><div>Next we perform step 2 which is:</div><div><br /></div><div><b>2. 
Fit the OLS residuals to an AR(<i>p</i>) process using the Yule-Walker Method and find the value of <i>p.</i></b></div><div><i><br /></i></div><div><div style="font-style: italic;">> srp.lmfit<-ar.yw(srp1.lm$res)</div><div style="font-style: italic;">> srp.lmfit</div><div style="font-style: italic;"><br /></div><div style="font-style: italic;">Call:</div><div style="font-style: italic;">ar.yw.default(x = srp1.lm$res)</div><div style="font-style: italic;"><br /></div><div style="font-style: italic;">Coefficients:</div><div style="font-style: italic;"> 1 </div><div style="font-style: italic;">0.2535 </div><div style="font-style: italic;"><br /></div><div style="font-style: italic;">Order selected 1 sigma^2 estimated as 0.1201 </div><div style="font-style: italic;"><br /></div><div>So the Yule-Walker methodology fits the residual series to an AR(1) process.</div></div><div><br /></div><div><b>3. Re-estimate the model using Generalized Least Squares fit by Maximum Likelihood estimation, using the estimated <i>p</i> from step 2 as the order for your correlated residual term.</b></div><div><b><br /></b></div><div>In order to run a GLS regression you're going to need to load the nlme package:</div><div><br /></div><div><i>> library(nlme)</i></div><div><br /></div><div>Then you can go crazy:</div><div><br /></div><div><i>>srp1.gls=gls(srp1~yc1+CP1+FF1+default1+Support1+ER1+FedGDP1+FedBalance1+govcredit1+ForeignDebt1+UGAP1+OGAP1, corr=corARMA(p=1,q=0),method="ML")</i></div><div><br /></div><div><div><i>> summary(srp1.gls)</i></div><div><br /></div><div>The following output is produced:</div><div><br /></div><div><i>Generalized least squares fit by maximum likelihood</i></div><div><i> Model: srp1 ~ yc1 + CP1 + FF1 + default1 + Support1 + ER1 + FedGDP1 + FedBalance1 + govcredit1 + ForeignDebt1 + UGAP1 + OGAP1 </i></div><div><i> Data: NULL </i></div><div><i> AIC BIC logLik</i></div><div><i> 100.7905 140.7421 -35.39526</i></div><div><i><br /></i></div><div><i>Correlation Structure: 
AR(1)</i></div><div><i> Formula: ~1 </i></div><div><i> Parameter estimate(s):</i></div><div><i> Phi </i></div><div><i>0.3696665 </i></div><div><i><br /></i></div><div><i>Coefficients:</i></div><div><i> Value Std.Error t-value p-value</i></div><div><i>(Intercept) -3.0219486 1.2595942 -2.399145 0.0184</i></div><div><i>yc1 0.1929605 0.0640627 3.012054 0.0033</i></div><div><i>CP1 -0.0060642 0.0071791 -0.844700 0.4004</i></div><div><i>FF1 0.1918066 0.0362894 5.285466 0.0000</i></div><div><i>default1 0.5292204 0.2060591 2.568293 0.0118</i></div><div><i>Support1 2.1086204 0.7405128 2.847514 0.0054</i></div><div><i>ER1 0.5651430 0.2770125 2.040135 0.0442</i></div><div><i>FedGDP1 0.1028773 0.3143122 0.327309 0.7442</i></div><div><i>FedBalance1 0.7845392 0.4130914 1.899190 0.0606</i></div><div><i>govcredit1 -0.1240196 0.0524191 -2.365922 0.0201</i></div><div><i>ForeignDebt1 0.0009822 0.0278623 0.035252 0.9720</i></div><div><i>UGAP1 -0.1266050 0.0657633 -1.925161 0.0573</i></div><div><i>OGAP1 -0.0014094 0.0014328 -0.983623 0.3279</i></div><div><i><br /></i></div><div><i> Correlation: </i></div><div><i> (Intr) yc1 CP1 FF1 deflt1 Spprt1 ER1 FdGDP1 FdBln1 gvcrd1</i></div><div><i>yc1 -0.267 </i></div><div><i>CP1 0.054 -0.062 </i></div><div><i>FF1 -0.308 0.726 -0.012 </i></div><div><i>default1 0.081 -0.208 0.235 -0.342 </i></div><div><i>Support1 -0.005 -0.109 -0.107 -0.419 0.137 </i></div><div><i>ER1 -0.208 0.077 -0.081 0.067 -0.180 0.028 </i></div><div><i>FedGDP1 -0.728 -0.059 -0.048 0.083 -0.057 -0.002 -0.308 </i></div><div><i>FedBalance1 0.461 0.250 0.020 0.208 0.036 -0.081 0.445 -0.887 </i></div><div><i>govcredit1 -0.570 -0.233 -0.095 -0.291 -0.261 0.072 0.068 0.666 -0.784 </i></div><div><i>ForeignDebt1 -0.475 0.132 0.006 -0.092 0.093 0.219 0.057 0.059 -0.045 0.227</i></div><div><i>UGAP1 -0.048 -0.193 -0.062 0.085 -0.447 0.150 0.090 0.045 0.064 -0.053</i></div><div><i>OGAP1 -0.029 0.092 0.295 0.062 -0.208 -0.024 0.053 -0.021 0.013 0.056</i></div><div><i> FrgnD1 UGAP1 
</i></div><div><i>yc1 </i></div><div><i>CP1 </i></div><div><i>FF1 </i></div><div><i>default1 </i></div><div><i>Support1 </i></div><div><i>ER1 </i></div><div><i>FedGDP1 </i></div><div><i>FedBalance1 </i></div><div><i>govcredit1 </i></div><div><i>ForeignDebt1 </i></div><div><i>UGAP1 -0.016 </i></div><div><i>OGAP1 0.064 0.041</i></div><div><i><br /></i></div><div><i>Standardized residuals:</i></div><div><i> Min Q1 Med Q3 Max </i></div><div><i>-3.08026826 -0.62589269 -0.08409222 0.39781537 3.24233325 </i></div><div><i><br /></i></div><div><i>Residual standard error: 0.3634024 </i></div><div><i>Degrees of freedom: 106 total; 93 residual</i></div></div><div><br /></div><div>After you perform this step you have to refit the residuals in order to get serially uncorrelated terms. </div><div><br /></div><div><b>4. Fit the GLS estimated residuals to an AR(<i>p</i>) process using the Yule-Walker Method and use the estimated <i>p</i>'s as the final parameter estimates for the error term. </b></div><div><br /></div><div><div><i>> s1glsres.ar<-ar.yw(srp1.gls$res)</i></div><div><i>> s1glsres.ar</i></div><div><i><br /></i></div><div><i>Call:</i></div><div><i>ar.yw.default(x = srp1.gls$res)</i></div><div><i><br /></i></div><div><i>Coefficients:</i></div><div><i> 1 </i></div><div><i>0.3718 </i></div><div><i><br /></i></div><div><i>Order selected 1 sigma^2 estimated as 0.1163 </i></div></div><div><br /></div><div>In order to see the results of these actions please refer to the image below:</div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-0XB6OSGCl38/Tvt-EhRgLoI/AAAAAAAAAKM/DYMNqX7IMEc/s1600/srp1.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="504" src="http://4.bp.blogspot.com/-0XB6OSGCl38/Tvt-EhRgLoI/AAAAAAAAAKM/DYMNqX7IMEc/s640/srp1.jpeg" width="640" /></a></div><div>The Ljung-Box Q is a test for autocorrelation between the lags in the error terms. 
Ideally, in order to meet the BLUE criteria, we need to fail to reject the null hypothesis of no autocorrelation at each lag. We can see that both our original OLS and GLS estimations fail to pass the Ljung-Box Q. However, when we readjust the error terms the final time we get residuals that are serially uncorrelated with each other. </div><div><br /></div><div>In order to get the above graph you have to first load the package that will allow you to perform the Ljung-Box Q plot:</div><div><br /></div><div><i>> library(FitAR)</i></div><div><br /></div><div>Then you can proceed from there and define how many plots should be in one picture. In the above image we have 9, therefore:</div><div><br /></div><div><i>> par(mfrow=c(3,3))</i></div><div><br /></div><div>Then you can start adding in your plots. Below is the code for producing the plots for the fitted GLS residuals.</div><div><br /></div><div><div><i>> acf(s1glsres.ar$res[-(1)])</i></div><div><i>> pacf(s1glsres.ar$res[-(1)])</i></div><div><i>> LBQPlot(s1glsres.ar$res[-(1)])</i></div></div><div><br /></div><div>We include the [-(1)] to exclude the first observation, since we have an AR(1) process.</div><div><br /></div><div>The same steps above can be applied to any time-series regression model. In the next post we will discuss how to get some summary statistics. Please keep dancin'</div><div><br /></div><div>Steven J.</div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div>Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com4tag:blogger.com,1999:blog-5427848954322593357.post-63620385424126993592011-12-30T09:03:00.000-08:002011-12-30T09:03:21.402-08:00Monetary Policy & Credit Easing pt. 
5: Explanatory Variables Continued...<div style="text-align: center;"><span style="font-size: large;">Capturing Treasury Supply Effects</span></div><br /><b><span style="font-size: large;">WE </span></b>will need to account for things other than the Fed that influenced risk premia as they relate to Treasury supply. The following three variables are meant to accomplish such a thing:<br /><br />1. Federal Reserve's holdings of total public debt as a percentage of GDP<br /><br />2. Total government holdings of domestic credit market debt as a percentage of the total<br /><br />3. Foreign holdings of government debt as a percentage of total public debt<br /><br /><u>1. Fed's holdings of total public debt as a percentage of GDP</u><br /><br />Federal Reserve holdings of total public debt as a percentage of GDP is important because it controls for how much support the Fed is providing to the Federal Government. It is especially pertinent to our second estimation as the Fed's holdings of total public debt relative to GDP increased sharply. Operationally, we define this variable as:<br /><br />FedGDP_{t} = (GovDebt_{t}^{Fed} / GDP_{t}) x 100<br /><br />Where,<br /><br />GovDebt_{t}^{Fed}=Federal Debt Held by Federal Reserve Banks (FDHBFRBN) at time, t<br /><br />GDP_{t} = Gross Domestic Product, 1 Decimal (GDP) at time, t<br /><br />We expect that this variable will move in line with both short-term and long-term risk premiums. Therefore:<br /><br />H_{0}: ß ≤ 0 vs. H_{a}: ß > 0<br /><br /><u>Data Issues</u><br /><br />The time-series necessary for this variable is provided by FRED and the details are as listed:<br /><br /><span style="font-size: x-small;">(a) Federal Debt Held by Federal Reserve Banks (FDHBFRBN), Quarterly, End of Period, Not Seasonally Adjusted, 1970-01-01 to 2011-0</span><br /><span style="font-size: x-small;">(b) Gross Domestic Product, 1 Decimal (GDP), Quarterly, Seasonally Adjusted Annual Rate, 1947-01-01 to 2011-07-01</span><br /><br />2. 
<u>Government Holdings Of Domestic Credit Market Debt As A Percentage Of The Total</u><br /><br />It would be wise to include a variable that accounts for fiscal policy's support of the financial markets. This we can define as Federal Government holdings of credit market assets as a percentage of the total outstanding. To account for total government support of the financial markets we will use the following variable: <i>govcredit.</i><br /><br />govcredit_{t} = (CAssets_{t}^{Gov} / CAssets_{t}^{Total}) x 100<br /><br />Where,<br /><br />CAssets_{t}^{Gov} = Total Credit Market Assets Held by Domestic Nonfinancial Sectors - Federal Government (FGTCMAHDNS) at time, t<br /><br />CAssets_{t}^{Total} = Total Credit Market Assets Held by Domestic Nonfinancial Sectors (TCMAHDNS) at time, t<br /><br />We expect that this variable will reduce both short-term and long-term risk premiums. Therefore:<br /><br />H_{0}: ß ≥ 0 vs. H_{a}: ß < 0<br /><br /><u>Data Issues</u><br /><br />The time-series necessary for this variable is provided by FRED and the details are as listed:<br /><br /><span style="font-size: x-small;">(a) Total Credit Market Assets Held by Domestic Nonfinancial Sectors - Federal Government (FGTCMAHDNS), Quarterly, End of Period, Not Seasonally Adjusted, 1949-10-01 to 2011-04-01</span><br /><span style="font-size: x-small;">(b) Total Credit Market Assets Held by Domestic Nonfinancial Sectors (TCMAHDNS), Quarterly, End of Period, Not Seasonally Adjusted, 1949-10-01 to 2011-04-01</span><br /><br /><u>3. Foreign Holdings of Federal Debt As A Percentage Of The Total</u><br /><br />This variable, labeled ForeignDebt_{t}, seeks to capture the impact that foreign holdings of United States government debt have on both short-term and long-term risk premia. Theory would suggest that as foreign holdings go up, risk premia go down. 
Operationally, this variable is defined as follows:<br /><br />ForeignDebt_{t} = (GovDebt_{t}^{Foreign} / TotalPublicDebt_{t}) x 100<br /><br />Where,<br /><br />GovDebt_{t}^{Foreign} = Federal Debt Held by Foreign & International Investors (FDHBFIN) at time, t<br /><br />TotalPublicDebt_{t} = Federal Government Debt: Total Public Debt (GFDEBTN) at time, t<br /><br />Our ß coefficient on this variable is expected to be negative for both short-term and long-term risk premia and therefore:<br /><br />H_{0}: ß ≥ 0 vs. H_{a}: ß < 0<br /><br /><br /><u>Data Issues</u><br /><br />The following data comes from FRED and the details are as follows:<br /><br /><span style="font-size: x-small;">(a) Federal Debt Held by Foreign & International Investors (FDHBFIN), Quarterly, End of Period, Not Seasonally Adjusted, 1970-01-01 to 2011-04-01</span><br /><span style="font-size: x-small;">(b) Federal Government Debt: Total Public Debt (GFDEBTN), Quarterly, End of Period, Not Seasonally Adjusted, 1966-01-01 to 2011-04-01</span><br /><br /><div style="text-align: center;"><b><span style="font-size: large;">Accounting For Cyclicality</span></b></div><br />We include two variables to help account for cyclicality in the overall economy. Both are relevant because the Fed uses them in its decision-making process. For example, in setting the federal funds rate, the Fed is said to have used a Taylor rule that incorporated both the output gap and the unemployment gap in its objective function. Incorporating these variables may therefore present a problem of endogeneity over a short part of our sample (when a Taylor rule was said to be in use), but we choose to ignore these effects. The two cyclical variables we shall use are the output gap and the unemployment gap. 
The output gap is defined as:<br /><br />OGAP_{t} = PotentialGDP_{t} – GDP_{t} at time, t<br /><br />Where,<br /><br />PotentialGDP_{t} = Nominal Potential Gross Domestic Product (NGDPPOT) at time, t<br /><br />GDP_{t} = Gross Domestic Product, 1 Decimal (GDP) at time, t<br /><br />Our unemployment gap is defined in a similar fashion:<br /><br />UGAP_{t} = NROU_{t} – UNRATE_{t} at time, t<br /><br />Where,<br /><br />NROU_{t} = Natural Rate of Unemployment (NROU) at time, t<br /><br />UNRATE_{t} = Civilian Unemployment Rate (UNRATE) at time, t<br /><br />Theoretically, we assume that as both of these variables increase, long-term risk premia increase over the long run. In the short-run regressions, we would expect these variables to have almost no significant effect, as that horizon is cluttered with many short-term factors impacting risk premia. Additionally, for the short-term risk premia we would expect either a negative relationship or no relationship, because many factors that move long-term risk premia in one direction carry the opposite sign with respect to short-term risk premia. 
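The two gap definitions above are simple differences and can be sketched in a few lines. The following is a minimal illustration in Python; the numbers are hypothetical stand-ins, not values from the actual FRED series (NGDPPOT, GDP, NROU, UNRATE):

```python
# Minimal sketch of the two cyclical variables defined above.
# All values are hypothetical, for illustration only -- not FRED data.

def output_gap(potential_gdp: float, gdp: float) -> float:
    """OGAP_t = Potential GDP_t - GDP_t (billions of dollars)."""
    return potential_gdp - gdp

def unemployment_gap(nrou: float, unrate: float) -> float:
    """UGAP_t = NROU_t - UNRATE_t (percentage points)."""
    return nrou - unrate

# A hypothetical recession quarter: output below potential,
# unemployment above its natural rate.
ogap = output_gap(potential_gdp=15500.0, gdp=15000.0)
ugap = unemployment_gap(nrou=5.5, unrate=8.9)
print(ogap, round(ugap, 1))  # 500.0 -3.4
```

In the regressions these two series would simply enter as quarterly explanatory variables alongside the policy variables.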
<br /><br /><u>Data Issues</u><br />The data for these cyclical variables are provided by FRED and their details are laid out as follows:<br /><br /><span style="font-size: x-small;">(a) Civilian Unemployment Rate (UNRATE), Monthly, Seasonally Adjusted, 1948-01-01 to 2011-10-01 </span><br /><span style="font-size: x-small;">(b) Natural Rate of Unemployment (NROU), Quarterly, 1949-01-01 to 2021-10-01 </span><br /><span style="font-size: x-small;">(c) Nominal Potential Gross Domestic Product (NGDPPOT), Quarterly, 1949-01-01 to 2021-10-01 </span><br /><span style="font-size: x-small;">(d) Gross Domestic Product, 1 Decimal (GDP), Quarterly, Seasonally Adjusted Annual Rate, 1947-01-01 to 2011-07-01</span><br /><div><br />The next post gets into the R analysis and lays out our model in full.</div>Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-91084101489703767092011-12-29T12:30:00.000-08:002011-12-29T12:30:19.576-08:00Monetary Policy & Credit Easing pt. 4: More Independent Variable Definitions<div style="text-align: center;"><b><span style="font-size: large;">Support for Depository Institutions</span></b></div><br />This variable will account for the Federal Reserve's support of Depository Institutions through direct lending to these institutions. Support will be measured by how much the Fed made up for any shortfalls in Depository Institutions' main source of cash: time and savings deposits. 
Federal Reserve support for our first estimation period is operationalized as follows:<br /><br />Support_{t} = (TotalBorrowingFed^{DI}_{t} / TimeSavingsDeposits_{t}^{DI}) x 100<br /><br />Where,<br />Support_{t} = Fed funds at depository institutions as a percentage of their main financing streams (total savings and time deposits) at time, t<br /><br />TotalBorrowingFed^{DI}_{t} = Total Borrowings of Depository Institutions from the Federal Reserve (BORROW) at time, t<br /><br />TimeSavingsDeposits_{t}^{DI} = Total Time and Savings Deposits at All Depository Institutions (TOTTDP) at time, t<br /><br />For our second estimation period, from 4/1/01 to 4/1/11, we will use a different variable that excludes time deposits, as the series that we would ideally like to use above was discontinued in 2006. <br /><br />Support_{t} = (TotalBorrowingFed^{DI}_{t} / SavingsDeposits_{t}^{DI}) x 100<br /><br />Where,<br />Support_{t} = Fed funds at depository institutions as a percentage of their main financing stream (total savings deposits) at time, t<br /><br />TotalBorrowingFed^{DI}_{t} = Total Borrowings of Depository Institutions from the Federal Reserve (BORROW) at time, t<br /><br />SavingsDeposits_{t}^{DI} = Total Savings Deposits at all Depository Institutions (WSAVNS) at time, t<br /><br />The expected beta coefficient should be positively related to short-term risk premia, as tighter credit conditions require Depository Institutions to go to the Fed for help. Only after risk premia go up and these institutions have nowhere else to go do they borrow from the Fed at the discount rate. <br /><br />We expect the effect of lending support to depository institutions to be positively related to short-term risk premia, therefore:<br /><br />H_{0}: ß ≤ 0 vs. H_{a}: ß > 0<br /><br />Furthermore, we expect Fed support to depository institutions to have a negative effect on long-term risk premia because of the expectations component. 
As the Fed steps in with lending support, markets calm their fears about the future. This is the direct opposite of the short-term case, where the support is a direct response to the risk premia. Therefore,<br /><br />H_{0}: ß ≥ 0 vs. H_{a}: ß < 0<br /><br /><u>Data Issues</u><br /><br />The following time-series data were provided by FRED and the details are as follows:<br /><br /><span style="font-size: x-small;">(a) Total Borrowings of Depository Institutions from the Federal Reserve (BORROW), Monthly, Not Seasonally Adjusted, 1919-01-01 to 2011-10-01</span><br /><span style="font-size: x-small;"><br /></span><br /><span style="font-size: x-small;">(b) Total Savings Deposits at all Depository Institutions (WSAVNS), Weekly, Ending Monday, Not Seasonally Adjusted, 1980-11-03 to 2011-10-17</span><br /><span style="font-size: x-small;"><br /></span><br /><span style="font-size: x-small;">(c) Total Time and Savings Deposits at All Depository Institutions (DISCONTINUED SERIES) (TOTTDP), Monthly, Seasonally Adjusted, 1959-01-01 to 2006-02-01</span><br /><br /><div style="text-align: center;"><b><span style="font-size: large;">The Federal Funds Rate</span></b></div><br />The motivation behind putting the federal funds rate into our regression model is simple. It is the main policy tool that the Fed has used to manipulate short-term credit conditions and influence the rate of inflation. The Fed controls this rate in hopes of influencing other rates such as the Prime Bank Loan Rate, along with other short-term credit instruments like Commercial Paper. Additionally, this variable is very easy to account for because it requires virtually zero manipulation. Notationally, we will define it in the following manner:<br /><br />FF_{t} = Federal Funds rate at time, t<br /><br />The expected beta coefficient should be positively related to the short-term risk premia and negatively related to long-term risk premia. 
The theory is that by lowering the federal funds rate, the rate at which banks lend to each other, the Fed encourages banks to lend and thus eases credit conditions. When the Fed feels that tightening is appropriate, perhaps as the result of a jump in inflation expectations or a sharp acceleration in the economy, it responds by raising the federal funds rate. This tightens credit conditions and thus, theoretically at least, should result in an increase in short-term risk premia. The opposite holds true for the federal funds rate's effect on long-term risk premia. Since long-term rates are a function of shorter-term rates, investors are inclined to sell Treasuries, which decreases the spread between Aaa bonds and 10-year nominal Treasuries, therefore decreasing long-term risk premia.<br /><br />For short-term risk premiums we expect the federal funds rate to be positively related:<br /><br />H_{0}: ß ≤ 0 vs. H_{a}: ß > 0<br /><br />For long-term risk premiums we expect the federal funds rate to be negatively related:<br /><br />H_{0}: ß ≥ 0 vs. H_{a}: ß < 0<br /><br /><u>Data Issues</u><br /><br />We get the series for the federal funds rate from FRED and the details are as follows:<br /><br /><span style="font-size: x-small;">(a) Effective Federal Funds Rate (FF), Weekly, Ending Wednesday, 1954-07-07 to 2011-10-19</span><br /><br /><div style="text-align: center;"><b><span style="font-size: large;">Interest Paid On Excess Reserves</span></b></div><br />This variable is also easy to account for because the introduction of interest paid on reserves has a direct impact on the physical quantity of excess reserves that depository institutions hold at the Federal Reserve. The motivation behind this variable is that banking institutions would not hold excess reserves with the Fed without some compensation, i.e., the opportunity cost of doing so has to be offset. That is where the interest paid on excess reserves comes in. 
Without it, there is an opportunity cost to letting reserves sit with the Fed instead of seeking more profitable safe havens for the cash. That is why we will be using the quantity of excess reserves as our explanatory variable, notationally defining it as follows:<br /><br />ER_{t} = Excess Reserves of Depository Institutions at time, t<br /><br />When the Fed initiated its policy of paying interest on reserves, it created an incentive for banks to shore up their finances with the Fed. It gave the Fed a way to conduct large-scale asset purchases without suffering inflationary consequences. As long as the reserves are held with the Fed, they cannot be inflationary; therefore an increase in excess reserves at the Fed is contractionary. Additionally, since the Fed did not initiate the policy of paying interest on reserves until October of 2008, there was no incentive for banks to hold any excess reserve balances before then. That is why our two estimation periods must have two different hypothesis tests for this variable:<br /><br />For our estimation covering the 4/1/71 to 7/1/97 time period, the interest paid on excess reserves policy was non-existent and therefore:<br /><br />H_{0}: ß = 0 vs. H_{a}: ß ≠ 0<br /><br />In other words, we are looking to not reject the null hypothesis that beta is equal to zero.<br /><br />For our second estimation covering the 4/1/01 to 4/1/11 time period, the interest paid on excess reserves policy was in effect, if only for a short time before the end of the sample, and therefore:<br /><br />H_{0}: ß ≤ 0 vs. 
H_{a}: ß > 0<br /><br /><u>Data Issues</u><br /><br />We get the series from FRED and the details are as follows:<br /><br /><span style="font-size: x-small;">(a) Excess Reserves of Depository Institutions (EXCRESNS), Monthly, Not Seasonally Adjusted, 1959-01-01 to 2011-09-01</span><br /><br /><div style="text-align: center;"><b><span style="font-size: large;">Control Variables: Accounting For Factors Outside of Monetary Policy</span></b></div><br />We must account for changes in the risk premia that aren't necessarily related to monetary policy. These include things like market fear, corporate default risk, and changes due to underlying fundamentals like Corporate Profits After Tax.<br /><br /><div style="text-align: center;"><b><span style="font-size: large;">The Yield Curve</span></b></div><br />The motivation behind including the yield curve is its known predictive power for economic growth and recessionary risk. As recessionary risk increases, investors are more likely to put their funds into the safest assets with the highest return. Historically, this involves the purchase of longer-term Treasuries, as they have zero default risk. When a bond is purchased, its price goes up and its effective yield decreases, so the slope of the yield curve (the spread between the most liquid, shortest-maturity bond and the most liquid, longest-maturity bond) decreases, or flattens. As this slope flattens, we expect risk premia to increase for longer-maturity and less liquid debt instruments. As the yield curve flattens we expect T-bills to be sold and longer-term Treasuries to be purchased. This puts upward pressure on T-bill rates, thus narrowing the spread between Commercial Paper and T-bills and reducing the short-term risk premia.<br />The yield curve as we define it is the spread between the 10-Year Nominal Treasury Note rate and the 3-Month Nominal Secondary Market Treasury Bill rate. 
Notationally,<br /><br />YC_{t} = GS10_{t} – TB3MS_{t}<br /><br />Where,<br /><br />YC_{t} = Yield curve at time, t<br /><br />GS10_{t} = 10-Year Treasury Constant Maturity Rate at time, t<br /><br />TB3MS_{t} = 3-Month Treasury Bill: Secondary Market Rate at time, t<br /><br />The beta coefficient for both of our estimation periods is expected to be negatively related to long-term risk premia. Therefore:<br /><br />H_{0}: ß ≥ 0 vs. H_{a}: ß < 0<br /><br />The beta coefficient for both of our estimation periods is expected to be positively related to short-term risk premia. Therefore:<br /><br />H_{0}: ß ≤ 0 vs. H_{a}: ß > 0<br /><br /><u>Data Issues</u><br /><br />We get the two time-series from FRED and the details are as follows:<br /><br /><span style="font-size: x-small;">(a) 10-Year Treasury Constant Maturity Rate (GS10), Monthly, 1953-04-01 to 2011-09-01</span><br /><span style="font-size: x-small;"><br /></span><br /><span style="font-size: x-small;">(b) 3-Month Treasury Bill: Secondary Market Rate (TB3MS), Monthly, 1934-01-01 to 2011-09-01</span><br /><br /><div style="text-align: center;"><b><span style="font-size: large;">Stock Market Volatility</span></b></div><br />We will control for market fear by including a measure of stock market volatility in our regression model. We define volatility as follows:<br /><br /> Volatility_{t} = CBOE DJIA Volatility Index (VXDCLS) at time, t<br /><br />The motivation behind this control variable is that the returns from owning stocks become more volatile in times of fear. Thus the risk premia on assets that aren't risk free, like corporate bonds, may increase in response to this market volatility. <br /><br />This variable is only relevant in the regression on long-term premiums for our second estimation period. 
We cannot reasonably assume it has any effect on short-term risk premia, because stock market volatility signals the fear of financial markets, which leads to a flight to safety into longer-term Treasuries, not short-term commercial paper. One reason is that a typical investor probably wouldn't have the capital to buy commercial paper to begin with; another is that when fear strikes, investors tend to pour their capital into longer-term Treasuries because they can pick up some extra yield. This would widen the spread between Aaa bonds and Treasuries, thus increasing the risk premium. Therefore we include this variable only in our second estimation, and its only real effect will be on the long-term risk premia.<br /><br />For the regression over our second estimation period, increased volatility is expected to increase the long-term risk premium:<br /><br />H_{0}: ß ≤ 0 vs. H_{a}: ß > 0<br /><br />For the regression over our second estimation period, volatility is not expected to have an effect on short-term risk premia, therefore:<br /><br />H_{0}: ß = 0 vs. H_{a}: ß ≠ 0<br /><br />In other words, we are looking to not reject the null hypothesis that ß is equal to zero.<br /><br /><u>Data Issues</u><br /><br />We get the time-series data from FRED and the details are as follows:<br /><br /><span style="font-size: x-small;">(a) CBOE DJIA Volatility Index (VXDCLS), Daily, Close, 1997-10-07 to 2011-11-02</span><br /><br /><div style="text-align: center;"><span style="font-size: large;"><b>Corporate Bond Default Risk</b></span></div><br />It would be prudent to control for the perceived credit default risk of the corporate bond market. To do this we use the spread between Baa- and Aaa-rated bond yields, which theoretically corresponds to the increased compensation demanded for default risk. The reason is that we want to see how much the Fed's actions influence the risk premia, factoring out movements in the spread that may incorporate other things, like outright default risk. 
Ideally we would like to use a Credit Default Swap index to control for default risk, as it would help us control directly for default risk and not other things like liquidity risk, but given the sample-length data limitations we are forced to stick with what we've got.<br /><br />The control variable we use for corporate default risk is the spread between Moody's rated Baa's and Aaa's. This is used because data exists for the full length of our desired samples, and it is operationalized as follows:<br /><br />Default^{spread}_{t} = BAA_{t} – AAA_{t}<br /><br />Where,<br /><br />BAA_{t} = Moody's Seasoned Baa Corporate Bond Yield at time, t<br /><br />AAA_{t} = Moody's Seasoned Aaa Corporate Bond Yield at time, t<br /><br />The beta coefficient on our corporate default control variable is expected to be positive in our regression on long-term rates, since an increase in the spread between BAA and AAA would indicate that these bonds' expected default risk has increased:<br /><br />H_{0}: ß ≤ 0 vs. H_{a}: ß > 0<br /><br />In our regressions on short-term risk premia, the expected effect of this control variable is effectively zero, as this variable deals with longer-term interest rates not exactly pertinent to short-term financing like commercial paper or Treasury Bills:<br /><br />H_{0}: ß = 0 vs. 
H_{a}: ß ≠ 0<br /><br />In other words, we are looking to not reject the null hypothesis that ß is equal to zero.<br /><br /><u>Data Issues</u><br /><br />For the AAA and BAA data we use FRED:<br /><br /><span style="font-size: x-small;">(a) Moody's Seasoned Aaa Corporate Bond Yield (AAA), Monthly, 1919-01-01 to 2011-09-01</span><br /><span style="font-size: x-small;"><br /></span><br /><span style="font-size: x-small;">(b) Moody's Seasoned Baa Corporate Bond Yield (BAA), Monthly, 1919-01-01 to 2011-09-01</span><br /><br /><div style="text-align: center;"><b><span style="font-size: large;">Corporate Profits After Tax</span></b></div><br />As corporate profits after tax increase, the risk premium on corporate bonds decreases. This fundamentally negative relationship should be controlled for in our regression model. Operationally, this is defined as:<br /><br />CP_{t} = Corporate Profits After Tax at time, t<br /><br />The beta coefficient on our CP control variable should be negatively related to our dependent variables. This is because as corporate profits after tax increase, the risk that firms will renege on their debt obligations decreases. This gives us the following test: <br /><br />H_{0}: ß ≥ 0 vs. H_{a}: ß < 0<br /><br /><u>Data Issues</u><br /><br />We get the time-series data from FRED and the details are as follows:<br /><br /><span style="font-size: x-small;">(a) Corporate Profits After Tax (CP), Quarterly, Seasonally Adjusted Annual Rate, 1947-01-01 to 2011-04-01</span><br /><br />Keep dancin'<br /><br />Steven J.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com26tag:blogger.com,1999:blog-5427848954322593357.post-55061607217065045422011-12-28T11:28:00.000-08:002011-12-28T11:28:06.592-08:00Monetary Policy & Credit Easing pt. 
3: Accounting For The Composition of The Fed's Balance Sheet & Credit Easing<div class="p1"></div><div class="p1"><b><span style="font-size: large;">C</span></b>redit Easing shifts the composition of the balance sheet away from default-free assets towards assets with credit risk. An example of Credit Easing pertinent to our testing of the effects of monetary policy on commercial paper is the Commercial Paper Funding Facility. Implementation of this facility involved the U.S. central bank selling T-bills and purchasing commercial paper of similar maturity. This shift in composition leaves the size and average maturity of the assets on the Fed's balance sheet unchanged. When the Fed purchases an asset like commercial paper, it lowers the supply of this asset to private investors. This scarcity has the effect of boosting its price and pushing down its yield. In the absence of private demand for the risky asset, the Fed's purchase makes credit available where no alternative existed. The composition effect will be captured by our second time period estimation (from 4/1/01 to 4/1/11) of Monetary Policy's effects, as all of the credit easing policies employed by the Fed occurred over this time period. A little background on the implementation of these policies is introduced below.</div><div class="p1"><br /></div><div class="p1" style="text-align: center;"><b><span style="font-size: large;">Implementation of Credit Easing and Large Scale Asset Purchases*</span></b></div><div class="p1">*This section draws heavily from Sack 2010</div><div class="p1"><br /></div><div class="p1">The Federal Reserve holds the assets it purchases in the open market in its System Open Market Account (SOMA). Historically, SOMA holdings have consisted almost entirely of Treasury securities, although small amounts of agency debt have been held. Purchases and sales of SOMA assets are called outright open market operations (OMOs). 
Outright OMOs, in conjunction with repurchase agreements and reverse repurchase agreements, traditionally were used to alter the supply of bank reserves in order to influence the federal funds rate. Most of the higher-frequency adjustments to reserve supply were accomplished through repurchase and reverse repurchase agreements, with outright OMOs conducted periodically to accommodate trend growth in reserve demand. OMOs were designed to have a minimal effect on the prices of the securities included in these operations. This is the Fed's way of not distorting prices on debt instruments and thus protecting its independence from political pressure. To this end, OMOs tended to be small in relation to the markets for Treasury bills and Treasury coupon securities. Large Scale Asset Purchases, however, aimed to have a noticeable impact on the interest rates of the assets being purchased, as well as on other assets with similar characteristics. In order to lower market interest rates, Large Scale Asset Purchases were designed to be large relative to the markets for these assets. As mentioned in Gagnon, Raskin, Remache and Sack 2010:</div><blockquote class="tr_bq">Between December 2008 and March 2010, the Federal Reserve will have purchased more than $1.7 trillion in assets. This represents 22 percent of the $7.7 trillion stock of longer-term agency debt, fixed-rate agency MBS, and Treasury securities outstanding at the beginning of the LSAPs.</blockquote><div class="p1">In the following discussion of the independent variables selected to capture this effect, please note that they are all defined as the Federal Reserve's holdings as a percentage of the total market value outstanding. In this way we can quantify how much the Fed's holdings relative to the total market supply of these assets impacted market risk premia.</div><div class="p1"><br /></div><div class="p1">Large Scale Asset Purchases were focused on four main securities:</div><div class="p1"><br /></div><div class="p1">1. 
Agency Debt</div><div class="p1"><br /></div><div class="p1">2. Mortgage Backed Securities</div><div class="p1"><br /></div><div class="p1">3. Treasury Securities</div><div class="p1"><br /></div><div class="p1">4. Commercial Paper</div><div class="p1"><br /></div><div class="p1">Although we do not explicitly account for the Treasury purchases, we rely on our main balance sheet variable to capture their effects. The first asset to account for, which is especially pertinent to our short-term risk premia variable, is commercial paper.</div><div class="p1"><br /></div><div class="p1" style="text-align: center;"><b><span style="font-size: large;">Commercial Paper</span></b></div><div class="p1"><br /></div><div class="p1">To account for commercial paper and the Commercial Paper Funding Facility LLC, we will use the Fed's holdings as a percentage of the total commercial paper outstanding. The Commercial Paper Funding Facility LLC, like all of the Fed's Credit Easing tools, was only functional during our second estimation period (4/1/01 to 4/1/11). That is why it will only be used as a variable over that estimation period. Operationally:</div><div class="p1"><br /></div><div class="p1">CommercialPaper^{Fed}_{t} = (CPaper^{Fed}_{t} / CPaper_{t}^{total}) x 100</div><div class="p1"><br /></div><div class="p1">Where,</div><div class="p1">CommercialPaper^{Fed}_{t} = the percentage of the total commercial paper outstanding the Fed owns at time, t</div><div class="p1"><br /></div><div class="p1">CPaper^{Fed}_{t} = Net Portfolio Holdings of Commercial Paper Funding Facility LLC (WACPFFL) at time, t</div><div class="p1"><br /></div><div class="p1">CPaper_{t}^{total} = Commercial Paper Outstanding (COMPOUT) at time, t</div><div class="p1"><br /></div><div class="p1">We expect this variable to be negatively related to short-term risk premia over our estimation period. 
The reason is that increased Fed support in this market should have directly reduced the spread between commercial paper and Treasury bills, especially where the Fed sold T-bills to purchase short-term commercial paper and asset-backed commercial paper. Therefore the following hypothesis test is appropriate:</div><div class="p1"><br /></div><div class="p1">H_{0}: ß ≥ 0 vs. H_{a}: ß < 0 </div><div class="p1"><br /></div><div class="p1">With respect to the long-term risk premia, we should expect this monetary policy action to have a negligible effect, because this policy was aimed at impacting short-term commercial paper rates and not longer-term rates:</div><div class="p1"><br /></div><div class="p1">H_{0}: ß = 0 vs. H_{a}: ß ≠ 0</div><div class="p1"><br /></div><div class="p1"><u>Data Issues</u></div><div class="p1"><br /></div><div class="p1">The following data sets are pulled from FRED and their details are as follows:</div><div class="p1"><br /></div><div class="p1"><span style="font-size: x-small;">(a) Assets - Net Portfolio Holdings of Commercial Paper Funding Facility LLC (DISCONTINUED SERIES) (WACPFFL), Weekly, As of Wednesday, Not Seasonally Adjusted, 2002-12-18 to 2010-08-25</span></div><div class="p1"><span style="font-size: x-small;"><br /></span></div><div class="p1"><span style="font-size: x-small;">(b) Commercial Paper Outstanding (COMPOUT), Weekly, Ending Wednesday, Seasonally Adjusted, 2001-01-03 to 2011-10-26</span></div><div class="p1"><br /></div><div class="p1">This required the following data transformation within FRED:</div><div class="p1"><br /></div><div class="p1">((WACPFFL / 1000) / COMPOUT) x 100</div><div class="p1"><br /></div><div class="p1" style="text-align: center;"><b><span style="font-size: large;">Mortgage-Backed Securities & Agency Debt</span></b></div><div class="p1"><br /></div><div class="p1">To account for the Fed's holdings of Agency Debt and Mortgage-Backed Securities as a percentage of the total outstanding, we use the 
following variable:</div><div class="p1"><br /></div><div class="p1">AgencyDebt&MBS^{Fed}_{t} = ((FADS^{Fed}_{t} + MBS^{Fed}_{t}) / DomesticFinancial_{t}^{Total}) x 100</div><div class="p1"><br /></div><div class="p1">Where, </div><div class="p1">AgencyDebt&MBS^{Fed}_{t} = the Fed's holdings of Agency Debt and Mortgage-Backed Securities as a percentage of the total outstanding at time, t</div><div class="p1"><br /></div><div class="p1">FADS^{Fed}_{t} = Fed's holdings of Federal Agency Debt Securities (WFEDSEC) at time, t</div><div class="p1"><br /></div><div class="p1">MBS^{Fed}_{t} = Fed's holdings of Mortgage-Backed Securities (WMBSEC) at time, t</div><div class="p1"><br /></div><div class="p1">DomesticFinancial_{t}^{Total} = Domestic Financial Sectors' holdings of Agency- and GSE-Backed Mortgage Pools (AGSEBMPTCMAHDFS) at time, t</div><div class="p1"><br /></div><div class="p1">This variable, theoretically, should have almost no impact on either long-term or short-term risk premiums. The reason is that Agency Debt and MBS are not highly correlated with either of our dependent variables; in fact, the program wasn't meant to impact these measures. It was, however, meant to influence 30-year mortgage rates, which much research has shown it did in fact help ease. We include this variable because it was a major part of the Fed's credit easing policy, and because future models with measures of housing affordability as their dependent variable could use the variables listed in this paper to capture Fed support of the housing market. </div><div class="p1">The beta coefficient in front of this independent variable is therefore expected to have no significant relation to either long-term or short-term risk premiums as defined in this paper:</div><div class="p1"><br /></div><div class="p1">H_{0}: ß = 0 vs. 
H_{a}: ß ≠ 0</div><div class="p1"><br /></div><div class="p1">We fully expect not to reject the null hypothesis for both of our models.</div><div class="p1"><br /></div><div class="p1"><u>Data Issues</u></div><div class="p1"><br /></div><div class="p1">The data for the above variables comes from the following financial time-series from FRED:</div><div class="p1"><br /></div><div class="p1"><span style="font-size: x-small;">(a) Total Credit Market Assets Held by Domestic Financial Sectors - Agency- and GSE-Backed Mortgage Pools (AGSEBMPTCMAHDFS), Quarterly, End of Period, Not Seasonally Adjusted, 1949-10-01 to 2011-04-01</span></div><div class="p1"><span style="font-size: x-small;"><br /></span></div><div class="p1"><span style="font-size: x-small;">(b) Reserve Bank Credit - Securities Held Outright - Federal Agency Debt Securities (WFEDSEC), Weekly, Ending Wednesday, Not Seasonally Adjusted, 2002-12-18 to 2011-10-26</span></div><div class="p1"><span style="font-size: x-small;"><br /></span></div><div class="p1"><span style="font-size: x-small;">(c) Reserve Bank Credit - Securities Held Outright - Mortgage-Backed Securities (WMBSEC), Weekly, Ending Wednesday, Not Seasonally Adjusted, 2009-01-14 to 2011-10-26</span></div><div><br /></div><div>Please keep dancing and wait for our next post, which finishes defining our independent variables,</div><div><br /></div><div>Steven J. </div>Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-55066428222614167172011-12-28T11:25:00.000-08:002011-12-28T11:25:02.128-08:00Monetary Policy & Credit Easing pt. 2: Defining Our Variables<b><span style="font-size: large;">IN</span></b> order to get a more complete picture of how monetary policy influences credit conditions we will estimate its effects on both long-term and short-term risk premia. Our first dependent variable is the short-term risk premium and our second is the long-term risk premium. 
We will be testing the effects of monetary policy on both risk premia over two separate time periods. The first is from 4/1/71 to 7/1/97 and the second is from 4/1/01 to 4/1/11. We use two different time periods to more strongly capture the influence of the different monetary policy tools that were prevalent in each respective time period. For example, the first time period was characterized by the Federal Reserve's indirect manipulation of the federal funds rate to influence other short-term rates like the prime bank loan rate and rates on short-term commercial paper. In direct contrast, the second time period's estimation recognizes the Fed's manipulation of both the size and composition of its balance sheet as well as its use of the federal funds rate to influence short-term market rates.<br /><br /><b><span style="font-size: large;">First Dependent Variable: Short-term Risk Premium & Commercial Paper</span></b><br /><br />Commercial Paper is an unsecured promissory note with a fixed maturity of 1 to 270 days. We will be focusing on 90-day Commercial Paper. Commercial Paper is a money-market security issued by large banks and corporations to raise money to meet short-term debt obligations, and is only backed by an issuing bank or corporation's promise to pay the face amount on the maturity date specified on the note. Since it is not backed by collateral, only firms with excellent credit ratings from a recognized rating agency will be able to sell their Commercial Paper at a reasonable price. Additionally, Commercial Paper rates increase with maturity, so these notes also carry a duration risk associated with the price they fetch in the marketplace. Since this type of security is typically considered fairly risk-free and has virtually zero rollover risk, its deviation from the three-month Treasury bill rate seems like an appropriate measure of the short-term risk premium. 
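The spread just described can be sketched directly in R. This is a minimal illustration only: it assumes the FRED series IDs given in this post (TB3MS, CPF3M, CPN3M) and uses the quantmod package for retrieval; the object names are my own.

```r
# A minimal sketch of the short-term risk premium, assuming the FRED
# series IDs named in this post and the quantmod package for retrieval
library(quantmod)
getSymbols(c("TB3MS", "CPF3M", "CPN3M"), src = "FRED")

# Post-1997 commercial paper rate: average of the AA financial and
# AA nonfinancial series
cp3m <- (CPF3M + CPN3M) / 2

# Short-term risk premium: commercial paper rate minus the 3-month T-bill
# (xts arithmetic aligns the two series on their common dates)
sr.premium <- cp3m - TB3MS
plot(sr.premium, main = "Short-term Risk Premium")
```

The same pattern works for any pair of FRED rate series with a common frequency.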
The 3-Month T-bill is used as our risk-free asset because it is considered to have zero default risk and is highly liquid. Moreover, T-bills are used for short-term financing purposes, which makes their use very similar to that of Commercial Paper. The short-term risk premium is thus operationalized as follows:<br /><br />SR^{premium}_{t}= CP3M_{t} – TB3MS_{t}<br /><br />Where,<br /><br />SR^{premium}_{t} = Short-term Risk Premium at time, t<br /><br />CP3M_{t}= 3-Month Commercial Paper Rate at time, t<br /><br />TB3MS_{t}=3-Month Treasury Bill: Secondary Market Rate at time, t<br /><br /><u>Data Issues</u><br /><br />For the 3-Month Treasury Bill series we use the following from FRED:<br /><br /><span style="font-size: x-small;">(a) 3-Month Treasury Bill: Secondary Market Rate (TB3MS), Monthly, 1934-01-01 to 2011-09-01</span><br /><br />The 3-Month Commercial Paper series is unfortunately not so easy to deal with. For one, the series stops in 1997 and breaks off into two separate time series:<br /><br /><span style="font-size: x-small;">(b) 3-Month Commercial Paper Rate (DISCONTINUED SERIES) (CP3M), Monthly, 1971-04-01 to 1997-08-01</span><br /><br />The two separate series include the financial commercial paper rate and the non-financial commercial paper rate:<br /><span style="font-size: x-small;"><br /></span><br /><span style="font-size: x-small;">(c) 3-Month AA Financial Commercial Paper Rate (CPF3M), Monthly, 1997-01-01 to 2011-09-01</span><br /><span style="font-size: x-small;"><br /></span><br /><span style="font-size: x-small;">(d) 3-Month AA Nonfinancial Commercial Paper Rate (CPN3M), Monthly, 1997-01-01 to 2011-09-01</span><br /><br />To reconcile these issues, we take the average of the two and use it for the estimation of the Fed's policies over the second time period.<br /><br /><b><span style="font-size: large;">Second Dependent Variable: Long-term Risk Premium For Corporate Debt</span></b><br /><br />For our long-term risk premium we choose to employ 
the 10-year Treasury Note rate as our risk-free rate because it carries the full promise of repayment by the United States Government. Moody's Baa-rated securities aren't so lucky and therefore carry a risk premium associated with them. The risk premium for longer-term securities, however, includes several components that are more acute under stress than those in our counterpart short-term risk premium. These include heightened duration risk, liquidity risk and default risk. We would expect our estimation of monetary policy effects on this variable to be more accurate, as it theoretically should fluctuate more in response to actions taken by the Federal Reserve. The long-term risk premium is defined as follows:<br /><br />LR^{premium}_{t}= BAA_{t} – GS10_{t}<br /><br />where,<br /><br />LR^{premium}_{t} =Long-term Risk Premium at time, t<br /><br />BAA_{t} =Moody's Seasoned Baa Corporate Bond Yield at time, t<br /><br />GS10_{t} =10-Year Treasury Constant Maturity Rate at time, t<br /><br /><u>Data Issues</u><br /><br />All of the data here comes from FRED and their series details are listed as follows:<br /><br /><span style="font-size: x-small;">(a) Moody's Seasoned Baa Corporate Bond Yield (BAA), Monthly, 1919-01-01 to 2011-09-01</span><br /><span style="font-size: x-small;"><br /></span><br /><span style="font-size: x-small;">(b) 10-Year Treasury Constant Maturity Rate (GS10), Monthly, 1953-04-01 to 2011-09-01</span><br /><br /><b><span style="font-size: large;">Independent Variables: The Federal Reserve's Monetary Policy Toolbox</span></b><br /><br />Our independent variables seek to capture the many tools the Federal Reserve can and has employed throughout its history. 
This includes capturing the effects of traditionally unorthodox tools such as the manipulation of both the size (known as quantitative easing) and the composition (known as credit easing) of the Fed's balance sheet as well as capturing the effect from our more well-known tools like changing the federal funds rate. We will also seek to determine the effects of interest paid on reserves.<br /><br /><br /><b><span style="font-size: large;">Accounting For The Size Of The Fed's Balance Sheet & Quantitative Easing</span></b><br /><br />Our first and, in the author's opinion, most important independent variable seeks to capture the Fed's balance sheet effects on risk premiums. It will be defined as the Fed's holdings of credit market assets as a percentage of the total amount of assets held. The more the Fed supports credit markets, the larger this percentage will be. It captures the balance sheet's size as a percentage of the total market balance sheet. It is available over both our sample time periods and is therefore especially convenient for our analysis. One special component of the balance sheet has been the holding of Treasury Securities. Before November of 2008, the Federal Reserve maintained a relatively small portfolio of between $700 billion and $800 billion in Treasury securities, an amount largely determined by the volume of dollar currency that was in circulation. In late November 2008 the Federal Reserve announced that it would purchase up to $600 billion of agency debt and agency mortgage-backed securities (MBS). In March 2009, it enlarged the program to include cumulative purchases of up to $1.75 trillion of agency debt, agency MBS, and longer-term Treasury securities. 
As mentioned previously, the use of the balance sheet for financial easing was initiated because the Federal Reserve's main policy instrument, the federal funds rate, had effectively reached the zero lower bound in late 2008.<br /><br />Operationally we define this variable as:<br /><br />FedBalance^{size}_{t}={CreditAssets^{Fed}_{t} / CreditAssets^{total}_{t}} x 100<br /><br />where,<br /><br />FedBalance^{size}_{t}= the percentage of the total credit market assets the Fed owns at time, t<br /><br />CreditAssets^{Fed}_{t}= Total Credit Market Assets Held by Domestic Financial Sectors - Monetary Authority (MATCMAHDFS) at time, t<br /><br />CreditAssets^{total}_{t}= Total Credit Market Assets Held (TCMAH) at time, t<br /><br />This variable is the percentage of the total credit market assets that the Fed holds. Its coefficient is expected to be negative, so that as it increases, market interest rate risk premiums decrease. It accounts for the effects of the size of the Fed's balance sheet. We expect this variable to have a negative effect on both short-term and long-term risk premia and therefore:<br /><br />H_{0}: ß ≥ 0 vs. 
H_{a}: ß < 0<br /><br /><u>Data Issues</u><br /><br />The data for this variable are available for extraction from FRED and are detailed as follows:<br /><br /><span style="font-size: x-small;">(a) Total Credit Market Assets Held (TCMAH), Quarterly, End of Period, Not Seasonally Adjusted, 1949-10-01 to 2011-04-01</span><br /><span style="font-size: x-small;"><br /></span><br /><span style="font-size: x-small;">(b) Total Credit Market Assets Held by Domestic Financial Sectors - Monetary Authority (MATCMAHDFS), Quarterly, End of Period, Not Seasonally Adjusted, 1949-10-01 to 2011-04-01</span><br /><span style="font-size: x-small;"><br /></span><br /><span style="font-size: x-small;"><br /></span><br /><div><br /></div>Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com3tag:blogger.com,1999:blog-5427848954322593357.post-35941773158223681282011-12-27T08:19:00.000-08:002011-12-27T08:19:07.291-08:00Monetary Policy & Credit Easing pt. 1: Background & Theoretical Considerations<div><b><span style="font-size: large;">An Introduction & Literary Review</span></b></div><div><br /></div><div><b><span style="font-size: large;">M</span></b>onetary Policy in the United States has traditionally been set to meet two objectives as defined in the Federal Reserve Act: price stability and maximum employment. In order to meet these goals the Federal Reserve manipulates the federal funds rate (FF) through a process called Open Market Operations (OMOs). Unfortunately, when a recession is brought about by financial crisis this tool loses its potency and the economy enters into a "Liquidity Trap". In a liquidity trap the FF is effectively at zero, and additional support is necessary to blunt the fall in asset prices and reduce measures of heightened financial stress. The Federal Reserve has recently enlisted a range of tools that are meant to provide further accommodation when its primary tool, the FF, hits the lower bound. 
These include manipulation of both the size and composition of its balance sheet, informational easing and paying interest on excess reserves. We seek to formally investigate how these tools impact two important measures of financial stress, the long-term and short-term risk premia. </div><div><br /></div><div>There have been a slew of recent studies which seek to estimate the effects of Large Scale Asset Purchases (LSAP's) on Treasury Rates. Using an event-study methodology that exploits both daily and intra-day data, Krishnamurthy and Vissing-Jorgensen 2011 estimate the effects of both Quantitative Easing 1 and 2. They find a large and significant drop in nominal interest rates on long-term safe assets (Treasuries, Agency bonds, and highly-rated corporate bonds). </div><div><br /></div><div>Sack, Gagnon, Raskin and Remache 2011 estimate the effects of large-scale asset purchases on the 10-year term premium. They use both an event-study methodology and a Dynamic OLS regression with Newey-West standard errors. They present evidence that the purchases led to economically meaningful and long-lasting reductions in longer-term interest rates on a range of securities, including securities that were not included in the purchase programs. Importantly, they find that these reductions in interest rates primarily reflect lower risk premiums, including term premiums, rather than lower expectations of future short-term interest rates. </div><div><br /></div><div>In 1966 Franco Modigliani and Richard Sutch wrote a seminal piece on Monetary Policy titled "Innovations in Interest Rate Policy." In the paper the authors estimate the effects of "Operation Twist", a policy by the Federal Reserve and the Kennedy Administration aimed at affecting the term structure of the yield curve. In summary they find that the targeting of longer maturities has a rather minimal effect on the spread between short-term and long-term government debt securities. 
</div><div><br /></div><div>Bernanke, Reinhart and Sack 2004, estimate the effects of "non-standard policies" when the Federal Funds Rate hits the lower bound. They find that communications policy can be used to effectively lower long-term yields when short-term interest rates are trapped at zero. They also find evidence supporting the view that asset purchases in large volume by a central bank would be able to affect the price or yield of the targeted asset. This research was most likely the basis for the Fed's actions taken over the course of the latest U.S. financial crisis. </div><div><br /></div><div><br /></div><div> <div class="p1"><b><span style="font-size: large;">Theoretical Model, Assumptions & Further Details</span></b></div><div class="p1"><b><span style="font-size: large;"><br /></span></b></div><div class="p1"></div><div class="p1">A risk premium is the amount a debt issuer has to pay in order to borrow above the interest rates on the safest of assets for a given maturity, <i>m</i>. By comparing interest rates on debt with the same maturity, we are able to separate the factors that influence the risk of default from the part of the spread that stems from duration risk. Additionally, by using only nominal debt instruments we remove elements in the spread that stem from inflation compensation. 
</div><div class="p1"><br /></div><div class="p1">Risk premiums are thus defined as follows:</div><div class="p1"><br /></div><div class="p1">r_{<i>m</i>}^{premium} = r^{RR}_{<i>m</i>} - r^{Rf}_{<i>m</i>} </div><div class="p1"><br /></div><div class="p1">Where,</div><div class="p1"><br /></div><div class="p1"> r_{<i>m</i>}^{premium} = Risk Premium for time till maturity, m</div><div class="p1"><br /></div><div class="p1">r^{RR}_{<i>m</i>}= Risky interest rate on nominal debt, for time till maturity, <i>m</i></div><div class="p1"><br /></div><div class="p1"> r^{Rf}_{<i>m</i>} = Risk free interest rate for nominal debt, for time till maturity, <i>m</i></div><div class="p1"><br /></div><div class="p1">In order to see what factors influence r_{<i>m</i>}^{premium} we have to analyze what moves the interest paid on the risk free interest rate, r^{Rf}_{<i>m</i>}, which is usually defined as some sort of United States Government debt, and the interest rate that carries risk, r^{RR}_{<i>m</i>}. </div><div class="p1"><br /></div><div class="p1">Uncertainty and financial stress go hand in hand as well documented in Charles P. Kindleberger's "Manias, Panics and Financial Crisis". Historically during periods of high uncertainty, asset prices fluctuate wildly as more cautious investors cling to the safest assets (known as flight to safety) and the more bold investors bargain shop. Investors sell assets that carry r^{RR}_{<i>m</i>} and purchase those that carry r^{Rf}_{<i>m</i>}. This causes the r_{<i>m</i>}^{premium} to increase dramatically and it becomes relatively more expensive for firms to access the capital markets to meet their funding needs. There is a shortage of credit or credit crunch as debt issuers struggle to find buyers of their debt. </div><div class="p1"><br /></div><div class="p1">In expansionary times the two interest rates that determine the risk premium move towards each other thus decreasing the risk premia. 
Investors feel more confident and become hungry for yield; this leads to movement away from the riskless, lower-yielding assets into riskier assets with a higher yield. This pushes down the yield on the riskier assets and pushes the yield on the riskless assets up, thus making the returns on these assets similar. </div><div class="p1"><br /></div><div class="p1"><b><span style="font-size: large;">Room For Policy</span></b></div><div class="p1"><br /></div><div class="p1">During periods of financial stress the Federal Reserve can reduce the risk premia and thus ease credit conditions by moving either r^{Rf}_{<i>m</i>} or r^{RR}_{<i>m</i>}. The Fed has relied on the "portfolio balance channel" in order to reduce the financial stress felt by credit worthy firms. As the Fed purchases Treasuries, yield hungry and "crowded out" investors may purchase assets with similar credit ratings (like bonds with a AAA rating) in order to capture that increased yield differential, thus lowering the yield on these assets. </div><div class="p1">Brian P. Sack, Executive Vice President of the Federal Reserve Bank of New York, provided a great description of the Portfolio Balance Channel in a 2010 speech given at the CFA Institute Fixed Income Management Conference:</div><blockquote class="tr_bq">Under that view (portfolio balance channel view), our (the Fed) asset holdings keep longer-term interest rates lower than otherwise by reducing the aggregate amount of risk that the private markets have to bear. In particular, by purchasing longer-term securities, the Federal Reserve removes duration risk from the market, which should help to reduce the term premium that investors demand for holding longer-term securities. That effect should in turn boost other asset prices, as those investors displaced by the Fed’s purchases would likely seek to hold alternative types of securities.</blockquote><div class="p1">All other things being equal, the risk premia should decrease because the U.S. 
Treasury market is the most liquid market on earth. So the decrease in Treasury yields should be less than that of the less-liquid and risk-bearing assets. </div><div class="p1"><br /></div><div class="p1">The Fed can also influence the risk premia by purchasing the risk-bearing asset directly. Examples of this include its implementation of the Commercial Paper Funding Facility (CPFF) and the Agency Mortgage-Backed Securities Purchase Program.</div><div class="p1"><br /></div><div class="p1">Credit Easing is another channel the Federal Reserve has looked to exploit. Credit Easing policies involve changing the composition of the Fed's Balance Sheet from risk-less assets to riskier ones, all while keeping its size constant. Operationally it involves selling risk-free assets like 3-month T-bills to finance the purchase of risk-bearing assets like 3-month Commercial Paper. These assets have the same maturity <i>m</i>, and the goal of the operation is accomplished without any need to change the size of the balance sheet. These policies lead to lower risk premiums as they increase the rate r^{Rf}_{<i>m</i>} on the risk-free asset being sold and decrease r^{RR}_{<i>m</i>}, the interest rate on the risky asset being bought in the risk-free asset's place. This leads to additional easing as investors feel more certain that the market value of these assets will be supported by the Fed's holdings. Removing the uncertainty transforms these riskier assets into less risky ones, thus increasing their appeal in periods of tumultuous financial stress.</div><div class="p1"><br /></div><div class="p1">In the next post we will delve into defining our dependent variables, which seek to explicitly capture movements in risk premia, while also looking at a few of our independent variables. 
</div><div class="p1"><br /></div><div class="p1">Please people keep dancing into the new year,</div><div class="p1"><br /></div><div class="p1">Steven J.</div><div style="font-size: x-large; font-weight: bold;"><br /></div><div style="font-size: x-large; font-weight: bold;"><br /></div><div style="font-size: x-large; font-weight: bold;"><br /></div><br /><div class="p1"><br /></div></div>Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-53374610486452886032011-12-26T12:21:00.000-08:002011-12-26T12:21:18.156-08:00Monetary Policy and Credit EasingHere at the dancing economist, we wish to educate our followers on the finer points of economics and this includes econometrics and using R. R as mentioned previously is a free statistical software that enables regular people like us to do high end economics research. Recently, I wrote a paper on how the Federal Reserves actions have impacted both short-term and long-term risk premiums. In the next few blog posts I will be posting sections of the paper along with the R code necessary to perform the statistical analysis involved. One interesting result is that the Feds balance sheet although not previously manipulated was heavily involved in reducing long-term risk premia over the period from 1971 to 1997. The methodology in the paper involved performing a Generalized Least Squares procedure and accounting for residual correlation to achieve the assumptions as stated by the Gauss-Markov Theorem. More will follow,<br /><br /><br />Keep Dancing,<br /><br />Steven J.Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com2tag:blogger.com,1999:blog-5427848954322593357.post-7154610430677269682011-09-04T08:31:00.000-07:002011-09-04T08:38:41.348-07:00Ladies and Gents: GDP has finally gotten its long awaited forecastToday we will be finally creating our long awaited GDP forecast. 
In order to create this forecast we have to combine both the forecast from our deterministic trend model and the forecast from our de-trended GDP model. <br /><br />Our model for the trend is:<br /><br />trendyx= 892.656210 - 30.365580*x + 0.335586*x2<br /><br />where x2=x^2<br />and we extend the vector out to the 278th observation:<br /><br />> x=c(1:278)<br /><br />and our model for the cyclical de-trended series is from an AR(10) process:<br /><br />GDP.fit<-arima(dt,order=c(10,0,0),include.mean=FALSE)<br /><br />So let's say we want to predict GDP 21 periods into the future. Type in the following for the cyclical forecast:<br /><br />> GDP.pred<-predict(GDP.fit,n.ahead=21)<br /><br />Now when we produce our forecast we can't just add trendyx + GDP.pred$pred because the vector lengths won't match. To see this use the length() function:<br /><br /><br />> length(trendyx)<br />[1] 278<br />> length(GDP.pred$pred)<br />[1] 21<br /><br />In order to fix this problem we are going to remove the first 257 observations from trendyx so that we only have 21 left:<br /><br /><br />> true.trend<-trendyx[-c(1:257)]<br />> length(true.trend)<br />[1] 21<br /><br />Now we can plot away without any technical difficulties:<br /><br />> plot(GDP,type="l",xlim=c(40,75),ylim=c(5000,18500),main="GDP Predictions")<br /><br />> lines(GDP.pred$pred+true.trend,col="blue")<br />> lines((GDP.pred$pred+true.trend)-2*GDP.pred$se,col="red")<br />> lines((GDP.pred$pred+true.trend)+2*GDP.pred$se,col="red")<br /><br />This code results in the following plot:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-xHbGuf6GmbQ/TmOYt97aRbI/AAAAAAAAAKE/njdRUrM46ic/s1600/GDPPPRED.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-xHbGuf6GmbQ/TmOYt97aRbI/AAAAAAAAAKE/njdRUrM46ic/s1600/GDPPPRED.jpeg" /></a></div>The blue line represents our point forecast and the red lines 
represent our 95% confidence interval forecast. I feel like the plot could be significantly cooler and therefore at its current appearance receives a 2 out of 10 for style. It's bland, the x-axis doesn't have dates and there's not even any background color. If this plot had a name it would be doodoo. A war must be fought against the army of lame plots. Epic battles will proceed. Plots will be lost. Only one victor will stand.<br /><br /><br />Keep Dancin',<br /><br />Steven J.<br /><br /><br />Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-46987785064071448292011-09-02T08:00:00.000-07:002011-09-02T08:16:34.883-07:00Assessing the Forecasting Ability of Our ModelToday we wish to see how our model would have fared forecasting the past 20 values of GDP. Why? Well, ask yourself this: How can you know where you're going, if you don't know where you've been? Once you understand, please proceed on with the following post.<br /><br />First recall the trend portion that we have already accounted for:<br /><br /><br />> t=(1:258)<br />> t2=t^2<br />> trendy= 892.656210 + -30.365580*t + 0.335586*t2<br /><br />And that the de-trended series is just that: the series minus the trend.<br /><br />dt=GDP-trendy<br /><br /><br />As the following example will demonstrate, if we decide to assess the model with a forecast of the de-trended series alone we may come across some discouraging results:<br /><br /><br />> test.data<-dt[-c(239:258)]<br />> true.data<-dt[-c(1:238)]<br />> forecast.data<-predict(arima(test.data,order=c(10,0,0),include.mean=FALSE),n.ahead=20)$pred<br /><br />Now we want to plot the forecast data vs. the actual values of the forecasted de-trended series to get a sense of whether this is accurate or not.<br /><br />> plot(true.data,forecast.data)<br />> plot(true.data,forecast.data,main="True Data vs. 
Forecast data")<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-SS709Wj8E3I/TmDm62WVtXI/AAAAAAAAAJ8/uPIslSORXYs/s1600/truevFore.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="640" src="http://2.bp.blogspot.com/-SS709Wj8E3I/TmDm62WVtXI/AAAAAAAAAJ8/uPIslSORXYs/s640/truevFore.gif" width="640" /></a></div><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Clearly it appears as though there is little to no accuracy with the the forecast of our de-trended model alone. In fact a linear regression of the forecast data on the true data makes this perfectly clear.<br /><br />> reg.model<-lm(true.data~forecast.data)<br />> summary(reg.model)<br /><br />Call:<br />lm(formula = true.data ~ forecast.data)<br /><br />Residuals:<br /> Min 1Q Median 3Q Max<br />-684.0 -449.0 -220.8 549.4 716.8<br /><br />Coefficients:<br /> Estimate Std. Error t value Pr(>|t|)<br />(Intercept) -2244.344 2058.828 -1.090 0.290<br />forecast.data 2.955 2.568 1.151 0.265<br /><br />Residual standard error: 540.6 on 18 degrees of freedom<br />Multiple R-squared: 0.06851,<span class="Apple-tab-span" style="white-space: pre;"> </span>Adjusted R-squared: 0.01676<br />F-statistic: 1.324 on 1 and 18 DF, p-value: 0.265<br /><br /><br />> anova(reg.model)<br />Analysis of Variance Table<br /><br />Response: true.data<br /> Df Sum Sq Mean Sq F value Pr(>F)<br />forecast.data 1 386920 386920 1.3238 0.265<br />Residuals 18 5260913 292273 <br /><br /><br />Now, is a good time to not be discouraged, but rather encouraged to add trend to our forecast. 
When we run a linear regression of trend on GDP we quickly realize that 99.7% of the variance in GDP can be accounted for by the trend.<br /><br /><br />> reg.model2<-lm(GDP~trendy)<br />> summary(reg.model2)<br /><br />Call:<br />lm(formula = GDP ~ trendy)<br /><br />Residuals:<br /> Min 1Q Median 3Q Max<br />-625.43 -165.76 -36.73 163.04 796.33<br /><br />Coefficients:<br /> Estimate Std. Error t value Pr(>|t|) <br />(Intercept) 0.001371 21.870246 0.0 1 <br />trendy 1.000002 0.003445 290.3 <2e-16 ***<br />---<br />Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br /><br />Residual standard error: 250.6 on 256 degrees of freedom<br />Multiple R-squared: 0.997,<span class="Apple-tab-span" style="white-space: pre;"> </span>Adjusted R-squared: 0.997<br />F-statistic: 8.428e+04 on 1 and 256 DF, p-value: < 2.2e-16<br /><br /><div><br /></div><div>In the end we would have had to account for trend anyway, so it just makes sense to use it when testing our model's accuracy. </div><div><br /></div><div><div>> test.data1<-dt[-c(239:258)] </div><div><br /></div><div># Important note is that the "-c(239:258)" includes everything except those particular 20 observations #</div><div><br /></div><div>> true.data1<-dt[-c(1:238)]</div><div>> true.data2<-trendy[-c(1:238)]</div><div>> forecast.data1<-predict(arima(test.data1,order=c(10,0,0),include.mean=FALSE),n.ahead=20)$pred</div></div><div>> forecast.data2<-(true.data2)</div><div><br /></div><div><div>> forecast.data3<-(forecast.data1+forecast.data2)</div><div>> true.data3<-(true.data1+true.data2)</div></div><div><br /></div><div>Don't forget to plot your data:</div><div><br /></div><div><div>> plot(true.data3,forecast.data3,main="True Values vs. 
Predicted Values")</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-APVI6TMRKOo/TmDrrVSr_UI/AAAAAAAAAKA/xr9H73_Spuo/s1600/tvpred.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="640" src="http://4.bp.blogspot.com/-APVI6TMRKOo/TmDrrVSr_UI/AAAAAAAAAKA/xr9H73_Spuo/s640/tvpred.gif" width="640" /></a></div><div><br /></div><div><br /></div><div>...and regress the forecasted data on the actual data:</div><div><br /></div><div>> reg.model3<-lm(true.data3~forecast.data3)</div><div>> summary(reg.model3)</div></div><div><br /></div><div><div>Call:</div><div>lm(formula = true.data3 ~ forecast.data3)</div><div><br /></div><div>Residuals:</div><div> Min 1Q Median 3Q Max </div><div>-443.5 -184.2 16.0 228.3 334.8 </div><div><br /></div><div>Coefficients:</div><div> Estimate Std. Error t-value Pr(>|t|) </div><div>(Intercept) 8.104e+03 1.141e+03 7.102 1.28e-06 ***</div><div>forecast.data3 4.098e-01 7.657e-02 5.352 4.37e-05 ***</div><div>---</div><div>Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 </div><div><br /></div><div>Residual standard error: 264.8 on 18 degrees of freedom</div><div>Multiple R-squared: 0.6141,<span class="Apple-tab-span" style="white-space: pre;"> </span>Adjusted R-squared: 0.5926 </div><div>F-statistic: 28.64 on 1 and 18 DF, p-value: 4.366e-05 </div></div><div><br /></div><div>Looking at the plot and the regression results, I feel like this model is pretty accurate considering the fact this is a point forecast and not an interval forecast. Next time on the Dancing Economist we will plot the forecasts into the future with 95% confidence intervals. 
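The regression check above can also be complemented with a few simple error statistics. A quick sketch, assuming the true.data3 and forecast.data3 objects built earlier (the summary-measure names are my own):

```r
# Simple accuracy measures for the combined trend + AR(10) forecast,
# computed from the true.data3 and forecast.data3 objects defined above
errors <- true.data3 - forecast.data3
rmse <- sqrt(mean(errors^2))                   # root mean squared error
mae  <- mean(abs(errors))                      # mean absolute error
mape <- mean(abs(errors / true.data3)) * 100   # mean absolute percentage error
c(RMSE = rmse, MAE = mae, MAPE = mape)
```

These give a scale-aware read on accuracy that a single R-squared can hide.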
Until then-</div><div><br /></div><div>Keep Dancin'</div><div><br /></div><div>Steven J</div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div>Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-3938831471823350612011-09-01T08:47:00.000-07:002011-09-01T08:50:20.357-07:00Forecasting In R: A New Hope with AR(10)In our last post we determined that the ARIMA(2,2,2) model was just plain not going to work for us. Although I didn't show it, its residuals failed to pass the ACF and PACF tests for white noise, and the mean of its residuals was greater than three when it should have been much closer to zero. <br /><div>Today we discover that an AR(10) of the de-trended GDP series may be the best option at hand. Normally when we do model selection we start with the model that has the lowest AIC and then proceed to test the error terms (or residuals) for white noise. Let's take a look at the model specs for the AR(10):</div><div><br /></div><div><br /></div><div><i>> model7<-arima(dt,order=c(10,0,0))</i></div><div><br /></div><div><div><i>> model7</i></div><div><i><br /></i></div><div><i>Call:</i></div><div><i>arima(x = dt, order = c(10, 0, 0))</i></div><div><i><br /></i></div><div><i>Coefficients:</i></div><div><i> ar1 ar2 ar3 ar4 ar5 ar6 ar7 ar8 ar9 ar10</i></div><div><i> 1.5220 -0.4049 -0.2636 0.2360 -0.2132 0.1227 -0.0439 -0.0958 0.3244 -0.2255</i></div><div><i>s.e. 0.0604 0.1105 0.1131 0.1143 0.1154 0.1147 0.1139 0.1127 0.1111 0.0627</i></div><div><i> intercept</i></div><div><i> -21.6308</i></div><div><i>s.e. 
57.5709</i></div><div><i><br /></i></div><div><i>sigma^2 estimated as 1452: log likelihood = -1307.76, aic = 2639.52</i></div></div><div><br /></div><div>The Ljung-Box Q test checks out to the 20th lag: </div><div><br /></div><div><div><i>> Box.test(model7$res,lag=20,type="Ljung-Box")</i></div><div><i><br /></i></div><div><i><span class="Apple-tab-span" style="white-space: pre;"> </span>Box-Ljung test</i></div><div><i><br /></i></div><div><i>data: model7$res </i></div><div><i>X-squared = 15.0909, df = 20, p-value = 0.7712</i></div></div><div><div><br /></div></div><div>It even checks out to the 30th lag! I changed the way I plotted the Ljung-Box Q values after finding a little function called "LBQPlot" (from the FitAR package), which makes life way easier.</div><div><br /></div><div><i>> LBQPlot(model7$res,lag.max=30,StartLag=1)</i></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-EniwjqX7d1I/Tl-dNBLs9iI/AAAAAAAAAJw/pJfpMcVfmhs/s1600/LBQ.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="308" src="http://4.bp.blogspot.com/-EniwjqX7d1I/Tl-dNBLs9iI/AAAAAAAAAJw/pJfpMcVfmhs/s640/LBQ.gif" width="640" /></a></div><div>Most importantly, both the ACF and the PACF of the residuals check out for the white noise process. 
In the ARIMA(2,2,2) model these weren't even close to what we wanted them to be.</div><div><br /></div><div><div><i>> par(mfrow=c(2,1))</i></div><div><i>> acf(model7$res)</i></div><div><i>> pacf(model7$res)</i></div></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-3LjQu9Wm_s8/Tl-eE-ZCqVI/AAAAAAAAAJ0/cW14scJS0OY/s1600/acfresar10.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="640" src="http://2.bp.blogspot.com/-3LjQu9Wm_s8/Tl-eE-ZCqVI/AAAAAAAAAJ0/cW14scJS0OY/s640/acfresar10.gif" width="640" /></a></div><div><br /></div><div><br /></div><div>Unfortunately, our residuals continue to fail the formal tests for normality. I don't really know what to do about this or even what the proper explanation is, but these tests are very sensitive: with a couple hundred observations, even small departures from normality will produce tiny p-values. </div><div><br /></div><div><div>> jarque.bera.test(model7$res)</div><div><br /></div><div><span class="Apple-tab-span" style="white-space: pre;"> </span>Jarque Bera Test</div><div><br /></div><div>data: model7$res </div><div>X-squared = 7507.325, df = 2, p-value < 2.2e-16</div></div><div><br /></div><div><div>> shapiro.test(model7$res)</div><div><br /></div><div><span class="Apple-tab-span" style="white-space: pre;"> </span>Shapiro-Wilk normality test</div><div><br /></div><div>data: model7$res </div><div>W = 0.7873, p-value < 2.2e-16</div><div><br /></div><div>The mean is also considerably closer to 0, but just not quite there.</div><div><br /></div><div>> mean(model7$res)</div><div>[1] 0.7901055</div></div><div><br /></div><div><div>Take a look at the plot for normality:</div></div><div><br /></div><div><div>> qqnorm(model7$res)</div><div>> qqline(model7$res)</div></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-LZd-exYrluU/Tl-eVAGIOeI/AAAAAAAAAJ4/nxt88GgfWEI/s1600/QQAR10.gif" imageanchor="1" style="margin-left: 
1em; margin-right: 1em;"><img border="0" height="640" src="http://2.bp.blogspot.com/-LZd-exYrluU/Tl-eVAGIOeI/AAAAAAAAAJ4/nxt88GgfWEI/s640/QQAR10.gif" width="640" /></a></div><div>In the next post we are going to test how good our model actually is. Today we found our optimal choice in terms of model specs, but we should also see how well it can forecast past values of GDP. In addition to evaluating our model's past accuracy we will also practice forecasting into the future. As you continue to read these posts, you should be getting significantly better with R- I know I am! We have covered many new commands that come from many different libraries. I would like to keep on doing analysis in R, so after we finish forecasting GDP I think I may move on to some econometrics. Please keep forecasting and most certainly keep dancin',</div><div><br /></div><div>Steven J.</div><div><br /></div>Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-9629927111416282242011-08-31T09:08:00.000-07:002011-08-31T10:09:42.063-07:00Story of the Ljung-Box Blues: Progress Not PerfectionIn the last post we determined that our ARIMA(2,2,2) model failed to pass the Ljung-Box test. In today's post we seek to completely discredit the last post's claim and finally arrive at some needed closure. <br /><div><br /></div><div>The Ljung-Box test is first performed on the series at hand; rejecting its null hypothesis means that at least one of the autocorrelations is nonzero. What does that mean? Well, it means that we can forecast, because the values in the series can be used to predict each other. It helps us numerically come to the conclusion that the series itself is not a white noise process and so its movements are not completely random. 
</div><div><br /></div><div>When we perform the Ljung-Box in R on GDP we get the following results:</div><div><br /></div><div><div><i>> Box.test(GDP,lag=20,type="Ljung-Box")</i></div><div><i><br /></i></div><div><i><span class="Apple-tab-span" style="white-space: pre;"> </span>Box-Ljung test</i></div><div><i><br /></i></div><div><i>data: GDP </i></div><div><i>X-squared = 4086.741, df = 20, p-value < 2.2e-16</i></div></div><div><br /></div><div>This output tells us to reject the null hypothesis that all of the autocorrelations out to lag 20 are zero; at least one of them is nonzero. This gives us the green light to use AR, MA or ARMA in our approach towards modeling and forecasting.</div><div><br /></div><div>The second time the Ljung-Box shows up is when we want to test whether the error terms, or residuals, are white noise. A good forecasting model must have zero correlation between its residuals, or else you could forecast them. It naturally follows that if you can forecast the error terms then a better model must exist. 
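Both uses of the test can be sketched side by side on simulated data (an illustration with made-up series, not the GDP data from the post):

```r
# Use 1: Ljung-Box on the series itself -- is there anything to forecast?
# Use 2: Ljung-Box on a fitted model's residuals -- did the model capture it?
set.seed(42)
y <- as.numeric(arima.sim(list(ar = c(0.6, 0.2)), n = 300))   # autocorrelated AR(2) series

p_series <- Box.test(y, lag = 20, type = "Ljung-Box")$p.value
# p_series is essentially zero: the series is not white noise, so it is forecastable.

fit <- arima(y, order = c(2, 0, 0))
# fitdf subtracts the number of fitted AR/MA parameters from the degrees of freedom
p_resid <- Box.test(residuals(fit), lag = 20, type = "Ljung-Box", fitdf = 2)$p.value
# A large p_resid means the residuals look like white noise:
# nothing left to forecast, so the model is adequate.
```

Note the `fitdf` argument of base R's `Box.test`: when the test is applied to residuals, the degrees of freedom should be reduced by the number of estimated parameters.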
</div><div><br /></div><div>Here is the Ljung-Box Q test out to the 26th Lag:</div><div><br /></div><div><div><i>> LjungBoxTest(res,k=2,StartLag=1)</i></div><div><br /></div><div> <span class="Apple-style-span" style="font-family: Verdana, sans-serif;">m Qm p-value:</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 1 0.05 0.82118640</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 2 0.05 0.81838128</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 3 0.72 0.39541957</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 4 0.75 0.68684256</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 5 2.00 0.57224678</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 6 2.41 0.66164894</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 7 3.24 0.66255593</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 8 9.05 0.17070965</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 9 15.14 0.03429650</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 10 15.54 0.04946816</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 11 15.64 0.07487629</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 12 22.14 0.01442010</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 13 22.51 0.02073827</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 14 22.72 0.03020402</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 15 23.24 0.03889525</span></div><div><span class="Apple-style-span" style="font-family: Verdana, 
sans-serif;"> 16 23.24 0.05648292</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 17 23.29 0.07809501</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 18 26.81 0.04367819</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 19 30.20 0.02494375</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 20 30.20 0.03554725</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 21 31.56 0.03500150</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 22 32.46 0.03868275</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 23 32.47 0.05241222</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 24 34.14 0.04748629</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 25 35.47 0.04672181</span></div><div><span class="Apple-style-span" style="font-family: Verdana, sans-serif;"> 26 36.28 0.05151986</span></div><div><br /></div></div><div>As you can see with your very special eyes we fail to reject the null hypothesis out to the 8th lag. So we have no evidence of residual autocorrelation and hence we have no evidence to contradict the assumption that the errors are white noise. 
Our model checks out people!</div><div><br /></div><div>Now if you want to plot the Ljung-Box just type in the following:</div><div><br /></div><div><i>> x<-LjungBoxTest(res,k=2,StartLag=1)</i></div><div><i>> plot(x[,3],main="Ljung-Box Q Test",ylab="P-values",xlab="Lag")</i></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-vIZIBVwZxRA/Tl5btNzz_UI/AAAAAAAAAJo/s3ry506vO6I/s1600/LjungBoxQ.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="640" src="http://3.bp.blogspot.com/-vIZIBVwZxRA/Tl5btNzz_UI/AAAAAAAAAJo/s3ry506vO6I/s640/LjungBoxQ.gif" width="640" /></a></div><div>The white noise process should also have a normal distribution with a mean of 0. To do a rough test of normality we can run a simple Q-Q plot in R. The values are normal if they rest on a line and aren't all over the place.<br /><br />The following command gives us this plot:<br /><br /><i>qqnorm(res)</i><br /><i>qqline(res)</i><br /><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-DKMJqQYnjWk/Tl5lxKD4YII/AAAAAAAAAJs/UBHC5uYX01M/s1600/QQPlot.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="640" src="http://4.bp.blogspot.com/-DKMJqQYnjWk/Tl5lxKD4YII/AAAAAAAAAJs/UBHC5uYX01M/s640/QQPlot.gif" width="640" /></a></div><div><br /></div><div><br /></div><div>The Q-Q plot seems to suggest normality- however there are some formal tests we can run in R to verify this assumption. Two formal tests are the Jarque-Bera Test and the Shapiro-Wilk normality test. 
Both have a null hypothesis that the series follows a normal distribution, and therefore a rejection of the null suggests that the series does not follow a normal distribution.</div><div><br /></div><div><div><i>> jarque.bera.test(res)</i></div><div><i><br /></i></div><div><i><span class="Apple-tab-span" style="white-space: pre;"> </span>Jarque Bera Test</i></div><div><i><br /></i></div><div><i>data: res </i></div><div><i>X-squared = 9660.355, df = 2, p-value < 2.2e-16</i></div><div><i><br /></i></div><div><i>> shapiro.test(res)</i></div><div><i><br /></i></div><div><i><span class="Apple-tab-span" style="white-space: pre;"> </span>Shapiro-Wilk normality test</i></div><div><i><br /></i></div><div><i>data: res </i></div><div><i>W = 0.7513, p-value < 2.2e-16</i></div><div><br /></div></div><div>Wow! Both of these tests strongly reject the possibility of the white noise process having a normal distribution. </div><div>We can still see if the mean of the residuals is zero by simply typing the following into R:</div><div><br /></div><div><div><i>> mean(model$res)</i></div><div><i>[1] 3.754682</i></div></div><div><br /></div><div>The mean is clearly not zero, which implies we have some sort of a problem. In fact, it means that the Ljung-Box was not the proper test, because it requires that:<br /><br />A. The time series be stationary<br />B. The white noise process have a normal distribution with mean zero.<br /><br />Given that we just determined that the mean is definitely not zero, and that both of our formal tests rejected the possibility of our white noise process following a normal distribution, we do indeed face a serious problem. This is an evolving and growing period for us forecasting-in-R novices. I don't have all the answers (clearly), but strides are made in the right direction every day. The greatest thing about making mistakes and tripping in the forest is getting back up and getting the hell out of there. 
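As a sanity check on how these diagnostics behave, here is what they report for genuine Gaussian white noise (simulated, illustrative only; this sketch sticks to base R, since jarque.bera.test lives in the tseries package):

```r
# Genuine Gaussian white noise should (usually) pass the diagnostics
# that our residuals failed: mean near zero and an unrejected normality test.
set.seed(123)
e <- rnorm(258)                        # same length as the GDP series

mean(e)                                # near 0 (the sd of the mean is about 1/sqrt(258))
shapiro.test(e)$p.value                # typically well above 0.05
Box.test(e, lag = 20, type = "Ljung-Box")$p.value   # typically well above 0.05
```

If a model's residuals looked like this, all three checks for the white noise assumptions would be satisfied.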
</div></div><div><br /></div><div>Please keep posted and keep dancin',</div><div><br /></div><div>Steven J. </div><div><br /></div>Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com3tag:blogger.com,1999:blog-5427848954322593357.post-13379373786010577092011-08-27T11:24:00.000-07:002011-08-31T08:23:43.742-07:00Forecasting In R: The Greatest Shortcut That Failed The Ljung-BoxOkay so you want to forecast in R, but don't want to manually find the best model and go through the drudgery of plotting and so on. I have recently found the perfect function for you. It's called auto.arima (from the forecast package) and it automatically fits the best ARIMA model to your time series. In a word- it is "brilliant". Let's take a look at its brilliance with the following examples. <br /><br />So in our last post the last thing we plotted was de-trended GDP, which we were hoping to forecast. R makes this not only really easy, but also hilariously fun.<br /><br />Just type in the following code, with dt being your de-trended series.<br /><i><br /></i><br /><br /><i>fitdt<-auto.arima(dt)</i><br /><i><br /></i><br /><i>plot(forecast(fitdt,h=25))</i><br /><br />Here is the marvelous plot:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-wNj6u4y7DtA/TlkvaCqaVSI/AAAAAAAAAJg/fZ1uMVhOxtI/s1600/preddt.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="640" src="http://1.bp.blogspot.com/-wNj6u4y7DtA/TlkvaCqaVSI/AAAAAAAAAJg/fZ1uMVhOxtI/s640/preddt.gif" width="640" /></a></div><br /><br /><br />Here are the summary statistics:<br /><br /><br /><i>Series: dt </i><br /><i>ARIMA(2,0,2)(0,0,1)[4] with zero mean </i><br /><i><br /></i><br /><i>Coefficients:</i><br /><i> ar1 ar2 ma1 ma2 sma1</i><br /><i> 1.4753 -0.5169 0.0519 0.1909 0.1539</i><br /><i>s.e. 
0.1201 0.1166 0.1236 0.0909 0.0726</i><br /><i><br /></i><br /><i>sigma^2 estimated as 1528: log likelihood = -1314.11</i><br /><i>AIC = 2640.22 AICc = 2640.55 BIC = 2661.53</i><br /><i><br /></i><br /><i>In-sample error measures:</i><br /><i> ME RMSE MAE MPE MAPE MASE </i><br /><i> -0.02875014 39.08953811 22.33132382 -30.62351536 75.51060679 0.83683534</i><br /><br />So it fitted the de-trended series to an ARMA(2,2) (plus a small seasonal MA(1) term). Let's take a look at what it fits regular ol' GDP to.<br /><br /><br /><i>> fitx<-auto.arima(GDP)</i><br /><i>> plot(forecast(fitx,h=25))</i><br /><i>> summary(fitx)</i><br /><i><br /></i><br /><i>Series: GDP </i><br /><i>ARIMA(2,2,2) </i><br /><i><br /></i><br /><i>Coefficients:</i><br /><i> ar1 ar2 ma1 ma2</i><br /><i> -0.078 0.4673 -0.3520 -0.5813</i><br /><i>s.e. 0.158 0.0928 0.1611 0.1525</i><br /><i><br /></i><br /><i>sigma^2 estimated as 1653: log likelihood = -1312.34</i><br /><i>AIC = 2634.69 AICc = 2634.93 BIC = 2652.42</i><br /><i><br /></i><br /><i>In-sample error measures:</i><br /><i> ME RMSE MAE MPE MAPE MASE </i><br /><i> 3.7546824 40.4951109 22.4820107 0.1552383 0.7111309 0.3613070 </i><br /><br /><br />So it also fit an ARMA(2,2), but we have that extra 2 there in the middle. 
That 2 is the order of differencing: it was not needed for the trendless de-trended series, and it shows up to account for the quadratic trend of the regular ol' GDP series.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-gHSZ-mDnLIk/TlkxZToKGRI/AAAAAAAAAJk/6_RVO05eRu0/s1600/GDPRED.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="http://2.bp.blogspot.com/-gHSZ-mDnLIk/TlkxZToKGRI/AAAAAAAAAJk/6_RVO05eRu0/s640/GDPRED.gif" width="640" /></a></div><br />Located above is the plot of the h=25 step-ahead forecast, and below is some code for the actual values in list form.<br /><br /><br />> fit7=forecast(fitx,h=20)<br />> list(fit7)<br /><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;"><br /></span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;"> Point Forecast Lo 80 Hi 80 Lo 95 Hi 95</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">65 Q3 15124.35 15072.25 15176.45 15044.67 15204.03</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">65 Q4 15245.80 15148.82 15342.78 15097.48 15394.11</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">66 Q1 15359.95 15215.32 15504.58 15138.76 15581.15</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">66 Q2 15475.10 15285.46 15664.74 15185.07 15765.12</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">66 Q3 15586.75 15352.89 15820.62 15229.09 15944.42</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">66 Q4 15699.15 15423.24 15975.05 15277.19 16121.11</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">67 Q1 15809.85 15493.02 16126.69 15325.30 16294.41</span><br /><span class="Apple-style-span" 
style="font-family: 'Trebuchet MS', sans-serif;">67 Q2 15921.03 15564.75 16277.32 15376.14 16465.92</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">67 Q3 16031.39 15636.52 16426.26 15427.49 16635.29</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">67 Q4 16142.03 15709.51 16574.56 15480.54 16803.52</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">68 Q1 16252.27 15782.66 16721.87 15534.07 16970.47</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">68 Q2 16362.67 15856.53 16868.81 15588.59 17136.74</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">68 Q3 16472.86 15930.53 17015.20 15643.44 17302.29</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">68 Q4 16583.15 16004.92 17161.39 15698.82 17467.48</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">69 Q1 16693.34 16079.38 17307.30 15754.37 17632.31</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">69 Q2 16803.58 16154.01 17453.15 15810.15 17797.01</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">69 Q3 16913.77 16228.64 17598.89 15865.96 17961.57</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">69 Q4 17023.98 16303.31 17744.65 15921.81 18126.15</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">70 Q1 17134.17 16377.91 17890.43 15977.57 18290.77</span><br /><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">70 Q2 17244.37 16452.46 18036.29 16033.25 18455.50</span><br /><br /><span class="Apple-style-span" style="font-family: inherit;"><br /></span><br /><div class="separator" style="clear: both; text-align: center;"><a 
href="http://2.bp.blogspot.com/-gHSZ-mDnLIk/TlkxZToKGRI/AAAAAAAAAJk/6_RVO05eRu0/s1600/GDPRED.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><span class="Apple-style-span" style="-webkit-text-decorations-in-effect: none; color: black;"></span></a>Let's see if this model passes the Ljung-Box test for white noise.</div><br /><br /><i>> Box.test(fitx$resid,type="Ljung-Box")</i><br /><i><br /></i><br /><i><span class="Apple-tab-span" style="white-space: pre;"> </span>Box-Ljung test</i><br /><i><br /></i><br /><i>data: fitx$resid </i><br /><i>X-squared = 0.0511, df = 1, p-value = 0.8212</i><br /><br /><br />No! It doesn't pass the test! AHHHHHH!!! WHAT ARE WE GOING TO DO!??!?! Find out in the next post or look it up, but whatever you do-<br /><br />Keep Dancin'<br /><br />Steven J.<br /><br />Update: This model does actually pass the "Ljung-Box" test. Please read the next post for details.<br /><br /><br />Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com1tag:blogger.com,1999:blog-5427848954322593357.post-89943150932324895622011-08-25T13:59:00.000-07:002011-08-25T13:59:38.575-07:00Forecasting in R: Modeling GDP and dealing with trend.Okay so we want to forecast GDP. How do we even begin such a burdensome ordeal?<br /><br />Well, each time series has four components that we wish to deal with: seasonality, trend, cyclicality and error. If we deal with seasonally adjusted data we don't have to worry about seasonality, which leaves us with only three worries. 
If we think hard we can also forget about forecasting the error component of the time series, because if we could forecast the error we could probably come up with an even better trend or cyclical model. So that leaves just two things: the trend and cyclical components.<br /><br />The following graph is of real GDP; the data for this comes from FRED.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-xpjcdK1ZNXw/Tla21NQSAtI/AAAAAAAAAJc/MgEplZvCDHY/s1600/RGDP.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="392" src="http://4.bp.blogspot.com/-xpjcdK1ZNXw/Tla21NQSAtI/AAAAAAAAAJc/MgEplZvCDHY/s640/RGDP.gif" width="640" /></a></div><br /><br />Notice the absolute rise in the series. Doesn't that look strangely similar to a quadratic equation of the form:<br /><br />y=B<span class="Apple-style-span" style="font-size: xx-small;">0</span>+ B<span class="Apple-style-span" style="font-size: xx-small;">1</span>*t +B<span class="Apple-style-span" style="font-size: xx-small;">2</span>*t^2 ?<br /><br />I think it does! Let's find out what the coefficients are in R and see if this is actually an appropriate model.<br /><br />First get your data into R as we discussed in the previous posts:<br /><br /><i>> GDP=scan("/Users/stevensabol/Desktop/R/gdp.csv")</i><br /><br />Then turn it into a time series so R can read it:<br /><br /><i>> GDP=ts(GDP,start=1,frequency=4)</i><br /><br />Then you have to make sure that the number of observations in your time series matches up with the "t's".<br /><br />To do this, simply type:<br /><br /><br /><i>> length(GDP)</i><br />[1] 258<br /><div><br /></div><br />Then set "t" to that number:<br /><br /><i>> t=(1:258)</i><br /><br />Then you have to regress GDP on time and time^2. 
To do this, type the following:<br /><br /><i>t2=t^2</i><br /><br />and run the regression:<br /><br /><i>reg=lm(GDP~t+t2) </i><br /><br />To find out the dirty details, use the summary function:<br /><br /><i>summary(reg)</i><br /><br /><br />Call:<br />lm(formula = GDP ~ t + t2)<br /><br />Residuals:<br /> Min 1Q Median 3Q Max<br />-625.43 -165.76 -36.73 163.04 796.33<br /><br />Coefficients:<br /> Estimate Std. Error t value Pr(>|t|) <br />(Intercept) 892.656210 47.259202 18.89 <2e-16 ***<br />t -30.365580 0.842581 -36.04 <2e-16 ***<br />t2 0.335586 0.003151 106.51 <2e-16 ***<br />---<br />Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br /><br />Residual standard error: 251.1 on 255 degrees of freedom<br />Multiple R-squared: 0.997,<span class="Apple-tab-span" style="white-space: pre;"> </span>Adjusted R-squared: 0.9969<br />F-statistic: 4.197e+04 on 2 and 255 DF, p-value: < 2.2e-16<br /><br />Okay, so it appears we have a pretty great fit. With our R^2 close to one, I think we have a winner!<br /><br />Let's plot the trend and GDP values against each other so we get the picture of what we just accomplished. <br /><br />First you have to write out the equation for trend using the coefficient estimates that R gave us:<br /><br /><i>> trendy= 892.656210 - 30.365580*t + 0.335586*t2</i><br /><br />And for the plot:<br /><br /><i>> ts.plot(GDP,trendy,col=1:2,ylab="Trend vs. Actual GDP")</i><br /><br />Here is what it should look like:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-YTE9aNnf9T8/Tla1v0G8lMI/AAAAAAAAAJY/HGMEcqV7TvY/s1600/trendvactual.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="640" src="http://1.bp.blogspot.com/-YTE9aNnf9T8/Tla1v0G8lMI/AAAAAAAAAJY/HGMEcqV7TvY/s640/trendvactual.gif" width="640" /></a></div><br /><br />Hey! 
Congratulations: you have officially tackled and defeated the trend component of this time series. Now we have to deal with the cyclicality of it all...<br /><br />In order to do this we only look at what's left when we remove the trend. We do this by simply subtracting the trend from our original GDP series to get what's left: cyclicality.<br /><br />De-trended (or Cyclical component) = GDP - TREND<br /><br />In R we want to write the following and plot it!<br /><br /><br /><i>> dt=GDP-trendy</i><br /><i>> plot(dt)</i><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-8HxigXrGvV0/Tla1WHCw9KI/AAAAAAAAAJU/rx_TG2FbMs0/s1600/DetrendedGDP.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="640" src="http://3.bp.blogspot.com/-8HxigXrGvV0/Tla1WHCw9KI/AAAAAAAAAJU/rx_TG2FbMs0/s640/DetrendedGDP.gif" width="640" /></a></div><br /><br />Now we want to figure out if this de-trended series is an AR, MA or ARMA process. We do this by looking at the autocorrelation and partial autocorrelation functions. 
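As a rule of thumb (sketched here on simulated series, not on dt): an AR(p) process has a PACF that cuts off after lag p while its ACF tails off gradually, and an MA(q) process shows the mirror image.

```r
# Identification heuristic on simulated data:
# AR(2): ACF tails off gradually, PACF cuts off after lag 2.
# MA(2): ACF cuts off after lag 2, PACF tails off gradually.
set.seed(99)
ar2 <- arima.sim(list(ar = c(0.5, 0.3)), n = 500)
ma2 <- arima.sim(list(ma = c(0.6, 0.4)), n = 500)

par(mfrow = c(2, 2))
acf(ar2, main = "AR(2): ACF tails off")
pacf(ar2, main = "AR(2): PACF cuts off at 2")
acf(ma2, main = "MA(2): ACF cuts off at 2")
pacf(ma2, main = "MA(2): PACF tails off")
```

When both functions tail off without a clean cutoff, a mixed ARMA model is usually the candidate.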
Let's take a look, shall we?<br /><br /><br />> par(mfrow=c(2,1))<br />> acf(dt,100) # notice that I lag it out to the 100th observation<br />> pacf(dt,100)<br /><br />This gives us the following graph:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-d26zINNUZds/Tla0zL7MQnI/AAAAAAAAAJQ/fUq2WlB7QKg/s1600/acfvp.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="http://1.bp.blogspot.com/-d26zINNUZds/Tla0zL7MQnI/AAAAAAAAAJQ/fUq2WlB7QKg/s640/acfvp.gif" width="640" /></a></div>Next time we will learn how to select a proper model and hopefully get some actual forecasting in there as well!<br /><br />Keep dancin'<br /><br />Steven J.<br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Steven Sabolhttp://www.blogger.com/profile/12972882438151975112noreply@blogger.com4