Today on my Twitter account, I posted some tweets while I was watching the webcast of the Heritage Foundation’s event, “Measuring Innovation and Change During Turbulent Economic Times.” It was a good event, and I’ll be penning another blog entry from it later. Lots of stuff to look into, and that’s a good thing.
Although Assistant Secretary Alan Krueger didn’t attend, the speech on stress testing that he was going to deliver was posted online. I thought I’d post something on it, because it’s both eye-opening and refreshing to see what he has to say about the availability of macroeconomic data. I posted the whole speech on Scribd here:
I’m not going to expound on the whole speech, but needless to say there are some very interesting comments in here, and you don’t have to look far. The first page is far enough. Speaking of his current job, he said:
I have been constantly surprised at how little quantitative information can be brought to bear on fundamental policy questions, or, alternatively, how difficult it can be to find valid data on important and well-defined economic variables. In part, this reflects a lack of timeliness of certain key statistics; it also reflects the fact that existing data are not useable or sufficiently detailed, or that relevant data simply do not exist.
*GULP!* So at best, he feels like they have relevant data, but it’s not timely (i.e., it’s received on a lag). At worst, the data folks want simply doesn’t exist. There’s always a lot of ranting and raving about economic statistics every month, as if the numbers are cooked or the statistic is faulty. This is at least an admission by some people in government that they don’t feel like they have the data they need, and to me that’s a lot more plausible than believing the folks who put these things together are trying to lie to you. Don’t get me wrong, I have a lot of doubts about our government, but c’mon. This is the gang that can’t shoot straight.
He goes on to talk about what GDP tells us, what it can’t tell us, and why we need other statistics to augment our understanding:
For example, pollution and other negative externalities are not subtracted from GDP, and leisure time is not valued in GDP. Hence, when thinking about the longer-term health of the economy we also turn to statistics like the poverty rate, or the state of consumers’ finances, or the state of the environment, or how people spend and experience their time.
This is a way of saying that any statistic is incomplete. GDP doesn’t say anything in and of itself about market liquidity and interest rates. Models are only approximations of what’s going on in the world and those approximations are rather limited. How much of the world can you capture in an equation with 6 variables, after all?
But this should also point out that, in general, any of the statistics that get reported have lots of cross-currents that can affect the headline number, and nothing says these numbers can’t get revised. In fact, his speech goes on to cite the following:
Fifth, even with large samples, some of our timely data can be unreliable in a statistical sense. For example, the absolute annual benchmark revisions to nonfarm payrolls have averaged 0.2 percent over the past decade, with a range from 0.1 percent to 0.6 percent – and this year the preliminary benchmark adjustment was 0.6 percent, suggesting much deeper job loss last year than originally reported. As another example, the annual (absolute) revision to quarterly real GDP growth has averaged 0.4 percentage points in the last few years.
What this tells me is that the estimates being reported right now are probably going to get revised quite a bit. That 3.5% GDP number we saw at the end of last month may very well get revised downward, especially on the back of September’s higher-than-expected trade deficit.
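To make the revision math concrete — with made-up numbers, since I don’t have the actual BEA vintages handy — here’s the kind of calculation behind an “average absolute revision” figure like the 0.4 percentage points he cites:

```python
# Hypothetical quarterly real GDP growth estimates (annualized %).
# These numbers are illustrative only, NOT actual BEA data.
advance = [3.5, 2.8, 1.9, 2.2]   # first ("advance") estimates
revised = [3.1, 3.0, 1.4, 2.5]   # later revised figures

# "Absolute revision" is the magnitude of the change, ignoring direction,
# so upward and downward revisions don't cancel each other out.
abs_revisions = [abs(r - a) for a, r in zip(advance, revised)]
avg_abs_revision = sum(abs_revisions) / len(abs_revisions)

print(round(avg_abs_revision, 2))  # average absolute revision, in percentage points
```

Note that taking absolute values is what makes this a measure of *unreliability* rather than bias: a statistic could average out to zero net revision over time and still swing wildly from its first print to its final one.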
He goes on to talk about areas where they lack data, like “interconnectedness” for assessing systemic risk and counterparty risk in hedge funds, and the hope that maybe – just maybe – those loan mods will start paying off if we just get more data. I’m not going to rehash it here, because it’s mostly political blathering, and it promotes bad practices like setting up this nonsensical list of too-big-to-fail (TBTF) firms without letting anyone know who’s on the list. But the market will figure it out because of required SEC disclosures. And moral hazard continues unabated. But that’s another post for another time.
But the projects between the BEA, BLS, and the Federal Reserve are encouraging. Government agencies sharing data to create a more cohesive framework for analyzing the economy is a huge win, and since I work with this stuff nowadays, I’d rather do one-stop shopping for my data than play the proverbial dog that chases its tail to get everything I need.
Because after a while, even dogs get dizzy chasing their own tails, and for this dog, those dizzy spells don’t go away as fast as they used to.