This is a classic case of confirmation bias. They start from a position they want to prove, and find the data accommodating.
You even demonstrate the bias in your description of the work, with statements like “entirely credible”. How do we know? Now that the specific methodology has been shown to be flawed, do we really know that larger debt equals poorer growth? Is there any statistical correlation at all? Is there a correlation only within certain limits? Perhaps debt to GDP is a factor, but it could be dwarfed by other more significant factors or local norms.
In Canada’s case (for example), we could carry a high debt ratio but do well if commodity (oil, iron, aluminium, potash, wheat, lumber) prices are high. Whereas we could have poor growth if we have no debt but commodity prices are very low.
The bottom line here is that while I agree in a general way that servicing debt has a cost, each circumstance has to be weighed in its context. In some cases debt may be significant, even crippling. In others it may be a useful tool for attaining the factors that create positive growth, which may in turn wipe out the debt. This is something the Clinton administration was keenly aware of. Look at economic performance and debt ratios in those terms: Clinton used growth to slay debt, not the other way round.
Jens says:
“I think that Reinhart and Rogoff acted in good faith” is an interesting statement, and quite possibly true.
But how did the (certainly correct) analysis that “large debts may lead to tight fiscal policies that may reduce economic growth” lead to the (I think clearly absurd) conclusion that artificially pushing the number below the magic threshold (by enacting exactly those harmful policies) would solve the problem?
@Jens
There is nothing in their paper to support the thesis that austerity will boost a floundering economy.
This is maybe what Reinhart and Rogoff believe and advocate but, as far as I can tell, this is not what they wrote in their research.
Paul says:
It’s a great case study in scientific journalism. From what I’ve read, it doesn’t sound like the Excel error significantly changes the results. What does make a large difference is R&R’s decision to exclude data for some countries in the years following WWII, and an unusual weighting scheme that doesn’t take time spent in a debt category into account (so the 19 years of high UK debt and moderate growth count the same as the single year New Zealand experienced high debt and low growth).
So the issues that really skewed the results are questions of data appropriateness/cherry picking, and how to account for serial correlations between years. If the formula had been correct, the actual issues with the data would have been the same, but the public would never have cared. This would have been debated in esoteric journals. Instead, because an unrelated but relatable and unambiguous error was also there, the narrative is that the paper is fundamentally incorrect.
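The effect of that weighting choice is easy to see numerically. Here is a minimal Python sketch of the two schemes; the growth figures (2.5% for the UK, -7.6% for New Zealand) are hypothetical illustrations chosen to match the "many moderate years vs one bad year" shape of the example, not R&R's actual data:

```python
# Illustrative comparison of equal-country weighting vs year weighting
# for high-debt (debt/GDP > 90%) episodes. All numbers are hypothetical.

episodes = {
    # country: annual real GDP growth rates (%) during its high-debt episode
    "UK": [2.5] * 19,        # 19 years of moderate growth
    "New Zealand": [-7.6],   # a single year of sharply negative growth
}

# Equal-country weighting (the scheme described above): average within
# each country first, then average the country means. Each country counts
# once, no matter how long its episode lasted.
country_means = [sum(v) / len(v) for v in episodes.values()]
country_weighted = sum(country_means) / len(country_means)

# Year weighting: pool all country-years, so episode length matters.
all_years = [g for v in episodes.values() for g in v]
year_weighted = sum(all_years) / len(all_years)

# (2.5 - 7.6) / 2 = -2.55, while (19 * 2.5 - 7.6) / 20 = 1.995:
# one bad year drags the country-weighted mean below zero even though
# 19 of the 20 observed years showed solid growth.
print(f"equal-country weighting: {country_weighted:+.2f}%")
print(f"year weighting:          {year_weighted:+.2f}%")
```

With these made-up numbers, the two schemes don't just differ in magnitude, they disagree on the sign of average growth under high debt, which is exactly why the weighting decision mattered far more than the spreadsheet slip.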
I guess the lesson is, if you’re going to screw up, screw up in a boring academic manner =)
I’m really wondering why everyone is so upset with the original authors: so they made a mistake, and their reasoning was flawed by those spreadsheet errors, fine. It remains that the ones to blame are those who used the results as a basis for their policy and claimed their decisions were sound because some scientists said so.
It’s called argument from authority, and it’s the first trick taught in Rhetoric 101. Or whatever it was called by the ancient Greeks.
Djamé