## Abstract

All economists say that they want to take their models to the data. But with incomplete and highly imperfect data, doing so is difficult and requires carefully matching the assumptions of the model with the statistical properties of the data. The cointegrated VAR (CVAR) offers a way of doing so. In this paper we outline a method for translating the assumptions underlying a DSGE model into a set of testable assumptions on a cointegrated VAR model and illustrate the ideas with the RBC model in Ireland (2004). Accounting for unit roots (near unit roots) in the model is shown to provide a powerful robustification of the statistical and economic inference about persistent and less persistent movements in the data. We propose that all basic assumptions underlying the theory model should be formulated as a set of testable hypotheses on the long-run structure of a CVAR model, a so-called ‘theory-consistent hypothetical scenario’. The advantage of such a scenario is that it forces us to formulate all testable implications of the basic hypotheses underlying a theory model. We demonstrate that most assumptions underlying the DSGE model and, hence, the RBC model are rejected when properly tested. Leaving the RBC model aside, we then report a structured CVAR analysis that summarizes the main features of the data in terms of long-run relations and common stochastic trends. We argue that structuring the data in this way offers a number of ‘sophisticated’ stylized facts that a theory model should replicate in order to claim empirical relevance.

## Comments and Questions

For general readers, the abbreviation DSGE (dynamic stochastic general equilibrium) is not meaningful in the title and abstract; it should be spelled out at first use.

I agree with the above post and ask these high-level scholars to make their abbreviations, techniques, and methodologies easier to understand. A bit more care is necessary, and the authors should keep in mind that many applied economists are potential users of these techniques. Nevertheless, I very much appreciate that the authors made the RATS code, data, and sample output available.

There are some strong methodological disagreements among experts on how to model time series. Take, for example, Harvey's 1997 paper in the Economic Journal: he was very critical of mainstream autoregressive models with deterministic trends. An easy-to-understand paper on these controversies is “Deterministic and Stochastic Trends in the Time Series Models: A Guide for the Applied Economist”, which may be downloaded from the RePEc site.
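The distinction behind Harvey's criticism can be made concrete with a small simulation. The following sketch is not from the paper or the guide mentioned above; it is an illustrative numpy example with invented parameter values. It generates a trend-stationary series and a random walk with the same drift, then shows that after removing a fitted linear trend, the autoregressive coefficient stays near 1 only for the series with a stochastic trend.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
e = rng.standard_normal(T)

# Trend-stationary series: y_t = 0.01*t + u_t with u_t = 0.5*u_{t-1} + e_t
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + e[t]
trend_stat = 0.01 * np.arange(T) + u

# Stochastic trend: random walk with the same drift, y_t = y_{t-1} + 0.01 + e_t
rw = np.cumsum(0.01 + e)

def ar1_coef(y):
    """OLS slope of y_t on y_{t-1} after removing a fitted linear trend."""
    t = np.arange(len(y))
    y = y - np.polyval(np.polyfit(t, y, 1), t)  # detrend
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

print(ar1_coef(trend_stat))  # well below 1: the series mean-reverts around the trend
print(ar1_coef(rw))          # close to 1: the unit root survives detrending
```

Both series drift upward and can look similar to the eye; the point of the controversy is that detrending the random walk does not make it stationary, which is why the choice between deterministic and stochastic trend specifications matters for inference.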

The paper mentioned in the above post should be available at RePEc in a few days. It is already posted at the Munich Personal RePEc Archive (MPRA).

In principle, the idea of the paper is to avoid making unnecessary assumptions; it advocates subjecting the assumptions of economic models to statistical tests.

Testing in principle, however, is impossible. The data never speak for themselves: one always tests within the framework of a specific model. One might think that using a more general model is a reasonable way to test the validity of a specific one. And so it is, but only from a statistical/econometric point of view, not an economic one. What the authors test is a specific econometric specification of an economic model. Economic models are rarely, if ever, specified in such a form; they are usually expressed in very general terms, using variables that are not directly observable (or not even well defined, such as supply and demand). Regrettably, a statistical/econometric rejection of such a model therefore cannot be conclusive. Take one instance: there is some discussion of the choice of variables, which should be better suited to the purposes of statistical inference. The choice is dictated by the statistical model (to make it estimable and testable); the central premise is that it has to be this way. What if the model cannot be estimated? Does that make it invalid?

The other issue is that the theoretical model is non-linear and must be linearised for inference purposes. What are the consequences of the linearisation for the inference? Even the more general VAR model is still an approximation, and where is the guarantee that it will be a better one? If one goes in that direction, then the proposed approach evidently does not go far enough. It does, after all, make the restrictive assumption of integer orders of integration (what about the possibility of fractionally integrated variables?). Furthermore, it only considers deterministic cointegration (as opposed to stochastic cointegration). Thus one has to put some limits on the generality of the model. Economic theory does this to some extent: it may be an abstraction and a simplification, but a necessary one. Note that the proposed approach does actually use the proposed economic model and tries to generalise it, but on purely mechanistic (statistical) grounds.
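For readers unfamiliar with what cointegration between integer-order-integrated variables means in practice, here is a minimal numpy sketch. It uses the simpler two-step residual-based (Engle-Granger style) idea rather than the Johansen CVAR procedure the paper applies, and all series and parameter values are invented for illustration: two I(1) variables share a common stochastic trend, so a linear combination of them is stationary even though each variable individually has a unit root.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000

# Shared stochastic trend: the common I(1) driver of both series
trend = np.cumsum(rng.standard_normal(T))

# Two I(1) series cointegrated with vector (1, -1): y - x is stationary
x = trend + rng.standard_normal(T)
y = trend + rng.standard_normal(T)

# Step 1: estimate the cointegrating relation by OLS of y on x (with constant)
X = np.column_stack([np.ones(T), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Step 2: the residual from the relation should be stationary, unlike the
# levels of y or x themselves; compare their AR(1) coefficients
def ar1_coef(z):
    z = z - z.mean()
    return np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1])

print(beta[1])          # slope near 1, the true cointegrating coefficient
print(ar1_coef(resid))  # far from 1: the residual mean-reverts
print(ar1_coef(y))      # near 1: the levels have a unit root
```

The fractional-integration objection above amounts to noting that this setup only ever asks whether a series is I(0) or I(1); if the true order of integration were, say, 0.7, both the CVAR framework and this sketch would misclassify it.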

One would have to concede, though, that there are some interesting empirical implications of the analysis, in particular the finding that shocks to consumption and labour do contribute to explaining the business cycle. But this is a reasonable assumption that could have been embedded in an economically justified model and tested within it, rather than via an atheoretical approach. In other words, the economic model could have been generalised on economic grounds and then tested against the more specific one.

For me the current approach is indefensible, because one does not typically know whether all the relevant variables are present; it therefore does, in contrast to what it claims, depend on the assumptions of the economic model used. Furthermore, the implications of potential non-linearity are far-reaching, and linear approximations may deal with it poorly in general.