Discussion Paper

No. 2019-23 | March 08, 2019
Escape from model-land


Both mathematical modelling and simulation methods in general have contributed greatly to understanding, insight and forecasting in many fields, including macroeconomics. Nevertheless, we must remain careful to distinguish model-land and model-land quantities from the real world. Decisions taken in the real world are more robust when informed by our best estimate of real-world quantities than when "optimal" model-land quantities obtained from imperfect simulations are employed. The authors present a short guide to some of the temptations and pitfalls of model-land, some directions towards the exit, and two ways to escape.

JEL Classification:

C52, C53, C6, D8, D81




Cite As

Erica L. Thompson and Leonard A. Smith (2019). Escape from model-land. Economics Discussion Papers, No 2019-23, Kiel Institute for the World Economy. http://www.economics-ejournal.org/economics/discussionpapers/2019-23

Comments and Questions

Keith Beven, Lancaster University, UK - Comment
March 14, 2019 - 11:20

I have read the paper through and think it is one of the best expressions of the issues of using models in the face of epistemic uncertainties I have read. It should therefore be published.

The only thing missing (though not necessary in the context as it would add too much length) is a discussion of how to decide whether models are informative in decision making. The authors rightly refer to out-of-sample testing – but a failure can be a result of the epistemic uncertainties and non-stationarities in the data available as well as the model structure and parameters. So in the same way that statistical theory may not necessarily apply in such cases, new ways of measuring information and added information may need to be considered that allow for the qualitative subjective judgements that will often be involved.

I can add the following references to their list that address some of the same issues (and can supply copies to the authors if required).

Beven, K J., 2016, EGU Leonardo Lecture: Facets of Hydrology - epistemic error, non-stationarity, likelihood, hypothesis testing, and communication. Hydrol. Sci. J. 61(9):1652-1665, DOI: 10.1080/02626667.2015.1031761

Beven, K J, 2018a, On hypothesis testing in hydrology: why falsification of models is still a really good idea, WIRES Water, DOI: 10.1002/wat2.1278.

Beven, K. J., 2019b, Towards a new paradigm for testing models as hypotheses in the inexact sciences, Proceedings Royal Society A, submitted

Beven, K J, Aspinall, W P, Bates, P D, Borgomeo, E, Goda, K, Hall, J W, Page, T, Phillips, J C, Simpson, M, Smith, P J, Wagener, T and Watson, M, 2018, Epistemic uncertainties and natural hazard risk assessment – Part 2: What should constitute good practice?, Natural Hazards and Earth System Science, 18(10): 2769-2783, https://doi.org/10.5194/nhess-18-2769-2018

Beven, K. J. and Lane, S., 2019, Invalidation of models and fitness-for-purpose: a rejectionist approach, Chapter 5 in: Beisbart, C. & Saam, N. J. (eds.), Computer Simulation Validation - Fundamental Concepts, Methodological Frameworks, and Philosophical Perspectives, Cham: Springer, to appear 2019

And for an alternate view:

Nearing, G.S., Tian, Y., Gupta, H.V., Clark, M.P., Harrison, K.W. and Weijs, S.V., 2016a. A philosophical basis for hydrological uncertainty. Hydrological Sciences Journal, 61(9): 1666-1678.

Also – to be submitted shortly.

Keith Beven and Stuart Lane, 2019, On (in)validating environmental models. 1. Principles for formulating a Turing Test for determining when a model is fit-for-purpose. Hydrological Processes, to be submitted

Keith Beven, Stuart Lane, Trevor Page, Ann Kretzschmar, Barry Hankin and Nick Chappell, 2019, On (in)validating environmental models. 2. Implementation of the Turing Test to modelling hydrological processes, Hydrological Processes, to be submitted

Anonymous - What about macroeconomics
March 14, 2019 - 16:41

This paper appears to me to offer a rather general discussion of problems of system estimation quite remote from macroeconomics (and also from biology, psychology, and sociology). It is to my mind much too general. I miss an example of a macroeconomic problem that can be tackled better by the proposed approach (which is not quite clear to me). The problems raised, such as structural instability, seem to be well-known.
The authors seem to be unaware of positions that see macroeconomic models as more general and more stable than disaggregated models, of the fact that not all macroeconomic models involve optimization, and of the observation that economic structure changes all the time, as Alfred Marshall emphasized a long time ago.

Arthur Petersen - The two uses of "expert judgement"
April 03, 2019 - 10:37

This paper is both welcome and timely. It very clearly makes the point, of relevance to all fields that use models to inform decision-making, that one has to argue that one has actually derived decision-relevant information about the real world from one's model.

I have only a minor issue with the way "expert judgement" is used in two ways in the paper, as is also the case in wider practice: my point is to signal the fact that this raises confusion. As in the IPCC, the phrase "expert judgement" is used BOTH to refer to a result obtained from applying an expert elicitation technique instead of being obtained directly from observation or a model (see, e.g., the IPCC uncertainty guidance, the audience being experts) AND to indicate that all statements that experts provide to decision-makers are expert judgements, whether they derive from observations, models, or expert elicitation (e.g., the Summaries for Policymakers, the audience being decision-makers and their advisers).

On page 8, it is stated that "models and simulations are used to the furthest extent that confidence in their utility can be established, either by quantitative out-of-sample performance assessment or by well-founded critical expert judgement." This may lead the uninitiated reader to forget that experts will still need to judge that the quantitative out-of-sample performance assessment is sufficiently reliable.

Anonymous - Referee Report
April 15, 2019 - 09:27

see attached file

Marcus Miller, University of Warwick, UK - Comment
May 10, 2019 - 08:49

see attached file