Discussion Paper
No. 2015-50 | July 13, 2015
Robert Elliott Smith
Idealizations of Uncertainty, and Lessons from Artificial Intelligence
(Published in Radical Uncertainty and Its Implications for Economics)

Abstract

Making decisions under uncertainty is at the core of human decision-making, particularly economic decision-making. In economics, a distinction is often made between quantifiable uncertainty (risk) and unquantifiable uncertainty (Knight, Risk, Uncertainty and Profit, 1921). However, this distinction is often ignored by, in effect, quantifying unquantifiable uncertainty, through the assumption of subjective probabilities in the minds of human decision-makers (Savage, The Foundations of Statistics, 1954). This idea is also reflected in developments in artificial intelligence (AI). However, there are serious reasons to doubt this assumption, which are relevant to both AI and economics. Some of these reasons relate directly to problems that AI has faced historically and that remain unsolved, though little regarded. AI can proceed on a prescriptive agenda, making engineered systems that aid humans in decision-making, despite the fact that these problems may mean the models involved depart seriously from real human decision-making, particularly under uncertainty. However, in descriptive uses of AI and similar ideas (like the modelling of decision-making agents in economics), it is important to have a clear understanding of what has been learned from AI about these issues. This paper looks at AI history in this light, to illustrate what can be expected from models of human decision-making under uncertainty that proceed from these assumptions. Alternative models of uncertainty are discussed, along with their implications for examining in vivo human decision-making under uncertainty in economics.

JEL Classification:

B59

Links

Cite As

[Please cite the corresponding journal article] Robert Elliott Smith (2015). Idealizations of Uncertainty, and Lessons from Artificial Intelligence. Economics Discussion Papers, No. 2015-50, Kiel Institute for the World Economy. http://www.economics-ejournal.org/economics/discussionpapers/2015-50


Comments and Questions



David Hales, University of Szeged, Hungary - Referee Report 1
August 03, 2015 - 09:04
see attached file

Robert Elliott Smith - Revised Version
September 04, 2015 - 13:02
Hello again, and sorry for the delay. Here are some notes on my revisions to the paper. First, I'd like to thank the reviewer for his insightful and helpful comments, which have certainly led me to improve the paper. The reviewer's comments are in quotes, with my response below each one.

"There are some curious gaps in emphasis however. For example in section 3.2, AI models of learning, we jump straight into (sub-symbolic) connectionism as the main paradigm without mention of the more traditional symbolic (and huge) area of machine learning that predates much of the connectionist work. Specifically, classifier systems that induce actual symbolic, human-readable rules. This is a minor issue in the context of the critique but could mislead the reader into thinking that AI learning is basically connectionism or pure statistics, rather than attempts to induce meaningful symbolic representations that also make sense to humans."

A paragraph has been added to the relevant section, mentioning more symbolic approaches to inductive AI and how they fit into the paper's overall thrust, to clarify this issue.

"In a few instances the author makes very general statements claiming the status of obvious fact when these are contentious and debatable. For example it is stated that: 'It is true that at some (neural) level human thinking is mechanical, and explicable by equations of some sort.' Why is it true? Is there any proof of this? This is an assumption rather than a truth. It might be useful to qualify these kinds of statements in some way because they detract from the plausibility of the main argument."

That particular sentence has been modified to address this quite-right observation.

"The main conclusion of the argument is that, due to the problems of existing AI models, one must look at how people actually behave. However, little if any attention is given to practical ways forward in this regard, or indeed to past successes or failures following this approach, which surely exist. In fact, the final parts of the paper are weaker than the earlier parts because it is rather vague as to what is to be done. For example the author states: 'Considering the realities of the in vivo social communication of ideas may lead to understanding of the juxtaposition of representations from those shared ideas, which may provide a basis for understanding innovation of ideas themselves.' It is not clear what this means or how it would inform practical work."

That sentence has been modified and extended to emphasize a practical way of advancing the field along those lines.

All the typos caught by the reviewer have been repaired, and I'd like to thank him once again for his insights.

Anonymous - Referee Report 2
October 09, 2015 - 08:27
see attached file

Robert Elliott Smith - Revised Version
November 09, 2015 - 08:29
I would like to thank the reviewer very much for his insightful comments. I have added material to address his concerns on page 21 (regarding multi-prior models) and on pages 24 and 26 (regarding Evolutionary Economics and the Economics of Innovation); see the attached PDF file.