### Discussion Paper

## Abstract

The distinction between risk and uncertainty drawn by Knight has important implications for policy selection. Assuming the former when the latter is relevant can lead to wrong decisions. With the aid of a stylized model that describes a bank’s decision on how to allocate loans, the authors discuss decision making under Knightian uncertainty. They use the info-gap robust-satisficing approach to derive a trade-off between confidence and performance (analogous to confidence intervals in the Bayesian approach, but without the assignment of probabilities). They show that this trade-off can be interpreted as a cost of robustness and that the robustness analysis can lead to a reversal of policy preference relative to the putative optimum. They then compare this approach to the min-max method, another main non-probabilistic approach available in the literature.

## Comments and Questions

We are grateful, Yakov and Maria, for an innovative and rigorous treatment of the theme. Permit me to make a few comments. Thanks to the Basel Accords, extant and those to come, we can confidently estimate that data on classes of borrowers, their propensities to repay their loans, and matrices of default probabilities will get more refined. Along with the assumption of “significant” correlations across borrower types, I am not sure about the degree of uncertainty in the setup of the model in the paragraph before 3.1. I believe the meaning of and rationale for the skjs introduced in the opening lines of page 8 should be made precise and not left to the interpretation of the reader. Picking up from line 7, I would plump for a confidence parameter mirroring the state of the cycle. It need not be subjective, and it can also be dropped for reasons developed below.

For me, your original contribution is equation (9). Would you clear the air on the following? The pkjs, strictly speaking, are unknown. If the times they are a tranquil and/or banks are duly diligent, the set of ps would be large, even as h increases. On the other hand, a year or two of a ‘great immoderation’, and the pkjs would be way off their estimates for a low h. The skjs would be connected with the drawing down of the pkjs in 2008 or 2009, not their estimates. Do they add value?

Hi Romar Correa,

Thanks for your useful comments on our paper. Here are some responses. Later we will upload a version of the paper that has some clarifications in response to your 3rd comment below.

1. You write that:

"we can confidently estimate that data on classes of borrowers, their propensities to repay their loans, and matrices of default probabilities will get more refined. Along with the assumption of “significant” correlations across borrower types, I am not sure about the degree of uncertainty in the setup of the model in the paragraph before 3.1."

Banks will certainly use the best available data. Our example (which is more simplified than a realistic bank risk assessment) uses estimated probabilities, and these are intended as the best available values. However, data-based estimates are by definition based on the past, reflecting historical situations. The meaning of Knightian uncertainty is that the future may differ substantially from the past. Innovations, social and political change, historical events, etc., can make the past a weak indicator of the future. We discussed Knightian uncertainty at considerable length, and we don't want to repeat ourselves too much.

2. You write:

"I believe the meaning of and rationale for the skjs introduced in the opening lines of page 8 should be made precise and not left to the interpretation of the reader."

The skj's are described in the first paragraph of section 3.2, just before defining the info-gap model of uncertainty in eq.(9).

3. You write:

"For me, your original contribution is equation (9). Would you clear the air on the following? The pkjs, strictly speaking, are unknown. If the times they are a tranquil and/or banks are duly diligent, the set of ps would be large, even as h increases. On the other hand, a year or two of a ‘great immoderation’, and the pkjs would be way off their estimates for a low h. The skjs would be connected with the drawing down of the pkjs in 2008 or 2009, not their estimates. Do they add value?"

This is a very important point, and it calls for clarifying the distinction between the horizon of uncertainty, h, whose value is unknown, and the robustness, h-hat, whose value is calculated.

The first point to make is that the info-gap model of uncertainty in eq.(9) is NOT a single set. It is an unbounded family of nested sets. This is expressed formally in eq.(9) by the statement "h >= 0". The horizon of uncertainty, h, is unbounded: as you rightly say, the pkj's are unknown, and this means that we do not know by how much the estimated probabilities err.
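In generic notation, a fractional-error info-gap model of this kind has the form below. This is a schematic reconstruction using the symbols described in the text (estimates p-tilde, error weights skj, horizon h), not a verbatim copy of the paper's eq.(9):

```latex
U(h) \;=\; \bigl\{\, p \;:\; |p_{kj} - \tilde{p}_{kj}| \le h\, s_{kj} \ \text{for all } k,j \,\bigr\},
\qquad h \ge 0 .
```

Each value of h defines one set; the sets become more inclusive as h grows, so U(h) is contained in U(h') whenever h <= h', and since h is unbounded no worst case is ever singled out.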

The second point is that the info-gap model (IGM) of uncertainty underlies the evaluation of robustness in eq.(11). We can make judgments about robustness, h-hat, much better than we can make judgments about the horizon of uncertainty, h. Furthermore, robustness is calculated---we know its value---while the horizon of uncertainty is unknown.

Thus we would NOT say that in tranquil times the uncertainty sets U(h) are large. The IGM is an unbounded family of nested sets at all times. When we are confident that our understanding is sound, then we would say that we need only small robustness against error in the estimated probabilities (the p-tildes) because these p-tildes are reliable.

Likewise, we would NOT say that in "a year or two of a ‘great immoderation’ ... the pkjs would be way off their estimates for a low h." We would say that in tumultuous and uncertain years we need large robustness (large h-hat) against uncertainty in the estimated probabilities because these p-tildes are changing in unknown ways. This means that, if our contextual understanding is that things are changing in unknown ways, then we need immunity against large horizons of uncertainty.

We can summarize the info-gap robustness idea as follows. We assess the trade-off between robustness to uncertainty and aspirations (required policy outcomes). While robustness to uncertainty is a good thing to have, it is difficult to know how much robustness is sufficient. More robustness is obviously better than less, but the crucial point is: at what cost? It is this cost that is most important to the policy maker, and this is where info-gap theory is helpful. Policy makers have views on what policy outcomes they want and what outcomes they simply cannot tolerate. Quantifying the trade-off between robustness and outcome enables policy makers to make informed decisions.
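The robustness-aspiration trade-off can be sketched numerically. The toy calculation below is ours, not the paper's: a single loan class with interest rate r, estimated default probability p_est, and error weight s, under the interval uncertainty |p - p_est| <= h*s. All numbers are invented for illustration.

```python
def robustness(rc, p_est, s, r):
    """Largest horizon h at which the worst-case repayment (1+r)*(1-p),
    over all p with |p - p_est| <= h*s, still meets the aspiration rc.
    Closed form (valid for aspirations rc > 0):
        h-hat = ((1+r)*(1-p_est) - rc) / ((1+r)*s)."""
    return max(0.0, ((1 + r) * (1 - p_est) - rc) / ((1 + r) * s))

# Two hypothetical loan classes (illustrative numbers only):
safe  = dict(p_est=0.02, s=0.02, r=0.04)  # estimate, error weight, interest
risky = dict(p_est=0.05, s=0.05, r=0.09)

# The putative optimum prefers 'risky' (estimated return 1.0355 vs 1.0192),
# but that estimated return is attained at zero robustness.  As the
# aspiration rc is relaxed, the robustness curves cross and the
# preference reverses:
for rc in (1.00, 1.03):
    print(rc, round(robustness(rc, **safe), 3), round(robustness(rc, **risky), 3))
```

At the demanding aspiration rc = 1.03 only the risky class has positive robustness, while at the more modest aspiration rc = 1.00 the safe class is the more robust choice: exactly the cost of robustness and the preference reversal described above.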

We will revise the discussion to clarify these issues.

The modifications are explained in our response to Romar Correa, item 3.

1. The comparison conducted in the article between (1) the maximin paradigm and (2) info-gap's robust-satisficing approach is seriously flawed and grossly misleading.

The article does not appreciate the fact that the comparison is actually between a "prototype" approach (maximin) and one of its many simple "instances" (info-gap robust-satisficing). By analogy, the difference between the maximin approach and the info-gap robust-satisficing approach is akin to the difference between, say, a "polytope" and a "triangle".

One of the obvious implications of this fact is that the maximin approach is much more general and versatile: it can do all that info-gap's robust-satisficing approach can possibly do, and more, in fact much more. For instance, "info-gap robustness" is inherently "local" in nature whereas the maximin paradigm is capable of dealing with both "local" and "global" robustness.

Note that the peer-reviewed literature provides formal proofs that info-gap robust-satisficing models are indeed maximin models (e.g. see Sniedovich 2012, 2014 and references therein).

2. Regarding the info-gap robust-satisficing approach itself: the article is oblivious to the fact that the concept "info-gap robustness" is a reinvention of the well-established concept "radius of stability" (circa 1960) that has been used for decades in numerous fields to measure the local stability/robustness of systems. Formal proofs of this fact are available in the peer-reviewed literature (e.g. see Sniedovich 2012, 2014 and references therein).

3. Methodologically speaking, info-gap's robustness model is incompatible with the severity of the uncertainty stipulated by info-gap decision theory (IGDT), and therefore, as pointed out by Hayes et al. (2013, p. 609):

"... Plausibility is being evoked within IGDT in an ad hoc manner, and it is incompatible with the theory’s core premise, hence any subsequent claims about the wisdom of a particular analysis have no logical foundation. It is therefore difficult to see how they could survive significant scrutiny in real-world problems. In addition, cluttering the discussion of uncertainty analysis techniques with ad hoc methods should be resisted. ..."

Specifically, IGDT is utterly unsuitable for the treatment of Knightian uncertainty, especially in cases where the uncertainty is "unbounded", which, according to the IGDT literature, is the typical case. Sniedovich (2014) explains in detail why IGDT is a "voodoo" decision theory par excellence (as in voodoo economics, voodoo science, voodoo mathematics, etc.)

More on the flaws in IGDT can be found in the references listed below and at http://info-gap.moshe-online.com

In short, the article continues to propagate the misconceptions about IGDT, the maximin paradigm, and the relationship between them, that originated in the IGDT literature in the early 2000s.

References.

Hayes, K.R., Barry, S.C., Hosack, G.R., and Peters, G.W. (2013). Severe uncertainty and info-gap decision theory. Methods in Ecology and Evolution 4:601-611.

McCarthy, M. (2014) Contending with uncertainty in conservation management decisions. Annals of the New York Academy of Science, 1332:77-91.

Sniedovich, M. (2012) Fooled by local robustness. Risk Analysis, 32(10):1630-1637.

Sniedovich, M. (2014) Response to Burgman and Regan: the elephant in the rhetoric on info-gap decision theory. Ecological Applications, 24(1):229-233.

Reply to comments by Moshe Sniedovich on 21.8.2015.

1. Sniedovich presents several criticisms of info-gap decision theory, none of which is either new or true. More importantly, his comments detract from serious attempts to deal responsibly with decision making under severe uncertainty.

We will briefly respond to Sniedovich's claims, though first we note that responses to his claims have been published repeatedly and can be found in the following articles:

1.1 Mark A. Burgman and Helen M. Regan, 2012, Information-gap decision theory fills a gap in ecological applications, Letter to the Editors, Ecological Applications, 24(1), pp. 227-228.

1.2 Mark A. Burgman, 2008, Shakespeare, Wald and decision making under uncertainty, Decision Point #23, p.10.

A link to this issue of the on-line resource (see p. 10):

http://decision-point.com.au/wp-content/uploads/2014/12/DPoint_23.pdf

1.3. Yakov Ben-Haim, Clifford C. Dacso, Jonathon Carrasco and Nithin Rajan, 2009, Heterogeneous Uncertainties in Cholesterol Management, International Journal of Approximate Reasoning, 50: 1046-1065.

See especially section 7. A link to a pre-print of this article is found here:

http://info-gap.com/content.php?id=14

1.4. Yakov Ben-Haim, 2012, Why risk analysis is difficult, and some thoughts on how to proceed, Risk Analysis, 32(10): 1638-1646.

See especially section 3.4. A link to a pre-print and to the final on-line version of this article is found here:

http://info-gap.com/content.php?id=23

1.5. Barry Schwartz, Yakov Ben-Haim, and Cliff Dacso, 2011, What Makes a Good Decision? Robust Satisficing as a Normative Standard of Rational Behaviour, The Journal for the Theory of Social Behaviour, 41(2): 209-227.

A link to a pre-print of this article is found here:

http://info-gap.com/content.php?id=23

1.6. Many additional sources, by numerous authors, are found here: info-gap.com

We will now briefly summarize the errors in Sniedovich's reasoning.

2. Sniedovich Claims That Info-Gap Is A Special Case Of Min-Max.

This is false, as explained in careful detail in the publications mentioned above (especially item 1.3). The error is in failing to distinguish the starting point for min-max and for info-gap robust-satisficing. Min-max requires specification of a worst case, while info-gap robust-satisficing does not. Conversely, info-gap robust-satisficing requires specification of an outcome requirement, while min-max does not. These two approaches are complementary, but different. Sometimes they lead to the same decision (but for different reasons), and sometimes they lead to different decisions (because the reasoning is different).
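Schematically, in generic notation (R(q,u) denotes the outcome of decision q under realization u; this is a paraphrase of the two procedures, not the paper's exact formulas):

```latex
\text{min-max:}\qquad q^\star = \arg\max_q \; \min_{u \in U} R(q,u)

\text{robust-satisficing:}\qquad
\hat{h}(q, r_c) = \max \bigl\{ h \ge 0 : \min_{u \in U(h)} R(q,u) \ge r_c \bigr\},
\qquad q^\star = \arg\max_q \; \hat{h}(q, r_c)
```

The required inputs differ: min-max needs the set U (a worst case) but no outcome requirement, while robust-satisficing needs the critical outcome r_c but no largest horizon of uncertainty.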

3. Sniedovich Claims That Info-Gap Theory Addresses "Local" Rather Than "Global" Uncertainty.

Once again, this claim is neither new nor true, and it has been addressed repeatedly. We quote here from the article cited in item 1.4 above.

"Info-gap theory, like all theories of robustness, starts with the analyst's models, and asks: how much error in these models can be tolerated? The info-gap robustness question - how wrong can one's models be and yet the decision still yields an acceptable outcome - is pertinent when maximum error is unknown. If the robustness is large, (and this is a judgment that the analyst must make, like other judgments made by risk analysts) then one may have confidence in the decision. If the robustness is not large, and especially if the robustness is small, then confidence is not warranted. If the robustness is small then confidence is warranted only "locally", near the models, while if the robustness is large then confidence is warranted over a wide domain of deviation from the models. Info-gap theory uses the analyst's models, but this does not make it a "local" theory of robustness."

4. Sniedovich Claims That Info-Gap Theory is Unsuited for Knightian Uncertainty.

Knight distinguished between probabilistic risk and non-probabilistic "true uncertainty" (Knight's term). An info-gap model of uncertainty is non-probabilistic: it is an unbounded family of nested sets of possible contingencies. There is no probability (or any other) measure function in an info-gap model of uncertainty. Likewise there is no known worst case in an info-gap model of uncertainty. These two attributes make an info-gap model of uncertainty entirely Knightian in nature. Info-gap does not have a monopoly on quantifying Knightian uncertainty (Knight himself never quantified "true uncertainty"). For example, Wald's single-set worst-case model of uncertainty is also Knightian in nature. But to claim that info-gap is unsuited to represent Knightian uncertainty is far from the truth.

Sniedovich's claims are neither true nor new. More unfortunately, his claims divert attention from earnest efforts to face the challenging task of making responsible decisions under severe uncertainty.

Yakov Ben-Haim and Maria Demertzis

In their response to my comments, Ben-Haim and Demertzis, henceforth the Authors, do not address the serious issues raised in those comments. Instead, they attack the comments as amounting to a repetition of my already known criticism of info-gap decision theory (IGDT). My answer to this "rebuke" (as might be expected) is that the reason my comments have a familiar ring to them is self-evident: the Authors continue to grind, both in the discussion paper and in their response to my comments, the same old claims that have been bandied about, without rigorous proof, in the info-gap literature since the early 2000s.

The Authors also contend that their response to my comments explains the errors in my reasoning. But, the fact of the matter is that these purported explanations bring out more forcefully the Authors' profound misconceptions about the issues concerned.

Of these, the most glaring misconception is that exhibited in the Authors' explanation of the allegation that info-gap's robustness model is not a maximin model. This allegation is based on some "fuzzy" undefined maximin model that lacks some of the essential properties intrinsic to generic maximin models. To appreciate the futility of the Authors' explanations that info-gap's robustness model is not a maximin model, readers are urged to go to:

http://info-gap.moshe-online.com/economics_maximin_proof.html

or to the attachment to this post, to see for themselves how straightforward it is to prove formally and rigorously that info-gap's robustness model is indeed a simple maximin model.

I therefore challenge the Authors to prove formally and rigorously that info-gap's robustness model is not an instance of the maximin model featured in this simple proof.

I discuss this and other issues at

http://info-gap.moshe-online.com/economics.html

Here I need only point out that both conceptually and technically, IGDT does not contribute anything new to the state of the art. That is, contrary to the claims in the IGDT literature that this theory advances our ability to deal with the difficulties besetting decision making under severe uncertainty, IGDT in fact takes us back to the 1960s. The IGDT literature thus misleads its readers on two counts:

(1) In its claims of a contribution to the state of the art, and

(2) In its claims that the severe uncertainty that it postulates can be properly dealt with by means of a local analysis in the neighborhood of a nominal value of the uncertainty parameter.

The reader is reminded that the uncertainty postulated by IGDT is severe in that (a) the uncertainty space can be vast (e.g. unbounded); (b) the point estimate is poor and can be significantly wrong (e.g. it can be no more than a wild guess); and (c) the quantification of the uncertainty is probability-free, likelihood-free, chance-free, plausibility-free, belief-free, etc.

The reader is also reminded that the three basic facts about IGDT that are on the agenda here are these:

Fact 1: Info-gap's robustness model and info-gap's robust-satisficing decision model are simple maximin models. The implication therefore is that the maximin paradigm is vastly more general, hence immensely more versatile than info-gap's robust-satisficing approach.

For formal rigorous proofs see Sniedovich (2012, Theorem 2, Theorem 3, p. 5) and

http://info-gap.moshe-online.com/economics_maximin_proof.html

Fact 2: The concept "info-gap robustness" is a reinvention of the well-established concept "radius of stability" (circa 1960) that has been used for decades in many fields to define/measure the local stability/robustness of systems against perturbations in parameters of the systems.
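For reference, the radius of stability of a system with nominal parameter value u-tilde is commonly defined as follows (stated here in generic notation, with S the set of parameter values under which the system performs acceptably):

```latex
\rho(\tilde{u}) \;=\; \sup \bigl\{\, \rho \ge 0 \;:\; u \in S \ \text{for all } u \ \text{such that} \ \|u - \tilde{u}\| \le \rho \,\bigr\} .
```

That is, the radius of the largest ball around the nominal value within which acceptable performance is guaranteed.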

For a formal rigorous proof see Sniedovich (2012, Theorem 1, p. 5).

Fact 3. Info-gap's robustness analysis is inherently local in nature in that it is conducted over a neighborhood around the point estimate. It is therefore incompatible with the severity of the uncertainty postulated by IGDT. For this reason Hayes et al. (2013, p. 609) argue convincingly that

"... Plausibility is being evoked within IGDT in an ad hoc manner, and it is incompatible with the theory’s core premise, hence any subsequent claims about the wisdom of a particular analysis have no logical foundation. It is therefore difficult to see how they could survive significant scrutiny in real-world problems. In addition, cluttering the discussion of uncertainty analysis techniques with ad hoc methods should be resisted. ..."

In fact, given that IGDT allows its uncertainty to be unbounded, the implication is that in such cases info-gap's robustness analysis effectively ignores all the values of the uncertainty parameter except those that are located in a minuscule (infinitesimally small) neighborhood around the point estimate. Indeed, considering that the IGDT literature contends that the most common applications of IGDT involve an unbounded uncertainty, it follows that IGDT is in fact a voodoo decision theory par excellence (as in voodoo economics, voodoo science, voodoo mathematics, etc).

For a detailed explanation, see the appendix of Sniedovich (2014).

Observations.

O1. It is most revealing that the Authors do not respond to Fact 2.

It is high time therefore that proponents of IGDT faced the fact that the concept "info-gap robustness" is a reinvention of the long-established concept "radius of stability" (circa 1960). Indeed, it is imperative that the discussion paper be modified so as to make it unequivocally clear that "info-gap robustness" is no more and no less than a reinvention of the long-established concept "radius of stability".

O2. As for the Authors' comments on the differences between min-max and info-gap's robust-satisficing approach: the Authors are well aware that the peer-reviewed literature provides formal rigorous proofs demonstrating that info-gap's robustness model and info-gap's robust-satisficing decision model are maximin models. And yet, they neither address these proofs nor refute their validity.

Instead, the Authors put forth a "soft" explanation of alleged differences between min-max and info-gap's robust-satisficing approach. The bottom line is this:

(i) The issue here is not whether differences exist between the maximin paradigm and info-gap's robust-satisficing approach. Obviously, differences between the two exist but these are differences between a prototype (maximin) and one of its numerous simple instances (info-gap's robust-satisficing approach).

(ii) By analogy, the differences between the maximin paradigm and info-gap's robust-satisficing approach are akin to the differences between a generic polynomial (degree n, where n is arbitrary) and a polynomial of degree 2. A generic polynomial and a degree 2 polynomial are obviously different, but a degree 2 polynomial is still a … polynomial.

(iii) Therefore, contrary to the Authors' claim, Info-gap's robust-satisficing approach cannot possibly complement the maximin paradigm. Simply because info-gap's robustness model is a simple maximin model, namely it is a simple instance of generic maximin models.

(iv) Thus, the maximin paradigm can do all that the info-gap's robust-satisficing approach can possibly do, and more, in fact a great deal more. Hence, to reiterate, the Authors' pronouncements on the min-max vs info-gap's robust-satisficing approach, attest to a lack of appreciation of some of the essential capabilities of maximin models which results in a misguided analysis of the relationship between info-gap's robust-satisficing approach and the maximin paradigm.

(v) No amount of rhetoric can alter the fact that info-gap's robustness model is a simple maximin model. As indicated above, for a simple formal proof of this fact go to:

http://info-gap.moshe-online.com/economics_maximin_proof.html

O3. As for the Authors' denial that info-gap's robustness analysis is local: no amount of rhetoric can alter the fact that info-gap's robustness analysis is inherently local in nature, hence incompatible with the severity of the uncertainty postulated by IGDT. Indeed, Yakov Ben-Haim, the Father of IGDT, concedes that IGDT's robustness analysis is local in nature in his Wikipedia Sandbox (see https://en.wikipedia.org/w/index.php?title=User:Ybenhaim/sandbox&oldid=187039657) (version: 15:38, 26 January 2008):

" … Thus it is correct that the info-gap robustness function is local, with respect to u. However, the value judgment of whether this neighborhood of robustness is small, too small, large, large enough, etc., is characteristic of all decisions under uncertainty. A major purpose of quantitative decision analysis is to provide focus for the subjective judgments which must be made. …"

O4. The fact that the uncertainty stipulated by IGDT is Knightian does not in any way shape or form imply that IGDT's robustness model indeed has the capabilities to deal properly with such an uncertainty. To the contrary, it is precisely because this uncertainty is Knightian that IGDT's treatment of the uncertainty comes to grief. This is so because the robustness analysis that IGDT prescribes for such an uncertainty is akin to a prescription to administer a local anesthetic in cases requiring a global anesthetic.

O5. In a futile attempt to justify the use of IGDT, the Authors make unsubstantiated assertions about theories of robustness in general. For instance, they allege that all theories of robustness start with the analyst's models, whereupon the theories ask: how much error in these models can be tolerated?

This assertion is without any foundation, indeed it is manifestly false.

In fact, the foremost theory of robustness, namely maximin decision theory, does not pose this question. Rather, it poses a far more general question, one that can deal with tolerance for error when or if required. But it can also deal with situations that have nothing to do with tolerance for error.

In short, the Authors' attempt to attribute a central IGDT feature to all theories of robustness betrays a lack of appreciation that there are measures of robustness that are different from the local measure prescribed by IGDT.

Conclusions.

Rather than address the serious issues raised in my comments, the Authors engage in the same old groundless rhetoric that has been circulating in the IGDT literature since the early 2000s.

I call attention to the fact that all my claims about the failings besetting IGDT are backed up by formal rigorous proofs. As the Authors do not refute these proofs, I challenge them to prove them wrong by proving formally and rigorously that:

(a) The concept "info-gap robustness" is not a reinvention of the well-established concept "radius of stability" (circa 1960).

(b) Info-gap's robustness model is not a maximin model.

(c) Info-gap's robustness analysis is not local in nature.

(d) Info-gap's robustness analysis is compatible with the severity of the uncertainty that it stipulates.

More on this at:

http://info-gap.moshe-online.com/economics.html

References.

Hayes, K.R., Barry, S.C., Hosack, G.R., and Peters, G.W. (2013). Severe uncertainty and info-gap decision theory. Methods in Ecology and Evolution 4:601-611.

McCarthy, M. (2014) Contending with uncertainty in conservation management decisions. Annals of the New York Academy of Science, 1332:77-91.

Sniedovich, M. (2012) Fooled by local robustness. Risk Analysis, 32(10):1630-1637.

Sniedovich, M. (2014) Response to Burgman and Regan: the elephant in the rhetoric on info-gap decision theory. Ecological Applications, 24(1):229-233.

Summary

The authors compare different evaluation measures for risk and Knightian uncertainty. In particular they argue that higher robustness against uncertainty comes at the cost of minimum return and show this effect in a bank loan example.

General Comments

While I like the topic and the questions tackled in this paper, the answers given by the authors lack relevant depth. The top level approach answer is twofold. Firstly, the authors say that addressing the uncertainty in risk estimations may change decisions. Secondly, they argue that using different decision rules (min-max and satisficing) yields different optimal decisions. Both answers are correct but quite trivial.

They also give a very explicit example to illustrate their point. The example, however, is very detailed and concrete. So I'm not convinced that the example really illustrates the general points. Basically the authors compare two different portfolios, one with higher risk, higher ambiguity (uncertainty in the estimation of the default probability) and higher return (repayments). Naturally there is a trade-off of increasing potential risk (higher h) and higher return. The linear dependence of both risk factors (info-gap model in definition 1) implies that higher risk also implies higher uncertainty, as s2 is higher than s1. Therefore in the example the authors cannot differentiate between the trade off risk - return and satisfaction level - robustness. It is just their interpretation that they focus on the latter.

If they had an example with a naturally given cost of robustness they could highlight their very interesting point on how the assertion of ambiguity may change decisions.

Minor comments:

The authors repeat the basic difference between risk and Knightian uncertainty too often; this becomes tedious. Also, in describing the estimation problems after the recent crisis, the authors should get to the point more quickly.

Decision making under risk typically cares more about the mean than about the mode of the event (p. 3).

The first paragraph on page 6 is very vague.

Finally it would help to also consider partial ambiguity.

Response to review of our paper: "Decision Making in Times of Knightian Uncertainty: An Info-Gap Perspective"

Yakov Ben-Haim and Maria Demertzis

The reviewer raises several objections. Importantly, the referee comments on the lack of relevant depth but also considers the example too detailed and concrete. The objective of our analysis is to define robustness in the context of Knightian uncertainty (therefore non-probabilistically) and to explain the rationale behind info-gap and how it compares and contrasts with min-max and with putative optimization. While the conclusions that these approaches draw sometimes differ and sometimes do not, it is important to understand that the perspectives these methods adopt are different. This may be of importance to the decision (policy) maker. We feel that this argument (which is generic) is best demonstrated through an example. We have, however, attempted to use an example that is free of theory (mostly definitions) to ensure that the arguments made remain general. We respond to the individual comments below.

1. THE REVIEWER WRITES: "While I like the topic and the questions tackled in this paper, the answers given by the authors lack relevant depth. The top level approach answer is twofold. Firstly, the authors say that addressing the uncertainty in risk estimations may change decisions. Secondly, they argue that using different decision rules (min-max and satisficing) yields different optimal decisions. Both answers are correct but quite trivial."

OUR RESPONSES are as follows.

FIRST, our assertion is much stronger and more specific than that "addressing the uncertainty in risk estimations may change decisions." We develop and describe an explicit methodology - info-gap robust-satisficing - for representing Knightian uncertainty. We explain, generically and through specific example, how this methodology can lead to decisions that differ from both putative optimization and from min-max. We explain how the policy maker's preferences and knowledge are incorporated in the robust-satisficing decision process and how they lead to different policy choices. We identify the situations in which the robust-satisficing choice is preferable from a policy-responsible perspective, and we explain that this results from the different angle from which each of the two procedures looks at the problem.

SECOND, the reviewer thinks our claims are "correct but quite trivial". Here we disagree. The important difference between min-max and robust satisficing is that in the former the decision maker asks: "What are the worst circumstances that could arise?" Once this is answered, the decision maker chooses the option that ameliorates this worst contingency and accepts the outcomes that these circumstances entail. In robust satisficing, by contrast, the decision maker asks: "What is the worst outcome that I can live with?" Once this question is answered, the decision maker chooses the option that satisfies this critical requirement over the widest range of possible circumstances. We argue that decision makers (in particular in policy) are in a much better position to define the outcomes that they are willing to put up with than to define the worst circumstances they will be faced with in the future. We think that this is very relevant and actually a more realistic way to think about preparing for the "unknown unknowns".
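The two questions can lead to different choices even in a minimal numerical sketch. The example below is ours, with invented loan-class numbers, not from the paper: each option is a tuple (p_est, s, r) of estimated default probability, error weight, and interest rate, with return (1+r)*(1-p) and interval uncertainty |p - p_est| <= h*s.

```python
def worst_return(p_est, s, r, h):
    # Worst-case return over |p - p_est| <= h*s (default probability capped at 1)
    return (1 + r) * (1 - min(p_est + h * s, 1.0))

def minmax_choice(options, H):
    # Min-max: posit a worst-case horizon H, then pick the option whose
    # worst-case return under that horizon is best.
    return max(options, key=lambda q: worst_return(*options[q], H))

def robust_satisficing_choice(options, rc):
    # Robust-satisficing: fix the worst tolerable return rc, then pick the
    # option that satisfies it over the widest horizon of uncertainty.
    def robustness(p_est, s, r):
        return max(0.0, ((1 + r) * (1 - p_est) - rc) / ((1 + r) * s))
    return max(options, key=lambda q: robustness(*options[q]))

options = {"safe": (0.02, 0.02, 0.04), "risky": (0.05, 0.05, 0.09)}
print(minmax_choice(options, H=1.0))                # "what if things are this bad?"
print(robust_satisficing_choice(options, rc=1.03))  # "what must I achieve?"
```

With these numbers, positing the horizon H = 1 leads the min-max analyst to the safe class, while an analyst who must secure a return of at least 1.03 robust-satisfices on the risky class: same data, different questions, different decisions.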

2. THE REVIEWER WRITES: "The example, however, is very detailed and concrete. So I'm not convinced that the example really illustrates the general points."

OUR RESPONSE is as follows: The example is concrete in that it is an explicit economic example. On the other hand, it is fairly simple and mostly limited to simple definitions that do not make theoretical choices. This enables the reader to appreciate the general principles of the robust-satisficing methodology in a contextual set-up and also to see how these principles could be applied to other problems. Examples rarely demonstrate generic claims; they illustrate and assist in understanding. Our example does that, and enables us to "illustrate the general points", namely, the operational distinction between robust satisficing, putative optimization, and min-max. These distinctions are discussed quite explicitly with text and graphs.

3. THE REVIEWER WRITES: "in the example the authors cannot differentiate between the trade off risk - return and satisfaction level - robustness. It is just their interpretation that they focus on the latter."

OUR RESPONSE is as follows: We remind the referee that we do not (and cannot) discuss any trade-off with risk, because we are considering non-probabilistic Knightian "true uncertainty". However, our discussion of the trade-off between the level of satisfaction and the level of robustness is the analog, in the Knightian context, of the probabilistic trade-off between risk and return. In this respect it is not "our interpretation": a central point of our treatment of Knightian uncertainty is that this is the appropriate analog of the probabilistic risk-return trade-off. We would be happy to review the text to ensure that this point comes out clearly.

4. THE REVIEWER WRITES: "If they had an example with a naturally given cost of robustness they could highlight their very interesting point on how the assertion of ambiguity may change decisions."

OUR RESPONSE is: It is not clear what "natural" means in this respect. However, we feel it is natural that ambitious outcomes are more vulnerable to surprise, whereas less ambitious outcomes are less vulnerable. This "natural" trade-off holds in probabilistic set-ups and, as we show here, is unavoidable in our non-probabilistic analysis as well. We would be happy to consider the suggestion if the referee has something else in mind.

5. THE REVIEWER HAS SEVERAL "MINOR COMMENTS":

* "The authors too often repeat the basic difference of risk and Knightian uncertainty. This is quite tedious." OUR RESPONSE: We regret being tedious, and we can certainly review the text to prevent repetitions.

* "Also in the description of estimation problems after the recent crisis the authors should come more to the point." OUR RESPONSE: We will revise that discussion in attempting to be more explicit about the message, without undue lengthening of the discussion.

* "Decision making under risk typically more cares about the mean than the mode of the event". OUR RESPONSE: Both means and modes are of interest. For asymmetric distributions the mode offers some advantage in realism. There is no real argument here, except that the choice, in this brief discussion, is more a matter of taste than of substance.

* "The first paragraph on page 6 is very vague." OUR RESPONSE: We will revise this paragraph to sharpen its message, without unduly lengthening the discussion (which follows a discussion of Brainard's early contribution).

* "Finally it would help to also consider partial ambiguity." OUR RESPONSE: In practice, most examples that we see in the literature do indeed refer to partial ambiguity, if nothing else then for tractability. So we agree that this is the most relevant way of approaching our lack of knowledge of underlying distributions. Indeed, our comparison of min-max and robust satisficing considers partial ambiguity in the sense that only some aspects of the problem are treated as Knightian-uncertain. Naturally, policy choices may differ substantially depending on where the partial ambiguity is placed (and therefore on which parameters of the model have well-defined underlying probability distributions and which do not). We can certainly discuss our set-up right at the start in the context of partial ambiguity, the most commonly used way of describing uncertainty in the literature.

The analysis in the discussion paper is seriously flawed. I challenge the authors, again, to prove formally and rigorously that:

(a) The concept "info-gap robustness" is not a reinvention of the well-established concept "radius of stability" (circa 1960).

(b) Info-gap's robustness model is not a maximin model.

(c) Info-gap's robustness analysis is not local in nature.

(d) Info-gap's robustness analysis is compatible with the severity of the uncertainty that it stipulates.

More on this at:

http://info-gap.moshe-online.com/economics.html

see attached file

Response to 2nd review of our paper:

Decision Making in Times of Knightian Uncertainty: An Info-Gap Perspective

Yakov Ben-Haim and Maria Demertzis

1. The reviewer writes that the paper is "littered with jargon".

We do indeed use technical terms. However, all terms are either standard, defined in the paper, or referenced. Could the referee provide examples where this is not the case, so that we can make sure all terms are properly defined?

2. The reviewer writes: "I found the bank portfolio example impenetrable." However, no specific or even generic criticisms are presented. The example is clearly and precisely defined, and the analysis is developed and discussed systematically. In fact, section 3.1, where we describe the bank portfolio example, uses only statistical definitions. Which aspect of the example does the referee consider impenetrable?

3. The reviewer claims a "disconnect between the initial sections and the bank portfolio example."

The reviewer is correct that the discussion of forecasting failures, appearing as part of section 2, is not reflected in the subsequent example. We have therefore removed the discussion of forecasting failures because it distracts from the main issue, which is the methodological implications of Knightian uncertainty vs probabilistic risk.

4. The reviewer asserts that "the paper offers no practical insights into this problem" of forecasting failure, and that "the reader is left with little sense of how and indeed whether this principle [of robust satisficing] can be made operational."

We have removed the discussion of forecasting, and strengthened the intended focus on the problem of Knightian uncertainty as opposed to probabilistic risk. We do offer specific practical insights into managing the problem. These insights are outlined generically in bullet form in section 1 and then elaborated in detail with an academic-scale example in section 3. The discussion demonstrates conclusions that are more general than the specific example, whose purpose is illustrative. Very specific methodological insights are abstracted from the analysis of the example, including:

* The adverse implications of policy selection based on putative predictions of models.

* The quantitative assessment of the impact of Knightian uncertainty as a trade off between robustness-to-uncertainty and quality of outcome.

* The potential for reversal of preference between policy options when robustness curves cross one another.

* Robust dominance of one policy over another when robustness curves do not cross.

5. The reviewer is troubled by the question "How should the policymaker go about choosing feasible minimal acceptable outcomes for instance?"

The reviewer is right that the answer to this question needs clarification. We have strengthened and clarified the claim that, in most situations, the policy maker does not need to choose a specific minimal acceptable outcome. In the case of robust dominance (no crossing of the robustness curves), the policy choice is entirely independent of the required outcome. In the case of preference reversal (the robustness curves cross one another), the choice only requires the policy maker to decide whether the required outcome is below or above the crossing point. The important point we put forward, however, is that policy makers are more capable of identifying acceptable outcomes than horizons of uncertainty; in other words, of choosing on the horizontal axis rather than on the vertical axis of a robustness curve. This is because policy makers do have preferences over outcomes, whereas they lack the knowledge to identify levels of uncertainty.
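The crossing-point logic can be sketched numerically. The two linear robustness curves below are hypothetical (the nominal outcomes and sensitivities are chosen only so that the curves cross); the point is that the policy maker need only decide on which side of the crossing the required outcome lies:

```python
# Hypothetical linear robustness curves: h(r_c) = (nominal - r_c) / slope.
# Policy A: higher nominal outcome, but more sensitive to uncertainty.
nominal_a, slope_a = 10.0, 2.0
nominal_b, slope_b = 8.0, 1.0

def h_a(r_c):
    return max((nominal_a - r_c) / slope_a, 0.0)

def h_b(r_c):
    return max((nominal_b - r_c) / slope_b, 0.0)

# Crossing point: solve (nominal_a - r_c)/slope_a = (nominal_b - r_c)/slope_b.
r_cross = (nominal_b * slope_a - nominal_a * slope_b) / (slope_a - slope_b)

print(r_cross)          # 6.0
print(h_a(8), h_b(8))   # above the crossing: A is more robust
print(h_b(0), h_a(0))   # below the crossing: B is more robust
```

Here a demanding requirement (above the crossing) favors the policy with the higher nominal outcome, while a modest requirement (below the crossing) reverses the preference toward the less sensitive policy; no single minimal acceptable outcome need be specified, only its side of the crossing.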

6. The reviewer considers a specific example and asks "How would the info-gap approach differ [from min-max] in this case?" This is a surprising question, because section 4 (titled "Robust satisficing vs min-max") is devoted entirely to a comparison of info-gap with min-max, with additional comparative discussion of info-gap and min-max in sections 2.1 (near the end) and 2.2 (2nd paragraph).

The referee missed a number of serious flaws in the article. As indicated above, I challenge the authors, again, to prove formally and rigorously that:

(a) The concept "info-gap robustness" is not a reinvention of the well-established concept "radius of stability" (circa 1960).

(b) Info-gap's robustness model is not a maximin model.

(c) Info-gap's robustness analysis is not local in nature.

(d) Info-gap's robustness analysis is compatible with the severity of the uncertainty that it stipulates.

More on this at:

http://info-gap.moshe-online.com/economics.html

see attached file

The analysis in the new version of the discussion paper is still seriously flawed. I challenge the authors, again, to prove formally and rigorously that:

(a) The concept "info-gap robustness" is not a reinvention of the well-established concept "radius of stability" (circa 1960).

(b) Info-gap's robustness model is not a maximin model.

(c) Info-gap's robustness analysis is not local in nature.

(d) Info-gap's robustness analysis is compatible with the severity of the uncertainty that it stipulates.

More on this at:

http://info-gap.moshe-online.com/economics.html