The order of knowledge and robust action: How to deal with economic uncertainty?

Abstract

Uncertainty in economics is generated by “nature” but also by the model we use to “produce the future”. The production of the future comprises, besides the allocation of resources on different instruments (technologies, financial products), also the design of the instruments. Specialization and diversification considerations point to the advantages of targeting instruments to a rich set of possible future states. But if a rich state space comes with unreliable probability measures, the actions based on such a set will be unreliable, too. The proliferation of financial products is a salient example. This paper argues that an appropriate model for dealing with uncertainty keeps the knowledge required by the model from its user in line with the available knowledge. In particular, it proposes to order state spaces by their “granularity” and “coverage”. Appropriate upper bounds are derived for these characteristics. A practical implication of the presented approach would be to limit financial product innovation.


Introduction
The reasoning in this paper rests on the following four principles:

i) Actions are based on models. Models are abstract representations of a real world. They express what we know or think we know about the world.

ii) Actions are taken today; their consequences are experienced in the future. The future world is partly "man-made" and partly determined by "nature". That is, the reality realized in the future results on the one side from exogenous factors, which may be called luck or fate. On the other side, it is an endogenous outcome of past or present-day human actions.

iii) Within a given model, we can form expectations about the consequences and choose actions such that they fit our goals. But we also know that expectations will be deceived if they are based on inaccurate models.

iv) Uncertainty arises from limited knowledge. Knowledge can be limited in the sense that we do not know precisely, or that "we simply do not know", as Keynes (1937, p. 214) has put it.

Several proposals have been made to formalize imperfection of knowledge - from Shackle's (1949) contributions to fuzzy logic. I will use the probabilistic language for talking about things that are known more or less precisely, yet not for sure. In this language, probability measures are assigned to possible realizations of events. We speak of risk if there is precise knowledge about the probability distribution of the uncertain states. Having knowledge about a set of distributions rather than a specific distribution is usually addressed as Knightian uncertainty - after Knight (1921). This paper raises a more fundamental point: Probability measures can only be assigned to distinguishable events. So the question is what the appropriate state space is on which measures are formed and actions are conditioned.
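To keep the terminology straight in what follows, it may help to contrast the three notions in the probabilistic language just introduced; the compact summary below is only an illustrative sketch of mine, not part of the original formalism:

    Risk: a state space with a known probability measure, (Ω, A, π) with π known.
    Knightian uncertainty: only a set of admissible measures is known, π ∈ Π with Π known.
    The question raised here: to which state space, that is, to which collection of distinguishable events, should measures be assigned in the first place?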
The purpose of the paper is first to ask: What is a rational, that is, a logically consistent way of accounting for the knowledge that actions are based on models and models are based on limited knowledge about the world? Second, which practical guidelines can be drawn for dealing with uncertainty? For answering these questions, I outline a general formal framework as well as a more specific model of standard economic decision making under uncertainty.
The specific model is then applied to an economy with financial markets.
The rise of sophisticated finance followed by the financial crisis is the most salient example of the successful career of standard economic risk analysis and of its tragedy. A huge set of new financial instruments has been developed over the last decades - with the claim to generate high returns and reduce risk at the same time. Ideally, financial innovations increase market completeness by expanding the set of states that is spanned by independent financial instruments (Magill and Quinzii, 1996). At the macroeconomic level, a richer set of state-dependent financial instruments makes it possible to finance new specialized technologies which are risky but highly productive and thus foster development (Acemoglu and Zilibotti, 1997). Now, if the number of financial products traded in the financial market increases by a factor of ten or more (Studer, 2015), one may worry about whether they indeed help to deal better with economic uncertainty or whether they rather increase the risks to which an economy is exposed. If one asks which products have problematic features or are designed in a sloppy way, then one gets essentially two answers: First, it is the system, not the single product. Second, the products don't account properly for the correlation with all the other products. But why does the unreliable system with a rich set of sophisticated products emerge and expand? A possible answer is that many people are naive and the clever ones are "phishing for fools" (Akerlof and Shiller, 2015). I think we should add to these answers a further one: The increased sophistication in dealing with uncertainty in the financial market is based on a false pretense of knowledge - not only in the financial industry but in the scientific community as well.[1] The framework presented in this paper makes it possible to discuss this pretense of knowledge in a rigorous way and to derive a rule for sound financial innovation.
The paper is organized in the following way. The next section discusses the general logic of modelling model-based action and presents a specific uncertainty structure. Section 3 introduces the concepts of granularity and coverage as key characteristics for ordering the uncertainty structure presented in Section 2. Section 4 analyzes the choice of an appropriate model in terms of the two characteristics. Section 5 addresses the dialogue between experts and decision makers, and Section 6 concludes.

Modelling model-based action with limited knowledge
What is a logically consistent way of accounting for the knowledge that choices are based on models and models are based on limited knowledge about the world? To address this question, I first formalize the problem in a general way and then in a more specific model using probabilistic language.

[1] Caballero (2010) prominently pointed to "the pretense-of-knowledge syndrome" in economics and emphasized that the lesson to be drawn from the financial crisis was to give up the pretension.

General framework
As the formulation of the question makes clear, an answer requires us to somehow transcend economic analysis in the narrow sense, looking at it from the outside in a language that addresses economic models as objects. Let M denote a collection of models of the real world W, let x denote the choices that can be made within a model M ∈ M, and let q(M) denote the quality, or reliability, attributed to model M. Rational choice of x in M usually means to maximize the value of the consequences of x within the model. An omniscient decision maker would have an ideal model M_∞ that captures all relevant features of W correctly, so that the value of the model consequences of the optimal choice coincides with the value of its real-world consequences. While the approach "let the data speak" works well if a narrow set of hypotheses or statistical models is considered, the idea to verify or falsify theories in this way is more questionable. We rather have to look for model characteristics that might arguably be used to form reasonable priors about q(M). Formally, this means to define a structure on M that allows models in M to be compared according to some order which can then be related to model reliability.[4]

At the general level, no specific ordering is possible; only a way to approach the problem can be sketched. For guiding our ordering of models about the world, one can try to assign to models the pieces of information they require for finding an optimal choice. This gives us, for M ∈ M, a list K_d(M), which expresses the knowledge requirements of a model. We may call K_d the demand of knowledge.[5] On the supply side, we have a data generating process which reveals information about the world. This gives us at time t some pool of knowledge K_s(t); we could call it the supply of knowledge. Now, the central idea of the present paper is that, if a decision is to be made at a given time t, we should be aware that the quality of a model declines if its demand of knowledge is high. Rational dealing with uncertainty therefore requires not only behaving optimally within some framework but also choosing a framework that is in line with the knowledge we have at hand.[6] To be more specific, we need more structure and have to fix ideas about the real-world context we are dealing with.

[2] At first glance the two layers - choice between models and choice within a model - may bring to one's mind the discussion about choosing an opportunity set from a collection of such sets in the literature on flexibility or freedom. For instance, Puppe and Xu (2010) argue, based on Puppe (1996), that a desirable choice set should be rich in terms of essential alternatives included in the set. Under uncertainty, however, there is an additional problem: Freedom of choice between unreliable alternatives is not necessarily a good thing.

[3] Paul Romer's (Romer, 2015) criticism of "mathiness" calls to mind that economic modelling requires tying the abstract and formal components of a model to the real world to be modeled. See also Pfleiderer (2014).

[4] There are similar problems in other areas, for instance, the complexity of a program in a formal language. One can execute a program and measure the computing time, provided the program stops in the time span available. Or one can try to look for a suitable complexity measure on the set of programs. Nested do-loops or if-then clauses may be indicators of computational complexity, for example.

[5] To give an example, which I do not address in this paper: For answering economic questions in an expected utility framework one usually needs, apart from knowledge of the probability distribution, information on the risk aversion and how it changes with income. This involves information about the third derivatives of the utility function. One may therefore ask if utility functions are an appropriate language to express risk attitudes.

[6] Keeping things open until more knowledge is there, as suggested by the notion of flexibility, is sometimes possible, but in general decisions cannot wait until all required knowledge is available. More fundamentally, options don't create reality.
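As a purely illustrative formalization of this idea - the notation M_t below is mine, not the paper's - one could say that the models admissible at time t are those whose knowledge requirements are covered by the available pool of knowledge,

    M_t = {M ∈ M : K_d(M) ⊆ K_s(t)},

and that rational dealing with uncertainty means optimizing within some model chosen from M_t rather than from the whole of M.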

Specific uncertainty structure
Uncertainty in economics is generated by "nature" but also by the model we use to "produce the future". As the example of financial innovations outlined in the introduction illustrates, the production of the future includes both the design of instruments (technologies, financial products) and the allocation of resources on the instruments. In other words, the problem of rational dealing with uncertainty from a system's point of view is not only how to play given lotteries but also how to design the "lotteries" so that the future economic development is good according to some (subjective or social) value standards.[7] Hence, rational dealing with uncertainty involves two steps: First, choosing an appropriate frame for guiding the design of instruments for investing into the future and, second, making the right decisions conditional on the chosen frame. My contribution focuses on the first step, assuming that the second step follows standard procedures. The presented general framework suggests ordering models according to the knowledge requirements they imply.[8] In the standard economic model of decision making under uncertainty the core model components are the state space and the probability distribution on the space. Therefore, we need a formal structure that allows us to order state spaces according to characteristics which are related to the knowledge required for a correct assessment of the probability distribution on the space that is relevant for the decision to be made. The most fundamental requirement for assigning probability measures to states is that the states are distinguishable from each other. As Diamond (1967) pointed out in the context of technological uncertainty and financial markets, the ultimate limit of market completion is "an inability to distinguish finely among the states of nature in the economy's trading" (p. 760).

[7] This fact may be the main source of misunderstandings in the communication with standard decision theory or "behavioristic" representations of the problem of uncertainty. In a world with innovation the consistent derivation of "number(s) used in calculating the expectation (integral) of a random variable" (Schmeidler, 1989, p. 573) is a necessary requirement but not sufficient for dealing with economic uncertainty - regardless of whether the numbers are objective or subjective probabilities. We play the lotteries that we have designed, or more precisely: Some agents, for instance the households, play the "lotteries" which technical or financial engineers design.

[8] One may wonder what precisely is the meaning of knowledge requirements. My understanding of knowledge is pragmatic: What do I need to know for solving a problem? For instance, for designing contingent financial products I have to identify the events upon which their pay-offs are contingent, and for pricing the products I must know the probability distribution of the events. If I want to choose a portfolio, I need to know my endowment, my risk attitude, the set of feasible instruments, their state-contingent performance and the probabilities of the realization of states, where endowment and risk attitude are taken as known givens in this paper. The fact that I can apply my own subjective probabilities does not change the fact that I need to know them for solving the addressed problem. More formally, in the language of theoretical computer science, problem solving requires putting input into a problem solver. Whoever uses the problem solver has to let the problem solver know the required input. Whether the user of the problem solver comes to the input by forming subjective beliefs or by evaluating data, the complexity of the user's task rises with the complexity of the required input; in the context discussed here, with the sophistication of the state space.

Let us suppose that the ideal model M_∞ of the exogenous part of the future is a probability space (Ω, A, π), where A is a σ-field in state space Ω and π a probability measure on A. The space represents the "nature" of potential future events in full detail and comprises an accurate measure π for all the events.

The "man-made" part of the future results from the allocation of an economic resource K, for example capital, on a set N of instruments - technologies and financial products. To focus on the role of uncertainty we assume that the provision of instruments is costless. To fix ideas, let us think of a set of financial products offered by the financial market. Each product has a real investment project as underlying. The performance of projects can be more or less sensitive to exogenous conditions. Thus, a financial product is a contingent pay-off promise. Formally, project ν ∈ N is characterized by the return R_ν and an event A_ν ∈ A on which it is contingent. An agent who invests one unit of capital in project ν today (t = 0) is promised to receive the pay-off (1) tomorrow (t = 1). We say project ν is specialized or targeted to A_ν. The larger A_ν, the more robust is the project.
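A plausible reading of the pay-off promise (1), consistent with the full-specialization interpretation discussed in footnote 11 and with the return schedule (4) below, is the following sketch (the explicit indicator form is mine, not the original equation):

    pay-off per unit invested in project ν = R_ν if ω ∈ A_ν, and 0 otherwise.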
Apart from the risky projects there is also a robust one, financed by a bond paying off a constant return r for all ω ∈ Ω. Let Θ = {A_ν | A_ν ∈ A and π(A_ν) > 0, ν ∈ N} be a decomposition of S ⊂ Ω (that is, A_ν and A_ν′ are disjoint if ν ≠ ν′, and the union of the A_ν, ν ∈ N, equals S). I call n_Θ the granularity of Θ and µ_Θ = π(S) the coverage of Θ. In a Θ-constrained economy, besides the robust project, n_Θ risky projects contingent on A_ν, ν ∈ N, are available.[11]

Definition 1. An economy is Θ-constrained if the class of distinguishable events is Θ ∪ {S̄} and the set of targeted instruments consists of n_Θ A-specialized projects, A ∈ Θ.

In a Θ-constrained economy, agents consider the events A ∈ Θ ⊂ A rather than ω ∈ Ω as the relevant states, and they can choose a portfolio {x_Θ(A)}_{A ∈ Θ} of state-contingent investments. For the (possibly empty) unknown terrain S̄ no targeted instruments exist, so that only a globally robust instrument, with return r, can be used to prepare for events there.[10] Let x_Θ(Ω) denote the capital allocated to this instrument. For allocating total resource K, the part of measure π one needs to know is π_Θ = {π(A)}_{A ∈ Θ}. Actually, agents may not know the measure correctly. Following the seminal contributions of Gilboa and Schmeidler (1989) and Bewley (2002) on Knightian uncertainty, limited knowledge about π can be modeled by assuming that agents know only the set of measures Π(Θ) to which the true measure belongs. Apart from uncertainty about π_Θ, coverage µ_Θ may be uncertain, too. Yet the information basis about the boundary between the territory S, known at scale Θ, and the unknown terrain S̄ is of a different nature than the knowledge within S. Either one has information on the size of the whole world Ω or one has not. In the first case, we know µ_Θ for sure; in the second case, we have to choose a weight that expresses our subjective view on the importance of the unknown terrain relative to the terrain which we know at least to some extent.[12] In both cases µ_Θ is exogenous - as an observed measure in the first case or as a belief in the second case. We thus have the restriction Σ_{A∈Θ} π(A) = µ_Θ on Π(Θ).

[10] As Hirshleifer (1971) emphasized, "discovery" is to be distinguished from "foreknowledge". A map expresses the "foreknowledge" we have accumulated from past experience. Agents who plan explorative adventurous tours are aware that they may discover things on which the map is silent. What would be more awkward is if the map shows unreliable details and an engineer plans a rail track based on those details; plus the financial market offers financing instruments with pay-off promises sensitive to these details.

[11] Since each ν addresses exactly one event A, we can skip index ν. Here instruments are fully specialized to a particular event. A looser form would be that instruments work best in the targeted conditions but to some extent also perform in other conditions. In this case, the correlation between instruments could be used as a measure of (non-)distinguishability. See Studer (2015) for an equilibrium analysis of financial innovations based on correlated underlying projects.
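A small numerical illustration of the two characteristics (the concrete sets and numbers are mine and purely illustrative): let Ω = (0, 1] with π the uniform measure and let the known territory be S = (0, 0.8]. The decomposition

    Θ = {(0, 0.2], (0.2, 0.4], (0.4, 0.6], (0.6, 0.8]}

has granularity n_Θ = 4 and coverage µ_Θ = π(S) = 0.8; for the unknown terrain S̄ = (0.8, 1] only the robust instrument is available.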
Projects with high and robust outcomes clearly dominate projects which work only under rare conditions and even then show poor performance. A nontrivial economic problem arises if high returns can only be achieved at the cost of robustness. We can capture such trade-offs by assuming that feasible projects are bounded by an efficiency frontier that satisfies the following property.

Assumption 1. There exists R > 0 so that for any A ∈ Θ with 0 < π(A) < 1 the return of the efficient A-specialized project is inversely related to π(A):

    R_A = µR/π(A).   (4)

Moreover µR > r. Otherwise no risk-averse agent would invest into risky projects.

R expresses the average productivity level of the risky projects in which agents can invest by using the financial products offered by the financial market.[13]

[12] If one thinks it is unreasonable to put a positive weight on something we do not know (though we know there may be something), the appropriate weight is µ_Θ = 1. To require from the user of a model to take a stand on 1 − µ_Θ ∈ [0, 1) mirrors the conviction that it is reasonable to be aware that there may be regions of events outside the familiar terrain. In Schmeidler's (1989) subjective probability approach, if I understand him correctly, the instruments would be defined with respect to a partition Θ′ of Ω rather than of S ⊂ Ω; and if we are not sure about the objective π_Θ′ we would determine the quantities invested in the alternatives A ∈ Θ′ by evaluating the expected consequences, using a set of numbers {v(A) | A ∈ Θ′} with Σ_{A∈Θ′} v(A) < 1 and v(Ω) = 1. The non-additive subjective probability v reflects that we are not sure about π_Θ′ or that something apart from A ∈ Θ′ may happen. But why then design lotteries for Θ′ rather than ask: What can be distinguished in a way that additive probabilities can be assigned more or less reliably, and what is the part about which we have no specific knowledge to distinguish events?
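To see why µR > r is the natural participation condition, consider the following short check (an illustrative sketch of mine, not taken from the text): spreading one unit of capital over the events A ∈ Θ in proportion to π(A) pays, by (4),

    Σ_{A∈Θ} (π(A)/µ) · (µR/π(A)) · 1{ω ∈ A} = R for every ω ∈ S, and 0 for ω ∈ S̄,

so the expected pay-off per unit of risky investment is µR + (1 − µ) · 0 = µR. If this did not exceed the safe return r, even a risk-neutral investor would weakly prefer the robust bond.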

Model ordering and reliability of action
The approach of this paper is guided by the following idea. We look at the world with a frame of mind. The frame may be structured in a more or less sophisticated way, where the degree of sophistication is a choice we make. More specifically, we shape the future by investing in instruments the performance of which is contingent on specific conditions. We assume that for a quasi-omniscient agent the probability space (Ω, A, π) is the best frame for guiding the use of such instruments. Under limited knowledge about the measure π on A, a cruder frame Θ ⊂ A, in which possible future events are distinguished in a less differentiated way, may be a more reliable guide for targeting instruments and allocating resources on the instruments. The goal of this section is to establish an order on the set of possible state spaces Θ and to give guidance on the choice of a space, in a way such that deceptions arising from actions conditioned on the states in the chosen space are kept within tolerable bounds. In the following I propose an ordering according to granularity and coverage.

[13] The assessment of R involves two types of agents. On the one side, R is the average pay-off promised by the financial market for holding risky assets. It reflects the financial agents' beliefs about the average productivity of the underlying risky projects. The underlying projects, on the other hand, are specialized technologies which are highly productive under the conditions to which they are targeted but don't perform outside these conditions. Thus, R represents the stock of general technological knowledge from which engineers and entrepreneurs can draw know-how for creating technologies which are targeted to performance in clearly specified conditions. Picking up insights of the endogenous growth literature, one may argue that due to spillover effects R increases with the "richness" of the set N of instruments. This paper assumes that the financial market holds correct beliefs about the productivity of the underlying technologies, so that pay-off promise and productivity coincide and both can be represented by the same symbol, R. Actually, beliefs about average pay-offs may be biased by what Keynes called the "state of expectation", so that the financial market may distort investment decisions by spreading optimistic or pessimistic signals. Such distortions are not considered in this paper.

Granularity and coverage
In a Θ-constrained economy, the possibilities to prepare for the uncertain future by contingent actions are limited by the granularity, n_Θ, and the coverage, µ_Θ, of distinguished events. Since targeted actions are more productive than a robust one, in an ideal world a finely differentiated grid Θ covering many events is preferable to coarse granularity and low coverage.
The following definition characterizes decompositions of Ω along this line of reasoning.
Definition 2. Let, for S, S′ ⊂ Ω, Θ, Θ′ ⊂ A be decompositions of S and S′, respectively.

By definition, for any strict refinement n_Θ′ > n_Θ, and for any strict extension µ_Θ′ > µ_Θ. Thus, restricting the discussion to decompositions which can be ordered as refinements and extensions of other decompositions, we can order state spaces by n_Θ and µ_Θ and express the value generated by optimal behavior in a model based on Θ as a function of granularity and coverage:

    V_Θ = V(n_Θ, µ_Θ).   (5)

Advantages of a more differentiated or extended grid for targeting actions imply that V is weakly increasing in both arguments.
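As an illustration of the two order relations (the concrete sets are my own, and "refinement" is read here as splitting existing events while "extension" is read as adding events on previously uncovered terrain): starting from Θ = {(0, 0.2], (0.2, 0.4], (0.4, 0.6], (0.6, 0.8]} with n_Θ = 4 and µ_Θ = 0.8, splitting every interval into two halves gives a refinement Θ′ with n_Θ′ = 8 and unchanged coverage, while adding the event (0.8, 0.9] gives an extension Θ″ with coverage 0.9. Monotonicity of V in (5) then implies V(8, 0.8) ≥ V(4, 0.8) and V(5, 0.9) ≥ V(4, 0.8).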
Yet, finer granularity and higher coverage imply increasing knowledge requirements. More events have to be distinguished and measured. For agents whose knowledge is limited, this usually implies basing actions on unreliable beliefs and experiencing deception by unintended consequences of actions in the future. Thus, differentiation advantages have to be weighed against the costs of unreliability. Before turning to this, I want to illustrate (5) in more detail.
Suppose that the risk preferences of investors are such that an amount K_1 of total capital is invested in risky but highly productive projects and the rest, K_0 = K − K_1, is invested in the safe project with low pay-off r.

Under Θ, K_1 can be diversified in a portfolio (x_Θ(A))_{A∈Θ}, where investment x_Θ(A) promises to generate the income y_A = R_A x_Θ(A) in the event A. Since R_A = µR/π(A), according to (4), the income stream (y_A)_{A∈Θ} can be fully smoothed by choosing x_Θ(A) proportionally to π(A). Any risk-averse investor will therefore choose the portfolio x_Θ(A) = (π(A)/µ) K_1, A ∈ Θ. (Note that Σ_{A∈Θ} π(A) adds up to µ. Subscript Θ in π_Θ and µ_Θ is omitted if the restriction of π to Θ is obvious from the context.) This portfolio promises for all ω ∈ S the income

    RK_1,   (7)

where R may be increasing in n if there are technological spillovers from innovation.[14] Since K_0 generates income rK_0 in S̄ as well as in S, the income stream generated by (K_0, K_1) is given by

    RK_1 + rK_0 for ω ∈ S,  rK_0 for ω ∈ S̄.   (8)

The optimal split of K into risky investment K_1 and robust investment K_0 depends on the specific risk preferences. An expected utility maximizer will choose the split that maximizes the expected utility EU of this income stream. Using the envelope theorem, we conclude that ∂EU/∂R > 0 and ∂EU/∂µ > 0 under the optimal portfolio, which supports (5). For a positive impact of n on R, and thus on EU, technological specialization advantages have to be at work.

For an example, we solve the maximization program for logarithmic utility, which gives us expression (9) for the optimal mix of robust and risky investment.[15] Substituting this mix into (8), we obtain expression (10) for the income promised from investing K optimally.

[14] Following Ethier (1982) and Romer (1987), the productivity of a resource spent on a variety of specialized technologies can be modeled by a CES index. If events in Θ are symmetric, we have π(A) = µ/n and x(A) = K_1/n, so that R = n^(σ−1) K_1, which increases in n.
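For the logarithmic case the optimal mix and the promised income can be derived explicitly. The following derivation is a sketch of mine under the assumptions stated above (income RK_1 + r(K − K_1) with probability µ and r(K − K_1) with probability 1 − µ); it is meant only to indicate what expressions (9) and (10) plausibly look like. Maximizing

    EU = µ ln(RK_1 + r(K − K_1)) + (1 − µ) ln(r(K − K_1))

over K_1 gives the first-order condition µ(R − r)/(RK_1 + r(K − K_1)) = (1 − µ)/(K − K_1), and hence

    K_1 = (µR − r)K/(R − r),  K_0 = (1 − µ)RK/(R − r),

which is positive precisely when µR > r (Assumption 1). Substituting this mix into (8) yields a promised income of µRK for ω ∈ S and (1 − µ)rRK/(R − r) for ω ∈ S̄.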

Reliability of action
The allocation of K_1 on risky projects is based on a belief π ∈ Π(Θ). If actually the true measure is π* ∈ Π(Θ), then the productivity of the A-contingent investment is µR/π*(A). Thus, the actual income realised by x_Θ(A) is a random stream,

    RK_1 π(A)/π*(A) for ω ∈ A,   (11)

rather than a safe value expected according to (7).

In other words, the promise to be fully insured within S, by using the n_Θ instruments feasible in the Θ-constrained economy, is deceived. The potential extent of deception depends on the "size" of Π(Θ), since any pair π_Θ, π*_Θ ∈ Π(Θ) may be relevant for (11). In particular, for a given level of experience from past realizations of events, a refinement Θ′ of Θ leaves more room for uncertainty about the true probability measure. Thus, the potential deception rises with n_Θ. Moreover, the volatility of (11) increases if a larger volume of income is generated by risky investment, that is, if more resources K_1 are invested in risky projects or if their average productivity R is high. Now, as a larger µ raises the expected pay-off of risky projects, K_1 tends to rise with µ (as explicitly shown by (9) for the example of logarithmic utility).

In sum, it seems natural to assume that a reasonable measure of deception is (weakly) increasing in granularity n_Θ and coverage µ_Θ - given the level of experience. Let ∆(n_Θ, µ_Θ) be such a measure.

[15] The first-order condition for max_{K_1} µ ln(RK_1 + r(K − K_1)) + (1 − µ) ln(r(K − K_1)) yields (9).
To be more specific, suppose for instance that beliefs deviate from the true probability measure by some noise, π_Θ = π*_Θ + ε_Θ, where ε_Θ satisfies the following property.
Assumption 2. i) ε_Θ has zero mean. ii) Its variance is zero for n_Θ ≤ n_0 and an increasing function σ²(n_Θ) of granularity afterwards, eventually going to infinity.
Part i) of the assumption excludes systematic errors. That is, both π_Θ and π*_Θ add up to µ_Θ. Part ii) captures the idea that the experience from past realizations of events suffices to assess π*_Θ correctly as long as n_Θ ≤ n_0. Yet, if contingent actions are targeted to more finely distinguished events, the probability assessment of events becomes less reliable. With (11), the volatility of the income generated by risky investments can then be expressed in terms of the noise variance σ²(n_Θ) and the volume RK_1 of risky income. Using this volatility as a measure of deception and applying it to portfolio mix (9), optimal under logarithmic utility, we obtain the deception measure that enters criterion (16) below.

Choosing an appropriate level of sophistication

In view of the differentiation advantages shown by (5), without limitations of knowledge any increase in the granularity and coverage of targeted actions would be a good thing. This reflects the common view among economists that diversification and specialization are beneficial. Obviously, the benefits must be weighed against the costs of innovation, like development efforts for specialized projects or contingent financial products. For instance, in Acemoglu and Zilibotti (1997) a fixed set-up cost limits the range of financial innovations, that is, the set of states covered by Arrow securities. Such costs are set to zero in this paper. Yet, if one accounts for the true nature of uncertainty, there is an additional type of cost: the deception of plans by reality.
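The mechanism can also be illustrated numerically. The following short simulation is only a sketch: the symmetric-event setup, the specific noise law and all parameter values are assumptions of mine, chosen to mimic Assumption 2, not specifications taken from the paper. It draws noisy beliefs around a true measure, forms the belief-proportional portfolio, and reports how the volatility of realized risky income - the deception - stays at zero up to n_0 and then grows with granularity n.

import numpy as np

rng = np.random.default_rng(0)

def simulate_deception(n, mu=0.8, R=1.5, K1=1.0, n0=4, noise_scale=0.02, draws=2000):
    """Monte Carlo sketch: volatility of realized risky income when beliefs about
    event probabilities are noisy and the noise grows with granularity n.
    All functional forms (symmetric events, noise law) are illustrative only."""
    # True measure: n symmetric events covering mass mu (pi*(A) = mu/n each).
    pi_true = np.full(n, mu / n)
    # Noise variance is zero up to n0 and grows with n afterwards (cf. Assumption 2 ii).
    sigma = 0.0 if n <= n0 else noise_scale * np.sqrt(n - n0)
    vols = []
    for _ in range(draws):
        eps = rng.normal(0.0, sigma, size=n)
        eps -= eps.mean()                      # zero mean: no systematic error
        pi_belief = np.clip(pi_true + eps, 1e-6, None)
        pi_belief *= mu / pi_belief.sum()      # beliefs also add up to mu
        # Portfolio targeted according to beliefs, x(A) = pi_belief(A) * K1 / mu;
        # realized income in event A is then R * K1 * pi_belief(A) / pi_true(A).
        income = R * K1 * pi_belief / pi_true
        # Volatility of income across events in S, weighted by the true probabilities.
        mean_inc = np.sum(pi_true / mu * income)
        vols.append(np.sum(pi_true / mu * (income - mean_inc) ** 2))
    return np.mean(vols)

for n in (2, 4, 8, 16, 32, 64):
    print(n, round(simulate_deception(n), 5))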
As we have seen in the last section, decisions under uncertainty lead to deceptions if they are based on an unreliable decomposition of future events into distinguishable environments. This points to a fundamental difference between decision making under risk and decision making under uncertainty.
In the former case, targeted actions are chosen in such a way that the net benefit of the chosen actions, that is, their expected value minus their cost, is maximal. In the latter case, in addition, the reliability of the frame on which the choice of targeted actions is conditioned has to be taken into account, too. Moreover, the frame is not exogenous but a man-made object.[18] Therefore, we have to order frames according to their advantages and disadvantages. On the one side, raising granularity and coverage brings gains from diversification and specialization, as captured by V(n_Θ, µ_Θ). On the other side, starting from a known terrain, Θ_0, any refinement or extension of the set of distinguished future events tends to lower the reliability of plans targeted to these events, as expressed by ∆(n_Θ, µ_Θ). For an appropriate choice of the frame we have to make up our mind about what is a tolerable level of deceptions. At the level of society, deceptions of plans may concern many people, so that unreliable frames lead to some form of crisis, as the example of financial product innovation has shown in a salient way.

[18] For a given set Π(Θ) of imprecise measures, the literature on Knightian uncertainty has proposed several approaches to choose the portfolio of actions x_Θ more cautiously, for instance, to maximize the outcome in the worst case or to apply more general concepts of uncertainty aversion. Suppose, for instance, that µ_Θ = 1 and Π(Θ) is a parametrized family (π_Θ(p))_{p∈P} of measures, where p is distributed over P according to a known measure χ. Then the expected utility approach can be extended to the risk of the π assessment by choosing a portfolio x_Θ that maximizes ∫_P v[ ∫_Θ u(x_Θ(A)) dπ_Θ(A, p) ] dχ(p), where v represents the attitude towards uncertainty. But the more fundamental aspect of economic uncertainty is that Π(Θ) depends on the choice of Θ.
Deliberations about what is a tolerable level of deceptions or crises are normative judgments so that there is no undisputable way of ordering frames. As suggested by the quality attribute q assigned to models in the general framework outlined in section 2.1, a criterion for appropriate choices of frames should keep their potential deceptions in line with the gains they promise.
Formally, granularity n_Θ and coverage µ_Θ should satisfy the inequality

    ∆(n_Θ, µ_Θ) ≤ β V(n_Θ, µ_Θ),   (15)

where β > 0 is a parameter expressing the tolerance for deception.
For getting more specific conclusions, we apply the criterion to the case of portfolio choice under logarithmic utility considered in Section 3. Agents are aware of the fact that future income will be low in S̄. There is no deception in this, because they know that only a low-productivity instrument is available for preparing for this part of the future. The deception arises from the risky investment K_1 in S, where a sure income was expected, whereas actually the outcome is volatile, with a variance that depends on σ²(n_Θ) and on the volume RK_1 of risky income. Substituting Y_p[K_1] for V and this variance for ∆, criterion (15) takes the form of inequality (16). By assumption, the left-hand side of the inequality is zero for n_Θ ≤ n_0 and then increases in n_Θ. The right-hand side declines in µ from infinity (for µ_Θ → ρ) towards β/(RK) for µ = 1. In addition, for µ < 1, it rises in ρ and declines in R. Thus, if R rises with n_Θ due to specialization advantages, the right-hand side of inequality (16) declines with n_Θ; otherwise it is independent of n_Θ. Figure 1 shows, for a given µ, the two sides of (16) as functions of n_Θ.
Intersection point n̄ defines the bound for the level of granularity which is in line with tolerance β. Signs below variables denote the sign of the respective partial derivative. The main message is that there is a trade-off between coverage and granularity. Other things equal, a fine grid is only appropriate if a small set of states is covered by targeted actions. If a large set of states is to be covered, then the grid should be coarse. We may call this the bird's-eye view, as opposed to the zoom lens. Moreover, a more cautious granularity should be chosen if R or K are high, that is, if a high volume of income is generated from risky investment. Strategic choices and policy decisions are examples of actions where high levels of resources are at stake and a large range of potential events has to be accounted for. The above result recommends guiding strategic rationality by a bird's-eye-like frame rather than planning with a rich set of instruments based on finely distinguished contingencies. Unlike in a limited-attention model, the recommendation does not follow from bounded information-processing capacity - such bounds could be overcome by employing expert systems or intelligent machines. The reason is that the unreliability of instruments counts more heavily if large sums are at stake.
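To illustrate the resulting trade-off numerically, the following sketch computes the granularity bound for different coverage levels. Both functional forms are assumptions of mine: the left-hand side mimics Assumption 2, and the right-hand side merely reproduces the qualitative properties stated above (it explodes as µ approaches ρ, here read as ρ = r/R, and declines towards β/(RK) at µ = 1); the parameter values are arbitrary.

import math

def lhs(n, n0=4, a=0.001):
    """Left-hand side of (16): zero up to n0, then increasing in n (Assumption 2)."""
    return 0.0 if n <= n0 else a * (n - n0) ** 2

def rhs(mu, beta=0.05, R=1.5, r=1.05, K=10.0):
    """Right-hand side of (16), sketched as beta*mu*(R-r)**2 / (R*K*(mu*R-r)**2):
    it tends to infinity as mu -> r/R and declines towards beta/(R*K) at mu = 1."""
    assert mu * R > r, "coverage must satisfy mu*R > r (Assumption 1)"
    return beta * mu * (R - r) ** 2 / (R * K * (mu * R - r) ** 2)

def granularity_bound(mu, n_max=10_000):
    """Largest granularity n_bar such that lhs(n_bar) <= rhs(mu)."""
    n_bar = 0
    for n in range(1, n_max + 1):
        if lhs(n) <= rhs(mu):
            n_bar = n
        else:
            break
    return n_bar

for mu in (0.75, 0.85, 0.95, 1.0):
    print(f"mu = {mu:4.2f} -> granularity bound n_bar = {granularity_bound(mu)}")

Under these assumed forms the bound falls as coverage rises, which is the bird's-eye versus zoom-lens trade-off described in the text.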
Finally, granularity and coverage may not be independent but may jointly result from some explorative effort γ, so that n_Θ(γ) and µ_Θ(γ) are both rising with γ. Then the right-hand side (RHS) of (16) is a declining function of γ, and the left-hand side (LHS) is rising in γ as soon as n(γ) has reached n_0 at some γ_0. The intersection of the two sides defines the bound γ̄ up to which explorative efforts should be exerted to raise granularity and coverage of the state space used for designing financial products and investing in risky projects. If a large stock of capital is at stake, a more cautious bound should be chosen. If there is a generous safe opportunity for well-performing robust investment (high r), then the bound can be slackened. Lastly, the exploration frontier γ̄ should be kept in line with the state of experience.
Since the 1990s the set of contingent financial products offered in the financial market has been hugely expanded. The financial crisis revealed that this expansion was based on shaky foundations. The presented analysis suggests limiting financial product innovations so that the demand of knowledge about their risks is matched by the accumulated experience, which from a system's point of view is experience with the consequences of a given set of financial products for the real performance of the economy. Based on this argument, Falkinger (2014) proposes a rule for sound financial development, according to which the growth of financial products should be tied to the long-run growth rate - analogous to monetary policy rules.
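Read as a growth cap, such a rule can be sketched schematically as follows; the function and variable names are mine, and the cap shown is only a stylized illustration, not the specific rule proposed in Falkinger (2014).

def innovation_cap(current_products: float, long_run_growth: float) -> float:
    """Stylized cap: the product set may grow at most at the long-run growth rate."""
    return current_products * (1.0 + long_run_growth)

def admissible(proposed_products: float, current_products: float,
               long_run_growth: float) -> bool:
    """Check whether a proposed expansion of the product set respects the cap."""
    return proposed_products <= innovation_cap(current_products, long_run_growth)

# With 2% long-run growth, expanding from 1000 to 1100 products in one period
# violates the cap, while expanding to 1015 does not.
print(admissible(1100, 1000, 0.02))  # False
print(admissible(1015, 1000, 0.02))  # True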

The dialogue between experts and decision makers in an uncertain world
Decision making is typically seen as an agent's choice of how to allocate a resource or capability on different instruments for pursuing a goal. The role of an expert is then to check which choices are feasible, to calculate their consequences and to identify the best choice. Essentially, this boils down to an optimization problem. As an input for solving this problem, the expert needs from the decision maker information on the volume of effort or resources he or she is able or willing to spend, and on the goal or valuation system by which the decision maker values outcomes. Under uncertainty, outcomes are state-contingent, and the expert needs, in addition, information on the agent's attitude towards risk or capacity to bear risk. For instance, a firm that wants to take out a loan is checked with respect to its ability to bear risk.
And a client who comes to the bank with his or her savings is asked about the size of wealth to be managed and the risk preference.
Solving optimization problems is not the only function of experts. Another important role is to extend the set of feasible instruments by innovation.
In this role, the message of decision makers to experts is: Search for more productive instruments. Raising productivity is closely related to specialization and differentiation. In particular, in a model of uncertainty one wishes to have a richer set of instruments in order to be prepared for different events. For instance, in an incomplete-market model, innovation means covering so far uncovered situations with new state-contingent securities.
Expert systems use expert languages, whereas decision makers typically face a problem in a different language. A third and very crucial function of experts is therefore the translation of real world problems into the disciplined language they use for supporting decision makers with their expertise, as well as the inverse translation of their results into the language of the users.
Essentially, this means an expert system is also responsible for appropriate modeling and communication. Appropriate modeling requires checking that the primitives of the expert frame, and the assumptions on these primitives, capture in a reliable though abstract way the essential features of the problem a decision maker faces.
A problem inherent to economic modeling is that there is typically no unique model for a given real world situation. As witnessed by Debreu (1991, p. 3), "a Grand Unified Theory will remain out of the reach of economics, which will keep appealing to a large collection of individual theories." This brings us to the fourth task of expert systems: The choice or design of the models.
In this respect, a problem to solve is the tension between sophistication and reliability. On the one side, a higher degree of sophistication allows more fine-tuned preparation for specific realizations of possible future conditions. More specialized instruments can be developed and resources can be allocated in a more diversified way. On the other side, a finely structured frame demands more information, so that, for a given basis of experience, a more sophisticated frame may be less reliable. In this respect we can think of a model as a problem solver which requires a certain input from the user - the model's demand of knowledge from the user who wishes to be supported by the model. Thus, for choosing an appropriate model, models have to be ordered according to their knowledge requirements.
Under uncertainty, the tension between sophistication and reliability is particularly pronounced. Keynes, for instance, argued that it is better to account for true uncertainty by conditioning a model on an exogenous state of expectations or confidence rather than endogenizing expectations in a calculus of probabilities which in fact are unknown. The approach sketched in this paper stresses that probability spaces are not given by nature. The crucial aspect for choosing an appropriate space is the grid of distinguished events.
On the one side, distinction of events is crucial for targeted instruments and contingent action. On the other side, unreliable distinction leads to deceptions. Granularity and coverage of the used probability space are important determinants of both the advantages of diversification and their potential deceptions. Therefore, the task required from the experts is to present the stochastic models offered to the decision makers in an order of granularity and coverage. The decision makers' task is then to choose the granularity and coverage which keep deceptions within tolerable bounds. For instance, at the level of strategic decisions, large coverage and coarse granularity are better guides than fine but unreliable granularity or a narrow focus.

Concluding remarks
Rational choice under uncertainty considers optimal decision making in a given framework, in particular, conditional on a given state space. From a more fundamental perspective, however, rationality implies being aware of the limitations of knowledge. Moreover, it requires accounting for the fact that reasoning is conditional on the cognitive frame in which we analyze things.
In dealing with an uncertain future, the critical element is: Which events can we distinguish, so that actions can be properly targeted to the events? Therefore, separating distinguishable states from indistinguishable ones is important for reasonable dealing with uncertainty. In other words, the state space on which decision making under uncertainty is conditioned from a conventional economic point of view is endogenous and a matter of choice itself. The characteristics to be chosen are the granularity of the grid in which we distinguish and measure events and the weight we assign to the area that is covered by the grid relative to the uncovered unknown terrain.
Refined granularity and large coverage generate differentiation advantages.
Yet, for a given base of experience, coarser granularity and moderate coverage lead to more reliable actions. To allow decision makers to keep actions in line with a tolerable level of deceptions or crises, experts need to order their models according to the knowledge they require about future events and about the performance of actions contingent on these events.
Dealing reasonably with uncertainty is therefore akin to strategic rationality. Choosing a strategy is not the same as acting according to a detailed optimal plan. Rather, it means focusing on goals and setting priorities on broader scales, keeping in mind that the strategic decision should be sustainable over a longer horizon, even though it may not be optimal under all specific conditions in the short run. It has been said that appropriate reduction of complexity is an important component of good management. This is in contrast to the view that complexity is a fate to which we must react optimally by sophisticated actions. While the latter view is adequate in a given situation, it is less obvious for the framing of uncertain exogenous events.
And in shaping the future, complexity is definitely an endogenous outcome.
In this respect one might wonder if quantitative dynamic stochastic general equilibrium models are an appropriate guide for policy making.
At some level of abstraction one could argue that the request to bring the complexity of a model in line with our knowledge about the events the model distinguishes also has the flavor of rationally choosing our focus of attention.
In the end, the bottom line is this: Rational dealing with limited knowledge is not an individual choice problem. In terms of the introductory example: The financial crisis was not caused by the fact that one or the other individual made errors in risk assessment or had a wrong focus of attention; rather, it was the consequence of the coordination of agents by unreliable models of risk.
So if rational inattention is an appeal to the scientific community to focus attention and effort on reliable modeling, rather than advice to individual agents to be clever, then I am happy to join.