
User:Ybenhaim/sandbox

From Wikipedia, the free encyclopedia

Info-gap decision theory is a non-probabilistic decision theory that seeks to optimize robustness to failure, or opportuneness for windfall, under severe uncertainty.[1][2]

In many fields, including engineering, economics, management, biological conservation, medicine, homeland security, and more, analysts use models and data to evaluate and formulate decisions. An info-gap is the disparity between what is known and what needs to be known in order to make a reliable and responsible decision. Info-gaps are Knightian uncertainties: a lack of knowledge, an incompleteness of understanding. They are non-probabilistic and cannot be insured against or modelled probabilistically. A common info-gap, though not the only kind, is uncertainty in the value of a parameter or of a vector of parameters, such as the durability of a new material or the future rate of return on stocks. Other common info-gaps are uncertainty in the shape of a probability distribution, in the functional form of a property of the system (such as the friction force in engineering or the Phillips curve in economics), and in the shape and size of a set of possible vectors or functions. For instance, one may have very little knowledge about the relevant set of cardiac waveforms at the onset of heart failure in a specific individual.

Info-gap robustness analysis evaluates each feasible decision by asking: how much can we deviate from a given estimate of the true value of the parameter, or function, or set, and still guarantee that the performance over the respective region of uncertainty surrounding the estimate will be acceptable? In other words, the robustness of a decision is a measure of the size of the area surrounding the estimate over which a decision meets pre-specified performance requirements. It is sometimes difficult to judge how much robustness is needed or sufficient. However, the ranking of feasible decisions in terms of their degree of robustness is independent of such judgments.

Info-gap theory also proposes an opportuneness function which evaluates the potential for windfall outcomes resulting from favorable uncertainty.

Some Applications of Info-Gap Theory


Info-gap theory has been studied or applied in a range of applications including engineering,[3][4][5][6][7][8][9][10][11][12][13][14][15][16] biological conservation,[17][18][19][20][21][22][23][24][25][26][27][28] theoretical biology,[29] homeland security,[30] economics,[31][32][33] project management[34][35][36] and statistics.[37] Foundational issues related to info-gap theory have also been studied.[38][39][40][41][42][43]

A typical engineering application is the vibration analysis of a cracked beam, where the location, size and shape of the crack are unknown and greatly influence the vibration dynamics.[7] Another example is the structural design of a building subject to uncertain loads such as from wind or earthquakes.[6][8] A further engineering application involves the design of a neural net for detecting faults in a mechanical system, based on real-time measurements. A major difficulty is that faults are highly idiosyncratic, so that training data for the neural net will tend to differ substantially from data obtained from real-time faults after the net has been trained. The info-gap robustness strategy enables one to design the neural net to be robust to the disparity between training data and future real events.[9][11]

Biological systems are vastly more complex and subtle than our best models, so the conservation biologist faces substantial info-gaps in using biological models. For instance, Levy et al.[17] use an info-gap robust-satisficing "methodology for identifying management alternatives that are robust to environmental uncertainty, but nonetheless meet specified socio-economic and environmental goals." They use info-gap robustness curves to select among management options for spruce-budworm populations in eastern Canada. Burgman[44] uses the fact that the robustness curves of different alternatives can intersect to illustrate a change in preference between conservation strategies for the orange-bellied parrot.

Project management is another area where info-gap uncertainty is common. The project manager often has very limited information about the duration and cost of some of the tasks in the project, and info-gap robustness can assist in project planning and integration.[35] Financial economics is another area where the future is fraught with surprises, which may be either pernicious or propitious. Info-gap robustness and opportuneness analyses can assist in portfolio design, credit rationing, and other applications.[31]

A number of authors have noted and discussed similarities and differences between info-gap robustness and minimax or worst-case methods.[5][14][33][35][45][46] Sniedovich[47] has demonstrated formally that the info-gap robustness function can be represented as a minimax optimization, and is thus related to Wald's minimax theory. He has also claimed that info-gap's robustness analysis is conducted in the neighborhood of an estimate that is likely to be substantially wrong, concluding that the resulting robustness function is equally likely to be substantially wrong. On the other hand, the estimate is the best one has, so it is useful to know whether it can err greatly and still yield an acceptable outcome. This is related to the issue of how much robustness is needed in order to obtain adequate confidence.[48] (See also chapter 9 in [3] and chapter 4 in [2].)

Info-gap models


Info-gaps are quantified by info-gap models of uncertainty. An info-gap model is an unbounded family of nested sets. A frequently encountered example is a family of nested ellipsoids all having the same shape. The structure of the sets in an info-gap model derives from the information about the uncertainty. In general terms, the structure of an info-gap model of uncertainty is chosen to define the smallest or strictest family of sets whose elements are consistent with the prior information. Since there is, usually, no known worst case, the family of sets is unbounded.

A common example of an info-gap model is the fractional-error model. The best estimate of an uncertain function $u(x)$ is $\tilde{u}(x)$, but the fractional error of this estimate is unknown. The following unbounded family of nested sets of functions is a fractional-error info-gap model:

$$\mathcal{U}(\alpha, \tilde{u}) = \left\{ u(x) : \; |u(x) - \tilde{u}(x)| \le \alpha |\tilde{u}(x)| \right\}, \quad \alpha \ge 0$$

At any horizon of uncertainty $\alpha$, the set $\mathcal{U}(\alpha, \tilde{u})$ contains all functions $u(x)$ whose fractional deviation from $\tilde{u}(x)$ is no greater than $\alpha$. However, the horizon of uncertainty $\alpha$ is unknown, so the info-gap model is an unbounded family of sets, and there is no worst case or greatest deviation.
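
As a minimal numerical sketch of this model (an illustration, not part of the formal theory), membership in $\mathcal{U}(\alpha, \tilde{u})$ can be checked pointwise on a sampled grid. The estimate, the candidate function, and the function name below are hypothetical:

    import numpy as np

    def in_fractional_error_model(u, u_est, alpha):
        """True if u lies in U(alpha, u_est) = {u : |u - u_est| <= alpha*|u_est|},
        checked pointwise on the sampled grid."""
        return bool(np.all(np.abs(u - u_est) <= alpha * np.abs(u_est)))

    x = np.linspace(0.0, 1.0, 101)
    u_est = 1.0 + 0.5 * x                           # hypothetical best estimate of the uncertain function
    u = u_est * (1.0 + 0.08 * np.sin(7.0 * x))      # candidate realization with up to 8% fractional deviation

    print(in_fractional_error_model(u, u_est, 0.05))   # False: u lies outside the alpha = 0.05 set
    print(in_fractional_error_model(u, u_est, 0.10))   # True: u lies inside the larger alpha = 0.10 set

The two membership tests also illustrate the nesting property stated below: every function in the set at $\alpha = 0.05$ is also in the set at $\alpha = 0.10$.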

There are many other types of info-gap models of uncertainty. All info-gap models obey two basic axioms:

  • Nesting. The info-gap model $\mathcal{U}(\alpha, \tilde{u})$ is nested if $\alpha < \alpha'$ implies that: $\mathcal{U}(\alpha, \tilde{u}) \subseteq \mathcal{U}(\alpha', \tilde{u})$.
  • Contraction. The info-gap model $\mathcal{U}(0, \tilde{u})$ is a singleton set containing its center point: $\mathcal{U}(0, \tilde{u}) = \{ \tilde{u}(x) \}$.

The nesting axiom imposes the property of "clustering" which is characteristic of info-gap uncertainty. Furthermore, the nesting axiom implies that the uncertainty sets become more inclusive as $\alpha$ grows, thus endowing $\alpha$ with its meaning as a horizon of uncertainty. The contraction axiom implies that, at horizon of uncertainty zero, the estimate $\tilde{u}(x)$ is correct.

Recall that the uncertain element may be a parameter, vector, function or set. The info-gap model is then an unbounded family of nested sets of parameters, vectors, functions or sets.

Robustness and opportuneness


Uncertainty may be either pernicious or propitious. That is, uncertain variations may be either adverse or favorable. Adversity entails the possibility of failure, while favorability is the opportunity for sweeping success. Info-gap decision theory is based on quantifying these two aspects of uncertainty, and choosing an action which addresses one or the other or both of them simultaneously. The pernicious and propitious aspects of uncertainty are quantified by two "immunity functions": the robustness function expresses the immunity to failure, while the opportuneness function expresses the immunity to windfall gain.

The robustness function expresses the greatest level of uncertainty at which failure cannot occur; the opportuneness function is the least level of uncertainty which entails the possibility of sweeping success. The robustness and opportuneness functions address, respectively, the pernicious and propitious facets of uncertainty.

Let $q$ be a decision vector of parameters such as design variables, time of initiation, model parameters or operational options. We can verbally express the robustness and opportuneness functions as the maximum or minimum of a set of values of the uncertainty parameter $\alpha$ of an info-gap model:

$$\hat{\alpha}(q) = \max \{ \alpha : \text{minimal requirements are always satisfied} \} \qquad \text{(robustness)} \qquad (1)$$
$$\hat{\beta}(q) = \min \{ \alpha : \text{sweeping success is possible} \} \qquad \text{(opportuneness)} \qquad (2)$$

We can "read" eq. (1) as follows. The robustness $\hat{\alpha}(q)$ of decision vector $q$ is the greatest value of the horizon of uncertainty $\alpha$ for which specified minimal requirements are always satisfied. $\hat{\alpha}(q)$ expresses robustness (the degree of resistance to uncertainty and immunity against failure), so a large value of $\hat{\alpha}(q)$ is desirable. Eq. (2) states that the opportuneness $\hat{\beta}(q)$ is the least level of uncertainty which must be tolerated in order to enable the possibility of sweeping success as a result of decisions $q$. $\hat{\beta}(q)$ is the immunity against windfall reward, so a small value of $\hat{\beta}(q)$ is desirable. A small value of $\hat{\beta}(q)$ reflects the opportune situation that great reward is possible even in the presence of little ambient uncertainty. The immunity functions $\hat{\alpha}(q)$ and $\hat{\beta}(q)$ are complementary and are defined in an anti-symmetric sense: "bigger is better" for $\hat{\alpha}(q)$, while "big is bad" for $\hat{\beta}(q)$. The immunity functions, robustness and opportuneness, are the basic decision functions in info-gap decision theory.

The robustness function involves a maximization, but not of the performance or outcome of the decision. The greatest tolerable uncertainty is found at which decision $q$ satisfices the performance at a critical survival-level. One may establish one's preferences among the available actions according to their robustnesses $\hat{\alpha}(q)$, whereby larger robustness engenders higher preference. In this way the robustness function underlies a satisficing decision algorithm which maximizes the immunity to pernicious uncertainty.

The opportuneness function in eq. (2) involves a minimization, though not, as might be expected, of the damage which can accrue from unknown adverse events. The least horizon of uncertainty is sought at which decision $q$ enables (but does not necessarily guarantee) large windfall gain. Unlike the robustness function, the opportuneness function does not satisfice, it "windfalls". Windfalling preferences are those which prefer actions for which the opportuneness function takes a small value. When $\hat{\beta}(q)$ is used to choose an action $q$, one is "windfalling" by optimizing the opportuneness from propitious uncertainty in an attempt to enable highly ambitious goals or rewards.

Given a scalar reward function $R(q,u)$, depending on the decision vector $q$ and the info-gap-uncertain function $u$, the minimal requirement in eq. (1) is that the reward $R(q,u)$ be no less than a critical value $r_{\rm c}$. Likewise, the sweeping success in eq. (2) is attainment of a "wildest dream" level of reward $r_{\rm w}$ which is much greater than $r_{\rm c}$. Usually neither of these threshold values, $r_{\rm c}$ and $r_{\rm w}$, is chosen irrevocably before performing the decision analysis. Rather, these parameters enable the decision maker to explore a range of options. In any case the windfall reward $r_{\rm w}$ is greater, usually much greater, than the critical reward $r_{\rm c}$:

$$r_{\rm w} \gg r_{\rm c}$$

The robustness and opportuneness functions of eqs. (1) and (2) can now be expressed more explicitly:

$$\hat{\alpha}(q, r_{\rm c}) = \max \left\{ \alpha : \left( \min_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right) \ge r_{\rm c} \right\} \qquad (3)$$
$$\hat{\beta}(q, r_{\rm w}) = \min \left\{ \alpha : \left( \max_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right) \ge r_{\rm w} \right\} \qquad (4)$$

$\hat{\alpha}(q, r_{\rm c})$ is the greatest level of uncertainty consistent with guaranteed reward no less than the critical reward $r_{\rm c}$, while $\hat{\beta}(q, r_{\rm w})$ is the least level of uncertainty which must be accepted in order to facilitate (but not guarantee) windfall as great as $r_{\rm w}$. The complementary or anti-symmetric structure of the immunity functions is evident from eqs. (3) and (4).

These definitions can be modified to handle multi-criterion reward functions. Likewise, analogous definitions apply when $R(q,u)$ is a loss rather than a reward.
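
To make eqs. (3) and (4) concrete, the following sketch evaluates both immunity functions numerically for a scalar uncertain quantity with a fractional-error info-gap model and a hypothetical linear reward $R(q,u) = qu$. The bisection search, the sampling of the uncertainty set, and all numerical values are illustrative assumptions, not part of the theory:

    import numpy as np

    def worst_case_reward(q, u_est, alpha, reward):
        """Minimum reward over the fractional-error set U(alpha, u_est) for scalar u,
        approximated by sampling the interval (adequate for well-behaved rewards)."""
        us = np.linspace(u_est - alpha * abs(u_est), u_est + alpha * abs(u_est), 201)
        return min(reward(q, u) for u in us)

    def best_case_reward(q, u_est, alpha, reward):
        """Maximum reward over the same set."""
        us = np.linspace(u_est - alpha * abs(u_est), u_est + alpha * abs(u_est), 201)
        return max(reward(q, u) for u in us)

    def robustness(q, u_est, r_crit, reward, alpha_max=10.0, tol=1e-4):
        """Eq. (3): the largest alpha (capped at alpha_max) at which the worst-case
        reward still meets the critical reward r_crit."""
        if worst_case_reward(q, u_est, 0.0, reward) < r_crit:
            return 0.0                      # even the best estimate fails the requirement
        lo, hi = 0.0, alpha_max
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if worst_case_reward(q, u_est, mid, reward) >= r_crit:
                lo = mid
            else:
                hi = mid
        return lo

    def opportuneness(q, u_est, r_wind, reward, alpha_max=10.0, tol=1e-4):
        """Eq. (4): the smallest alpha at which the best-case reward can reach r_wind."""
        if best_case_reward(q, u_est, alpha_max, reward) < r_wind:
            return float("inf")             # windfall unreachable within alpha_max
        lo, hi = 0.0, alpha_max
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if best_case_reward(q, u_est, mid, reward) >= r_wind:
                hi = mid
            else:
                lo = mid
        return hi

    reward = lambda q, u: q * u             # hypothetical scalar reward model
    print(robustness(q=2.0, u_est=1.0, r_crit=1.5, reward=reward))     # about 0.25
    print(opportuneness(q=2.0, u_est=1.0, r_wind=3.0, reward=reward))  # about 0.50

For this reward model the answers can be checked by hand: the worst case at horizon $\alpha$ is $q\tilde{u}(1-\alpha)$, so $\hat{\alpha} = 1 - r_{\rm c}/(q\tilde{u}) = 0.25$, and the best case is $q\tilde{u}(1+\alpha)$, so $\hat{\beta} = r_{\rm w}/(q\tilde{u}) - 1 = 0.5$.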

The robustness function generates robust-satisficing preferences on the options. A robust-satisficing decision maker will prefer a decision option $q$ over an alternative $q'$ if the robustness of $q$ is greater than the robustness of $q'$ at the same value of critical reward $r_{\rm c}$. That is:

$$q \succ_{\rm r} q' \quad \text{if} \quad \hat{\alpha}(q, r_{\rm c}) > \hat{\alpha}(q', r_{\rm c}) \qquad (5)$$

Let $\mathcal{Q}$ be the set of all available or feasible decision vectors $q$. A robust-satisficing decision $\hat{q}_{\rm c}(r_{\rm c})$ is one which maximizes the robustness on the set $\mathcal{Q}$ of available $q$-vectors and satisfices the performance at the critical level $r_{\rm c}$:

$$\hat{q}_{\rm c}(r_{\rm c}) = \arg \max_{q \in \mathcal{Q}} \hat{\alpha}(q, r_{\rm c})$$

Usually, though not invariably, the robust-satisficing action $\hat{q}_{\rm c}(r_{\rm c})$ depends on the critical reward $r_{\rm c}$.

The opportuneness function generates opportune-windfalling preferences on the options. An opportune-windfalling decision maker will prefer a decision $q$ over an alternative $q'$ if $q$ is more opportune than $q'$ at the same level of reward $r_{\rm w}$. Formally:

$$q \succ_{\rm o} q' \quad \text{if} \quad \hat{\beta}(q, r_{\rm w}) < \hat{\beta}(q', r_{\rm w}) \qquad (6)$$

The opportune-windfalling decision, $\hat{q}_{\rm w}(r_{\rm w})$, minimizes the opportuneness function on the set of available decisions:

$$\hat{q}_{\rm w}(r_{\rm w}) = \arg \min_{q \in \mathcal{Q}} \hat{\beta}(q, r_{\rm w})$$

The two preference rankings, eqs. (5) and (6), as well as the corresponding optimal decisions $\hat{q}_{\rm c}(r_{\rm c})$ and $\hat{q}_{\rm w}(r_{\rm w})$, may be different.

The robustness and opportuneness functions have many properties which are important for decision analysis. Robustness and opportuneness both trade off against aspiration for outcome: robustness and opportuneness deteriorate as the decision maker's aspirations increase. Robustness is zero for model-best anticipated outcomes. Robustness curves of alternative decisions may cross, implying reversal of preference depending on aspiration. Robustness may be either sympathetic or antagonistic to opportuneness: a change in decision which enhances robustness may either enhance or diminish opportuneness. Various theorems have also been proven which identify conditions in which the probability of success is enhanced by enhancing the info-gap robustness, without, of course, knowing the underlying probability distribution.
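
The crossing of robustness curves, and the resulting reversal of preference, can be seen in a small worked comparison. The two options, their linear reward models, and all numbers below are hypothetical; the closed-form robustness expressions follow from eq. (3) for a scalar fractional-error model centered at $\tilde{u} = 1$:

    def robustness_A(r_crit):
        # Option A: R_A(u) = 1 + u; worst case at horizon alpha is 2 - alpha,
        # so the robustness is alpha_hat_A = 2 - r_crit (clipped at zero).
        return max(0.0, 2.0 - r_crit)

    def robustness_B(r_crit):
        # Option B: R_B(u) = 3u - 0.5; worst case is 2.5 - 3*alpha,
        # so the robustness is alpha_hat_B = (2.5 - r_crit)/3 (clipped at zero).
        return max(0.0, (2.5 - r_crit) / 3.0)

    for r_crit in (1.0, 1.75, 2.2):
        a, b = robustness_A(r_crit), robustness_B(r_crit)
        best = "A" if a > b else ("B" if b > a else "tie")
        print(f"r_crit = {r_crit}: robustness A = {a:.2f}, B = {b:.2f}, robust-satisficing choice: {best}")

Option B has the higher best-estimate reward (2.5 versus 2.0) yet is the less robust choice for modest requirements; the preference reverses at the crossing point $r_{\rm c} = 1.75$, illustrating both the trade-off and the curve-crossing property described above.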

Example: Managing an epidemic[49]


The impact of an epidemic is evaluated with data and models from epidemiology and other fields. These models predict morbidity and mortality, and underlie strategies for prevention and response. Severe uncertainties accompany these models: information is often too sparse to represent the uncertainties probabilistically, and even worst-case scenarios are hard to identify reliably. The rate of infection is central in determining the spread of the epidemic, but it is hard to predict. Population studies of infection rates in normal circumstances may be of limited value when applied to stressful situations. Also, the disease may be poorly understood, as in SARS or avian flu. Population-mixing behavior, which influences the infection rate, can be influenced by public health announcements, but this is also hard to predict. Other properties are important as well; for instance, the number of initial infections and the identity of the disease may both be hard to know.

The spread of disease can be managed through prophylactic or remedial treatment, quarantine, public information, and other means. Consider an example in which public officials must choose whether to quarantine the affected population, and if so, what population size to include. Also, planners must determine how quickly medical treatment will be dispensed upon detecting an emerging epidemic. Thousands or millions of individuals cannot be treated immediately, and plans must allow time for implementing large-scale treatment.

We will consider three different quarantine options: no quarantine, in which case the vulnerable population is one million; moderate quarantine, which limits the vulnerable population to fifty thousand; and strict quarantine, which limits the vulnerable population to one thousand. We wish to analyze each option and then pick one, together with a rate of deployment of medical assistance. Our models are incomplete and uncertain, but they nonetheless represent the best available understanding of the processes involved. We will use the models as the starting point in the policy-selection process, followed by an analysis of the robustness to model uncertainty and its implication for selecting a plan. We will illustrate the info-gap robust-satisficing strategy for selecting a plan.

Specifically, for each quarantine scenario, we will use an epidemiological model to predict the morbidity as a function of the time required to deploy and dispense medical treatment. From such an analysis one finds that stricter quarantine allows medical assistance to be deployed more slowly without increasing morbidity. Conversely, stricter quarantine results in lower morbidity at the same deployment rate. For instance, one might find that the morbidity reached in the no-quarantine case is reached in a quarantined sub-population of 50 thousand only after a deployment approximately 20 times slower than in the no-quarantine case. If the disease can be contained within a population of 1000, then the deployment duration is extended by a further factor of 50 without increasing morbidity.

Clearly, quarantine is valuable, but how robust are these conclusions to modelling error? These conclusions depend on models and data which entail many info-gaps. Even small errors can result in performance which is substantially worse than predicted. We seek a strategy for which we are confident that the outcomes are acceptable. For any given choice of quarantine size and deployment duration, we must ask: by how much can the models err while an acceptable outcome is still guaranteed? (Note that this is different from asking 'how wrong are the models?' or 'what is the worst case?') The answer to this question is expressed by the robustness function. In comparing two options, we will prefer the one with greater robustness; we will prefer the option which guarantees a specified level of morbidity for the widest range of contingencies. Incidentally, this preference-ranking of the options is independent of whether or not we judge the robustness of either option to be large enough.

Before discussing how the robustness function is used to select a plan, let's recall that large robustness means that the corresponding morbidity is obtained for a large range of contingencies. The robustness of a plan is a result of how, and how much, information is exploited, and how uncertain that information is as expressed by an info-gap model of uncertainty. Further understanding of the roots of robustness can be obtained by mathematical analysis.

Two attributes of any option are expressed by the robustness function. (The theorems which underlie these statements depend on the nesting axiom of info-gap models of uncertainty, discussed earlier.) First, the robustness function quantifies the irrevocable trade-off between robustness and morbidity: for any plan, lower morbidity entails lower robustness; higher morbidity, higher robustness. This suggests that one is less confident in obtaining low morbidity than high morbidity, with any given plan, because the robustness to uncertainty is lower for low morbidity. The planner who uses uncertain models to choose a plan for achieving low morbidity, must also accept low robustness. This is useful in identifying feasible values of morbidity: those values for which the robustness is very low are clearly unfeasible, large robustness suggests greater feasibility, etc. Second, when choosing between several plans, which plan is most robust may depend on the acceptable level of morbidity (in a way which is revealed by the mathematical representation of robustness). The preference-ranking of the plans (based on robustness) can change as the planner considers different levels of morbidity. In other words, there need not be a unique robust-optimal plan, but instead the most robust plan may depend on the planner's (or the society's) preferences on the outcome (morbidity in this example). By identifying feasible goals, and plans which can reliably achieve them, the robustness analysis supports debate and decision about resource allocation. In short, the info-gap analysis of robustness assists strategic planners to weigh the pros and cons of available options.


Example: Managing an epidemic (extended version)


Consider the management of an epidemic. Infectious diseases have seriously affected humanity for centuries. The Black Death annihilated 30 to 60 percent of Europe's population in the mid-14th century. Fifty to one hundred million people died in the 1918-1920 influenza pandemic. SARS, AIDS, mad-cow disease, and avian flu have caused serious injury, economic loss, and death in recent decades. Civilian populations also face the threat of attack with biological agents by rogue states or terror groups.

The impact of a potential epidemic is evaluated with data and models from epidemiology and other fields. These models predict morbidity and mortality, and underlie strategies for prevention and response. The spread of disease can be managed through prophylactic or remedial treatment, quarantine, public information, and other means. Severe uncertainties accompany these models.

We will consider a simple example in which public officials must choose whether to quarantine the affected population, and if so, what population size to include. Also, planners must determine how quickly medical treatment will be dispensed upon detecting an emerging epidemic. Thousands or millions of individuals cannot be treated immediately, and plans must allow time for implementing large-scale treatment.

Our simple epidemic model assumes a constant population (no deaths, or deaths occur much more slowly than infections), and infected individuals continue to infect the remaining susceptible population. This simple model will illustrate the info-gap robust-satisficing analysis of decisions facing strategic planners. The model contains four central parameters: the mixing (infection) rate, which determines the rate of infection of susceptible by infected individuals; the duration from detection of the epidemic until medical treatment has been dispensed; the number of initially infected people; and the size of the population within which the disease can spread.

The rate of mixing and infection is hard to predict. Population studies in normal circumstances may be of limited value when applied to stressful situations. Also, the disease may be poorly understood, as in SARS or avian flu. Population-mixing behavior can be influenced by public health announcements, but this is also hard to predict. We focus in this example only on uncertainty in the mixing rate, though other uncertainties are important; for instance, the number and identity of initial infections are hard to know.

Consider the choice of the delay until treatment and the quarantine size. Suppose the vulnerable population is one million people. The best estimate of our model predicts the number of new infections as a function of the time required to deploy and dispense medical treatment. Now suppose that, by quarantine, the disease can be contained within a sub-population of 50 thousand. The model predicts that the same morbidity will be reached in the sub-population only after a duration approximately 20 times longer than for one million people. In other words, mass quarantine allows slower medical deployment, or, conversely, lower morbidity at the same deployment rate. If the disease can be contained within a population of 1000, then the deployment duration is extended by a further factor of 50 without increasing morbidity. Clearly, quarantine is valuable, but how robust are these conclusions to modelling error?

Figure 1: Robustness curves for three different plans (File:Info-gap-epidemic.pdf).

Considerations such as these may contribute to a choice of quarantine size and deployment time. However, the models entail many info-gaps. Even small errors in the models and data can result in performance substantially worse than predicted. We seek a strategy for which we are confident that the outcomes are not unacceptable. For any given plan, we must ask: by how much can the models err while acceptable outcomes are still guaranteed? The answers to this question, for various levels of morbidity and three different plans, are the robustness curves shown in fig. 1. The horizontal axis is the morbidity, and the vertical axis is the robustness: the greatest fractional error in the estimated mixing rate up to which the corresponding morbidity will not be exceeded.

We see two important features of the robustness curves in fig. 1. First, the slopes are positive, indicating a trade-off between robustness and morbidity: morbidity can be reduced only by also reducing the robustness against error in the models and data. Second, the robustness becomes zero at the value of morbidity predicted by the best available models and data. For instance, the best estimate of the mixing rate indicates that morbidity in plans 1, 2 and 3 will not exceed 268, 182 and 227 infections, respectively. However, even tiny errors could result in greater morbidity. Best-model predictions are a poor basis for evaluating a plan. We should evaluate plans in terms of how much added morbidity we must tolerate in order to gain added robustness against surprises, errors, and data deficiencies. For instance, in plan 2 (intermediate quarantine) we can "purchase" robustness to 0.5 fractional error in the estimated mixing coefficient by accepting the possibility of as many as 339 infections, as opposed to the best-estimate of 182.

Note that plan 2 has lower estimated morbidity than the other two plans displayed in fig. 1, and that plan 2 is also more robust up to a robustness of about 120 percent (1.2 fractional error). However, in light of the many unknown factors which can influence the infectious-mixing rate, the decision-maker may well want far greater robustness. In this case plan 1 (severe quarantine) is indicated due to its greater immunity to uncertainty. Nonetheless it must be recognized that this greater robustness is obtained only at the expense of accepting the possibility of greater morbidity. The large robustness premium of plan 1 is particularly striking since this plan has the largest best-estimated morbidity.
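
The following sketch shows how a robustness curve of the kind plotted in fig. 1 can be computed for a single plan. The logistic (SI-type) growth model, the best-estimate mixing rate, the treatment delay, the quarantine size, and all other numbers are assumptions chosen for illustration; they are not the published analysis behind fig. 1:

    import numpy as np

    def morbidity(gamma, tau, N, I0=10):
        """Infections after a treatment delay tau under logistic growth dI/dt = gamma*I*(1 - I/N)."""
        return N / (1.0 + ((N - I0) / I0) * np.exp(-gamma * tau))

    def robustness(M_crit, gamma_est, tau, N):
        """Greatest fractional error alpha in the estimated mixing rate such that
        morbidity does not exceed M_crit.  Morbidity increases with the mixing rate,
        so the worst case at horizon alpha is gamma_est*(1 + alpha)."""
        if morbidity(gamma_est, tau, N) > M_crit:
            return 0.0
        lo, hi = 0.0, 100.0
        for _ in range(60):                          # bisection on alpha
            mid = 0.5 * (lo + hi)
            if morbidity(gamma_est * (1.0 + mid), tau, N) <= M_crit:
                lo = mid
            else:
                hi = mid
        return lo

    gamma_est, tau, N = 0.5, 5.0, 50_000             # assumed estimate (1/day), delay (days), quarantine size
    print(f"best-estimate morbidity: {morbidity(gamma_est, tau, N):.0f}")
    for M in (150, 300, 600, 1200):
        print(f"acceptable morbidity {M}: robustness = {robustness(M, gamma_est, tau, N):.2f}")

The output reproduces the two qualitative features discussed above: robustness is essentially zero at the best-estimate morbidity and grows as the acceptable morbidity is relaxed.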

This example illustrates how info-gap analysis of robustness assists strategic planners to weigh the pros and cons of available options.


Treatment of severe uncertainty


Sniedovich[47] has challenged the validity of info-gap theory for making decisions under severe uncertainty. He questions the effectiveness of info-gap theory in situations where the best estimate $\tilde{u}$ is a poor indication of the true value of $u$. Sniedovich notes that the info-gap robustness function is "local" to the region around $\tilde{u}$, where $\tilde{u}$ is likely to be substantially in error. He concludes that therefore the info-gap robustness function is an unreliable assessment of immunity to error. There are several possible responses to this concern.

Simon [50] introduced the idea of bounded rationality. Limitations on knowledge, understanding, and computational capability constrain the ability of decision makers to identify optimal choices. Simon advocated satisficing rather than optimizing: seeking adequate (rather than optimal) outcomes given available resources. Schwartz [51], Conlisk [52] and others discuss extensive evidence for the phenomenon of bounded rationality among human decision makers, as well as for the advantages of satisficing when knowledge and understanding are deficient. The info-gap robustness function provides a means of implementing a satisficing strategy under bounded rationality. For instance, in discussing bounded rationality and satisficing in conservation and environmental management, Burgman notes that "Info-gap theory ... can function sensibly when there are 'severe' knowledge gaps." The info-gap robustness and opportuneness functions provide "a formal framework to explore the kinds of speculations that occur intuitively when examining decision options." [53] Burgman then proceeds to develop an info-gap robust-satisficing strategy for protecting the endangered orange-bellied parrot. Similarly, Vinot, Cogan and Cipolla discuss engineering design and note that "the downside of a model-based analysis lies in the knowledge that the model behavior is only an approximation to the real system behavior. Hence the question of the honest designer: how sensitive is my measure of design success to uncertainties in my system representation? ... It is evident that if model-based analysis is to be used with any level of confidence then ... [one must] attempt to satisfy an acceptable sub-optimal level of performance while remaining maximally robust to the system uncertainties."[54] They proceed to develop an info-gap robust-satisficing design procedure for an aerospace application.

Thus it is correct that the info-gap robustness function is local with respect to the estimate $\tilde{u}$. However, the value judgment of whether this neighborhood of robustness is small, too small, large, or large enough is characteristic of all decisions under uncertainty. A major purpose of quantitative decision analysis is to provide focus for the subjective judgments which must be made.

References

  1. ^ Yakov Ben-Haim, Information-Gap Theory: Decisions Under Severe Uncertainty, Academic Press, London, 2001.
  2. ^ a b Yakov Ben-Haim, Info-Gap Theory: Decisions Under Severe Uncertainty, 2nd edition, Academic Press, London, 2006.
  3. ^ a b Yakov Ben-Haim, Robust Reliability in the Mechanical Sciences, Springer, Berlin, 1996.
  4. ^ Keith W. Hipel and Yakov Ben-Haim, 1999, Decision making in an uncertain world: Information-gap modelling in water resources management, IEEE Trans., Systems, Man and Cybernetics, Part C: Applications and Reviews, 29: 506-517.
  5. ^ a b Yakov Ben-Haim, 2005, Info-gap Decision Theory For Engineering Design. Or: Why `Good' is Preferable to `Best', appearing as chapter 11 in Engineering Design Reliability Handbook, Edited by Efstratios Nikolaidis, Dan M.Ghiocel and Surendra Singhal, CRC Press, Boca Raton.
  6. ^ a b Y. Kanno and I. Takewaki, Robustness analysis of trusses with separable load and structural uncertainties, International Journal of Solids and Structures, Volume 43, Issue 9, May 2006, pp.2646-2669.
  7. ^ a b Kaihong Wang, 2005, Vibration Analysis of Cracked Composite Bending-torsion Beams for Damage Diagnosis, PhD thesis, Virginia Polytechnic Institute, Blacksburg, Virginia.
  8. ^ a b Y. Kanno and I. Takewaki, 2006, Sequential semidefinite program for maximum robustness design of structures under load uncertainty, Journal of Optimization Theory and Applications, vol.130, #2, pp.265-287.
  9. ^ a b S.G. Pierce, K. Worden and G. Manson, 2006, A novel information-gap technique to assess reliability of neural network-based damage detection, Journal of Sound and Vibration, 293: Issues 1-2, pp.96-111.
  10. ^ S.Gareth Pierce, Yakov Ben-Haim, Keith Worden, Graeme Manson, 2006, Evaluation of neural network robust reliability using information-gap theory, IEEE Transactions on Neural Networks, vol.17, No.6, pp.1349-1361.
  11. ^ a b Chetwynd, D., Worden, K., Manson, G., 2006, An application of interval-valued neural networks to a regression problem, Proceedings of the Royal Society - Mathematical, Physical and Engineering Sciences, (Series A), 462 (2074) pp.3097-3114.
  12. ^ D. Lim, Y. S. Ong, Y. Jin, B. Sendhoff, and B. S. Lee, 2006, Inverse Multi-objective Robust Evolutionary Design, Genetic Programming and Evolvable Machines, Vol. 7, No. 4, pp. 383-404.
  13. ^ P. Vinot, S. Cogan and V. Cipolla, 2005, A robust model-based test planning procedure, Journal of Sound and Vibration, 288, Issue 3, pp.571-585.
  14. ^ a b Izuru Takewaki and Yakov Ben-Haim, 2005, Info-gap robust design with load and model uncertainties, Journal of Sound and Vibration, 288: 551-570.
  15. ^ Izuru Takewaki and Yakov Ben-Haim, 2007, Info-gap robust design of passively controlled structures with load and model uncertainties, Structural Design Optimization Considering Uncertainties, Yiannis Tsompanakis, Nikos D. Lagaros and Manolis Papadrakakis, editors, Taylor and Francis Publishers.
  16. ^ Francois M. Hemez and Yakov Ben-Haim, 2004, Info-gap robustness for the correlation of tests and simulations of a nonlinear transient, Mechanical Systems and Signal Processing, vol. 18, #6, pp.1443-1467.
  17. ^ a b Levy, Jason K., Keith W. Hipel and D. Marc Kilgour, 2000, Using environmental indicators to quantify the robustness of policy alternatives to uncertainty, Ecological Modelling, vol.130, Issues 1-3, pp.79-86.
  18. ^ A. Moilanen, and B.A. Wintle, 2006, Uncertainty analysis favours selection of spatially aggregated reserve structures. Biological Conservation, Volume 129, Issue 3, May 2006, Pages 427-434.
  19. ^ Halpern, Benjamin S., Helen M. Regan, Hugh P. Possingham and Michael A. McCarthy, 2006, Accounting for uncertainty in marine reserve design, Ecology Letters, vol.9, pp.2-11.
  20. ^ Helen M. Regan, Yakov Ben-Haim, Bill Langford, Will G. Wilson, Per Lundberg, Sandy J. Andelman, Mark A. Burgman, 2005, Robust decision making under severe uncertainty for conservation management, Ecological Applications, vol.15(4): 1471-1477.
  21. ^ McCarthy, M.A., Lindenmayer, D.B., 2007, Info-gap decision theory for assessing the management of catchments for timber production and urban water supply, Environmental Management, vol.39 (4) pp. 553-562.
  22. ^ Crone, Elizabeth E., Debbie Pickering and Cheryl B. Schultz, 2007, Can captive rearing promote recovery of endangered butterflies? An assessment in the face of uncertainty, Biological Conservation, vol. 139, #1-2,pp.103-112.
  23. ^ L. Joe Moffitt, John K. Stranlund and Craig D. Osteen, 2007, Robust detection protocols for uncertain introductions of invasive species, Journal of Environmental Management, In Press, Corrected Proof, Available online 27 August 2007.
  24. ^ M. A. Burgman, D.B. Lindenmayer, and J. Elith, Managing landscapes for conservation under uncertainty, Ecology, 86(8), 2005, pp. 2007-2017.
  25. ^ Moilanen, A., B.A. Wintle., J. Elith and M. Burgman, 2006, Uncertainty analysis for regional-scale reserve selection, Conservation Biology, Vol.20, No. 6, 1688–1697.
  26. ^ Moilanen, Atte, Michael C. Runge, Jane Elith, Andrew Tyre, Yohay Carmel, Eric Fegraus, Brendan Wintle, Mark Burgman and Yakov Ben-Haim, 2006, Planning for robust reserve networks using uncertainty analysis, Ecological Modelling, vol. 199, issue 1, pp.115-124.
  27. ^ Nicholson, Emily and Hugh P. Possingham, Making conservation decisions under uncertainty for the persistence of multiple species, Ecological Applications, vol. 17, pp.251-265.
  28. ^ Burgman, Mark, 2005, Risks and Decisions for Conservation and Environmental Management, Cambridge University Press, Cambridge.
  29. ^ Yohay Carmel and Yakov Ben-Haim, 2005, Info-gap robust-satisficing model of foraging behavior: Do foragers optimize or satisfice?, American Naturalist, 166: 633-641.
  30. ^ L. Joe Moffitt, John K. Stranlund, and Barry C. Field, 2005, Inspections to Avert Terrorism: Robustness Under Severe Uncertainty, Journal of Homeland Security and Emergency Management, Vol. 2: No. 3. http://www.bepress.com/jhsem/vol2/iss3/3
  31. ^ a b Beresford-Smith, Bryan and Colin J. Thompson, 2007, Managing credit risk with info-gap uncertainty, The Journal of Risk Finance, vol.8, issue 1, pp.24-34.
  32. ^ John K. Stranlund and Yakov Ben-Haim, 2007, Price-based vs. quantity-based environmental regulation under Knightian uncertainty: An info-gap robust satisficing perspective, Journal of Environmental Management, In Press, Corrected Proof, Available online 28 March 2007.
  33. ^ a b Yakov Ben-Haim, 2005, Value at risk with Info-gap uncertainty, Journal of Risk Finance, vol. 6, #5, pp.388-403.
  34. ^ Yakov Ben-Haim and Alexander Laufer, 1998, Robust reliability of projects with activity-duration uncertainty, ASCE Journal of Construction Engineering and Management. 124: 125-132.
  35. ^ a b c Meir Tahan and Joseph Z. Ben-Asher, 2005, Modeling and analysis of integration processes for engineering systems, Systems Engineering, Vol. 8, No. 1, pp.62-77.
  36. ^ Sary Regev, Avraham Shtub and Yakov Ben-Haim, 2006, Managing project risks as knowledge gaps, Project Management Journal, vol. 37, issue #5, pp.17-25.
  37. ^ Fox, D.R., Ben-Haim, Y., Hayes, K.R., McCarthy, M., Wintle, B., and Dunstan, P., An Info-Gap Approach to Power and Sample-size calculations, Environmetrics, vol. 18, pp.189-203.
  38. ^ Yakov Ben-Haim, 1994, Convex models of uncertainty: Applications and Implications, Erkenntnis: An International Journal of Analytic Philosophy, 41:139-156.
  39. ^ Yakov Ben-Haim, 1999, Set-models of information-gap uncertainty: Axioms and an inference scheme, Journal of the Franklin Institute, vol. 336: 1093-1117.
  40. ^ Yakov Ben-Haim, 2000, Robust rationality and decisions under severe uncertainty, Journal of the Franklin Institute, vol. 337: 171-199.
  41. ^ Yakov Ben-Haim, 2004, Uncertainty, probability and information-gaps, Reliability Engineering and System Safety, vol. 85: 249-266.
  42. ^ George J. Klir, 2006, Uncertainty and Information: Foundations of Generalized Information Theory, Wiley Publishers.
  43. ^ Yakov Ben-Haim, 2007, Peirce, Haack and Info-gaps, in Susan Haack, A Lady of Distinctions: The Philosopher Responds to Her Critics, edited by Cornelis de Waal, Prometheus Books.
  44. ^ Burgman, Mark, 2005, Risks and Decisions for Conservation and Environmental Management, Cambridge University Press, Cambridge, pp.399.
  45. ^ Z. Ben-Haim and Y. C. Eldar, Maximum set estimators with bounded estimation error, IEEE Trans. Signal Processing, vol. 53, no. 8, August 2005, pp. 3172-3182.
  46. ^ Babuška, I., F. Nobile and R. Tempone, 2005, Worst case scenario analysis for elliptic problems with uncertainty, Numerische Mathematik (in English) vol.101 pp.185–219.
  47. ^ a b c M. Sniedovich, 2007, The art and science of modeling decision-making under severe uncertainty, Decision-Making in Manufacturing and Services, Volume 1, Issue 1-2, pp. 109-134.
  48. ^ Yakov Ben-Haim, Scott Cogan and Laetitia Sanseigne, 1998, Usability of Mathematical Models in Mechanical Decision Processes, Mechanical Systems and Signal Processing, vol. 12: 121-134.
  49. ^ This example is an intuitive and non-technical discussion. The technical analysis of related problems can be found in: (1) Yoffe, Anna and Yakov Ben-Haim, 2006, An info-gap approach to policy selection for bio-terror response, Lecture Notes in Computer Science, Vol. 3975 LNCS, 2006, pp.554-559, Springer-Verlag, Berlin. (2) Yakov Ben-Haim, Info-Gap Theory: Decisions Under Severe Uncertainty, 2nd edition, Academic Press, London, 2006.
  50. ^ Simon, Herbert A., 1959, Theories of decision making in economics and behavioral science, American Economic Review, vol.49, pp.253-283.
  51. ^ Schwartz, Barry, 2004, Paradox of Choice: Why More Is Less, Harper Perennial.
  52. ^ Conlisk, John, 1996, Why bounded rationality? Journal of Economic Literature, XXXIV, pp.669-700.
  53. ^ Burgman, Mark, 2005, Risks and Decisions for Conservation and Environmental Management, Cambridge University Press, Cambridge, pp.391, 394.
  54. ^ P. Vinot, S. Cogan and V. Cipolla, 2005, A robust model-based test planning procedure, Journal of Sound and Vibration, 288, Issue 3, p.572.



Category:Decision theory