Mechanical Engineering in Uncertainties: From Classical Approaches to Some Recent Developments
1.1. Introduction
Several decades ago, the scientific and engineering community began to recognize the value of considering uncertainties in the design, optimization and risk analysis of complex systems, such as aircraft, space vehicles or nuclear power plants (Wagner 2003; Lemaire 2014). These uncertainties can manifest themselves in many forms, originating from many different sources. In the field of mechanics, sources of uncertainty can typically be due to:
– sources of uncertainty in the manufactured system: tolerances, variability of material properties, manufacturing methods, or loads and boundary conditions over the lifecycle;
– sources of uncertainty in experimental data: test apparatus and protocols, inaccuracies in measurements, as well as experimental and environmental conditions;
– sources of uncertainty in modeling: modeling approaches and choices, modeling tools, numerical resources available and model calibration based on experimental data.
With respect to these different sources of uncertainty, a distinction is often made between aleatory and epistemic uncertainties (Vose 2008; National Research Council 2009), although this distinction is debatable, as will be discussed in more detail in section 1.2.
Aleatory uncertainty is also referred to as irreducible uncertainty, stochastic uncertainty, inherent uncertainty or type I uncertainty. This uncertainty typically arises from environmental stochasticity, fluctuations in time, variations in space, heterogeneities and other intrinsic differences in a system. It is often referred to as irreducible uncertainty because it cannot be reduced further except by modification of the problem under consideration. On the other hand, it can be better characterized when it is empirically estimated. For example, the characterization of the variability of a material property can be improved by increasing the number of samples used, allowing a better estimate of statistical properties such as the mean and standard deviation.
An example of aleatory uncertainty can be seen in an (unbiased) coin toss (that is, “heads or tails”). The intrinsic characteristics of the toss create uncertainty about its outcome, with a probability of 0.5 of obtaining tails. Assuming that tails is an undesirable outcome, it would be desirable to reduce the probability of obtaining tails in order to make the desired outcome (obtaining heads) more certain. However, without breaking the rules of the game, that is, without modifying the problem under consideration, it is not possible to reduce this uncertainty, hence the term “irreducible uncertainty”. On the other hand, if this uncertainty is characterized empirically on the basis of repeated tosses, it can obviously be characterized more and more precisely by increasing the number of tosses.
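The coin-toss discussion can be sketched numerically. The following minimal Monte Carlo simulation (an illustrative sketch, not taken from the chapter; the function name and sample sizes are chosen arbitrarily) shows that while the aleatory uncertainty itself, p = 0.5, cannot be reduced, its empirical characterization sharpens as the number of tosses grows.

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

def estimate_heads_probability(n_tosses):
    """Empirically estimate P(heads) from n_tosses of a fair coin."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The true probability stays at 0.5 (irreducible aleatory uncertainty),
# but the estimate converges toward it as the sample size increases.
for n in (10, 1_000, 100_000):
    print(n, estimate_heads_probability(n))
```

By the law of large numbers, the standard error of the estimate shrinks like 1/sqrt(n): better characterization, but no reduction of the underlying variability.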
In terms of an engineering example, the amplitude of the gusts that an aircraft is likely to encounter during its lifetime can be seen as an aleatory, or irreducible, uncertainty. When designing a new aircraft model, the engineer would like to reduce this uncertainty as much as possible in order to reduce the weight of the aircraft structure. Unfortunately, since this uncertainty is essentially related to environmental stochasticity, the engineer has no way to reduce it without changing the design problem being considered; the uncertainty is therefore seen as irreducible. It could perhaps be reduced by changing the problem under study. For example, the engineer might consider installing sensors that allow the aircraft to detect the amplitude of turbulence ahead on its trajectory, thus allowing pilots to undertake avoidance maneuvers. Nevertheless, such a choice would have numerous and serious consequences. (Would such an aircraft be certifiable? Is it more economical to avoid turbulence than to design the aircraft to withstand it? Would passenger comfort be satisfactory? etc.) Usually, the problem is thus considered fixed, and the reducible or irreducible nature of the uncertainties is assessed for that given problem.
Epistemic uncertainty, also called reducible uncertainty or type II uncertainty, results from a lack of knowledge. This type of uncertainty is usually associated with measurement uncertainty, small numbers of experimental data points, censoring or unobservability of phenomena, or scientific ignorance, all of which, in general terms, amount to a lack of knowledge of one kind or another. It is often referred to as reducible uncertainty because it can potentially be reduced by additional actions (for example, additional studies) that improve this knowledge. It should be noted that once epistemic uncertainty has been reduced, aleatory uncertainty may remain; this remaining uncertainty could then become predominant and would be irreducible.
An example of epistemic uncertainty is that associated with the estimation of the age (in years) of an individual. Suppose we have just heard on the radio that a Nobel Prize-winning scientist is going to give an acceptance speech in our town, and we would like to know the scientist’s age, which the radio does not mention. At this point, let us say that we can only estimate the age of this person as between 30 and 90 years old; there is thus a lot of uncertainty about their age. However, as mentioned earlier, epistemic uncertainty can be reduced through improved knowledge. If we attend the acceptance speech, we will have the opportunity to see this person, which could allow us to narrow the uncertainty about their age to, let us say, between 50 and 60 years. If we go on to talk to them after the speech, we may be able to obtain additional information that further reduces the uncertainty. In this case, the uncertainty could even be reduced to zero if we can find out their date of birth.
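The age example lends itself to a simple interval representation of epistemic uncertainty: each new piece of information yields another interval known to contain the fixed true value, and the combined state of knowledge is the intersection of all intervals. The sketch below is illustrative only (the `intersect` helper and the specific numbers follow the narrative above, not any formalism from the chapter).

```python
def intersect(interval_a, interval_b):
    """Combine two interval estimates of the same fixed quantity."""
    lo = max(interval_a[0], interval_b[0])
    hi = min(interval_a[1], interval_b[1])
    if lo > hi:
        raise ValueError("inconsistent information: empty intersection")
    return (lo, hi)

# Initial knowledge: the laureate is between 30 and 90 years old.
age = (30, 90)
# Seeing the person at the speech suggests 50 to 60.
age = intersect(age, (50, 60))
# A conversation reveals the date of birth: exactly 57 years old.
age = intersect(age, (57, 57))
print(age)  # (57, 57): the epistemic uncertainty has been reduced to zero
```

Each refinement shrinks (never widens) the interval, which is precisely what makes this uncertainty reducible.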
A typical example of epistemic uncertainty in engineering problems is measurement uncertainty. Similar to the scientist’s age, the quantity to be measured has a true value that is fixed (considering measurements at the macroscopic, not the quantum, scale). Nevertheless, measurement instrumentation usually only allows this quantity to be determined with some uncertainty. This uncertainty can be reduced by developing better instruments based on better knowledge of the measurement phenomena involved, hence the term reducible uncertainty, although, in general, this does not mean that the uncertainty can be reduced to zero.
Probability theory has historically provided the first framework for modeling and quantifying uncertainties. Today, it is generally accepted that aleatory uncertainties can be adequately handled by probability theory. While the probabilistic approach can also be used to model epistemic uncertainties, other alternative representations have been proposed for such uncertainties, such as interval analysis, fuzzy set theory, possibility theory or evidence theory. These alternative approaches address, in particular, the need to quantify uncertainty when little or no data (either numerical or experimental) are available. Attempts to unify all of these approaches under a generalized theory of imprecise probabilities have also been undertaken (Walley 2000; Klir 2004), without, however, leading to fully satisfactory results.
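To give a flavor of the simplest of these alternatives, interval analysis, the following sketch propagates guaranteed bounds through a computation without assuming any probability distribution. This is a minimal illustrative implementation under assumed inputs (the `Interval` class and the spring-force example are not from the chapter); the theory itself is presented in section 1.5.

```python
class Interval:
    """Closed interval [lo, hi] with conservative (outward-bounding) arithmetic."""

    def __init__(self, lo, hi):
        if lo > hi:
            raise ValueError("lower bound must not exceed upper bound")
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of two intervals: bounds add endpoint-wise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: the extremes lie among the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Hypothetical data: spring stiffness k known only to lie in [100, 120] N/m,
# displacement x in [0.01, 0.02] m. The force F = k * x is then guaranteed
# to lie in approximately [1.0, 2.4] N, with no distributional assumption.
k = Interval(100, 120)
x = Interval(0.01, 0.02)
print(k * x)
```

Unlike a probabilistic model, such bounds require no assumption about how values are distributed inside the intervals, which is why interval-based methods are attractive when very little data is available.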
The purpose of this chapter is therefore to provide an overview of some of the different approaches used to represent and quantify uncertainties, both aleatory and epistemic. It is organized as follows. In section 1.2, a discussion about the need to distinguish between epistemic and aleatory uncertainty is presented. In section 1.3, an illustration of the probabilistic modeling approach is provided, including illustrations of some cases where its use may be problematic for representing epistemic uncertainty. In section 1.4, p-box theory is presented, which is an extension of probability theory that is designed to better address problematic cases in modeling epistemic uncertainties. In section 1.5, interval analysis is briefly discussed and in section 1.6 fuzzy set theory is addressed. In section 1.7, possibility theory is introduced, while evidence theory (or Dempster–Shafer theory) is presented in section 1.8. Some concluding remarks and discussions are provided in section 1.9.