This is Philosophy of Science - Franz-Peter Griesmaier
2.3.3 Theory-Ladenness of Measurement
Suppose I want to determine the mass of my bowling ball. To that end, I might put it onto an ordinary balance and compare it with a known mass, such as a number of metal cubes each weighing 1, 10, or 100 grams. Things get more – much more – complicated if I want to know the mass of a distant star. Obviously, I can’t put it onto any balance. So how do I measure it? The details are actually quite complicated, but we can roughly say that measuring the mass of a star involves various background theories. For example, if we want to measure the mass of a binary star, we first determine the center of mass of the two stars and then each star’s distance from that center. Together with a value for the period (the time it takes the stars to complete one orbit around each other) and a suitable instance of Kepler’s Third Law, these values allow us to calculate the masses. In other words, in order to “measure” the star’s mass, we measure other quantities and use those values, together with certain equations, to calculate the mass. Obviously, the correctness of such a “measurement” depends not only on the correctness of other measurements (in this case, at least that of the period), but also on the correctness of certain background assumptions, such as Kepler’s Laws. Measurement is not a simple and unmediated estimation of independently existing properties, but often a determination of certain magnitudes against the background of a number of accepted theories.
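The calculation the passage describes can be made concrete. Kepler's Third Law, in its Newtonian form, says that the total mass of a binary system is fixed by the orbital separation and the period: M1 + M2 = 4π²a³ / (G P²). The sketch below illustrates this; the numerical inputs (a separation of about 20 AU and a period of about 50 years, loosely resembling the Sirius system) are illustrative assumptions, not values from the text.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11          # meters per astronomical unit
YEAR = 3.156e7         # seconds per year
SOLAR_MASS = 1.989e30  # kg

def binary_total_mass(separation_m: float, period_s: float) -> float:
    """Total mass of a binary star system from Kepler's Third Law:
    M1 + M2 = 4 * pi^2 * a^3 / (G * P^2)."""
    return 4 * math.pi**2 * separation_m**3 / (G * period_s**2)

# Illustrative (assumed) inputs, roughly like the Sirius binary:
total = binary_total_mass(20 * AU, 50 * YEAR)
print(total / SOLAR_MASS)  # about 3 solar masses
```

Note how the code makes the theory-ladenness visible: the function body just *is* an instance of Kepler's Third Law, so the "measured" mass is only as trustworthy as that background law and the measured period and separation fed into it.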
A moment’s reflection shows that even in a simple measurement, such as that of temporal duration, background assumptions play a role. To measure the duration of a process A, we count the number of times that another process B is completed. Of B we assume that it exhibits strict periodicity, i.e., that its completion time does not vary from one execution of the process to the next. A familiar example is the period from full moon to full moon. Today, the second as the unit of time is fixed by the number of oscillations of the cesium-133 atom (one second corresponds to about 9 billion such oscillations; see https://physics.nist.gov/cuu/Units/current.html). Since this behavior of cesium defines the unit of time, there is no way to verify by measurement that these oscillations really take one second. That they do is built into the definition of “one second.”
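The definitional character of the second can be expressed in two lines of code. The SI second is defined as exactly 9,192,631,770 periods of the cesium-133 hyperfine transition radiation, so converting an oscillation count into seconds is a matter of stipulation, not measurement (the function name below is my own, for illustration):

```python
# SI definition: one second is exactly 9,192,631,770 periods of the
# cesium-133 hyperfine transition radiation.
CS133_PERIODS_PER_SECOND = 9_192_631_770

def duration_seconds(cycle_count: int) -> float:
    """Convert a count of cesium oscillations into seconds.
    The conversion holds by definition, not by empirical check."""
    return cycle_count / CS133_PERIODS_PER_SECOND

print(duration_seconds(9_192_631_770))  # 1.0, true by definition
print(duration_seconds(4_596_315_885))  # 0.5
```

There is no experiment that could show `CS133_PERIODS_PER_SECOND` to be wrong, because any clock we used to check it would itself be calibrated against this very constant.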
One might think that the theory-ladenness of measurement raises methodological questions. Measurements are supposed to deliver data, which in turn can be used as evidence for or against some theory. Obviously, if the truth of the very theory to be tested by the data is presupposed by the measurement procedure that delivers those data, we face a circularity problem. Fortunately, this is hardly ever the case, and the current consensus is that the theory-ladenness of measurement poses no significant methodological problem. After all, since every measurement procedure unavoidably assumes the truth of certain background beliefs, this is not something we could escape in any case. Moreover, as we will see in the next chapter, observation is also influenced by background beliefs.
Not only are background beliefs involved in the measurement process, there is an influence in the other direction as well: the measurement process can influence what is measured. As a simple example, imagine a tiny amount of water whose temperature is being measured by immersing a thermometer into it. Suppose the thermometer’s housing is quite a bit colder than the water. If the quantities involved are just right, immersing the thermometer and then waiting to see where the mercury column settles will of course change the water’s temperature, even if only by a small amount. Usually, the amount is so small as to be negligible. The situation is, however, different in the science of the very small – quantum mechanics (QM). There, we find the so-called measurement problem, which we will briefly discuss, together with other quantum oddities, in Chapter 11.
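How much the thermometer disturbs the water can be estimated from a simple energy balance: at equilibrium, the heat lost by the water equals the heat gained by the thermometer. The sketch below uses assumed, illustrative values (a thermometer heat capacity of 2 J/K, the textbook specific heat of water); it is not from the text.

```python
def equilibrium_temp(m_water_kg: float, t_water: float,
                     c_therm: float, t_therm: float,
                     c_water: float = 4186.0) -> float:
    """Final temperature after water and thermometer equilibrate,
    assuming no heat is lost to the surroundings:
    T_f = (m*c_w*T_water + C_th*T_therm) / (m*c_w + C_th)."""
    heat_cap_water = m_water_kg * c_water
    return (heat_cap_water * t_water + c_therm * t_therm) / (heat_cap_water + c_therm)

# Tiny sample: 1 g of water at 40 C, colder thermometer (C = 2 J/K) at 20 C.
print(equilibrium_temp(0.001, 40.0, 2.0, 20.0))  # about 33.5 C, not 40 C
# Large sample: 0.5 kg of water; the disturbance becomes negligible.
print(equilibrium_temp(0.5, 40.0, 2.0, 20.0))    # about 39.98 C
```

The two calls make the point in the passage quantitative: for a tiny water sample the act of measuring shifts the reading by several degrees, while for a large sample the same thermometer barely matters.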