
The Macro-Objectification Problem

In this section we shall clarify why the formalism we have just presented gives rise to the measurement or macro-objectification problem. To this purpose we shall, first of all, discuss the standard oversimplified argument based on the so-called von Neumann ideal measurement scheme. Then we shall discuss more recent results (Bassi and Ghirardi 2000), which relax von Neumann's assumptions.

Let us begin by recalling the basic points of the standard argument:

Suppose that a microsystem S, just before the measurement of an observable B, is in the eigenstate |bj> of the corresponding operator. The apparatus (a macrosystem) used to gain information about B is initially assumed to be in a precise macroscopic state, its ready state, corresponding to a definite macro property, e.g., its pointer points at 0 on a scale. Since the apparatus A is made of elementary particles, atoms and so on, it must be described by quantum mechanics, which will associate to it the state vector |A0>. One then assumes that there is an appropriate system-apparatus interaction lasting for a finite time, such that when the initial apparatus state is triggered by the state |bj> it ends up in a final configuration |Aj>, which is macroscopically distinguishable from the initial one and from the other configurations |Ak> in which it would end up if triggered by a different eigenstate |bk>. Moreover, one assumes that the system is left in its initial state. In brief, one assumes that one can dispose things in such a way that the system-apparatus interaction can be described as:

(initial state): |bk>|A0>   (final state): |bk>|Ak>        (1)

Equation (1) and the hypothesis that the superposition principle governs all natural processes tell us that, if the initial state of the microsystem is a linear superposition of different eigenstates (for simplicity we will consider only two of them), one has:

(initial state): (a|bk> + b|bj>)|A0>   (final state): (a|bk>|Ak> + b|bj>|Aj>)        (2)

Some remarks about this are in order. The scheme is highly idealized, both because it takes for granted that one can prepare the apparatus in a precise state, which is impossible since we cannot have control over all its degrees of freedom, and because it assumes that the apparatus registers the outcome without altering the state of the measured system. However, as we shall discuss below, these assumptions are by no means essential to derive the embarrassing conclusion we have to face, i.e., that the final state is a linear superposition of two states corresponding to two macroscopically different states of the apparatus. Since we know that the "+" representing linear superpositions cannot be replaced by the logical alternative "either ... or", the measurement problem arises: what meaning can one attach to a state of affairs in which two macroscopically and perceptively different states occur simultaneously?

As already mentioned, the standard solution to this problem is given by the WPR postulate: in a measurement process reduction occurs: the final state is not the one appearing at the right hand side of equation (2) but, since macro-objectification takes place, it is either |bk>|Ak> or |bj>|Aj>, with probabilities |a|^2 and |b|^2, respectively.
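The linearity argument leading from equation (1) to equation (2) can be made concrete with a small numerical sketch. The following Python/NumPy toy model is only illustrative: the two-level system, the three-level pointer and the particular permutation unitary U are our own minimal choices and are not part of the scheme itself. The sketch simply checks that any interaction satisfying equation (1) must, by linearity, carry the superposed initial state into the entangled state of equation (2) rather than into one of its branches.

```python
import numpy as np

# System: two eigenstates |b1>, |b2> of the measured observable B.
b1 = np.array([1.0, 0.0])
b2 = np.array([0.0, 1.0])

# Toy apparatus: a three-level "pointer" with ready state |A0> and two
# macroscopically distinct outcome states |A1>, |A2>.
A0 = np.array([1.0, 0.0, 0.0])
A1 = np.array([0.0, 1.0, 0.0])
A2 = np.array([0.0, 0.0, 1.0])

def prod(s, a):
    """Product state |s>|a> as a vector in the 6-dimensional tensor space."""
    return np.kron(s, a)

# One unitary (here a simple permutation of basis states) realizing the ideal
# scheme of equation (1): |b1>|A0> -> |b1>|A1> and |b2>|A0> -> |b2>|A2>.
U = np.column_stack([
    prod(b1, A1), prod(b1, A0), prod(b1, A2),   # images of |b1>|A0>, |b1>|A1>, |b1>|A2>
    prod(b2, A2), prod(b2, A1), prod(b2, A0),   # images of |b2>|A0>, |b2>|A1>, |b2>|A2>
])
assert np.allclose(U @ U.T, np.eye(6))          # U is indeed unitary

# Trigger the measurement with the superposition (a|b1> + b|b2>)|A0>.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
final = U @ prod(a * b1 + b * b2, A0)

# Linearity forces the entangled outcome of equation (2),
# a|b1>|A1> + b|b2>|A2>, not "either |b1>|A1> or |b2>|A2>".
print(np.allclose(final, a * prod(b1, A1) + b * prod(b2, A2)))   # True
```

Nothing here depends on the specific matrices: once the action of the dynamics on the triggering eigenstates is fixed, its action on their superposition is fixed as well, and it is never of the either/or form.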

Nowadays, there is a general consensus that this solution is absolutely unacceptable for two basic reasons:

- It corresponds to assuming that the linear nature of the theory is broken at a certain level. Thus, quantum theory is unable to explain how it can happen that the apparata behave as required by the WPR postulate (which is one of the axioms of the theory).

- Even if one were to accept that quantum mechanics has a limited field of applicability, so that it does not account for all natural processes and, in particular, it breaks down at the macrolevel, it is clear that the theory does not contain any precise criterion for identifying the borderline between micro and macro, linear and nonlinear, deterministic and stochastic, reversible and irreversible. To use J.S. Bell's words, there is nothing in the theory fixing such a borderline, and the split between the two above types of processes is fundamentally shifty. As a matter of fact, if one looks at the historical debate on this problem, one can easily see that it is precisely by continuously resorting to this ambiguity about the split that adherents of the Copenhagen orthodoxy or easy solvers (Bell 1990) of the measurement problem have rejected the criticism of the heretics (Gottfried 2000). For instance, Bohr succeeded in rejecting Einstein's criticisms at the Solvay Conferences by stressing that some macroscopic parts of the apparatus had to be treated fully quantum mechanically; von Neumann and Wigner displaced the split by locating it between the physical and the conscious (but what is a conscious being?); and so on. Other proposed solutions to the problem, notably certain versions of many-worlds interpretations, suffer from analogous ambiguities.

It is not our task to review here the various attempts to solve the above difficulties; many exhaustive treatments of this problem can be found in the literature. Rather, we would like to discuss how the macro-objectification problem is indeed a consequence of very general, in fact unavoidable, assumptions on the nature of measurements, and not specifically of the assumptions of von Neumann's model. This was established in a series of theorems of increasing generality, notably the ones by Fine (1970), d'Espagnat (1971), Shimony (1974), Brown (1986) and Busch and Shimony (1996). Possibly the most general and direct proof is given by Bassi and Ghirardi (2000), whose results we briefly summarize. The assumptions of the theorem are:

- that a microsystem can be prepared in two different eigenstates of an observable (such as, e.g., the spin component along the z-axis) and in a superposition of two such states;

- that one has a sufficiently reliable way of measuring such an observable, meaning that when the measurement is triggered by each of the two above eigenstates, the process leads in the vast majority of cases to macroscopically and perceptually different situations of the universe. This requirement allows for cases in which the experimenter does not have perfect control of the apparatus, the apparatus is entangled with the rest of the universe, the apparatus makes mistakes, or the measured system is altered or even destroyed in the measurement process;

- that all natural processes obey the linear laws of the theory.

From these very general assumptions one can show that, repeating the measurement on systems prepared in the superposition of the two given eigenstates, in the great majority of cases one ends up in a superposition of macroscopically and perceptually different situations of the whole universe. If one wishes to have an acceptable final situation, one mirroring the fact that we have definite perceptions, one is arguably compelled to break the linearity of the theory at an appropriate stage.
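The same conclusion can be illustrated under these weaker assumptions with an equally schematic sketch. The toy "universe" below, its split into two macroscopically distinct sectors, the tolerance eps and the random states are all invented for illustration and are not taken from the actual proof by Bassi and Ghirardi; the only inputs mirroring the assumptions above are that the final states triggered by the individual eigenstates lie mostly in the correct sector, and that the dynamics is linear.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 50        # dimension of each macroscopic sector (arbitrary toy choice)
N = 2 * d     # toy "universe": sector 1 = first d basis states, sector 2 = the rest

def random_state(lo, hi):
    """A random normalized state supported on basis vectors lo..hi-1."""
    v = np.zeros(N)
    v[lo:hi] = rng.normal(size=hi - lo)
    return v / np.linalg.norm(v)

# Small unreliability: imperfect control of the apparatus, entanglement with
# the rest of the universe, occasional mistakes, alteration of the system...
eps = 0.01

# phi1: final state of the universe when the measurement is triggered by the
# first eigenstate alone; it is only required to lie mostly in sector 1.
phi1 = np.sqrt(1 - eps) * random_state(0, d) + np.sqrt(eps) * random_state(d, N)
# phi2: final state triggered by the second eigenstate, mostly in sector 2.
phi2 = np.sqrt(1 - eps) * random_state(d, N) + np.sqrt(eps) * random_state(0, d)
# Make phi2 exactly orthogonal to phi1, as it must be if both arise from a
# single unitary acting on orthogonal initial states (the correction is tiny).
phi2 -= (phi1 @ phi2) * phi1
phi2 /= np.linalg.norm(phi2)

# Linearity: whatever the detailed dynamics, when the microsystem starts in
# the superposition a|b1> + b|b2>, the universe ends up in a*phi1 + b*phi2.
a, b = np.sqrt(0.7), np.sqrt(0.3)
final = a * phi1 + b * phi2

w1 = np.linalg.norm(final[:d]) ** 2   # weight in macro-sector 1
w2 = np.linalg.norm(final[d:]) ** 2   # weight in macro-sector 2
print(round(w1, 3), round(w2, 3))     # roughly 0.7 and 0.3: a superposition of
                                      # macroscopically different situations
```

However imperfect, entangled with its environment, or destructive the measurement is allowed to be, the weight of the final state in each macroscopic sector remains close to |a|^2 and |b|^2: one again faces a superposition of macroscopically and perceptually different situations rather than one definite outcome.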
