On the Completeness of Quantum Physics

This page is the second in a series of pages that belong to the intersection of the author’s topics in physics, biology, and cognitive science.


In an earlier page, the following idea was explored: there are questions that appear physical to physicists, but in reality they are not answerable strictly within the domain of physics. To answer them properly we need the help of other disciplines; namely, biology and cognitive science. An example of such a question examined in the previous page was: “Why does the quantum world appear so strange?” i.e., why do the quantum laws appear to us so detached from everything we are familiar with? In the present page another such multi-disciplinary issue will be examined: the quest of physicists to reach “the rock bottom” of matter, the ultimate building blocks out of which everything is made. Physicists, it seems, would like to know the answer to everything physical. Is this a well-reasoned quest? (Arguments of physicists claiming that the quantum is the last layer of material structure are discussed later on in this text.)

Typically, physicists of today are extreme specialists. When one delves deep into a very specific subject and becomes a specialist, it is only natural that one cannot find the time to learn and assimilate sufficient knowledge from other disciplines (and often even from sub-fields of one’s own discipline). Unfortunately, the opposite kind of scientist, the “generalist” (as I describe myself), suffers from the opposite defect: generalists do not have the specialized knowledge required to understand the specialist’s subject in sufficient depth and to find a common language with them. (The rare breed who is both an extreme generalist and an extreme specialist, if it exists, I have yet to meet.) The important point here is that it would be nice if each kind of scientist, generalist or specialist, realized that there are gaps in their knowledge: they can’t know everything. But there is an asymmetry: it is obvious to generalists that they don’t know everything, because they see all these other disciplines and sub-disciplines in which there obviously exist specialists who know more than they do; “everything” is just too much, and it makes one feel humble. But it is not as obvious to specialists that there are questions that seem answerable within their domain of expertise but in reality are not. Specialists feel alone in their field, experts, often with few or no “competing” experts; so they naturally develop an attitude antithetical to humility. It causes little surprise, then, that when confronted with an issue that seems to belong to their domain of expertise, specialists feel not only like the proper authorities to give an answer, but develop an attitude of indignation against those whom they perceive as “intruders”, and want to defend their turf. It is hard to persuade specialists that their “turf” does not extend to where they think it does; hence the need for this long introduction.
What would you feel, after all, if you were told that your country’s borders do not extend to where you think they do? Wouldn’t you feel indignation against the “intruder”? But the borders of scientific fields, contrary to national ones, are not defined by treaties and international conventions. There are not even borders between scientific fields — at least not crisply delineated ones — and the person best placed to answer a question is determined not by decree, but by the explanatory power of the answer that person gives. Certainly it would be better if any gut reaction against this text, should it emerge, were suppressed, and the issues examined calmly and objectively, as the scientific attitude toward knowledge is supposed to demand.

So, by necessity — due to the lack of a commonly agreed-upon language — the discussion between generalists and specialists is restricted to a few borderline issues. Such is the issue of the quest for the ultimate building blocks of the material(1) universe. I made the long introduction above because telling the physicist that perhaps there are no — there can be no — ultimate material building blocks can easily be misinterpreted. People might think that I adopt a dualistic stance, in which it is assumed that there exist non-material elements in this world, such as “units of consciousness”, which are irreducible to other material (physical) units.(2) Far from it. I am as materialist as a materialist can be, and, having been trained as a cognitive scientist, I firmly believe that mind and consciousness are emergent properties of matter, ultimately reducible(3) to quantum physics. No, my objection to the possibility of a complete explanation of physics within physics itself does not stem from non-material considerations, but from logical ones. Allow me to explain what I mean.

Suppose that, at some future time, we conclude that the world is made, after all, out of three elementary units: A, B, and C; and there can be no more elementary units making up A, B, or C, turning the latter into composites. Is this possible?

Are units A, B, and C not supposed to have any properties? Of course they are; otherwise our knowledge of them would amount to nothing. When we talk about electrons today, for example, although we cannot answer the question “What is an electron made of?”, thus assuming that electrons are “fundamental” or “elementary” (superstring theory notwithstanding), we nonetheless associate several properties with them: electrons have a negative charge of 1 unit, a spin of 1/2, a mass of around 9.1·10⁻²⁸ g, are stable (have an “infinite” half-life — until evidence to the contrary), are constituents of atoms, where they occupy shells around the nucleus, absorb and emit photons, jump between lower and higher energy levels, are responsible for electric currents, are considered point-like particles, and so on. What this means is that the “elementary” electron is actually a very rich concept. And the same should be the case for units A, B, and C: they must be rich concepts; we must know and associate a large number of properties with them. An “elementary” particle or unit does not mean an “empty concept”, because an empty concept is one that we have no knowledge of. Essentially empty concepts do not exist, because the mere labeling of a concept with a word constitutes an association: the concept has the property of being called [substitute its word here].

Now, if we associate the rich concepts A, B, and C, with several properties, isn’t it legitimate to ask, “Why does A have property [such-and-such]? What is this property, where does it come from? Why does this property have the specific value we all know that it has for unit A, and a different value for units B and C?”

Physicists have displayed at least three attitudes (that I am aware of) when confronted with questions such as the above:

  • Dirac’s “shut up and calculate” attitude (the slogan itself was actually coined much later, by N. David Mermin, but the stance is older), probably the most pathetic of the three. It amounts to sheepishly admitting, “I am just a physicist, all right? I know how to calculate my formulas, and I strive to do it better than anyone else, but I don’t care about the meaning of things.” Perhaps physicists with this attitude feel that meaningful interpretations should be left to philosophers. Well, I don’t know what you think, but to me this kind of attitude amounts to admitting inferiority to a modern computer.
  • Bohr’s “thou shalt not ask” attitude. It differs from Dirac’s in saying not that the physicist doesn’t care, but that there is no meaning behind such questions: they are inherently meaningless, thus even philosophers are talking nonsense when they ask such questions. Bohr, known as the patriarch of the Copenhagen Interpretation of quantum physics, and a follower of positivism,(4) is famous for having silenced most of the dissenting voices in his time. Why is this attitude also pathetic? Because it contradicts a fundamental tacit assumption in science: as long as there are questions, scientists are entitled to explore and seek an answer.
  • Einstein’s “as long as there are questions there must be hidden answers (and I want to know what they are)” attitude. I chose Einstein as the representative physicist with this attitude because, although some of his views in quantum physics turned out to be incorrect, he was the most well known among those few who dared to oppose Bohr’s hegemony. David Bohm was another one, pioneering the “hidden variables” approach. Whether their explorations were right or wrong is irrelevant here; it is their stance that is marked as worthy of attention for our purposes.

Assuming that, of the three attitudes above, Einstein’s is the only one that leaves any room to further scientific exploration, it is now necessary to examine whether questions such as those mentioned above (“Why does A have this property?”, “Why does this property have this specific value?”, etc.) are legitimate. Do such questions make sense? Could it be we are wrong in asking them in the first place, and thus Bohr was right, after all, wanting to un-ask them?

There are many examples of wrongly asked questions regarding the physical world. For instance, “What exists north of the North Pole?” is one such question, often compared with its cosmological analogue, “What happened before the Big Bang?” Another example, from biology, is: “When did the first human (or human couple) live?” Preschool children often ask: “Which is the largest number?” In later years they might ask: “What is happening now on Mars?” All these questions are wrong because they assume the existence of something that does not exist: a location north of the North Pole, an event before the beginning of time, a first human, a largest number, a ticking clock that keeps an absolute time in the universe. But our questions, such as “Why does A’s property have this specific value?”, do not seem to be of this type, because they do not assume something that does not exist: the A’s are supposed to have been found experimentally, the values of their properties have also been measured experimentally, and so on.(5) I cannot find any other reason why such questions might be wrong in principle, so it seems to me that one has a legitimate right to ask them. Our present ignorance should not be an excuse for refusing to explore further.

Indeed, I will claim here, ignorance is the primary culprit for the refusal of scientists to even consider these questions as valid. There has been ignorance since ancient times, but back then it was so obvious that people knew very little about nature that no one could seriously think the quest for the understanding of nature had ended. In Thales’s time, and even later in Aristotle’s, the quest had just started. Nature appeared daunting to ancient peoples, humbling them. But in our times, and especially since the 20th century, so much knowledge has accumulated that nature is no longer perceived as threatening. In addition, scientists became extreme specialists. Along with extreme specialization comes a feeling of “No one knows these matters better than I do.” This is often true at face value, but it makes specialists feel subconsciously responsible for answering every question related to their field. If they cannot, their ego is traumatized. So when they are asked questions that would expose their ignorance, their subconscious reaction is, “If I don’t know, no one should.” This is sometimes extended to: “If I don’t know now — and it seems quite likely that I won’t know within my lifetime — then it better be that no one ever will.” It is important to note that such thoughts can be entertained not explicitly (consciously), but subconsciously.(6) Evidence for this attitude comes from statements that proclaim the end of the scientific endeavor because everything there was to know is supposed either to have become known already, or to be expected to become known in the very near future. Indeed, many present-day physicists believe that with our current knowledge of quantum physics we have reached, or are about to reach, the “rock bottom” in our exploration of matter. Their arguments for this belief are discussed later on, so as not to interrupt the flow of the text at this point.

In other words, specialists such as Bohr and others in the 20th century not only denied their ignorance in matters pertaining to the narrow focus of their specialty, but also denied the possibility that anyone else might fill in the blanks sometime in the future. They did this by either banishing the blanks as non-askable, or turning ignorance into an axiom:

If events seem to be happening randomly, this is not because we ignore the possible underlying causes, but indeed causes do not exist; randomness is promoted to an axiomatic element of the explanatory system; randomness is not caused, it simply is.

Let us examine the axiomatization of randomness. To claim that, for example, a radioactive element decays spontaneously without cause, is analogous to the claim that particle A, a rich concept in the scientist’s mind (see above), is elementary — please seek no further. Every theory is ultimately based on some axioms — there is no other option, otherwise we end up with circular reasoning — but axioms are not immutable and eternally unquestionable. For example, consider a very well known case of an axiomatic theory: classical Euclidean geometry.

It was assumed by the ancients that there are elementary concepts, such as “point” and “straight line”, that do not admit further explanation. There were also axioms (“postulates”), such as that it is possible to draw a straight line that passes through any two points, which did not require proof. Granted, one has to start from somewhere. But in the course of the millennia that passed since Euclid’s time, none of the elementary concepts and axioms remained unscathed from the attack of time and mathematical investigation: points were generalized to n-dimensional tuples (and 4-dimensional events in relativistic physics), and straight lines to geodesics, implying that, given the appropriate geometry, zero, one, or even many generalized “straight lines” can pass through two points (e.g., on the surface of a sphere, where an infinite number of geodesics, or equator-like circles — which are the “straight lines” on the sphere — pass through any two diametrically opposite points). There is also the famous case of the fifth Euclidean postulate, the abandonment of which led to non-Euclidean geometries, and hence to the mathematical foundation for the general theory of relativity. One of the greatest achievements of 20th century physics came about as a result of abandonment — or generalization — of what had earlier been considered axiomatically as eternal truths.(7)

Thus, not only does one have the right to ask why the rich concept A is what it is or acts the way it does, but it has also been shown historically that even if the answer seems impossible today, even if A appears to be at the “rock bottom” of our material exploration at present, it will most likely not retain that status eternally. To assume otherwise is an attitude that betrays extreme arrogance.

But if the axioms of physics can always be questioned and explored further (possibly losing their axiomatic status later on), it follows that physics can never be a complete theory, one that contains its explanation within itself. An “axiom” is an initial assumption. Logically, a theory with assumptions can never be considered self-sufficient. In a nutshell,

Contrary to our psychologically understandable wishes,
if physics is an axiomatic system, it will remain
eternally incomplete.

Note that the objection “Physics is based not on assumptions, but on observations” is not valid: we observe objects (particles, etc.), until we cannot observe any further. Those objects where our observation stops — which thus appear elementary to us — are our assumptions, the “axioms” of our theory (or: our “axiomatic ontology”, to use a more appropriate term), akin to points and lines in geometry. In Dalton’s time, atoms were thought to be the ultimate building blocks of nature, the axiomatic ontology on which all chemical and physical theory was based; at present (beginning of the 21st century), our axiomatic ontology includes notions such as quarks, electrons, photons, charge, spin, mass–energy, space–time, and a few more. Insofar as these notions are not explained in terms of other notions, they are our “assumptions”.

But let us grant to philosophically-inclined physicists the right to elevate to the status of “primitive” (or “elementary”) any concept in physics that is not an already known consequence of other concepts. Thus, physicists decide that randomness is such a primitive concept, making probability a fundamental theory in our understanding of the world. Is this a reasonable choice? More specifically, can we claim we understood the way the world works now that we turned randomness into an axiom?

To have fully understood a physical system means that we can write
a program to simulate our understanding of it in a computer.

The above is not exactly a water-tight logical proposition, but it is the best way I know of expressing the intuitive notion “We fully understood [some physical system].” If the reader is aware of a better way to express this notion, I’ll be glad to learn it. The “we” in the above sentence does not refer to individuals, but to “we, people”. Thus, if you are an individual who doesn’t know how to program a computer, that doesn’t mean you can’t ever claim to have understood a physical system; but for “we, physicists” to say that we understood a system implies that we must be able to give instructions to some programmers so that they can simulate the system, without the programmers introducing any extra assumptions about the system that were not explicitly given by the physicists.

Now, suppose you are a physicist with some elementary knowledge of computer programming (or you simply hire a programmer) and you attempt to simulate the workings of a system of quantum particles in your computer. You include some fermions in the system (e.g., electrons, quarks) and some bosons through which the fermions interact (e.g., photons, gluons). Each particle is an “object” in programming terms that includes its properties. For example, the electron has a property “charge” with value -1, and a property “spin” with value 1/2, etc.; quarks are objects with properties such as “type” (with values “up”, “down”, “strange”, etc.), “charge” (either -1/3 or 2/3), “color” (“red”, “blue”, “green”), and so on. Also, each object (actually the class of the object) contains the methods by which the object interacts with other objects; for example, a method specifies what happens when a free electron finds itself in the vicinity of another electron, causing one of the two electrons to emit a photon, the other electron to absorb it, and the two electrons to scatter in space. Any composite objects, such as protons (which are made up of two up and one down quark), or even whole atoms, must come about as emergent structures, assuming you did a nice job in programming your primitive objects. So far so good.
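The setup just described can be sketched in a few lines of Python. This is a minimal, hypothetical rendering, not a real physics library: the class names, the default values, and the charge rule are illustrative assumptions standing in for the full Standard Model bookkeeping.

```python
# Hypothetical sketch: elementary particles as classes whose properties are
# fixed data and whose interactions would be methods on those classes.
from dataclasses import dataclass

@dataclass
class Electron:
    charge: float = -1.0   # in units of the elementary charge
    spin: float = 0.5      # in units of hbar

@dataclass
class Quark:
    flavor: str            # "up", "down", "strange", ...
    color: str             # "red", "green", "blue"

    @property
    def charge(self) -> float:
        # Up-type quarks carry +2/3 of the elementary charge, down-type -1/3.
        return 2 / 3 if self.flavor in ("up", "charm", "top") else -1 / 3

# A composite object such as the proton is not programmed directly; it
# emerges as a bound state of the primitives (two up quarks, one down):
proton = [Quark("up", "red"), Quark("up", "green"), Quark("down", "blue")]
total_charge = sum(q.charge for q in proton)  # 2/3 + 2/3 - 1/3 = +1
```

The summed charge of +1 illustrates the “emergent structures” point: the proton’s properties follow from the primitive objects, provided the primitives were programmed correctly.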

Now comes the time to write the method of the class “Electron” by which it is specified what happens to a specific object of that class (an electron) that is bound to an atom but in an excited state, and emits a photon, thus dropping to a lower energy level, coming a bit closer to the nucleus. You need to specify when this event happens. Well, all you know is that it happens at an indeterminate time, randomly. So you have to call a random number generator, which gives you a number by which you decide if the excited electron should emit a photon at a given time. Therein (in the random number generator) lies your cheating.

You see, the random number generator uses knowledge outside of your accepted set of primitives (your axioms). Whereas you accepted randomness as a primitive, when the time comes to implement this primitive you are forced to write a mechanism that specifies exactly how this “primitive” behaves; but then you no longer have a primitive — you have a complex procedure, one that uses arithmetic operators to generate a pseudo-random number. A primitive is a primitive, such as “charge”, “color”, “baryon number”, “time”, “mass”, “angular momentum”, etc., not a complex set of procedures that compute something like a random number. (Note that you can’t use quantum randomness itself in your computer to generate random numbers, because this would imply you are using that which you set out to explain — you’d be guilty of circular reasoning, in other words.) On one hand, if randomness is a primitive, then you must admit that you don’t have a full understanding of your system, and this will become painfully obvious when you realize that, by omitting knowledge of how randomness comes about, your simulation won’t work: something as dumb as the compiler of your programming language will inform you that you haven’t yet specified the randomness method; your program won’t even compile (let alone run). On the other hand, if you do implement randomness, assuming knowledge of it, then your system will work, but you won’t have randomness as a primitive anymore; you can’t have your cake and eat it too.
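To make the point concrete, here is what “implementing randomness” actually looks like in practice: a minimal linear congruential generator in Python. The multiplier and increment are conventional published constants, and the 0.1 emission probability is an assumed, purely illustrative value. Nothing here is primitive; it is ordinary, fully specified, deterministic arithmetic.

```python
# A minimal linear congruential generator (LCG). The "random" numbers the
# simulation consumes are produced by a completely specified arithmetic
# procedure, so randomness is no longer an unexplained primitive.
class LCG:
    def __init__(self, seed: int):
        self.state = seed

    def next_uniform(self) -> float:
        # Constants from a standard LCG parameterization (Numerical Recipes);
        # the particular choice is conventional, not fundamental.
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32

rng = LCG(seed=42)
# Deciding whether the excited electron emits a photon in this time step
# (0.1 is an assumed per-step emission probability, for illustration only):
emits = rng.next_uniform() < 0.1
```

Note that seeding the generator with the same value always reproduces the same sequence — a vivid reminder that the procedure is deterministic through and through.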

Thus, in my view, the claim that randomness is a primitive and that, simultaneously, we have a complete understanding of quantum physics, is a contradiction. The person who makes both claims and does not see the contradiction must be confused.

Now, what follows enclosed in a frame, below, was an argument against the ultimate validity of Heisenberg’s Uncertainty Principle. However, as it turns out, it is a wrong argument. I realized it was wrong seven years after I first wrote the present text. Instead of simply eliminating what I wrote, I thought it more honest to include the previous error, rather than pretend that everything I ever wrote was fine. It can even be useful — from a didactic perspective — to leave my erroneous syllogism on this page, because the reasons for which we make errors in thinking about the natural world can be just as instructive as those for which we are correct. It might be interesting for the reader to find out where the error is. I discovered it by reading further about quantum physics; perhaps the reader already knows enough to spot the error instantly. So, what is enclosed in the frame that follows could be seen as a sort of intellectual exercise, probing the lay reader’s depth of knowledge in physics.

When we turn to consider Heisenberg’s Uncertainty Principle (which prevents us from knowing accurately at the same time, e.g., the position and momentum of a particle), we come face to face with an issue that is also present in the randomness-turned-elementary problem, but not as obviously as in this case:

Human limitations are often confused with universal limitations in physics.

Here is what this statement means.

We humans are cognitive beings that have evolved under certain conditions on our planet. One of these environmental conditions is that electromagnetic radiation originating from our Sun is allowed to pass through the Earth’s atmosphere, and the bulk of the photons that manage to make it to the surface of our planet have their wavelengths in the visible range of the electromagnetic spectrum, from red to violet. As a result, we (along with most other animals) evolved to see precisely in that range. It turns out that vision is our most important sense for perceiving the world, especially for scientific purposes. (Neither audition, nor olfaction, nor any other sense is useful for perceiving the stars and the quanta.) Thus, whatever information we collect from scientific experiments has to be registered in our brains ultimately through photons — there simply is no other possibility.

In some cases this is hard to fathom. In particle accelerators, physicists use all sorts of particles — not just photons — smashing them against each other and registering collision data in computing equipment. From the results of such collisions they draw conclusions about the properties of particles. On the surface, it looks as if photons need not be involved in this process: particles collide, their data are registered by equipment that — unlike us — does not necessarily depend on photons, and finally some scientists read the output of the computations. Where are the photons in this process? They are in the last step, when the information is read and registered in the scientist’s mind. The collision data have to be of such a nature that ultimately some photons will be “informed”, that is, will preserve the information of the collisions so as to pass it into human minds. As mentioned earlier, our brains have only a limited number of input modalities (senses), of which vision is the only one useful for scientific inquiry. Brains require photons; there is no other option.(8)

Now, the following is perhaps the central idea of this argument:

What if photons are too crude a medium for the transfer of information that might exist at yet undiscovered lower levels of material organization?

What if — just suppose for a moment — there is a level of material organization the units of which are to quantum particles as quantum particles are to our familiar macroworld objects? Yes, I know very well that I am not the first person in the world to come up with this idea, but here is a hopefully new one: it could be that there is such a level and we have no way of detecting it (for now at least, and for some time to come), because we evolved to have eyes that receive photons. We evolved to have biological mechanisms that interpret only the information that reaches us here on Earth. I can surmise at least a couple of reasons why we might fail to perceive information from a purported lower-than-quanta level:

  • It could be that it is biologically impossible to evolve receivers that receive and interpret this lower level of material organization. We already know of several analogous examples. For instance, a vast number of neutrinos penetrate our bodies every second, but they essentially do not interact with matter, so living beings evolved to be oblivious to neutrinos. Also, a minute amount of X-rays reaches us on Earth, but no animal ever evolved a system to intercept X-rays, although seeing through objects that are opaque to visible light would obviously increase the survival chances of any animal that could achieve this feat. We also receive radio waves from the Sun, which are in fact harmless compared to X-rays, and yet no organism evolved to communicate in the radio part of the spectrum.
  • Another possibility is that our corner of the galaxy (the neighborhood of our solar system) is in the “shadow” of some structure and thus does not receive the units of that lower level that would be the carriers of information (analogous to photons, which for us are the information carriers of the quantum level). As an analogy, we would be in the shadow of visible photons if, for example, life could evolve only in caves, say because the lack of an atmosphere and of deep enough oceans had turned the gamma-ray-bathed surface of the Earth inhospitable. As it turns out, we are already in a kind of shadow regarding visible photons coming from the center of our galaxy, because interstellar dust clouds along the galactic plane block visible light from the distant galactic center. Similarly, it could be the case that we are permanently in the “shadow” of something that blocks the purported lower-than-quantum-level information carriers from us.

Now, clearly it is easy to dismiss all the above as merely the hand-waving (unsupported surmising) of a non-physicist. But the point I am trying to make is that principles such as Heisenberg’s Uncertainty are based on a hidden assumption, which is never stated explicitly in physics textbooks:

What is in the world is one thing, whereas what we evolved to perceive is another.

The two do not necessarily coincide. When it is said, in the context of Heisenberg’s Uncertainty, that once the momentum of an electron is known fairly accurately then its location is blurred and cannot be measured no matter what, what is meant is no matter what we, limited, photon-communicating human beings do.

Physicists usually assume that human limits are also nature’s limits. But science should be concerned not with what humans think that there is out there, but with what really is out there, in a human-independent fashion. Consider this: Newton, and other physicists in the past, studied color; but in the 20th C. we learned that color is a human-dependent property, it’s not a real property of nature, in which there are no colors, only electromagnetic radiation of different wavelengths. We learned that we have pigments of three different kinds in the cones of our retinas (for seeing what appears to us as red, green, and blue), and thus we interpret wavelengths as colors. Pigeons have four pigments in their retinal cells, so they probably see more colors than we do, in a way that we cannot grasp. Humans are just one kind of intelligent observers, a biologically restricted kind. There might be other kinds. For instance, there might be intelligent aliens.(9) There might be cognitively sophisticated enough computers in the future. Whatever direction the generalization of the notion of cognition might take, the fact is that physics at present is still anthropocentric: it describes nature as humans see it through the distortive perceptual spectacles of their brains and minds.

Here is another example that serves to remind us how deeply anthropocentric present-day physics is. Why do we think space has exactly three dimensions? Why can we imagine up to three mutually perpendicular axes perfectly well, while a fourth axis mutually perpendicular to the other three is utterly unimaginable? Why, although we live in a space–time continuum of four macro-dimensions,(10) do we make such a sharp distinction between the three spatial dimensions and the single temporal one? (Note that this is not just a question of geometry, because time traditionally enters as a separate and fundamental entity in the formulas of physics, and has always been treated specially, differently from space.)

The answer is, once again, because we evolved in a world with certain parameters set with specific values. In particular, we evolved in a world of very slow speeds compared to the speed of light. Please take a look at your surroundings. Right now I am typing this text sitting at the balcony of my home. There was a pigeon that just flew across the field of my view, and the branches of plants are occasionally stirred by the light breeze. Other than that, and my fingers typing on the keyboard, nothing is moving. Even those few objects that move relative to me, the observer, have speeds that are ridiculously slow compared to c, the maximum speed allowed in nature. The result of this is that, temporally, the world in which we evolved is essentially flat: we may move about in the other three dimensions (the ones we call “spatial”), but our motion in the fourth dimension (time) is completely negligible. In particular, we know now (i.e., after Einstein) that we move in time whenever we move in space: if two observers synchronize their clocks at rest and then start moving in space with respect to each other, their clocks will go out of sync, which means that the observers will have moved in time with respect to each other (i.e., their temporal coordinates will change, as evidenced by their out-of-sync clocks). But at the speeds of our environment, such temporal differences are so minute that they can be completely ignored from a biological perspective: no animal’s survival would benefit from being able to calculate minute temporal differences. In addition, most likely it is biologically impossible to evolve a mechanism for computing (“feeling”) such small numbers. 
As mentioned in this earlier page, it was much easier to evolve to perceive clocks (i.e., each object’s local time) as always in sync with ours, and also to simplify the formula for adding velocities: when we hurl a stone that leaves our hand at speed v₁ while we run at speed v₂ relative to the ground, it is easier to believe that the stone will hit a stationary target at speed v₁ + v₂ (the wrong formula), rather than at speed (v₁ + v₂)/(1 + v₁v₂/c²) (the right formula), because the term v₁v₂/c² is negligible and/or biologically impossible to measure. Given the flatness of our environment along the temporal dimension, it is little wonder that our cognitive makeup does not perceive time as a dimension on a par with the other three, but as something different. (This would be true even if the geometry of our space–time were Euclidean; that it is not Euclidean implies that there are additional reasons for perceiving time as something very different from space, but it is beyond the scope of this paragraph to delve deeper into this issue.)
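The two velocity-addition formulas can be compared directly. The sketch below (Python; the runner’s and stone’s speeds are everyday values picked for illustration) shows how negligible the relativistic correction is at such speeds:

```python
C = 299_792_458.0  # speed of light in m/s

def add_galilean(v1, v2):
    """The 'wrong' everyday formula: plain addition."""
    return v1 + v2

def add_relativistic(v1, v2):
    """The correct formula: (v1 + v2) / (1 + v1*v2/c^2)."""
    return (v1 + v2) / (1.0 + v1 * v2 / C**2)

# A stone thrown at 30 m/s by a runner moving at 10 m/s:
print(add_galilean(30.0, 10.0) - add_relativistic(30.0, 10.0))
# ~1.3e-13 m/s: the two formulas agree to about twelve decimal places
```

A discrepancy a dozen orders of magnitude below the speeds involved is exactly the sort of quantity that evolution had no reason, and probably no means, to equip us to sense.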

In summary, time and space are ultimately “the same thing”, much as energy and mass are ultimately “the same thing”. It is only our human nature, which depends on the parameter settings of our environment, that makes us perceive the members of each pair as completely different. This has been reflected in the formulas of physics, which describe, after all, our physics, not some human-independent beast.(11)

I hope that, considering the above, it becomes clearer why it appears so illogical to insist that, although our physics reflects our limitations and human nature in practically every one of its aspects, when it comes to the quantum world it describes the ultimate reality.


Physicists have their reasons for believing that material structure ends at the quantum level, and that the idea of “worlds within worlds” beyond the quantum level is incorrect. I have read their argumentation many times in several books but, having not marked the point where I found it in each book, I can only present what I read in the book that I happened to be reading at around the same time I was writing this text, namely, Kenneth W. Ford’s “The Quantum World: Quantum Physics for Everyone” (2004, Harvard University Press). [Nov. 2007 note: read Stephen Hawking’s affirmation of the same idea, below.] I will add more references on this subject as they come to my attention, assuming they differ from, or add some new element to, Ford’s arguments.

So, why do many physicists conclude that we have reached (or are just about to reach) the “rock bottom” of material construction? Ford offers two reasons, the first of which is the following:

“One reason for this conclusion is that it takes only a few quantities to completely describe a fundamental particle. An electron, for instance, is described by its mass, charge, flavor, and spin, and by the strength of its interaction with force-carrying bosons of the weak interaction—and that’s about it. Physicists are confident that if there are any properties of the electron still to be discovered, these properties are few in number. So it takes only a short list to specify everything there is to specify about an electron. Contrast that with what it would take to completely specify a “simple” steel ball bearing. Normally we would say that we know all we need to know about the ball bearing if we know its mass, radius, density, elasticity, surface friction, and perhaps a few other things. But those properties don’t begin to describe the ball bearing completely. To describe it fully, we would need to know how many atoms of iron it contains, how many of carbon, how many of various other elements, how these atoms are arranged, how the countless electrons are spread throughout the material, what energies of vibration are jiggling the atoms, and more. You can see that a list describing the ball bearing down to the last detail would have billions upon billions upon billions of entries. The description of matter does seem to get simpler as we peel away the layers of the material onion, and apparently can’t get very much simpler than it is for the fundamental particles.” (pp. 99–100)

Ford’s first argument, above, suffers from an obvious flaw: he makes use of knowledge that should be unavailable to him if his electron–ball-bearing analogy is to be sound. Specifically, he says that the ball bearing is not described fully by properties such as mass, radius, density, etc.; instead, to describe it fully we need to talk about its atoms, electrons, quarks, force carriers, etc. But this is knowledge that became available only after quantum theory was developed. For a nineteenth-century physicist (Ford’s 19th C. analogue, imagining that classical physics was about to have explained everything), a ball bearing was described by as few classical properties as Ford attempted to list. There were no quarks, no electrons, no bosons; nothing of today’s quantum zoo was known back then. Ford appears wiser today because he is endowed with the 20th C. knowledge of physics, but to use this knowledge to attack the 19th C. physicist’s certainty that the end is near is inappropriate, at best. A future physicist might similarly laugh at Ford’s strong suspicion (if not certainty) that an electron is “simple”, pointing to the list of billions upon billions of sub-quantum elements that make up an electron. (Regarding whether this is possible, given that every two electrons appear identical, see Ford’s second argument, below.) In short, a 19th C. physicist would think that an ideal ball bearing is as simple as a present-day (early 21st C.) physicist thinks an electron is. That the 19th C. physicist was wrong can only be concluded today, with 100% hindsight.

Immediately below on the same page (100), Ford offers a second argument:

“Another reason for believing that we are close to a genuine core of matter—a reason closely related to simplicity of description—is the identity of particles. Even with the strictest standards of manufacture, no two ball bearings can ever be truly identical. Yet we have good reason to think that all electrons are truly identical, all red up-quarks are truly identical, and so on. The fact that electrons obey the Pauli exclusion principle (which says that no two of them ever exist together in exactly the same state of motion) can be understood if the electrons are identical but cannot be understood if electrons differ from one another in any way. If there were infinitely many layers of matter to uncover, we would expect electrons to be as complex as ball bearings, no two of them alike, and each one requiring a vast array of information for its exact description. This isn’t the case. The simplicity of the fundamental particles and their identity give strong reason to believe that we may be close to reaching the ultimate “reality” of matter.”

Ford’s second argument, as he explicitly admits, is closely related to the first one. Let us accept the beginning of his reasoning for a moment. Suppose we are pre-20th C. physicists, and we think that steel ball bearings always have minute differences: when placed on a very accurate scale they are found to differ slightly in weight (mass); if we put them under a microscope we can see small imperfections on their surfaces; and so on. So we conclude that there must be a lower layer of material reality that explains these differences. (The atomic constitution of matter was well known to chemists before the 20th century, but this is a thought experiment, so suppose nobody had knowledge of a lower layer before then, though its existence was suspected because of the non-identity of material objects.) Ford claims that because electrons (and all other elementary particles) appear identical to us, we have reason to believe they are fundamental. This reasoning commits the error, discussed earlier in this text, of confusing what is out there with what our human nature has evolved to perceive. If we did not evolve to perceive anything beyond the quantum level (see above for possible explanations of why this can be so), then naturally any two electrons will appear identical to us, judging by the only properties that we are capable of perceiving. But it is an anthropocentric view of physics to believe that what we can perceive in the world (now) is all there is.

When we measure the charge of an electron today, we find it equal to -1 in units of the elementary charge (the negative sign being a convention). But it could be that it is not really -1; it could vary, yielding values such as -1.000000000000000000000000000517... Our present theories might not distinguish such a value from an exact -1, just as classical mechanics could not distinguish between π, the quotient of the circumference of an immobile circle divided by its diameter, and the same quotient when the circle is rotating around its center with respect to the observer.
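As a toy illustration of this point, here is a sketch in Python using exact decimal arithmetic; both the hypothetical “true” charge (taken from the made-up value above) and the measurement uncertainty of one part in 10¹² are illustrative numbers, not real experimental figures:

```python
from decimal import Decimal

def indistinguishable(measured, predicted, uncertainty):
    """Two values cannot be told apart experimentally if they differ
    by less than the measurement uncertainty."""
    return abs(measured - predicted) < uncertainty

# A hypothetical "true" charge deviating far beyond any conceivable
# experimental precision (here optimistically taken as 1e-12):
true_charge = Decimal("-1.000000000000000000000000000517")
print(indistinguishable(true_charge, Decimal("-1"), Decimal("1e-12")))  # True
```

As long as the deviation lies below the uncertainty, every measurement will report “exactly -1”, however rich the structure underneath may be.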

As for Pauli’s exclusion principle (mentioned in Ford’s second argument, above), we have no guarantee that it is a principle that concerns all levels. Pauli’s exclusion might apply only at the quantum level, not at levels below it.(12) So it could be that, given the laws of the quantum level, we get something we call “Pauli’s exclusion principle” (though at present nobody knows why): we observe that some particles (specifically, the ones with half-integer spin) cannot be in the same state according to the laws of the quantum level. But why should this be relevant to the laws of sub-quantum levels, whatever they are, if they exist at all? As an analogy, in our macro-world we know that each object retains its identity at a given time: a stone is a stone; it can’t be simultaneously a stone and a cloud. So we observe something like an “Aristotle’s Exclusion Principle” in the macro-world. But this principle evidently does not hold at the lower, quantum level, where an object can be both a particle and a wave — two qualitatively very different notions. Why, then, should Pauli’s principle be invoked and expected to hold at levels beyond the quantum?


When I occasionally look at my old books for various reasons, I sometimes come across a point that I should have indexed the first time I read the book, but failed to. I just found such a point in Stephen Hawking’s well-known bestseller, “A Brief History of Time” (in Amazon), in which Hawking affirms his strong suspicion that we might be close to knowing the ultimate in physics. Specifically, he says:

[W]e know that particles that were thought to be “elementary” thirty years ago are, in fact, made up of smaller particles. May these, as we go to still higher energies [which allow us to probe deeper in matter — H.F.], in turn be found to be made from still smaller particles? This is certainly possible, but we do have some theoretical reasons for believing that we have, or are very near to, a knowledge of the ultimate building blocks of nature. [p. 68]

Hawking does not go on to explain what those “theoretical reasons” are. So my inclusion of the above passage in this page has only the effect of saying, “Stephen Hawking, too, thinks the end is near” — at least he thought so when he wrote that book.


For corrections, suggestions, comments, etc., consider contacting the author.


Footnotes: (Clicking on a footnote number brings you back to the text)

(1) The term “material” is used in its generalized sense in this text, i.e., to mean anything scientifically explorable: matter, energy, space, time, etc. Likewise, the term “matter” will be used in a similar general sense, to mean either matter with mass, or energy.

(2) Traditional dualists, such as Descartes, have assumed that the mental is unreachable by science. At least one evolutionary biologist, Stephen Jay Gould, has indirectly supported this view (see his article on the non-overlapping magisteria). Other modern dualists, such as David Chalmers, claim that mental elements, although distinct from traditional quanta, are nonetheless scientifically explorable.

(3) The terms “reduction” and “reductionism” have caused a heated discussion among philosophers and can be easily misunderstood. What I mean by the statement that the human mind is ultimately reducible to quantum physics is that all the laws of cognitive science regarding human minds emerge from the laws of neuroscience, which emerge from the laws of molecular biology, which in turn emerge from the laws of chemistry, which finally emerge from the laws of quantum physics. It’s just that there are multiple layers in this reduction, and the unfamiliarity with some of the intermediate layers — as well as a hardwired propensity of some human minds toward mysticism — confuses some people. Note that when the laws of a higher level emerge from the laws of a lower one, this does not imply that the behavior at the higher level is determined by (can be predicted from) the laws of the lower level. The interested reader is invited to study the workings of cellular automata for a better appreciation of this last statement in a scaled-down and easily understandable domain.
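For readers who want a concrete starting point, here is a minimal cellular-automaton sketch in Python (rule 110, a standard textbook example; the grid size and number of generations are arbitrary choices of mine). Each cell obeys a trivially simple local law, yet the global pattern that emerges cannot simply be read off from that law:

```python
def step(cells, rule=110):
    """Advance one generation of an elementary cellular automaton:
    each cell's next state is the bit of `rule` indexed by its
    3-cell neighborhood (edges wrap around)."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 30 + [1] + [0] * 30  # start from a single live cell
for _ in range(10):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Rule 110 is known to be computationally universal, which makes the point vividly: behavior at the higher level emerges from, yet cannot in general be predicted from, the one-line law at the lower level.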

(4) Positivism is a philosophical view of the 19th and 20th centuries, postulating — among other things — that only questions that involve measurements should be discussed by scientists.

(5) It occurred to a reader that a question such as “Why does A’s property have this specific value?” could still assume something that does not exist: that there is indeed an explanation; or, if paraphrased as “What causes A’s property to have this specific value?” then it assumes a causal relation: something causes this value and none other, but perhaps there is no underlying cause. But I still find a difference between these questions, and those other ones that I listed earlier in the paragraph: the latter make an assumption about something existing (e.g., a largest number), and based on that assumption, they ask something else; whereas the former ask directly what needs to be asked. For example, the previous questions could be stated in this way: “Is there an explanation for why A’s property has this value?”, and “Is there something that causes A’s property to have this value?”. These are straightforward questions, making no assumptions of things that have not been observed. If there is really no explanation or cause, this needs to be shown; whereas in the case of questions of the largest-number type it can be shown that there is really no largest number.

(6) There is a large body of experimental evidence in psychology and cognitive science showing that most of our thinking takes place subconsciously. Our conscious focus is only the tip of a thinking iceberg. (Note: this should not be confused with the popular misconception that at any time we utilize only 10% of our brainpower — a completely wrong idea.)

(7) There had been strong reaction against abandoning the traditional axiomatic system of geometry in the 19th century. Gauss is known to have confided by mail to his friends that he had reached the conclusion that Euclid’s 5th postulate could be replaced with other postulates, resulting in different and interesting geometries. But he never dared to start a revolution against the German mathematical establishment, and left that task to the young and unknown Hungarian János Bolyai, and to Nikolai Ivanovich Lobachevsky of Russia.

(8) Well, there is at least one more option: the computed output could be encoded in audio information (in Morse code, to make it completely ridiculous), and the specially trained physicist could listen to the information, deciphering it mentally. But this does not solve the problem of the crudeness of the information-carrying medium (to be explained immediately), it only makes it worse.

(9) Personally I don’t think so, but it is the possibility of there being intelligent aliens that matters in this argument.

(10) Leaving any curled micro-dimensions (as predicted by modern theories) aside for the purposes of this discussion.

(11) For more information and further discussion on the equivalence of time and space the reader might consider reading this page of mine.

(12) How can the non-applicability of Pauli’s exclusion principle at lower (sub-quantum) levels destroy Ford’s argument? Well, it can be that the spin of one electron is 0.500000000000000000000000000328..., and the spin of another electron is 0.500000000000000000000000000791... Suppose that what matters at the quantum level is the “significant part” of those numbers, which is 0.5. Thus, Pauli’s principle says that we can’t have two electrons in the same state of motion, and to find their state we take into account their spin as it appears at the quantum level, which involves the rounding of the “deeper” values, yielding 0.5. The two electrons seem to have identical spin at the quantum level, so Pauli’s principle applies on them. But they don’t have identical spins at a lower level. Thus it can be that Pauli’s exclusion principle works, but tells us nothing about the nonexistence of levels beyond the quantum.
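The rounding idea in this footnote can be made concrete with a short sketch (Python; the “deep” spin values are the made-up ones from the footnote, and the rounding precision stands in, purely for illustration, for “what the quantum level can resolve”):

```python
from decimal import Decimal

def quantum_level_spin(deep_spin):
    """What an observer confined to the quantum level would see:
    the hypothetical 'deep' value rounded to that level's precision."""
    return deep_spin.quantize(Decimal("0.1"))

s1 = Decimal("0.500000000000000000000000000328")
s2 = Decimal("0.500000000000000000000000000791")
print(quantum_level_spin(s1) == quantum_level_spin(s2))  # True: identical to us
print(s1 == s2)                                          # False: distinct underneath
```

On this (entirely hypothetical) picture, Pauli’s principle operates on the rounded values, and so remains silent about whatever distinctions exist at sub-quantum levels.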


Back to Harry’s index to topics in physics

Back to Harry’s index to topics in biology

Back to Harry’s home page