Science, Statistics and the Supernatural

The issue with the supernatural is not whether it’s part of the universe, but whether it is bound by the same laws as all the other elements of the universe. The bizarre claim about ghosts is that they somehow obey some laws but not others, for no obvious reason.

Something supernatural could, in principle, interact with the universe at some times but not others. If it is operating outside of natural laws, that doesn’t obviously preclude it from sometimes interacting with things that do obey those laws, either by its own choice to obey those laws (“186,000 miles per second, it’s not just a good idea, it’s the law”), or by accident in the course of some random fluctuation of its supernatural nature.


The obvious rejoinder to this, leaped upon by a bunch of people in comments, is that if the supernatural doesn’t behave according to known laws of nature, that just means that the known laws are incomplete, and some more complete theory would encompass the seemingly supernatural. Which is true as far as it goes, but misses a subtle point, namely the determinability of those laws. To paraphrase a famous “law,” a sufficiently advanced magic might be undetectable by science.

This is, in many ways, a question of practicality. That is, in order to be able to determine the rules governing some aspect of the universe, you need to be able to show that it behaves in a consistent and repeatable manner. Which requires the ability to run large numbers of experiments (or make large numbers of observations), and knowledge of all the parameters that might affect the operation of the system. If the number of tests you can do is limited, or you do not have the ability to keep track of possible confounding factors, then it can be all but impossible to figure out what’s going on.

If you want to see quantum mechanical behavior in some system, you generally look for an interference effect of some sort, but since quantum mechanics is inherently probabilistic, that requires you to repeat the same experiment many times, and trace out an interference pattern in the probability distribution as you vary some quantity. If your system interacts with a randomly fluctuating environment, however, and those environmental interactions can shift your pattern by at least half a wavelength, you lose the ability to detect the interference, even though it’s still taking place.

You can see how this works even with a really simple variant of the experiment: In a classic double-slit experiment, you get a bright spot right at the center of the pattern, with dark spots to either side. If you do something to slightly delay the light passing through one slit (placing a thin piece of glass over it, for example), you can shift the interference pattern so that there is zero probability of detecting a photon in the exact center of the pattern, so the center of the pattern is a dark spot with bright spots to either side. If you repeat the experiment a million times, flipping a coin before each run to determine whether to put the glass in place or not, you won’t see any sign of the interference in the aggregate of all the data runs. You’ll see a nearly uniform intermediate intensity (halfway between dark and bright) at all positions on your detector.

Does this mean that interference isn’t occurring? No, not at all. If you select out the half of your data runs for which the glass plate was in place, you’ll very clearly see a central dark spot with bright spots to either side. The other half of the data runs for which the plate was absent will very clearly show a central bright spot with dark spots to either side. The photons passing through the double slit are always interfering, but your ability to detect that interference is lost when you don’t have complete information about the experimental conditions.
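The coin-flip version of the experiment is easy to simulate. Here’s a minimal Python sketch under assumed conventions (a cos² fringe pattern in arbitrary position units, with the glass plate contributing a half-wavelength, i.e. π, phase shift); the variable names and the 50/50 coin are illustrative, not taken from any actual apparatus:

```python
import math
import random

random.seed(42)

N_RUNS = 10_000
# Detector positions in arbitrary units; x = 0 is the center of the pattern.
POSITIONS = [i * 0.1 for i in range(-10, 11)]

def intensity(x, plate_in):
    """cos^2 fringe pattern; the glass plate adds a pi phase shift
    (half a wavelength), swapping bright and dark fringes."""
    phase = 2 * math.pi * x + (math.pi if plate_in else 0.0)
    return math.cos(phase / 2) ** 2

totals = {x: 0.0 for x in POSITIONS}
plate_totals = {x: 0.0 for x in POSITIONS}
plate_runs = 0

for _ in range(N_RUNS):
    plate_in = random.random() < 0.5  # coin flip before each run
    if plate_in:
        plate_runs += 1
    for x in POSITIONS:
        signal = intensity(x, plate_in)
        totals[x] += signal
        if plate_in:
            plate_totals[x] += signal

# Aggregate over all runs: the two shifted patterns wash out to ~0.5 everywhere.
aggregate = {x: totals[x] / N_RUNS for x in POSITIONS}
# Condition on the runs where the plate was in: the fringes reappear.
conditioned = {x: plate_totals[x] / plate_runs for x in POSITIONS}

print(f"aggregate at center: {aggregate[0.0]:.2f}")   # ~0.50 (washed out)
print(f"plate-in at center:  {conditioned[0.0]:.2f}")  # 0.00 (dark fringe)
print(f"plate-in at x=0.5:   {conditioned[0.5]:.2f}")  # 1.00 (bright fringe)
```

Averaged over all runs, the detector sees a flat intermediate intensity everywhere; sorting the runs by the coin flip recovers the two complementary fringe patterns. The interference never stopped happening, you just lost the information needed to see it.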

So, it’s possible that even in a system governed by very simple rules, those rules can be rendered undetectable by interactions with a large and unmeasured environment. And if you’re talking about the possibility of supernatural-type interactions affecting the entire universe, then there will always be some possibility of confounding interactions making the supernatural laws undetectable.

At some level, this is essentially the same issue that came up in the recent discussion of probability in quantum physics, namely how do we know that the probabilities we measure through repeated experiments are the “real” probabilities, and not just some weird statistical fluctuation (either as an inevitable result of living in a Many-Worlds type of universe, or because we’re exceptionally unlucky) that makes it look like our current models of probability are correct? While our current theories of quantum mechanics are spectacularly successful at predicting the probabilities for experimental measurements, it could be that there is some other theory that “really” determines the outcomes, and all our successes are a complete fluke.

That question is, in many ways, indistinguishable from the question of whether supernatural effects might exist, but work in some way that is effectively undetectable. The chief difference between them is that worrying about the philosophical implication of probabilities in quantum mechanics maintains a thin veneer of respectability, while talking about supernatural forces gets you mocked even by philosophically-inclined physicists.

(A vaguely related issue is the question of singular events, such as the famous magnetic monopole search that saw a plausible signal in 1982, almost as soon as it was switched on, and never saw anything else in the next twenty years of operation (possibly paywalled version in this Nature story). Given things like inflationary cosmology, it’s conceivable that this could be both real and effectively unrepeatable. There’s really no good way for science to handle that sort of thing, either.)

While it’s true as a matter of totally abstract philosophy that anything “supernatural” that interacts with the real world is in principle subject to some larger set of “natural” laws, it’s always conceivable that those supernatural laws could work in a way that is effectively impossible to detect. Science can do a lot of things, but the need for repeatable experiments makes it almost impossible to use scientific methods to ferret out the kind of subtle and ambiguous magic you get in books like Jo Walton’s Among Others where, for example, the protagonist’s magic spell to stop a factory polluting the water in her home town works by causing the management of the factory to decide to shut it down, a decision that was made some weeks before the spell was cast. The protagonist firmly believes that magic made this happen, reaching back through time, while most adults in her world (and most mainstream book reviewers) think that it’s just a delusion. And there’s really no way to sort the two out, without appealing to authorial intent, anyway.

Of course, as a question of practical argumentation, this is kind of a moot point, because most of the people taking the pro-supernatural side of these arguments aren’t talking about the kind of ambiguous magic Jo uses. Instead, they have a more direct sort of system in mind, where specific actions produce specific results on demand– healing through prayer, communicating with the spirits of the dead, bending spoons with MIND POWER, whatever. Those sorts of situations are implicitly rule-bound systems, and should give rise to Ponder Stibbons stories. And there has yet to be a convincing demonstration of any of these phenomena that doesn’t have a more convincing non-supernatural explanation to go with it.
