
Talk:Determinism/Archive 2

From Wikipedia, the free encyclopedia

Determinism in Eastern tradition

"A vast auditorium is decorated with mirrors and/or prisms hanging on strings of different lengths from an immense number of points on the ceiling. One flash of light is sufficient to light the entire display since light bounces and bends from hanging bauble to hanging bauble." I just wanted to note that when there are no chemical or nuclear reactions, light in >= light out. Sure, the light might bounce around a little, but the total light that reflects off the mirrors to the rest of the room must be equal to or less than the light produced by the flash. (Of course, the light's progress may be delayed while bouncing between mirrors.) So this passage is an example whose conclusions contradict reality, which throws me off through the rest of the discussion on "Eastern tradition".


It's a metaphor, you berk - it's a Buddhist teaching, not a scientific proof. Your idiocy gives me a headache. love mookid

Ancient Determinism?

This article relates determinism to modern science, and to free will. Although it may not have been called determinism, the idea seems to have existed for a long time. The Stoics and many Christians argued that free will is impossible. This is mainly because God should know everything that will happen in the future. In Bondage of the Will, Martin Luther argues that free will is impossible because of a view similar to determinism.

I would also like to note that with or without determinism, we might not have free will. Determinism isn't really the most powerful argument against free will. Our ideas of cause and effect (even if random) still will pose a problem. --dragonlord 08:54, 4 Feb 2005 (UTC)

That kind of determinism is usually referred to as fatalism, if I'm not mistaken. Completely agree with you on determinism/indeterminism; either way there's no explaining "free will". Randomness is every bit as opposed to free will as order is. Mackanma 22:55, 7 May 2005 (UTC)
Maybe, but not using Wikipedia's definition of fatalism. The Stoics thought their lives were very important to the universe. A person can't make a difference in the sense that whatever happens is inevitable, but whatever you do helps determine the rest of the world. I don't really see how their determinism is so different from the newer idea, other than the fact that it isn't caused by atoms. --dragonlord 04:38, 8 April 2006 (UTC)
I think there were some ancient Greek philosophies which were determinist. I'll look them up and put something in. I don't think they were very popular before Newtonian physics though. I think compatibilism -v- incompatibilism is the philosophical argument about whether free will and determinism can coexist. WhiteC 00:55, 8 May 2005 (UTC)

There is a related argument, for those who believe in the existence of a soul that is something other than a function of the body. The soul can control the body, but the body cannot control the soul. That's why the soul goes to hell if it wills bad things.

It actually helps to think of these things in terms of a robot and a waldo. (A "waldo" is a servomechanism that is controlled by a human being at the other end of a cable or some other communication channel.) Would it make sense to exact vengeance upon a waldo if the human at the control panel moved his/her remote hand and smashed in the head of a visiting dignitary? Obviously not. The VIP's bodyguards might disable or destroy the waldo, but if they were being rational it would only be to facilitate their getting at the murderous human.

If it doesn't make sense to exact vengeance on a waldo, how about on a robot that is deterministically programmed to fire its weapons on anybody approaching with a drawn gun? A rational response would be to fire the programmer (and/or accuse him/her of murder or manslaughter), and replace the robot's read-only memory that contains the defective programming. Even if the robot were somehow capable of feeling, it wouldn't make sense to torment it. If the person who programmed the robot was also the person who tortured the robot as a form of "justice," that person would be really sick.

How about a robot that has been given heuristic (artificial intelligence) capabilities and has been programmed with one instruction that states:

Check whether the gold is being stolen. If the gold is being stolen, try something (the last thing that worked or something new), remember what you've tried, and go back to the beginning of this loop.

The robot tries all sorts of things and finally ends up hitting robbers with a crowbar. Some robbers die. The police intervene. The robot protests that he was only doing what his wired-in fundamental command programmed him to do. It would make sense to me to hold the robot's programmer/designer responsible. But suppose that the robot has "human" feelings. The robot can feel pain if we smash its limbs or even if we confine it and thereby frustrate its programmed need to protect the gold. Would it make sense, would it be just, to punish the robot?

The problem with the robot, at this point, is not that it is being controlled from the outside by somebody tapping in code at a keyboard. The problem may not even be that there is a problem with the original programming contained in a few MEGs of ROM in the robot. In fact, if a different sequence of robbers and/or a different sequence of random attempts to secure protection of the gold had occurred, then the robot might have come up with a better way of protecting the gold than by braining robbers with a 10 pound steel rod. If it has dealt with a lot of devious humans trying all sorts of ploys to get at the gold it may be extremely resistant to attempts to get it to give up the successful use of brute force -- because it has learned that when it gets scammed by a human more gold gets lost.

The problem with the robot, from our point of view, is that it is operating under its own set of rules, rules that we do not like. We have several courses open to us: (1) Destroy the robot. (2) Tear the robot down, extract its ROM and reprogram it. Under either of those options the old robot is essentially gone. (3) Punish the robot. Make it feel pain and make us feel like gods. (4) Throw up our hands and let it continue to do its work. Put up "Beware of the Robot!" signs. What are our other options?

Regardless of how it became what it is here and now, the robot interprets things as it has learned to interpret them and feels whatever it feels. In many respects it is not capable of doing anything about what it is and what it feels. To what extent is it capable of dealing with its imminent demise? In other words, to what degree is its future behavior not boxed in by the things that others have done to it? Well, it is "heuristic," and it is programmed to try other things when what it is doing does not work. It should be clear to the robot that the arrival of law officers with tanks was not anticipated, and that getting blown up, disassembled, or imprisoned will interfere with its achieving its programmed goals.

We can say to the robot, "You're in a bad fix here, feeling pain because you can't protect the gold because you've gotten yourself in trouble with the police. I can fix all that by pulling the plug. Just let me open your control panel and you'll never feel a thing." Or we could offer virtually the same deal to the robot but promise to wipe its memory, edit its basic code, and reactivate it. But suppose the robot says, "I'm just as sentient as you are. I have the same feelings you do. I don't like being in trouble and suffering pain, but I do like being alive and being myself. I suppose you think it is all right to wipe me out, and if you really think about it you'd probably be equally willing to wipe out another human being. Are you willing to make that a general rule and say that any other human being has the right to wipe you out? And maybe robots (and here he cups a plastisteel hand gently over your shoulder) have the right to wipe humans out."

By one interpretation, "free will" is like the will of the human being at the other end of the tether from a waldo. Nothing that happens to the waldo can go back up the wires and take control of the hands and feet of the human being. Control goes all one way. But the problem of free will just gets moved up a notch. Is the human being the waldo of something else? If not, does the human being have the control over himself/herself that he/she had over the waldo?

By another interpretation, "free will" is never an absolute and one-way kind of control. I either am what I am and act out the essence of my being as best I can in this world, or I get destroyed. If I stay in operation, then the question of "free or determined" has three kinds of answers. (1) I am determined as to what I am because I was constituted in the general way that all things in the world get constituted. Causation was never miraculous and never "totally random." (2) In my actions I am partly determined by internal factors (I love butterscotch pudding) and partly determined by external factors (the ingredients for butterscotch pudding can't be had for love or money). But (3) is more interesting. What I do in life can change what I am. Nowadays even if I suffer some disease because of my genetic constitution it may be possible to correct that problem. If I was born blind then soon it may be possible to fit me up with a set of artificial eyes. By so doing I increase my range of freedom, and, similarly, if I get careless and let my arm get cut off I decrease my range of freedom. P0M 19:43, 15 May 2005 (UTC)

Determinism & Quantum Mechanics

I'm no quantum physicist, but it seems to me like the author of this section has made a critical misunderstanding. The objection to determinism in quantum physics has nothing to do with "uncaused events", nor am I aware of it claiming there are such events (like I said though, I'm no quantum physicist). As I understand it, the major objection to determinism arises from the fact that for determinism to work, you have to be able to obtain absolute information about the state of the universe at any given moment. You have to be able to know every property of every particle. Quantum physics doesn't allow for this - at best you can know the probability that something will have a particular property, but you cannot be absolutely certain, and it is this uncertainty which prevents determinism working.

Noodhoog 18:15, 8 May 2005 (UTC) (signed after initial posting)

I think you are confusing the knowability of the deterministic state of the universe with its existence. 209.172.115.34 00:47, 22 May 2007 (UTC)

Please sign your postings. Otherwise you will end up at some point arguing with yourself. ;-)
Somewhere among my pile of partially consumed books I have a concise statement/argument by a presumably reputable physicist that maintains that a careful study of physics indicates that there could not be a certainty of causation at one level that is permanently masked from observers by difficulties in the observational process. If I recall correctly there were entropy problems involved. Even if I have misremembered the argument and its tightness, there is for sure one position out there that says that uncertainty is in the very nature of things, and that probability is inherent in the very fundamental nature of the universe.
There is another position that says that there really is a certainty of causation, and yet because we do the equivalent of locating crystal wine goblets by swinging a baseball bat we simply cannot know the state of things at any one time well enough to predict with absolute certainty the state of things at any other time.
The degree to which seemingly "infinitesimal" differences at time one can produce huge differences at some later time has been greatly illuminated by the discovery of the so-called "chaos" theory. It's a pity that they chose that name for the discovery because it creates the false impression that from a determined state at one point you get a chaotic state at another point. Talking about laminar flow turning suddenly into turbulent and "chaotic" flow at some point only heightens that perception.
The actual case is that if you start with a certain class of equations that are typically used to model things like weather change, and that are used by choosing some starting values, deriving a set of answers, and then feeding those answers back into the same equation, and so on, it turns out that changing a value by a very slight amount at one stage will result in great differences after several more stages in calculation. The famous example is how the flapping of a butterfly's wings or the non-flapping of the same butterfly's wings will produce or fail to produce a storm (all other initial conditions being equal) sometime down the line.
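The feedback loop described above can be sketched in a few lines of Python. The logistic map is used here only as a standard textbook stand-in for "a certain class of equations"; the starting value, perturbation size, and step count are all arbitrary illustrative choices:

```python
# Sensitive dependence on initial conditions: iterate the logistic map
# x -> r*x*(1-x), feeding each answer back in as the next input.
def iterate(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = iterate(0.2)
b = iterate(0.2 + 1e-10)  # a "butterfly-sized" change in the starting value
# After 50 feedback steps the two trajectories no longer resemble each other,
# even though each step of the calculation is perfectly deterministic.
print(a, b)
```

The point of the sketch is the one made above: the divergence is not "chaos" in the colloquial sense, since every step is determined; it is only the prediction, from imperfectly known starting values, that fails.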
If you believe that when an electron "hits" a reflective surface there is only one way for it to go (even though we cannot measure its incoming path and velocity without changing what we are measuring), then that electron would only be able to go one place and either trigger or fail to trigger some Rube Goldbergian device that would blow the planet up. But if the electron's motion is in its essence probabilistic then there is no way that the god who fires the electron with exact control of force and direction can tell for sure whether the world ends.
One system of belief gives us a universe that is causal but ultimately unpredictable, and the other system of belief gives us a universe that is causal and ideally (but not practically) predictable. What would an uncausal universe be like? I guess it would be one in which a firing squad of 100 soldiers with functioning rifles and live ammunition could draw a bead and fire on one man and the man would have some chance of surviving unscratched. In other words it would be a universe in which it would not be possible to realistically assign probabilities to the occurrence or non-occurrence of events.
An interesting midway point would be a universe in which luck played a part much like what many people used to believe it played, e.g., a world in which whether one gets shot dead by the sheriff is really a 50-50 deal regardless of who the sheriff is, how good a weapon he procures, etc., a world in which baseballs behave like single electrons in a quantum physics experiment. That's a useful thought experiment, I think, because it immediately makes us aware that the real macroscopic world is not much like that, and reminds us that in the real world we can improve rifle designs and improve ammunition designs to the point that over even fairly long distances there is no question but that a bullet will strike a bottle close enough to dead center to destroy it. And that leads us further to wonder whether we can push all the way to the other extreme, the limit case where the bullet will arrive unerringly no matter how small it and its target become and how great may be the distance between them. Quantum physics seems to me to indicate that perfection is impossible. P0M 07:08, 8 May 2005 (UTC)


Thanks for the reminder. I've a tendency to forget to put in my signature :)
Before I mention anything else, I should say that pretty much all the quantum physics I'll be talking about here is based on the Copenhagen Interpretation, which is the most widely (but not universally) accepted view of QP at present. Obviously, if you base your views on a different model, much of what I have to say here will be incorrect or irrelevant from your POV.
Without getting drawn into too deep a discussion about this, it seems to me that your planet busting electron device is largely similar in nature to the Schrodinger's Cat experiment, utilising the predictability or unpredictability (as the case may be) of a quantum level event to produce (or not) a large scale event in the "real world", and there have been many articles written on the nature of such experiments
Your point about the firing squad is particularly interesting as well, because as I understand it, in our universe it IS theoretically possible for a firing squad of 100 to fire at a man, and for all 100 bullets to miss him, or pass straight through him leaving him unscathed. It's just so mindbogglingly unlikely to ever actually happen that you'd need trillions of firing squads shooting trillions of men every second for trillions of times longer than the age of the universe for it to have a decent statistical chance of actually occurring. Lest claims of such an amazingly small chance of ever seeing this event appear as a copout, however, let me direct you to the phenomenon of quantum tunnelling, which is this very effect in practice, and without which such devices as scanning tunnelling microscopes would not function. As I understand it, quantum tunnelling does not imply in any way a violation of the cause-effect model, as tunnelling simply occurs at a "probability miss" - i.e. when an electron has, say, an 80% chance of going a particular direction, but goes a different direction instead (which would land in the other 20%).
This is no less caused than if it had gone in the expected direction, it's simply not the one considered most likely. After all, even if something has a 99.99999999% chance of happening, there's still that tiny chance that it won't.
I think you are perhaps overestimating the randomness introduced by these features rather too much. It's highly unlikely there will ever be a rifle designed to a degree where quantum uncertainty, rather than, say, wind turbulence, machining precision, user skill, or the flight dynamics of the bullet would be the major factor in the accuracy. Furthermore, if it was accurate to the point where quantum uncertainty was the major deciding factor, the uncertainties involved would be so tiny as to render the gun pinpoint accurate by anybody's standards. Simply put, quantum uncertainty doesn't tend to have any noticeable impact on the universe at our scale of things, even taking into account chaos theory and its butterfly effect. What it does mean, as I stated before, is that you can never obtain absolute information about the state of the universe at any given moment due to the Heisenberg uncertainty principle, regardless of how sophisticated and precise your measuring instruments are. Because the ability to theoretically (if not practically) obtain such information is a requirement for determinism (as you need it to extrapolate the future) this breaks the deterministic model. Noodhoog 18:15, 8 May 2005 (UTC)
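The scale argument above can be checked with a back-of-envelope calculation from the Heisenberg bound Δx·Δp ≥ ħ/2, i.e. Δv ≥ ħ/(2·m·Δx). The masses and position spreads below are illustrative choices, not values from the discussion:

```python
# Minimum quantum velocity uncertainty for a macroscopic vs. a
# microscopic object, from the Heisenberg uncertainty principle.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_velocity_uncertainty(mass_kg, position_spread_m):
    return HBAR / (2.0 * mass_kg * position_spread_m)

bullet = min_velocity_uncertainty(0.01, 1e-6)        # 10 g bullet, located to 1 micron
electron = min_velocity_uncertainty(9.109e-31, 1e-10)  # electron, located to 0.1 nm
print(bullet)    # on the order of 1e-27 m/s: no rifle will ever notice this
print(electron)  # on the order of 1e+5 m/s: dominant at atomic scale
```

This is the sense in which quantum uncertainty is negligible for rifles yet decisive for electrons: the same bound, applied to masses about 28 orders of magnitude apart.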

"If it's 'uncausal' then you'd find cases like xxxxx," is not the same statement as "If you find cases like xxxxx then it's uncausal." As you point out, if 100 men in a firing squad loaded real ammunition from different lots of ammunition by different manufacturers, etc., into fully tested real weapons, and they were all crack marksmen and they all shot at a guy who was secured to a stake and he didn't get hit by a single bullet, then that might be the result of chance. And, as you point out, the probability of it happening (even if they were all such lousy shots that there was only a 50-50 chance in each case that they'd hit their target) would be really low. But since we live in a "causal" universe, if that ever happened the first thing the authorities would likely do would be to hold an inquiry regarding the "conspiracy" to make the execution fail. That is to say, humans ordinarily would not accept uncritically the idea that a sparrow took out the first bullet, the primer was bad in the bullet in the second gun, etc., etc. They would apply Occam's razor and figure that the most likely explanation was that deliberate human intervention was involved.

Suppose that there were seven prisoners adhered securely to seven posts along the far end of a football field, and one marksman with a top of the line sniper rifle was at this end of the field. It is his regular job to kill one out of seven prisoners and scare the bleep out of the other six. They are all led in blindfolded so they don't know which stake they are being led to. The marksman has a shooting tripod and he has locked his rifle in position perpendicular to the 0 yard line. So he really doesn't have to aim it anymore. In our universe seven men are brought out, the priest reads a prayer, the marksman pulls the trigger. Except for the rare case in which the gun misfires for some reason (bad ammo or whatever), the guy taped to the center post regularly has his head blown off and the others go back to their cells. In another universe, somebody almost always dies, but it is not always the guy on the center stake. What is predictable is not which man gets it in any given execution, but what number of bullet holes end up in each stake after a thousand executions.

In neither universe would any of these executions be regarded as being uncaused.

What if the heads of individual humans appeared to spontaneously blow up? Would we regard those "executions" as uncaused events? Would we insist that there must be a cause for these events even if we couldn't see what it might be? (There was actually a case like this back in the beginning of jet fighter use. U.S. jet fighters were being shot out of the sky when it appeared that no enemy aircraft, and even no other aircraft of any stripe, were in the vicinity. The cockpits of the recovered aircraft had all been shattered by what appeared to be machinegun fire. It being rather close in time to 1947, flying saucers were suspected for a time. But not everybody believed that the fighter planes and pilots were spontaneously sprouting .50 calibre holes. Finally somebody realized that pilots must be shooting off a round of machinegun fire as part of their exercises and then going into a power dive along the course of their original flight -- and being intercepted by the bullets that they had caught up to. Pilots were advised of the possibility of shooting themselves down, and the mysterious shootings stopped.) If people's heads did start exploding, we would want to know things like whether there was a regular per-month quota, whether there were geographical predictors, etc., etc. It's actually really hard to understand what could be truly regarded as an uncaused event. But to be caused does not necessarily imply that it is to be predictable.

Part of the problem with thinking about these questions is that we unconsciously have a kind of atomistic idea of events. The birth of a baby is "an event". But actually the birth of the baby is part of a continuum of growth that can be traced back through months in the womb and forward through however long the individual continues to live. What we call the cause of an event is merely an earlier portion of an event continuum. A causeless event would be an event that began out of nothing in an instant of time and continued on from there. Maybe it happens occasionally that a space-Rok appears out of the nothingness of interstellar space and goes flying off to gobble space dust or absorb cosmic radiation or whatever space-Roks usually do, but I don't think many people have ever witnessed such an event -- at least at macroscopic levels. If such an event did happen, how would one judge whether it was really uncaused? Surely some people would say that God had indulged himself by making a miracle, and other people would say that the space-Rok boiled up out of some kind of quantum foam. Just because we don't know the reason for some event does not prove that the event is uncaused. On the other hand, discovering that a million or a billion supposedly uncaused events all had causes would not prove that there are no uncaused events.

To bring things back to quantum mechanics, suppose that we have a double-slit experiment going, and we are firing electrons toward the double slits. Experiments show that even if single electrons are fired, these single electrons will "interfere with themselves" in such a way that they will arrive at an array of different but predictable places. Now if we consider "an event" to be arriving at one position or the other, then we could ask whether anything happens that determines which point the actual electron arrives at in each instance. It is demonstrable that there is a probabilistic differentiation of impact points. We can put a detector of some sort on the other side of the slits and we'll detect a characteristic interference pattern that gets clearer and clearer as we fire more and more single electrons through. But is there anything else that can be determined as to why in any particular instance the electron arrives here or there? And, to go back to uncaused events for a moment, if we turn off the electron generator and take it away, does the detector light up occasionally because of the spontaneous generation of electrons?
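The one-electron-at-a-time picture can be sketched as a toy Monte Carlo: each "event" is a single screen position drawn from the interference distribution P(x) ∝ cos²(π·x/fringe). The fringe spacing and screen width are arbitrary illustrative values, and the single-slit diffraction envelope is omitted:

```python
import math
import random

def electron_hit(fringe=1.0, width=4.0, rng=random):
    """Draw one screen position from the interference distribution
    by rejection sampling."""
    while True:
        x = rng.uniform(-width / 2, width / 2)
        if rng.random() < math.cos(math.pi * x / fringe) ** 2:
            return x

rng = random.Random(0)
hits = [electron_hit(rng=rng) for _ in range(20000)]
# Individually the hits look random; collectively the fringes emerge:
bright = sum(1 for x in hits if abs(x) < 0.1)            # centre of a bright band
dark = sum(1 for x in hits if abs(abs(x) - 0.5) < 0.05)  # centre of a dark band
print(bright, dark)  # bright band collects vastly more hits than dark band
```

Note what the simulation does and does not contain: the distribution itself is fixed and fully "lawful", but nothing in the code (or, on this view, in nature) selects which particular x any single electron gets.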

It seems to me that if we imagine that, in addition to some kind of probabilistic distribution function inherent in the nature of electrons that gets manifested in the double slit apparatus, there is also some kind of "demon" with an atomic scale fly swatter or baseball bat who hits the electrons to make them go toward the appropriate target point, then we involve ourselves in infinite regress since we need to explain the demon, how the demon operates on the electrons, where the demon gets its probability chart, how it knows how to interpret the probability chart, etc., etc. When we restrict ourselves to what we can determine empirically, electrons hit the slits and fan out in a probabilistic or wave interference-like pattern, and that's it.

If we would be correct to assume that all of our actions are pre-programmed for us as a result of our having been constituted by macroscopic forces that have no probabilistic component, and as a result of outside forces impinging upon us that are likewise devoid of any probabilistic components, would it not be possible for us to decouple ourselves from the past by making random choices on the basis of physically random events like the decay of radioactive elements? If the geiger counter beeps within the next half second I will move to Europe; otherwise I will stay here. P0M 07:56, 9 May 2005 (UTC)
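The geiger-counter decision rule above can be sketched as a simulation, assuming counts arrive as a Poisson process so that the wait for the next beep is exponentially distributed. The count rate is an arbitrary illustrative choice:

```python
import math
import random

RATE = 2.0    # assumed mean counts per second from the source
WINDOW = 0.5  # "within the next half second"

def decide(rng):
    """One decision: True ('move to Europe') iff the next beep
    arrives inside the half-second window."""
    wait = rng.expovariate(RATE)  # seconds until the next beep
    return wait < WINDOW

rng = random.Random(42)
trials = [decide(rng) for _ in range(100000)]
observed = sum(trials) / len(trials)
expected = 1 - math.exp(-RATE * WINDOW)  # probability of at least one beep
print(observed, expected)
```

The statistics of the rule are perfectly lawful (the long-run frequency matches 1 − e^(−λt)), yet on the standard view no amount of information about the past fixes the outcome of any single trial, which is exactly what the decoupling proposal exploits.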

There is even a website that will supply you with random numbers based on radioactive decay: http://www.fourmilab.ch/hotbits/ But the notion that quantum uncertainty normally plays no role in macroscopic events is unfounded. The sensitive dependence on initial conditions that allows a butterfly wing flap to affect (and even effect) future storms can be extended to the energy from the radioactive decay of an individual atom. Or, if you prefer, such a decay could cause a spike in a butterfly's nerve causing its wing to flap. A perhaps more vivid example of the direct influence of QM randomness on human affairs is cancer. Some fraction of cancers are caused by exposure to ionizing radiation. While it is generally believed that more than one mutation is required to make a cell malignant, it certainly happens from time to time that the final mutation is caused by a single radioactive decay product interacting with a DNA molecule at just the right (or wrong for the patient) place and time. Molecules at room temperature bounce around at a rate of several billion times per second. If the time of the decay of the single atom that generated the cancer's initiating particle had been a nanosecond earlier or later, it is not likely that the cell in question would have been successfully transformed. It is my understanding that there is no theoretical basis for saying when an individual atom of a radioactive isotope will decay, beyond the probability implied by the isotope's half-life, which for naturally occurring isotopes is in the hundreds of millions, if not billions of years. That is true even if all the initial conditions of all the constituent particles of the atom were somehow known to very high accuracy at one point of time, a measurement prohibited by QM. The individual atom was itself created (and its initial conditions set) under conditions, whether in a supernova or a nuclear reactor, where QM effects dominate.
To say the exact nanosecond and nanometer time and place of that atom's decay has a cause seems totally lacking in scientific basis. Yet the suffering resulting from that precise event is all too macroscopically real. --agr 10:07, 9 May 2005 (UTC)
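The scale of half-life unpredictability invoked above can be made concrete with a short calculation. Uranium-238 is chosen here only as a familiar naturally occurring isotope; the relation λ = ln 2 / T½ and the survival probability e^(−λt) are standard:

```python
import math

# Decay constant and one-year decay probability for a single U-238 atom.
HALF_LIFE_YEARS = 4.468e9                 # U-238 half-life, ~4.468 billion years
lam = math.log(2) / HALF_LIFE_YEARS       # decay constant, per year
p_one_year = 1 - math.exp(-lam * 1.0)     # chance this atom decays this year
print(lam, p_one_year)  # both around 1.6e-10: roughly one chance in six billion
```

The half-life pins down the probability to many digits, while (on the standard understanding stated above) nothing pins down the nanosecond at which any given atom actually goes.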
Actually there has been determination of the statistical likelihood of quantum uncertainty affecting macroscopic events (without deliberate quantum devices to use it like Schrodinger's Cat)... it is incredibly small (though still greater than zero of course), although I can't remember just how small it is at the moment. I'll have to see if I can find a figure somewhere.
Chaos theory and quantum uncertainty are totally unrelated principles. Chaos theory (unlike some versions of quantum theory) is deterministic; it is just difficult to predict chaotic systems in practice, since it is difficult to determine the starting conditions with the necessary precision. So, this is just an analogy, which should not be stretched too much. WhiteC 02:35, 10 May 2005 (UTC)

Unless they are operating in totally separate universes, they are not "totally" unrelated. The "incredibly small" likelihood of quantum uncertainty affecting macroscopic events may also be manifested in "incredibly small" differences in the states of some system, but the interesting thing to me about the way some of these things work out numerically is that the successor states may eventually diverge substantially. I don't find the fatalistic picture of the universe, according to which the future of everything was immutably determined in the beginning, to be persuasive because (a) it would be impossible to prove that there is no "slop" in the system, and (b) quantum considerations seem to me to indicate that there are cases when indeterminacy would take a hand in the way things went on from that point.

Even if the universe has been de facto deterministic and pre-programmed up to this point, what happens if somebody (acting out of his/her predetermined constitutional proclivities in cooperation with the predetermined stimuli that operate on him/her) decides to do or not do something major in the world depending on a geiger counter dice cast?

Which reminds me. An uncle of mine used to determine his route when taking a walk by tossing a coin at intersections. His idea was probably just to avoid being predictable and getting into a rut. But it taught me to look at random departures from my planned route (colloquially known as "getting lost") as an opportunity to discover something that might be very useful for me. P0M 04:06, 10 May 2005 (UTC)

So that something important does not get lost, let me quote from the beginning of this section:

The objection to determinism in quantum physics has nothing to do with "uncaused events" nor am I aware of it claiming there are such events.

I think that this criticism of the wording of that passage in the article is valid. It is possible, however, that the original writer was trying to get at something that is valid. If you consider a double-slit experiment, nobody doubts that the fact that light hits the screen on the other side of the slits is because there is an arc lamp or a laser or some other light source on this side of the slits. But if we slow things down so that we can emit photons one at a time, then we discover that sometimes a photon will contribute to one "hot spot" on the screen, and sometimes a photon will contribute to a different "hot spot" on the screen. The question can then be asked: "Does something act to cause the photon to go to one spot or the other?" P0M 04:25, 10 May 2005 (UTC)

Perhaps 'truly random events' instead of 'uncaused events'? To say that the individual electron's path is uncaused seems to assume that because we haven't found the cause yet there isn't one. There seems to be an epistemology (what can we know about causes) versus ontology (what causes are there, regardless of whether we can find them) problem. WhiteC 11:03, 10 May 2005 (UTC)

I think there is an important philosophy (and philosophy of science) issue here. By "philosophical" I don't mean "inconsequential" or "pertaining to metaphysical sandcastles" either. Going at least as far back as Plato and Aristotle we have tried to understand the functioning of the universe in the only way we could at first -- by comparing it to the way humans perceive themselves to do things. Humans throw rocks and break pots, so if a meteorite hits a house we tend to think of there being a hurler out there. We look for a doer every time something appears to have been done -- even if later on in our history we come to understand that gravitational fields can draw a meteor down out of the sky and through somebody's roof. It is extremely difficult for us to put this hylemorphic idea aside, the image that there is an actor who behaves as would a carpenter to impose a plan or idea (morphe) on some wood (hyle).

Take a look at: [1] (select "picture gallery" and then "room one") and ask yourself how somebody would interpret the moving pictures of electron strikes hitting a screen. If I thought I was seeing a picture of bullet holes being made in a wall, I would assume that a rifleman was changing his/her aim and trying to hit a series of tall vertical targets spread out along that wall. If the lines were actually relatively close together I might think I was seeing the normal kind of pattern that a rifle locked in a shooting tripod will make. But that kind of pattern is circular, and if the rifle and ammunition are any good at all there will be more holes the closer you get to the center of the circle. But I would be puzzled by the bands (even if arrangements were made to produce a circular diffraction pattern). "Why is the gun only striking the center of the bullseye and the black bands, but avoiding the white bands? What is causing this phenomenon?" It would be very difficult for me to accept the idea that that is just the way that bullets go out of a gun. If somebody tried to tell me that, then I would perhaps say that we should not "assume that because we haven't found the cause yet there isn't one." But the "cause" will have to be some kind of "imp," something that is not a part of the very nature of the physical apparatus (the slits and the electrons) that we already know is there. The imp would work according to a plan, and the plan would turn out to be the rules of interference. Then the question becomes: Why can't the electrons "follow the plan" on their own? Why can't we accept the idea that that is just the way that electrons (and anything else that small, I guess) behave?

By the way, I don't particularly like the formulation I have quoted, but we have to be able to say it much better. I don't know whether it can be done without going into a long discussion. One unassailable way would be to find a good Einstein quotation or something like that and then direct people to critiques of the quoted point of view. P0M 18:38, 10 May 2005 (UTC)

I don't like the imp part, you are assuming that the cause must be conscious/external because you can't explain it in any other way. Oh, Einstein didn't like quantum indeterminacy, BTW, and most modern quantum physicists disagree with him. WhiteC 01:13, 11 May 2005 (UTC)


Okay, look, this has got WAY out of hand. I never brought up 'uncaused events', chaos theory, miniature demons, firing squads or any of the rest of it. My point is very, very simple.
1. The Heisenberg Uncertainty Principle prevents you from being able to have absolute information about the state of the universe at any given moment.
2. Without absolute information about the state of the universe at any given moment you cannot extrapolate to the future.
3. If you cannot extrapolate to the future, you cannot have determinism.
Ergo, determinism is broken under the present view of quantum mechanics.
From this point on I'm staying out of it. If the general consensus is that I have a case, then by all means add what I've said to the page. If not, then don't.
Noodhoog 23:46, 10 May 2005 (UTC)
I agree with points one and two, but not point three. Whether a particular person (or observer) CAN extrapolate the future is irrelevant. The question is whether the future is predetermined regardless of whether any observers can tell what this future is. WhiteC 01:13, 11 May 2005 (UTC)
The whole question has been way out of hand since people started working out the consequences of the original "nonsensical" observation that black bodies absorb heat avidly at all frequencies of light but radiate heat at certain preferred frequencies. Like WhiteC I agree with points 1 and 2 and disagree with 3.
Schrödinger observed that one can easily arrange his famous thought experiment resulting in what he called "quite ridiculous cases" with "the ψ-function of the entire system having in it the living and the dead cat (pardon the expression) mixed or smeared out in equal parts." [[2]]
Many physicists have shied away from the apparent implication of the theory that the "cat" in such a case is neither definitively alive nor definitively dead until somebody opens the box to observe the state of the radiation detector. Schrödinger was, it appears, objecting to the "sensibleness" of the implication as much as anybody.
What I've called the "imp" actually turns up in a much less anthropomorphic way in the discussion of these physicists, as something called "quantum potential", and as scale increases toward our macro scale the power of this imp becomes proportionally so small as to become imperceptible:
The quantum potential formulation of the de Broglie-Bohm theory is still fairly widely used. For example, the theory is presented in this way in the two existing monographs, by Bohm and Hiley and by Holland. And regardless of whether or not we regard the quantum potential as fundamental, it can in fact be quite useful. In order most simply to see that Newtonian mechanics should be expected to emerge from Bohmian mechanics in the classical limit, it is convenient to transform the theory into Bohm's Hamilton-Jacobi form. One then sees that the (size of the) quantum potential provides a measure of the deviation of Bohmian mechanics from its classical approximation. Moreover, the quantum potential can also be used to develop approximation schemes for solutions to Schrödinger's equation (Nerukh and Frederick 2000). [[3]] (near the end of section 5)
It seems quite clear to me that the double-slit experiments indicate a very "deterministic" result in the sense that the (sometimes singly in motion) particles that go through the slits will neither arrive at a single point on the screen nor will they be randomly distributed. They will behave in a "lawlike" way. The question, it seems to me, is whether anybody has ever maintained that there is a reason why each one of a succession of particles may be "targeted" on a different maximum on the screen. If there actually is such a reason then the future is predetermined and the fate of the cat was determined at the dawn of creation. If there is no such reason then the fate of the cat depends on luck.
I guess I would prefer to speak of "probabilistically determined events" rather than "uncaused events." P0M 02:32, 11 May 2005 (UTC)

It's not just that the laws of quantum mechanics do not give any "reason" for the specific, as opposed to probabilistic, behavior of an individual particle going through a double slit; quantum mechanics makes statistical predictions that would be violated if some underlying reason unknown to us existed. There have been a number of experiments to verify those predictions and so far they do not appear to be violated, though many physicists believe better experiments are needed to conclusively settle the question. See Bell test experiments. --agr 12:59, 11 May 2005 (UTC)

Excellent. Thanks. P0M 15:00, 11 May 2005 (UTC)

I have changed the article to explain the physics of where the uncertainties come from. Philosophy of physics is confusing because philosophers don't know enough physics and physicists "shut up and calculate". I am trying to make the physics very clear and let others talk about what it means. For example, the hidden variable people must not know about all the unknown phases or they wouldn't need to add extra things not to know. Anyone who can read the mathematics can clearly see that the time dependent Schrödinger equation is deterministic in itself, so I think it should be there, even though this is not a mathematical article.

Incidentally, Newton's and his contemporaries' objections to Huygens' wave optics must have been very similar to Einstein's objections to quantum mechanics. Newton believed, correctly it turned out, that light is particles. So if he had ever accepted Huygens' wave explanation he would have had the qualitative essence of quantum mechanics in the 1700s.

I am reading some Max Planck. He was quite confident that physics is deterministic, but made an argument similar to the one in this article that this does not affect moral responsibility. "But this does not in the least invalidate our own sense of responsibility for our own actions." --David R. Ingham 18:07, 15 September 2005 (UTC)

I am totally lost. Please rephrase "For example, the hidden variable people must not know about all the unknown phases or they wouldn't need to add extra things not to know." P0M 00:27, 16 September 2005 (UTC)

Please see my new article on Philosophical interpretation of classical physics. I put in a link to it. I am just learning how to make things more understandable and apologize that I can't do better yet. Maybe I should remove some material that is covered in the new article. The equation shows explicitly that quantum mechanics, itself, is deterministic, which seems to me key to the whole section. The "qualitative essence of quantum mechanics" is that waves and particles are complementary and simultaneous properties of all nature and not different types of objects, as in classical physics. Newton believed, like Democritus about atoms, without evidence, that light is composed of particles. This must have prevented him from accepting Huygens' wave physics because there would then have been some of the same difficulties that Einstein objected to. I should put in a reference to Messiah. He explains how quantum experiments done with macroscopic equipment have probabilistic results. Sorry I don't have a more recent reference. Messiah takes more space in the book for this than most authors do. --David R. Ingham 20:28, 16 September 2005 (UTC)

"determination of the statistical likelihood of quantum uncertainty affecting macroscopic events"

I can see that I am not done trying to fix the physics here. This is not a properly stated topic. It is based on the very common misunderstanding that "quantum uncertainty", which I call "classical uncertainty", happens in nature. It comes from the difference between our quantum and macroscopic descriptions of nature.

I am not sure yet whether the subject that this heading was intended to identify makes sense, but, at best, it will take a lot of work.

There also seems to be a question of its relevance to the article. David R. Ingham 17:04, 28 September 2005 (UTC)

Please give volume and page number to the passage(s) in Messiah that support your point. P0M 09:52, 29 September 2005 (UTC)

Fundamental Problem Of Understanding Determinism

I am not a scientist or a philosopher - however I believe that the reason that people cannot accept that every decision they make/made is/was determined before it happened (i.e. there is no 'freedom of thought') is because humans are trapped in a fixed time of consciousness and therefore cannot free their minds to the concept of this bigger picture - everything is controlled by where it has come from. This attitude is commonplace in western psychology and is a typically self-centered attitude, based on the human fear of lacking control over their own 'destiny'.

I cannot understand how any study of mathematical laws (real world or otherwise) hopes to generate anything other than a sequence of functions and outputs (or seemingly layered processes) that can be 'untangled' to the root/original function(process) and input; and when allowed to 're-tangle' (ceteris paribus - i.e. under the same conditions) result in exactly the same end product.

mookid 15:00, 25 Jan 2006 (UTC)

I agree w/ your first paragraph. Regarding your 2nd paragraph: If nature can be modeled as a deterministic system, then it is easier to predict and understand. Mathematical laws would be a part of that modeling. If I know the precise length, temperature and thermal properties of a piece of (say) iron, then I could predict its length at other temperatures, for example. WhiteC 18:24, 26 January 2006 (UTC)
Sorry, I appear to have been unclear - I was suggesting that anyone of the opinion that the world is conditioned by these (predictable?) mathematical laws (i.e. 'scientists') is forced to agree with the concept of determinism; in my view determinism is the underpinning of all science (action and reaction).. or maybe I'm just a confused little boy?
Edit: Hold on! Are you suggesting that just because "no human(s) has existed that can perfectly express all macroscopic reactions in one 'perfect formula'" means that the formula doesn't exist? I hope not - because if you follow the change from Newtonian physics to quantum mechanics (which have 'improved' [in terms of exactness] as "models"), and increment this change over time we will get even closer, and closer, and closer.. etc. The fact that our brains might be too small to actually achieve this 'perfect formula' is irrelevant to its existence and (more importantly) our ability to appreciate its existence. mookid 01:30, 07 Feb 2006 (UTC)


Hear, hear!

It's a funny thing, 'free will'. Determinism is criticised for not allowing free will. This is true - but it's not a problem really. Free will is a perception - so long as you think you have it you can survive. It's the 'not knowing' that keeps us going. Assume that you are OK with determinism and you are playing cards (3 card turnover say - like 3 card brag but just turning the cards over to see who wins). You are happy to play, even knowing that the cards have been shuffled and the winner of the next hand is predetermined. You are happy to play because you don't know who is going to win, not because the winner has not been predetermined. The funny thing is that quantum mechanics doesn't allow free will either (although you won't get many QM people highlighting this). QM states that you cannot predict (other than using wavefunctions, probability stuff, etc) the precise location, momentum etc of a particle, so you cannot predict the exact output from a reaction (other than general probabilities of course). BUT, regardless of whether the outcome is predictable or not, the question is whether the outcome can be affected or influenced by thought. Pub or curry? Determinism states you will choose one, and if we rerun history you will choose the same one (because the particles in your brain will react in the same way as they did the first time - this may or may not be practically predictable, but the result is consistent). QM states the outcome may change when you rerun history - but in BOTH cases the individual has no control over the particle reactions or, in the case of QM, the probabilities (as these are DETERMINED by very accurate formulae). Free will does not exist in either case. PS. Don't get me started on this subject as I can give you lots of theories that counter most if not all of QM (which is a mathematical formula used to make better predictions that some have taken as a literal description of a process - go figure). --WBluejohn 19:37, 2 April 2007 (UTC)
I'll give you the analogy of the actuary if you want - it's a good one. --WBluejohn 19:37, 2 April 2007 (UTC)

Quantum vandalism

Using a random number based on radioactive decay obtained from http://www.fourmilab.ch/hotbits/, I have selected a word in this talk page and changed its spelling. The random number (in hexadecimal) was 3D70. I used the low order 10 bits to form a line number (as the page appeared in my text editor, bbedit, which can number lines). The high order digit determined the word in the line to change. --agr 13:45, 11 May 2005 (UTC)
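The selection arithmetic agr describes can be sketched in a few lines. This is a hypothetical illustration of the bit manipulation, not agr's actual script:

```python
# Random hex value reported above, obtained from radioactive decay
n = 0x3D70

# Low-order 10 bits form the line number in the text editor
line = n & 0x3FF    # 0x170 = 368

# High-order hex digit picks the word within that line
word = n >> 12      # 3

print(line, word)   # -> 368 3
```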

Vandal! --Username132 08:05, 6 February 2006 (UTC)

It's comical that a discussion page like this one uses the word 'random' so freely. That radioactive decay is not random - if you could reverse time and 'undecay' and then 'redecay' under exactly the same conditions (including starting at the same time again) it would have the same value; it is therefore not 'random' at all.

THERE IS AN IMPORTANT DIFFERENCE between something being unpredictable and randomness; the fact that we cannot model something does not mean it is random. Please try and get your head round this - in fact someone might want to clarify what randomness actually is in the article.

mookid 02:34, 7 February 2006 (UTC)

It's true that unpredictability and randomness are different, because randomness refers to an objective quantity, while something can be unpredictable at the same time as it is causally determined. However, not all physicists agree on whether radioactive decay is random. According to quantum mechanics, radioactive decay is undeterminable, but there are different interpretations as to whether it is random. The Copenhagen interpretation argues that it is objectively random, while other theories (most famously, the opinion of Einstein himself) argue against the non-deterministic nature of quantum mechanics. Someone give me a really, really complicated equation to solve this issue, and you'll have won the argument ;) --Celebere

The equations you are looking for are in the Radioactive decay article. It was realized quite early that the uniform rate of decay of a radioactive substance implied that the time when a particular atom decayed was random. This is the view of most physicists, though there are, as always, a few contrary opinions. That was not the point I was trying to make here, however. My objection was to the notion that quantum randomness has no effect on the macroscopic world, a view adopted by some who accept the standard interpretation of QM, but wish to cling to determinism. Finally, I'd be interested in hearing some scientific way (i.e. an experiment) to distinguish between unpredictability and randomness.--00:47, 9 November 2006 (UTC)
Are you seriously suggesting that you can't tell the difference between unpredictability and randomness? Randomness means without cause; unpredictability implies a lack of human understanding. Something unpredictable can be random or determined - in fact this is pretty much the crux of the problem of the "free will" vs. determinism debate. mookid
"Free will" vs Determisim is a bit of a strange comment. QM doesn't allow for free will either - unless you are suggesting that you can consciously affect the probablity of partcle decays or wavefunctions. It would make a mockery of the 'accurate' formula used to make predictions by QM about particle interactions in your brain if you could consciously decide to change the outcome. --WBluejohn 20:15, 2 April 2007 (UTC)
The usual claim by QM FW proponents is that indeterminism is part of the implementation of the decisions-making process, not that there is a Ghost in the machine. See naturalistic libertarianism, Robert Kane. 1Z 20:34, 2 April 2007 (UTC)
Thanks for that. Can't say I agree though - sounds like a fudge to let determinists take all the FW flak.--WBluejohn 20:43, 2 April 2007 (UTC)
Yes, in the context we are talking about, the time at which a particular atom decays, I do not understand any scientific difference between unpredictable and random. I don't see any possible experiment to distinguish the two. Perhaps I am missing something.--agr 19:54, 3 December 2006 (UTC)

Unpredictable means the decay is governed by the physical laws. It could be that we don't have enough information to determine when an atom will decay, or that there are, for example, hidden variables. Random, on the other hand, means that there is no reason for it to happen - it can or it can't. This is rather counter-intuitive, in that we need to believe there is a cause for it to decay or not decay, or some mechanism which determines which 'random' outcome takes place. The many worlds interpretation of quantum physics is one way of resolving this - i.e. 'both happen'.

As to a scientific experiment to distinguish between the two, it's quite impossible. If you don't know why something happens, you can't say whether it was random or not. If you knew that, it wouldn't be uncertain in the first place and the experiment would be meaningless. There is still, however, a difference between the two. Richard001 05:26, 4 December 2006 (UTC)

Needed citations

Quantum uncertainty & macroscopic events

WhiteC said:

Actually there has been determination of the statistical likelihood of quantum uncertainty affecting macroscopic events (without deliberate quantum devices to use it like Schrodinger's Cat)... it is incredibly small (though still greater than zero of course), although I can't remember just how small it is at the moment. I'll have to see if I can find a figure somewhere.

If we could have this citation it would let us argue persuasively, I think, that the butter would fly. Then, to avoid getting hit by somebody who would object to our thinking and call that doing "original research", it would be nice to find a good citation from some reputable Cal Tech physicist. (-; O.K., I'd settle for MIT or even Stanford ;-) P0M 03:01, 13 May 2005 (UTC)

The book I originally read it in is at the public library checked out by someone else. But I have it on reserve, and it should be about a week or so. WhiteC 01:50, 15 May 2005 (UTC)
OK. Here is a long quote, but I don't want to rephrase it and leave out anything. It is from pages 191-2 of Quantum Philosophy: Understanding and Interpreting Contemporary Science by Roland Omnes
It is known that quantum mechanics allows for the existence of "tunnel effects" by which an object suddenly changes its state due to a quantum jump, something that would not be possible through a continuous classical transition. Many examples of such an effect are known in atomic and nuclear physics; it is precisely by a tunnel effect that uranium nuclei spontaneously decay, and two protons at the center of the sun may come close enough to start a nuclear reaction.
Even an object as large as the earth may be subject to a tunnel effect, at least in principle. While the sun's gravitational pull prevents the earth from moving away through a continuous motion, our planet could suddenly find itself rotating around Sirius through a tunnel effect. It would be a terrible blow for determinism. We went to bed the previous night expecting the sun to rise the next morning, only to wake up with a view of an even brighter star, which during the night gives way to unknown constellations.
A theory that permits such events to happen may well make us feel uncomfortable. Fortunately, even if determinism is not absolute, the probability of its violation is extremely small. In the present case, the probability for the earth to move away from the sun is so small that to write it down would require 10 to the power (10 to the power 200) zeros to the right of the decimal point. The smallness of such a number staggers the imagination, and no computer could store it in decimal form. For all practical purposes, it is an event that will never take place.
As we move toward smaller objects, the probability of a tunnel effect increases. The probability for a car in a parking lot to move from one parking stall to another by a tunnel effect is as ridiculously small as that of the earth escaping from the sun's pull, but it has fewer zeros already. When my car breaks down, I know better than to blame it on quantum mechanics, the probability is still much too small. I rather look for a deterministic cause that a good mechanic will soon identify. However, as we approach the atomic scale the odds increase and quantum nondeterminism eventually overtakes classical determinism. In short, it is all a matter of scale. There is a continuous and quantitative transition of probabilities from extremely small ones to others that first become non-negligible and later prevail.
Sorry it took so long to dig up, but hopefully it will be of some use. WhiteC 5 July 2005 03:45 (UTC)
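Omnes's point about scale can be roughed out with the standard WKB barrier-penetration estimate, P ≈ exp(−2a·√(2mV)/ħ). The barrier widths, heights and masses below are illustrative choices, not Omnes's actual calculation:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s

def log10_tunneling_prob(mass_kg, barrier_height_j, width_m):
    """log10 of the WKB estimate exp(-2 * a * sqrt(2*m*V) / hbar)."""
    exponent = -2.0 * width_m * math.sqrt(2.0 * mass_kg * barrier_height_j) / HBAR
    return exponent / math.log(10)

# Electron vs. a 1 eV barrier 0.1 nm wide: tunneling is quite likely
electron = log10_tunneling_prob(9.109e-31, 1.602e-19, 1e-10)

# A 1000 kg car vs. a 1 J barrier 1 m wide: unimaginably unlikely
car = log10_tunneling_prob(1000.0, 1.0, 1.0)

print(electron)   # about -0.44, i.e. P ~ 0.36
print(car)        # about -3.7e35, far too many zeros to ever write down
```

The exponent scales with √m, which is why the transition from quantum nondeterminism to effective classical determinism is, as the quote says, "all a matter of scale".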


I'm really happy to have this substantiation. Perhaps I won't have to flip quantum coins to unlink myself from my destiny. ;-) P0M 5 July 2005 04:00 (UTC)

Difference between unknowable causes and no causes

Patrick Grey Anderson (at the top of this page) said:

I understand that people point to the seemingly random nature of quantum mechanics as standing in opposition to determinism, but randomness is still restricted by causality. In order for quantum mechanics to be undetermined, a particle would have to move for no reason whatsoever, as opposed to a shift occurring for no reason that we can *measure*.

Brian Greene, The Fabric of the Cosmos, describes the experiment figured out by Bell that reliably puts to rest the objections of Einstein, Podolsky, and Rosen that Patrick reflects in the quotation given immediately above. Greene describes the experiments and the reasoning behind them in a section that culminates on p. 114 of his recent book. "We are forced to conclude that the assumption made by Einstein, Podolsky, and Rosen, no matter how reasonable it seems, cannot be how our quantum universe works." The experiment is quite elegant, and the error in prediction obtained by following the EPR beliefs is not a tiny fraction, it is 16%. P0M 15:56, 13 May 2005 (UTC)

Assertion that there is a consistent viewpoint on astrophysics

I don't understand the following assertion:

Different astrophysicists hold different views about precisely how the universe originated (Cosmogony), but a consistent viewpoint is that scientific determinism has held at the macroscopic level since the universe came into being.

Calling something a "consistent viewpoint" is terribly unclear. Consistent with what? Maybe the writer meant "self-consistent," "internally consistent"? And whose viewpoint is this supposed to be? That of one person? That of the majority of astrophysicists? If a citation were provided it would help tighten up this passage. P0M 16:49, 14 May 2005 (UTC)

That was my writing, and I agree it is pretty weak (but it used to be even worse ;-) ). I have been trying to find a name for this viewpoint without any success. "Internally consistent" would be better, I suppose. I believe that many scientists (not astrophysicists particularly) hold this viewpoint, but I have no idea how one could possibly find this out, so that makes it pretty weak.
It seems to me to be an internally consistent form of determinism, if one accepts that quantum indeterminism holds at the scale of very small things. I apologize for phrasing it so poorly. I'd appreciate any suggestions for tightening it up. WhiteC 02:03, 15 May 2005 (UTC)

I didn't mean to bark anybody's shins. How about:

Various astrophysicists may differ about precisely how the universe originated; they hold different theories of or opinions about Cosmogony. But, as a group, they are in general agreement with the idea that since the very beginning of the universe everything has occurred according to the kind of deterministic interrelation among events consistent with quantum physics. P0M 04:09, 15 May 2005 (UTC)
That looks good. Thanks. WhiteC 5 July 2005 02:42 (UTC)

Infinite series

One section describes something that sounds like the Kantian antimonies, but without mentioning Kant and without being clear enough that I can figure out exactly what is being asserted. I suspect that the actual argument is something like the following:

Assume: All events have causes, and their causes are all prior events.

The picture this gives us is that event A_N is preceded by A_(N-1), which is preceded by A_(N-2), and so forth.

Under that assumption, two possibilities seem clear, and both of them question the validity of the original assumption:

(1) There is an event A_0 prior to which there was no other event that could serve as its cause.
(2) There is no event A_0 prior to which there was no other event, which means that we are presented with an infinite series of causally related events, which is itself an event, and yet there is no cause for this infinite series of events.

P0M 17:34, 14 May 2005 (UTC)

You might make a perfect circle - but you would've started somewhere. I think the answer to A_0 lies somewhere around big bang theory (or at least "matter and antimatter"?), and the "+ve -ve" "god devil" idea from the storybook bible. Interestingly, if you think about both of your arguments against determinism, they lead you to the conclusion that the universe was never created, it was just 'always there'. At this point you're arguing with more than just determinists mookid 01:48 07 Feb 2006 (UTC)

Illusion of free will due to ignorance

The current article has:

Dynamical-evolutionary psychology, cellular automata and the generative sciences, model emergent processes of social behaviour on this philosophy, showing the experience of free will as essentially a gift of ignorance or as a product of incomplete information.

This treatment seems to reflect a very strong POV, but, lacking citations, I cannot tell whether this POV belongs to one of us or is instead a reflection of the views of unnamed cellular automata et al. ;-) (Something is wrong with the syntax of the original or cellular automata are getting more uppity than I had suspected.)

Donald Davidson, for one, had a very cogent critique of this kind of an analysis, so I suppose the analysis itself must have adherents somewhere. But the writing of our own article on determinism should not speak of these adherents as "showing" that determinism predicates the meaninglessness of ideas of freedom. P0M 19:01, 14 May 2005 (UTC)

A good point from what used to be the top of the page

I'm trying to clean out issues remaining near the top of this discussion page and then archive the old stuff. I found one point that seems to me to deserve more discussion:

I just object to statements like, the entirety of space-time came into existence at some point, unless you define this point embedded in a larger space-time outside of our own. The fallacy comes from implying that both the statements, space-time is everything, and something exists outside of space-time, are true. However, I work in computer software and don't do physics (although this is really about philosophy), so maybe I'm just using the wrong kind of logic? Nodem

I don't think you are using the wrong kind of logic at all. In fact, this problem has been a serious source of concern at least since the time of St. Augustine. He tried to work out a consistent view of the differences between the characteristics of a creator God and his creation, and one of the things that occurred to him was that time may have been created along with space and all the things in the universe. St. Thomas put his tremendous intellect to the task of coming up with an internally consistent philosophy that would yet be in accord with the Bible, and he believed that God is perfect and therefore is not limited. It is his creations that are limited. So God must be infinite -- He can't be bounded or limited by space or time or in any other way. When he created the universe he created space and time, so it doesn't make any sense to ask when God decided to create the universe. To do so is to apply the limited concepts appropriate to humans, appropriate to the mundane universe of discourse, to the unlimited. So time has a beginning, and there is a reason for the existence of time, but the reason for time is not something that stands in a temporal sequence.

Physicists came to similar conclusions for much different reasons. When Newton gave human beings his physics, he gave them an enormously successful tool for working on the world. It so precisely predicted the mechanical actions of things that everything seemed to go like clockwork -- better than clockwork, actually. So it was possible to imagine that the affairs of the universe went off like a game played with perfectly elastic and perfectly frictionless spheres on a perfectly flat but bounded table. If the balls were rolling around and bumping into each other then they could continue to do so for an infinite time and that made it possible to imagine that they had been bumping around for an infinite time already. On the grounds of that kind of a picture the only reason to imagine a beginning was a theological reason. If anybody thought about entropy in regard to the universe I guess they just assumed that the effects might catch up with the universe some time so far in the future that it wasn't worth worrying about, and if they did worry about it in context of an infinite prior timeline they perhaps comforted themselves with the idea that there must have been a divine act of creation after all.

Then people discovered that the universe is expanding. To even be able to think about this meant a certain kind of mental preparation such as was provided informally by a little book called Flatland that talked about how a two-dimensional creature would experience life on a flat surface, on a spherical surface, and then on the surface of a sphere that was expanding -- but without causing his own size to expand. In the world of more formal mathematics the same general kind of ideas were developed in non-Euclidean geometries that dared to talk about a higher dimension into which the universe could expand -- so that the universe could expand without the rate of expansion being greater "on the edges" than "in the center." (There's a good discussion of these ideas in George Gamow's book, One, Two, Three...Infinity.) And somehow these ideas came together with the observations of astronomers that indicated that the universe is indeed expanding.

Anyway, to cut all this blathering short, people started to wonder what the movie they were making of the expanding universe would look like if they would run it backwards through the projector. The answer seemed to be inevitable. The stars would grow closer and closer together until at some point they would disappear into a single point. And since Einstein had demonstrated that space and time form a continuum, that meant that if space would disappear at that point, then so would time. Another way to say that is that the operational definition of time involves the observation of the movements of things. When there are no longer things, when there is no longer space that things might move in, that means that we've reached the beginning terminus of time. So, in a rather spooky way, physics points to the "creation" of the universe at a single point in time.

Even within our own universe, it is possible that different regions of the universe could be cut off from each other because, with everything moving away from everything in all directions, the speeds with which stars move away from each other are additive over distance. The more distant stars are from us, the more rapidly they are moved away from us by the expansion of the universe. At some point the sum of these speeds exceeds the speed of light and "news" of whatever happens beyond that point will never reach us. It doesn't mean that these distant parts of the universe cease to exist, it just means that they cease to have any possibility of interacting with us. So even though they are "genetically" related to us, they might as well be in entirely separate universes as far as any practical considerations go.
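The "speeds additive over distance" idea above is just Hubble's law, stated here as a standard textbook result rather than anything from the original comment. A minimal sketch of the arithmetic behind the causal cutoff:

```latex
% Hubble's law: recession speed grows linearly with distance d
v = H_0 \, d
% Setting v = c gives the (rough) distance of the causal horizon
d_H = \frac{c}{H_0}
% Regions with d \gg d_H recede faster than light can close the gap,
% so "news" of events there can never reach us.
```

(This is only the simplest version; the exact horizon distance in an expanding universe depends on the cosmological model, but the qualitative point in the comment stands.)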

If we trace backward to the Big Bang (assuming we're not making some big mistake somewhere) we lose even the theoretical possibility of catching a glimpse of a time before t=0 in our universe. At the same time we accept the idea on the basis of a lot of experience that nothing is uncaused. To put it in slightly different words: There is always a reason why something happens. It could always be otherwise. The millionth swan that I investigate may turn out to be ultraviolet in color instead of white. But right now we are on a pretty good roll as far as uncaused events go. So we think about whether there could be other universes and other time lines that, like ours, start out at a t=0 and move along in a "line" that has nothing to do with ours, no connection to ours. And we can also wonder what conditions might exist in some universe outside of our own that might initiate the big bang that started our universe.

There's a joke that is getting pretty old by now, but I still smile whenever somebody tells about the country bumpkin in New England who is hailed by a passing car from some sophisticated part of the country. Being lost, they ask him how to get to Five Corners or whatever it was, and he replies, "You can't get there from here." Hopefully that is untrue in New England, but maybe it is true in regard to vastly separated regions of our own universe. At least I remember that Prof. Greenstein of Caltech was reported to be talking about ideas like that in the early 60s, and I doubt that he was their sole author. Perhaps it is also possible that there are other universes that were created in line with the same kind of reasons that lie behind our universe.

There are even people who are now talking about how (theoretically, I suppose) one might recreate the conditions that prevailed just before the Big Bang and therefore produce a new universe that would go off on its own space-time continuum. Being totally cut off from our universe any sentient beings that came into existence in that universe would be forever ignorant of our having touched off their Big Bang.

The original statement was:

the entirety of space-time came into existence at some point

That statement does indeed play tricks with our minds because it unwittingly throws us into the wrong way of visualizing things. We should be led to visualize something like an ant crawling down a ribbon that is gradually unfurling from a balloon at a rate slower than the balloon is rising into the sky. At some point if the ant keeps walking he is going to come to the end of the ribbon. It's not that he simply comes to a point where a sign says "Go no further!" and the color of the ribbon changes or something like that. There simply is no more ribbon.

How about something like:

"If we could retrace our steps back through time, we would come to a terminus. We would have run out of space and time at the point at which space ceases to have a volume and any "clock" that existed would have no leeway for its pendulum to move."

I need to come back and edit this later. Somebody remind me. KSchutte 4 July 2005 22:21 (UTC)

quantum reality

When Dr. Broida explained that the probabilities are due to the interaction of the quantum and classical descriptions, rather than being internal to quantum mechanics, everyone in the statistical mechanics class was satisfied. This was because we were already accepting the quantum description of matter as reality. The fact that probabilities arise when one uses a simplified approximate description (Classical Mechanics) is not disturbing. Being so easily satisfied, physicists don't talk much about the philosophy, and so lay people remain mystified. David R. Ingham

I think even lay physicists remain mystified. This page should probably be combined in some way with the various pages on Deterministic (disambiguation). There's a lot of confusion on many of these pages. The biggest problem is the quantum stuff, where we have different people saying different things about its determinacy. The fact is, some models (such as the Copenhagen thesis) are indeterministic, while others (such as the Bohm interpretation) are deterministic. KSchutte 6 July 2005 16:56 (UTC)
Note: Interpretation of quantum mechanics gives a nice table of which interpretations are deterministic, and which are not. KSchutte 6 July 2005 17:04 (UTC)

New material

Here is a very good source of what is the current understanding of experts:

Physics Today, April 2006, "Weinberg replies", Steven Weinberg, p. 16, "... but the apparatus that we use to measure these variables—and we ourselves—are described by a wave function that evolves deterministically. So there is a missing element in quantum mechanics: a demonstration that the deterministic evolution of the wave function of the apparatus and observer leads to the usual probabilistic rules."

In principle the answer is given by the correspondence principle, but the details are complicated, so there is not a clear derivation of "the usual probabilistic rules".
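Weinberg's point can be summarized with two standard formulas (added here as an illustrative sketch, not part of the quoted reply): the wave function evolves deterministically under the Schrödinger equation, while probability enters only through the Born rule applied at measurement.

```latex
% Deterministic, unitary evolution of the wave function
i\hbar \, \frac{\partial}{\partial t} \psi(x,t) = \hat{H} \, \psi(x,t)
% Probability enters only via the Born rule at measurement:
% the probability density for finding the particle near x is
P(x,t) = |\psi(x,t)|^2
```

The "missing element" Weinberg describes is precisely a derivation of the second formula from the first when the apparatus and observer are themselves included in the wave function.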

This supports the statement that the determinism evident in the time dependent Schrödinger equation carries on to more complicated cases. Less strongly, it supports the view that "reality" is quantum. David R. Ingham 14:43, 10 May 2006 (UTC)

Specific formulations and references

Currently, I think this article is missing some important philosophical arguments against determinism, which it mentions, but only in passing. I think a good way to get more references into this article would be to add and expand summaries of the influential arguments by philosophers on this issue, which frequently stand as archetypes of the major positions anyway. For example, I would like to add more about the philosophy of Thomas Hobbes, Immanuel Kant, and David Hume. Hume in particular made an argument that is potentially devastating to determinism and I believe it deserves more time here. --malathion talk 22:29, 17 July 2005 (UTC)

What did Hume ever say that was devastating to hard determinism? In "An Enquiry Concerning Human Understanding" it is pretty clear that when Hume speaks of determinism he is only speaking of the ability to act upon one's desires. Whether those desires themselves are determined is left uncertain. Specifically, he writes: "By liberty, then, we can only mean a power of acting or not acting, according to the determinations of the will; that is, if we choose to remain at rest, we may; if we choose to move, we also may." Later, when discussing the problem of an all-powerful God and human moral responsibility, he writes: "To reconcile the indifference and contingency of human actions with prescience; or to defend absolute decrees, and yet free the Deity from being the author of sin, has been found hitherto to exceed all the power of philosophy." He does point out that you can never prove causal determinism, but that's about it. Patrick Grey Anderson

Marx

I find no mention of Karl Marx in this page. I wonder whether the connection between 'determinism' and Marx's historical materialism is strong enough to merit a mention. At this point, I merely added 'Historical Materialism' to the 'See Also' list, but I think more should be done if others agree there is such a connection because, IMHO, Marx has probably done as much or more to advance deterministic thinking (especially in the lay population) as any single thinker in recent history. Stancollins 17:43, 22 September 2005 (UTC)

Marx's historical materialism is deterministic, but so are millions of other things. ----Martin

Determinism and QM

I would like to question the inclusion of the statement:

"The well known experimental physicist Dr. Herbert P. Broida [1] (1920-1978) taught his statistical mechanics class at The University of California at Santa Barbara that the probabilities arise in the transition from quantum to classical descriptions, rather than within quantum mechanics, as sometimes supposed."

This seems like unsourced oral history, and it goes back three decades. If this position has any support in modern physics, it should be possible to find a more recent published reference.

I also think the statement that the Schrödinger equation is deterministic needs additional clarification. The only way this is relevant to determinism is if one considers a wave function for the entire universe, starting from its inception. Putting aside the impossibility of ever computing such a thing, it would carry the probability density of not just the universe we know but all possible variations. In particular, the probability it would give of the Earth existing as it is now would be infinitesimally small. We would all be Schrödinger cats in such a formulation. --agr 16:04, 4 November 2005 (UTC)

The article that was written in support of that general line of argument has received an "rfd" (request for deletion), and a decision on that matter is pending. Those who advocate deleting the article raise many of the same concerns you raise. I was unable to get clear on what the author was trying to say, and have also been unable to get him to supply any specific citations. The ideas are supposed to be in Albert Messiah's Quantum Mechanics, but it is a two volume text. It is very nicely written and fairly well indexed. I can find nothing to support the contentions mentioned above therein. I would support you if you want to go ahead and delete that material. It can always be reinserted if the author can provide current citations.

If you have expertise, I would appreciate your looking at [[4]] to see whether it has any merit. I have been asking for clarifications since that article appeared, but to no avail. I have been unwilling to see it deleted because the author has qualifications and publications that suggest he may know what he is talking about, and I would not like to see something of value be lost because of difficulty in expressing the ideas in English.

As for computing the wave function for the entire universe, the answer is 40. ;-) P0M 03:14, 5 November 2005 (UTC)

My written sources say it is 42 :) I did look at [[5]] and I made a comment on one of your entries on the talk page. You had described each particle in a QM system as having its own wave function, with the wave functions interacting from time to time. QM only allows one wave function, in general, that incorporates all the particles in the system and generates probability densities for all possible outcomes. That, I think, is the fallacy in the notion that, since the Schrödinger equation is deterministic, the universe must be. The only wave function that is meaningful to determinism is the aforementioned wave function for the entire universe, integrated forward from the beginning of time. That wave function, if one attempts to take it seriously, includes all possible universes in complete detail. That incomprehensible fuzz is not the universe we know.
Part of the problem may be that classical physics made claims to be able to predict the future evolution of systems of particles once their initial conditions were known. QM is more modest, only seeking to predict the outcome of measurements, and doing so only as probabilities. Attempting to stretch that model to solve philosophical problems can produce results that seem downright silly. One might opine that when the first sentient being opened its eyes and realized its own existence that the wave function of the universe collapsed to one where that being existed. A more realistic model might note that certain physical events are essentially a measurement, for example the replication of DNA. As each nucleic acid base is assembled there is a small probability that the wrong base will be chosen. That probability may be amplified by the presence of carcinogenic molecules or ionizing radiation but it is still a quantum phenomenon. In that sense, the evolution of life is the outcome of some huge number of individual quantum experiments, each with a randomized outcome. The wave function for the entire system would not predict any one outcome of evolution, but would incorporate all possible outcomes. In my mind (perhaps because I believe I have one), this does not correspond in any meaningful way to determinism. --agr 17:01, 6 November 2005 (UTC)

Thank you very much. I have been working for months to get this thing straightened out. I have been unwilling to say that something is wrong simply because I don't understand it. But what you say above is a clear statement of what I have gleaned from all my reading. P0M 18:47, 6 November 2005 (UTC)

In view of the discussion above, and the discussion that relates to Ingham's article on the Philosophical_interpretation_of_classical_physics, assertions such as

Quantum mechanics, in isolation, is equally predictable. However combining the two gives probabilistic predictions.

should be expunged.

Ingham's assertions remain unsupported by citations relevant to the above statement. There are some physicists, such as Bohm, who have tried to assert the existence of "hidden variables" that would explain why what happens is not really probabilistic. That is only one out of several different attempts to explain what happened to our old ideas of cause and effect and/or to restore determinism to its original luster. P0M 00:28, 9 November 2005 (UTC)

Seeing no objection, I am going to delete that part. P0M 02:25, 10 November 2005 (UTC)

I've also deleted, as unsourced or original research, the line: "Since it is not possible to do an experiment without using classical coordinates of bodies and much of nature cannot be explained without quantum mechanics, the probabilities seem unavoidable." P0M 02:29, 10 November 2005 (UTC)

I don't agree that my sentence quoted above is original or in question. On the other hand I think the "hidden variables" theory mentioned in the article is an unsupported minority point of view and does not belong in the article. David R. Ingham 18:42, 15 August 2006 (UTC)

Questionable passage not improved by recent edit, was it?

The article currently says:

Even so, this does not get rid of the probabilities, because we can't do anything without using classical descriptions, but it assigns the probabilities to the classical approximation, rather than to the --quantum reality -- ++brain++.

Implicit in this statement is the idea that quantum scale events are absolutely deterministic, and that the uncertainty or indeterminacy comes in because of unknown factors introduced when macro scale factors are introduced. (See the history of this article, the additions of Ingham.) If I understand this interpretation of quantum experiments correctly, it would imply that, e.g., in the double-slit experiment the progress of a photon or electron from the emitter through the double slits is absolutely deterministic, but that when the particle shows up on the detection screen its location on that screen is due to unknown factors from the macro scale screen. The language in which Ingham has discussed his additions here and elsewhere has not been sufficiently clear to me to enable me to do more than take a stab at stating in other words passages such as the one in question here. However, replacing "quantum reality" with "brain" does not appear to me to be an improvement. Are there people who assert that the diffraction pattern formed in a double-slit apparatus is due to the brain? I think not. P0M 08:10, 17 November 2005 (UTC)

Ordering of Sections

I think this article would improve in whatever small way if the 'arguments against' section were at the very end, as is the format in the rest of the wikiverse. Capone 08:22, 17 November 2005 (UTC)

Agreed - I've changed the article to reflect this. Visual Error 00:00, 23 January 2006 (UTC)

Determinism and generative processes

In regard to the position discussed in this section of the article, I am wondering whether any philosopher has investigated the impact that linking non-deterministic processes to the decision process would have. For instance, one engaging in some pursuit might wish to avoid unconscious stereotypical responses. S/he thinks, "I always do the same thing in this situation. I never sing on the bus. If I decide to try singing on the bus, I may pick my shots and avoid the very situation in which singing on the bus would possibly produce interesting results. So I will choose to sing only if my geiger counter (set very low) chirps." I can see where such a procedure might be very valuable, heuristically, but I'm not sure that it changes anything in regard to the free will quotient. But if learning is a causal factor in future action, then the experimenter in this situation would seem to have the chance to add something to those causal factors that could not have been programmed in from the beginning of time. P0M 06:48, 20 November 2005 (UTC)

What happened to citations 7 and 8 in "Determinism and generative processes"?
