Saturday, July 14, 2007

New York Times Reports the Non-Existence of Souls

Perhaps I should have sold my soul before the market tanked. Maybe I could still get something for it.

19 comments:

Sona said...

Excellent article. Thanks for that.

Keifus said...

Alf's back! In pog form!

The article was slow to load, but I think I had really better get my ass to work anyway.

K

Kevin Clark said...

The interesting thing, though, is that science presupposes the soul. Suppose we said that there is no distinct non-material cognitive ability within humans. If we say that, then we have to admit that the totality of human thought is a result of the deterministic and/or random effects described by classical physics and quantum mechanics. That means that human thought would be a combination of determined outcomes and random outcomes. If that is true, then there would be no circumstances under which people could weigh the scientific evidence for and against a proposal and freely come to a conclusion. If thought is only material, then your belief in evolution is purely the result of the chemistry of your brain, and the belief of other people in creationism is purely the result of the chemistry of their brains, and neither conclusion would in any way be related to the truth or falsity of the underlying proposition. There would be no way of determining who was right; in fact, there would be no such thing as right.

Science itself is based upon a presumption that people are able to look at the world around them and actually understand it. But if human thought is purely materialistic and determined/random, then humans could possess no such capability of understanding. Hence, science itself is merely an illusion with no meaning at all.

The headline should read "Scientists prove that science cannot prove anything"

Archaeopteryx said...

Hi, Kev. Of course we know that science cannot prove anything. Any scientist will tell you that.

I don't think I buy your idea that claiming there is no soul is the same as saying that human thought has to be deterministic. There seems to be a growing consensus that the human mind is the result of an emergent phenomenon in which the whole is much greater than the sum of its parts.

An excellent example of this is a termite mound. Termites are tiny little creatures with tiny little nervous systems; what passes for a brain in a termite is really no more than a series of knots of neurons spread over the front half of the animal. Each individual termite is capable of no more than a few dozen behaviors, yet together, they can build amazing structures with cooling towers and separate chambers for different groups, and intricate architecture that is adapted for different situations. The termite colony, which almost acts as a sentient being, emerges from the acts of individual termites.

The human brain is made up of billions of individual neurons, each of which is a pretty simple little machine that has a limited number of functions. However, they are connected together by trillions of synapses, not all of which fire during every nerve impulse. Nerve impulses have an electrical as well as a chemical component, meaning that (as you imply) a certain amount of synaptic activity is governed by the random movement of molecules and/or electrons. Exactly which of the several thousand connections at any given neuron fires as the result of any given impulse could easily be randomly determined. If that is so, it's an easy thing to see that every human thought isn't predetermined.

Of course, none of this (and none of the things in the Times article) disproves the existence of the soul. In fact, one could make the case that what we've been calling a "soul" for the last several thousand years is demonstrated by this idea of emergence.

Kevin Clark said...

Have you been reading Douglas Hofstadter? He makes the same argument with regard to the human mind and termite colonies. But if you accept that, it would mean that human consciousness is no more real than whatever consciousness is possessed by a termite colony.

From a physics point of view, there are only two classes of events--those which are random and those which are determined. An emergent system can't alter the basic physics of subatomic particles, which is what it would need to do in order to create true intelligence or free will.

If you say that human thought isn't determined because of the quantum uncertainty involved in the huge number of neurons firing, then you would have to say that human "intelligence" is partly random and partly determined. But this doesn't really do you any good, because it still doesn't give you the conditions necessary for science. Suppose a scientist is performing an experiment and reaching a conclusion. That conclusion would be partially deterministic and partially random. But neither the determined part nor the random part would be an act of considering the evidence and reaching a conclusion. I also don't think that there is any evidence that "emergent systems" would be able to achieve what is commonly understood as intelligence.

I urge you to read Stephen Barr's book "Modern Physics and Ancient Faith". He has a chapter on how materialism necessarily destroys the basis of science. Here's a review of the book: http://www.metanexus.net/magazine/ArticleDetail/tabid/68/id/8361/Default.aspx

Regards,
Kevin Clark

Kevin Clark said...

That link didn't work, trying again:
http://www.metanexus.net/magazine/ArticleDetail/
tabid/68/id/8361/Default.aspx

Archaeopteryx said...

I haven't read Hofstadter, but I don't necessarily agree that human consciousness and a termite mound are equivalents. After all, a human brain is much more complex than a termite mound.

Part of the idea of emergence is complex phenomena emerging from randomness. I haven't read Barr (his book is one of about a million lying on my nightstand right now), but I don't see why some combination of random and determined phenomena would negate human free will, especially in the context of emergent systems.

Neurons depend on the random movement of molecules to operate, yet we know that working in concert, neurons are capable of conceiving the works of Mark Twain or the art of Picasso. In other words, there is no denying that human thought is the result of deterministic and random effects (whether you posit the existence of a soul or not).

What you're describing as science (weighing pros and cons) is reasoning. We know that some animals (apes, ravens, parrots) have at least some reasoning ability. Are you saying that this means they have souls?

I understand that Barr is using his postulate to attack the reality of science, but his attack works just as well on all forms of human endeavor. By his logic, all human activity is pointless and unknowable without the existence of a soul, but that isn't proof of a soul.

Kevin Clark said...

I don't think that mere complexity can result in intelligence. If it could, then we might have to conclude that CPUs are intelligent at some point based upon the number of transistors they contain. But no matter how many billions of transistors a CPU contains, it still merely executes instructions, just the same as the simplest CPU ever did. If the brain is merely a computer executing instructions, then it is no more intelligent or free than a CPU. (If you do get to the Barr book, he discusses the work of Penrose regarding this point, which is pretty interesting.)

In order for free will to have any meaning, then people must have some active control over it. For example, most people would say that Hitler committed evil acts and was culpable for those acts. But if Hitler had no real control over his actions, then in what sense can we assign moral culpability to those actions? If mass murder is merely a function of a sequence of neurons firing, either deterministically or randomly, then there are no chosen actions and no free choices. We might as well blame Saturn for its orbit as blame Hitler for mass murder.

I think the most fundamental question in science is, "Are human beings the kind of things which are capable of understanding what kind of things human beings are?" Barr makes the point in his book that if humans can't reliably understand how their minds operate, then they can't reliably know anything. If human intelligence is merely something that looks like intelligence (like the termite colony) then science itself--and, as you say, all human endeavors--are illusions.

Archaeopteryx said...

Barr makes the point in his book that if humans can't reliably understand how their minds operate, then they can't reliably know anything.

I don't think this is necessarily true. This goes back to the old late-night drunken dorm argument that basically boils down to "Is reality real?" It might just as well be. It consistently acts real and, at least in my experience, never acts unreal. Maybe we don't have free will, but since the universe goes to all the trouble and effort of making it appear that way, I'm just going to go ahead and go with that. It sounds like Barr is just playing philosophical parlor games (pending, of course, my actually reading the book).

I don't think that mere complexity necessarily leads to consciousness, but I don't see why it couldn't. I don't see why we couldn't consider consciousness--and life itself, for that matter--as emergent systems.

Kevin Clark said...

Admittedly, I am not an expert on emergent systems, but let's consider for a while the idea that emergent systems could account for actual free will.

An emergent system is basically a set of rules acting on a large number of agents; through these interactions and rules, the system appears to be centrally guided. (Your favorite author, Michael Crichton, actually has an interesting book on emergent systems called _Prey_.) So, in order to achieve what we would call free will, we would have large numbers of neural interactions, each guided mostly by rules, but partly by quantum uncertainty. The question is, could these rules and uncertainty create free will?
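To make "emergent" concrete before going further, here is a toy example (just an illustrative sketch in Python, not anything from Barr or Crichton): Conway's Game of Life. Each cell follows the same two purely local rules, yet large-scale patterns appear that no rule ever mentions.

from collections import Counter

# Each cell obeys two local rules: a dead cell with exactly 3 live neighbors
# comes alive; a live cell with 2 or 3 live neighbors stays alive.
def step(live_cells):
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a "glider"
for _ in range(4):
    cells = step(cells)
print(sorted(cells))    # same shape, shifted one square diagonally

The "glider" drifts across the grid even though no rule says "move." It looks purposeful, but it is nothing beyond rules applied mechanically.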

In order for us to answer that they could create free will, we would have to say that at some point the system passed from being entirely rules/randomness based to being rules/randomness/non-rules based. In order for the will to be free, it must at least partly be based on non-rules, where it could choose one act but actually chooses another. How does a system go from rules-based to non-rules based? Indeed, it seems particularly difficult to envision such a thing with an emergent system, whose very existence is based upon rules. The emergent system would have to qualitatively change into something else in order to achieve free will.

Another point: if human free will is solely an emergent system, there is theoretically no difficulty with reproducing it in a computer. You can think of how you would start programming this. You would give the computer certain basic axioms to follow, and would have the computer "learn" more as it went. But the problem is, at some point, you would need the computer to stop using the rules and start deciding things outside the rules, or outside the parameters of its programming. If it always stayed within the parameters of its program, it could never be free. Even though it might look like it made "decisions", its decisions would always be completely rules/randomness based. No matter what happened, there is no theoretical way for it to break outside of its programming, because it doesn't "know" anything but the rules.
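Just to make that concrete, here is a bare-bones sketch in Python of the kind of program I mean (the rule format is something I made up purely for illustration). It can add new rules, but every new rule is produced by an existing rule, so nothing it ever does falls outside its rule set.

# Each "rule" is a function taking an observation and returning
# (action_or_None, new_rule_or_None). This format is only for illustration.
def run_agent(axioms, observations):
    rules = list(axioms)
    actions = []
    for obs in observations:
        for rule in rules:
            action, new_rule = rule(obs)
            if new_rule is not None:
                rules.append(new_rule)   # "learning" is only ever more rules
            if action is not None:
                actions.append(action)
                break
        else:
            actions.append(None)         # no rule applied: the agent does nothing
    return actions

However long such a program runs, every action it takes traces back to the original axioms and whatever rules those axioms generated.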

Archaeopteryx said...

...it must at least partly be based on non-rules, where it could choose one act but actually chooses another...

I don't understand what you mean by "non-rules." If it means what I think it means, I can't see how it's distinguishable from randomness.

I also think it would be possible to program a computer for "free will." It's certainly possible to program randomness into a computer--scientists use random number generators all the time. I think you may be drawing a line where none really exists.

Kevin Clark said...

Randomness isn't choice. You can flip a coin, and based upon all kinds of factors, the coin will come up either heads or tails. But the coin didn't make a choice. If free will were merely contingency, it would be meaningless.

Let's put this another way. You have another story about dog-fighting on your blog. If Michael Vick ran these things out of his house, did he _choose_ to do something wrong? Or is he merely the victim of an unfortunate confluence of neural pathways and quantum uncertainties that made him do it? In which case, why should he suffer any consequences for an act that he did not choose and could not have prevented?

Kevin Clark said...

By the way, just a note about "random" numbers used by computers. They are almost never random. They are what is called "pseudo-random": a number sequence that is exactly the same when generated from the same "seed number." What one normally does is seed the random number generator with the time of day. In Visual Basic, you would use:

RANDOMIZE TIMER

which would start the random number sequence with a seed of the number of seconds past midnight reported by the computer. Since the number of seconds is constantly changing, the random number generator usually returns different sequences. But given the same seed number, the random sequence is always identical.
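If it helps, here is roughly the same point sketched in Python (the particular seed and counts are arbitrary): the same seed always reproduces the same "random" sequence, and seeding from the clock is what makes separate runs look different.

import random
import time

random.seed(42)                       # fixed seed
first = [random.random() for _ in range(3)]

random.seed(42)                       # same seed again
second = [random.random() for _ in range(3)]

print(first == second)                # True: identical "random" sequence

random.seed(time.time())              # seed from the clock, like RANDOMIZE TIMER
print(random.random())                # differs from run to run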

You could theoretically create a computer that returned truly random numbers. To do so, you would need to base the randomness on some quantum source, such as the decay of a radioactive isotope. Whether such a computer has ever been built, I don't know.

Even if you built a truly randomness based computer, that wouldn't be free will, since the computer would be tied to something it could not control. It wouldn't really be free at all. Personally, knowing something of computer programming, I think that the likelihood of creating a computer with free will is somewhere between metaphysically impossible and extraordinarily unlikely.

Archaeopteryx said...

Are you saying that "free will" is the same thing as "control?"

Neurons in the brain are connected together by synapses that operate by the diffusion of molecules of neurotransmitters. Diffusion is the movement of molecules from a region of higher concentration to a region of lower concentration. It is based on random movement of molecules, but the law of averages says that, given random movement, over time the concentration will become uniform across the two regions. However, the movement of any one molecule of neurotransmitter cannot be predicted.

Levels of neurotransmitter needed to initiate a response in the post-synaptic neuron are pretty low, so that just a slight change in concentration determines whether or not a particular neuron fires. In other words, firing of neurons is controlled to a large degree by random events.
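A quick toy simulation (a Python sketch; the numbers are arbitrary) shows the flavor of what I mean: the path of any single molecule is unpredictable, but the behavior of the whole crowd is reliably lawful.

import random

# 1000 "molecules" each take unpredictable random steps...
positions = [0.0] * 1000
for _ in range(100):
    positions = [p + random.choice((-1.0, 1.0)) for p in positions]

print(positions[0])                        # any one molecule: anyone's guess
print(sum(positions) / len(positions))     # the average: reliably near zero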

If the human brain is truly an emergent system, then there is a degree of randomness mixed in--in other words, neurons and thought processes follow rules, but also are subject to random events--random events based on molecules which follow rules.

I don't see how an arrangement like this precludes free will based on known physical processes.

I think Barr is drawing a false conclusion (again without having read his book). I don't think the fact that an event is random in its essence means that a free-willed human can't use it as the basis for some thought process, like the firing of a synapse.

Besides, this line of reasoning can be summed up thusly: Human beings cannot truly exercise free will without possession of a soul. Since I perceive myself to have free will, I therefore must have a soul. Certainly you can see the glaring fallacy inherent in that reasoning?

Kevin Clark said...

So what exactly is free will to you? Put in another context, did George W. Bush choose to invade Iraq, or was it simply the result of neural connections over which he had no actual control?

Archaeopteryx said...

You probably shouldn't use Bush as an example of anything that has anything to do with brains.

I take free will to mean pretty much what it sounds like--the entity that possesses it can choose from various behaviors, without constraint by the laws of physics. Here is where we may have a problem--each of the choices must be subject to the laws of physics, but that doesn't mean that there can't be more than one choice. A person can run the various choices through his brain, and physics doesn't limit which of the choices the person chooses.

If that makes sense.

Kevin Clark said...

>>I take free will to mean pretty much what it sounds like--the entity that possesses it can choose from various behaviors, without constraint by the laws of physics.<<

I think that's a pretty good definition. But taking that definition, how could you program a computer to have free will, as you have stated is possible? How exactly could a computer break free of the laws of physics?

Archaeopteryx said...

On the Human Nature Fray, there was a discussion of computers that play chess. These computers are programmed to select an optimum move from the options available to them. Of course, when playing chess, figuring the optimum move depends on analyzing the possible responses from an opponent, figuring the next move from there, reanalyzing, and so on. It turns out that the number of possible scenarios is astronomically large (though not quite infinite), and there's no way a computer can go through them all.

All of the possible moves are within the laws of physics. Since there's no way the computer can consider all the possibilities, it has to select from between a more limited number. There are programs with which to do this, I'm sure, but at the base, doesn't the computer have to make some sort of choice without knowing for sure all the outcomes? Isn't that what free will is?
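As I understand it, the selection works something like this (a rough Python sketch; legal_moves, apply_move, and evaluate stand in for real chess code I don't have): the program looks only a few moves ahead and scores positions with a heuristic, so it has to commit without knowing all the outcomes.

# Depth-limited lookahead (minimax): examine only `depth` moves ahead, then
# guess with a heuristic evaluate() instead of exploring every continuation.
def best_move(state, depth, legal_moves, apply_move, evaluate):
    def score(s, d, maximizing):
        moves = legal_moves(s)
        if d == 0 or not moves:
            return evaluate(s)           # a guess about the position, not certainty
        values = [score(apply_move(s, m), d - 1, not maximizing) for m in moves]
        return max(values) if maximizing else min(values)

    return max(legal_moves(state),
               key=lambda m: score(apply_move(state, m), depth - 1, False))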

Kevin Clark said...

Regarding the computer, you have to start off with the fact that the computer itself does not know, nor analyze, nor consider anything. It merely executes a series of instructions which someone has programmed into it. At base, what the computer software is doing is picking the best course of action based upon the rules it has been given. I've never written a chess program, but it's really just a series of IF-THEN statements--a very complex series no doubt, but no different at base than a simple IF-THEN statement.

Suppose that I wrote a computer program in which a user can enter whether he would like to be given an even or odd number. I could program it so that the software could randomly choose an even or an odd number to return. I could even write it so that the computer would periodically tell the user that it did not feel like returning any number, or it felt like returning an odd number when the user asked for an even number, or vice versa. Now suppose that another computer terminal is linked to a human subject. When the user asks for an even or odd number, the human will decide what number to give or whether to give any number at all. Would the computer software and the human both be exercising free will in their responses? I would say the computer would clearly not be exercising free will, since it is operating according to strict rules over which it has no actual control. The question is whether the human is acting under the same strict rules and actually has no control either.
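Here is a minimal Python sketch of that first program (the probabilities and number range are arbitrary). Notice that every apparent "mood" is still nothing but a rule plus a pseudo-random roll.

import random

def respond(request):
    # request is 'even' or 'odd'; the "moods" below are just hard-coded rules
    roll = random.random()
    if roll < 0.1:
        return None                                          # doesn't "feel like" answering
    if roll < 0.2:
        request = 'odd' if request == 'even' else 'even'      # be contrary
    n = random.randint(1, 100)
    if request == 'even':
        return n if n % 2 == 0 else n + 1
    return n if n % 2 == 1 else n + 1

print(respond('even'))    # an even number, usually; sometimes odd, sometimes nothing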

If we use your definition of free will as not being determined by the laws of physics, then it is simply not possible for a computer to have free will. A computer is merely an executor of instructions. For a computer to have free will, a human would have to develop a program which would instruct the computer to operate outside of the laws of physics. What instruction would this be? How could it be programmed in? If you can think of any way of doing this, I'd be happy to hear about it.

Note: Actually it just occurred to me that you could say a computer is operating under free will if you say the operation of the computer is a reflection of the free will of the programmer. In other words, the instructions that a computer executes were freely chosen by a programmer at some point. Even the chess-playing computer isn't really playing chess. The programmer is really playing chess aided by the computational abilities of a computer. But in this sense you could say a lot of things reflect human free will.