Tuesday, May 20, 2008

Consciousness is Not Physical: Application

Wherein I slaughter the so-called 'thoughts' of one of my betters. Wherein copyright law is followed in spirit but shattered in letter. Here's a link to the original article. I have simply copied the whole thing here, because I'm responding to each paragraph.

Notably, I have proven that the brain is physically acausal; however, this paper predates that proof by several years. I will mostly analyze it on the basis of knowledge available at the time, and I will use it to demonstrate the explicative power of my theory.

Let us not mince words. The difference between something that is and is not conscious is that something's home in something that's conscious, something experiencing experiences, feeling feelings, perhaps even, though not necessarily, thinking thoughts. Don't be lured into details about "self-awareness" and "intentionality." If there's something home in there, something hurting when pinched, then that's a mind and we are faced squarely with two age-old philosophical problems:
There are two hallmarks of consciousness: feelings (or qualia) and decisions. Without feelings there is no consciousness, and without decisions feelings have no function. (I cannot actually prove the second part.) In this context, I sometimes call feelings 'sensation,' 'emotion,' 'experience,' and the like, as each implies the rest.

It seems obvious that neither of these things is physical, but it also once seemed obvious that life couldn't be explained without some life-specific force. Nevertheless, this is as good a definition as any. (Consciousness isn't physical, and so the language we usually use, based on physical things, is almost certainly inadequate to define it precisely.)

The first is: How can we know whether or not something's home in there? We aren't mind-readers. Not even a brain surgeon can guarantee that a patient is conscious. This is called the "other-minds" problem, and it's important to note that it is unlike any other problem in science having to do with the existence or reality of something that is unobservable. Quarks, like consciousness, cannot be observed directly, but there are many things that follow from quarks' existing or not existing, and those things can be observed. Does anything follow from the existence of consciousness, that would not follow just as readily if we were all Zombies who merely acted exactly as if they were conscious?

Either consciousness has causes, or not. Either consciousness has effects, or it does not. If it has no cause, then we have a problem. If it has no effects, then we have a different and somewhat less serious problem. With no effects, we have to explain why evolution hasn't culled away the structures that support consciousness - its causes. (The brain in general, of course, though we know which signals actually lead to consciousness. We've found the specific neural modules which, if not stimulated, do not produce sensation.)

Though, if there are no effects, at least we can say for certain that philosophical Zombies are logically coherent. They may not be.

If consciousness does have effects, then we can test for them; we simply need to figure out what they are. Although if certain physical effects were synonymous with consciousness, then they would have to be qualitatively different from anything else we can physically describe (like being acausal), for the simple reason that everything we can describe so far is definitely not conscious.

Think about it: Zombies who acted exactly as if they were conscious: Acted for how long? Well, for a lifetime obviously. And what does "exactly" mean? It means that there is no way to tell them apart from one of us based on anything they do. Zombies are functionally equivalent to and functionally indistinguishable from ourselves.

As I mentioned, Zombies are not necessarily coherent, logically. It may be that the effects of consciousness are indistinguishable from consciousness itself, as would indeed be the case if consciousness were physical, since all of physics is defined by interactions. Zombies would of necessity be incapable of the interactions labelled 'consciousness.'

"So cut them apart," you say, "and check what's inside. If it's different from what's in us, that's still an observable difference, and we could conclude from that that they were just unconscious Zombies." But could we really draw that conclusion if they were made of different stuff? What if they came from another planet: Would the fact that their innards were different be enough to convince you that they didn't feel pain when they were pinched and screamed? Would you yourself like to submit to such a verdict on another planet?
Never use emotional arguments like this. It's inherently dishonest. No, of course I wouldn't want to submit to such a test. So what? If this argument is actually valid, but in disguise, then he should explicitly lay out how it's valid.

In this case, his argument is that, given a test such as this one (a causal, structural test like the litmus test), the only way to verify that it tests for consciousness is to...test for consciousness.

This, I just realized, generalizes to all tests. There's exactly one known test, and it's in the form of a question: "Are you conscious?" Technically, the screaming test is simply a form of this question. Any objective test can only be verified in reference to this one. This means that without knowing what consciousness actually is (something even I don't know), there's no way to test for it.

To put it another way: the problem is that it's obvious you could script a robot to do anything a human can do, which means that there's no test that can distinguish a human's consciousness from that of a rock. (Aside from acausality.)
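The calibration circularity can be sketched in code. This is a toy of my own devising, not anything from Harnad's article; the entities, tests, and field names are all invented for illustration:

```python
# Toy sketch of the calibration problem: any proposed objective test for
# consciousness can only be scored against the one known test -- the
# question "Are you conscious?" -- never against consciousness itself.

def question_test(entity):
    # The only known test: ask, and take yes for an answer.
    return entity["answers_yes"]

def candidate_test(entity):
    # A proposed objective test: does it scream when pinched?
    return entity["screams_when_pinched"]

entities = [
    {"answers_yes": True,  "screams_when_pinched": True},   # a person
    {"answers_yes": False, "screams_when_pinched": True},   # a screaming alarm
]

# "Accuracy" here can only mean agreement with question_test, not with
# consciousness itself, to which we have no independent access.
agreement = sum(candidate_test(e) == question_test(e) for e in entities) / len(entities)
print(agreement)  # 0.5
```

The circularity is baked in: however the candidate test scores, the score is measured against the question test, so the candidate can never be validated as anything more than a proxy for asking.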
Or would you feel more comfortable pronouncing such a verdict if they didn't come from another planet, but were built in a lab here on earth? Is there something about that that guarantees that their screams are not genuine? If you feel there is, then you must feel that you know something about the solution to the second philosophical problem, the mind/body problem:
Again, this is not a valid tactic. A question proves nothing. Indeed, there appears to be no answer, but this is infinitely far from there actually being no answer.

What is consciousness? Let us assume that, whatever it is, it isn't an extra "force" in nature, on a par with electricity or gravity, for otherwise all our thoughts would be telekinetic, mind moving matter, and high energy psychic forces would be duelling with their "duals," high energy physics forces, not only in the world as a whole, but in the Academy in particular, with the prize being the truth or falsity of the laws of energy conservation and perhaps even causality itself.

If consciousness were an extra force, then it would be physical, and defined by equations of the kind we've seen before. There would be no duelling going on; there would simply be a previously undetected force-carrier particle and associated charge that, for some reason, is only present or relevant to particles in brains. Again, while we can think of no reason for our brains to be special like this (except acausality), this doesn't mean there cannot in principle be such an answer.

It's amusing that he's getting an inkling that consciousness violates physical causality. He glimpses the proof I mentioned above, but only vaguely.

Notably, the law of energy conservation cannot be broken without breaking causality, nor vice-versa.
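The standard physics behind this claim is worth spelling out. This is a textbook connection via Noether's theorem, not something in the original article: energy conservation is tied to the time-translation symmetry of the laws, the same symmetry that lawful causal evolution rests on. For a Hamiltonian system, along any trajectory,

```latex
\frac{dH}{dt} = \frac{\partial H}{\partial t}
```

so the energy $H$ is conserved exactly when the laws carry no explicit time dependence ($\partial H/\partial t = 0$). Breaking conservation would require laws that change over time, which is also what it would take to break the fixed cause-and-effect structure those laws define.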

So we will assume, instead, that consciousness is not an autonomous force, but some property or aspect of the ordinary physical forces we already know. If so, then it is incumbent on anyone who thinks he can tell the Zombie from the real thing that he be able to say what this property is. This is a notoriously difficult thing to do; in fact, I'm willing to bet it's impossible, and will even say why:

Pick a property. Any property. It can be anatomical, physiological, chemical or even "functional." Suppose that that property is what determines whether or not something is conscious. Now answer the following two questions:

(1) How could you ever determine whether that supposition -- that that's the property that distinguishes conscious things from unconscious ones -- was correct? That's the other-minds problem again.

Hey! That's what I said. If he knew, why didn't he show knowledge of it above? Also, since I read this article beforehand, I can say for certain that this sentence does not adequately convey the underlying concept - that any ordinary test for consciousness can only be calibrated against our existing, inadequate test.

"Functional" properties are sometimes called "processes" or "emergent properties." Anatomical/physiological and chemical tests are all emergent, as is a lot of physics.

But now let's suppose that the supposition -- that that's the property that distinguishes conscious things from unconscious ones -- was, miraculously, true, even though there was no way we could know it was true:

Again, by taking a qualitatively different property (acausality) we could, conceivably, know it was true without specifically referencing our known test. (Physical acausality, especially the actual implementation, neatly explains many puzzling features of consciousness, including why it's so hard to come up with an objective test for consciousness.) (It's conceivable that there are other qualitatively different yet physical solutions to the consciousness problem, which would mean that I've made a mistake.)

(2) In what, specifically, would its truth consist? What is it that something would lack if it lacked consciousness yet had the property you picked out? For if you pick anything other than consciousness itself as the thing it would lack if it lacked that property that was supposed to be the determinant of consciousness (which would be a bit circular), then one can always say: why can't it have that property without the consciousness? And no one has even the faintest inkling of what could count as a satisfactory answer to that question.

Which, again, does not prove that no one will ever think of such an answer. This smacks of defeatism: we haven't yet, therefore we can't. (Indeed, there isn't anything physical that unconscious things lack, but they do lack something non-physical.)

Console yourself with the fact that you are not alone, in facing this problem. It's not just centuries of philosophers who have wrestled with it in vain (and don't let anyone tell you the problem's only as old as Descartes, or that it's Descartes' fault, or anything like that: the problem of mind is as old as philosophy and it besets anyone who reflects on the nature of the mind): In particular, it is not only neurosurgeons, experimental psychologists, and ordinary people who are not mind-readers: The Blind Watchmaker (Who designed us though trial and error based on random mutations and their consequences for survival and reproduction) is no mind-reader either. He could not have let the conscious ones through and exclude the Zombies, because the two are functionally equivalent and functionally indistinguishable, and survival and reproduction are purely functional matters!

I knew it didn't start with Descartes! Ha! Score one for instincts! This problem is as old as thinking, actually. The original spirits and gods were plausible for the simple reason that consciousness is so obviously not physical. Inevitably this means that it's plausible that non-brains also have spirits. (Is it plausible today? Check; do they have a mind node?)

And indeed, what possible purpose can consciousness serve to the Blind Watchmaker?

Doesn't matter, actually; it just has to serve some kind of purpose.

So what's a scientist to do, if he makes the mistake of staking out the mind as his terrain of inquiry? If the other-minds and mind/body problem are insoluble, does that mean that the mind is not scientifically investigable?

Only that it cannot be investigated directly, the way most things are investigated. It can be investigated indirectly, however, and perhaps eventually cornered by a series of approximations. Consider that we have been pretty cavalier about the problem of designing a Zombie: Doing it is not as easy as imagining it. There are plenty of formidable scientific problems to solve before we need to begin worrying about whether or not the functionally equivalent Zombies we've designed are conscious: We first have to generate their functional capacities.

It's strange that he's so bad above and yet so good right here. If we take 'direct' to mean 'objective experimentation' and 'indirect' to mean 'logic and subjective experimentation' then he's dead on.

Why a philosopher cares about engineering a Zombie, I'm not entirely sure. The problem is whether Zombies are logically possible, not the specific inventions necessary to create one, although such inventions would be very helpful in narrowing the search.

Actually, I think scientific mind-theory is better described as reverse bioengineering: Ordinary engineering applies basic physics and engineering principles to the design of systems with certain functional capacities that are useful to us [bridges, ovens, planes, computers], whereas a scientific theory of mind would first have to successfully second-guess what gives creatures like us, already ready-made by the Blind Watchmaker, our functional capacities.

Quite. Although, having reverse-engineered ourselves, we would then be able to test other entities for consciousness (important for avoiding cruelty), and also to design new consciousnesses or alter our own.

Further, we would be able to say for certain whether emotions are morally relevant or simply an obstacle to be overcome. In other words, when we suffer (in particular, not in general), does it mean that what happened to us was wrong? Should we work to overcome the suffering itself or simply its causes?

Right now, we have only non-credible (used to be incredible) ways to reduce suffering itself. In most cases, we wish to avoid the sources of suffering, which means we are validated in defending ourselves from aggressors, and usually correct in identifying such aggressors.

Why none of this is apparent to the good Stevan Harnad, I don't know.

So the road ahead of us is pretty clear for the time being, even though we have reason to believe there is a cloud at the end of it. For now, we need to devote our time and ingenuity to second-guessing those functional capacities until we manage to scale up to a Zombie. It should be some consolation that the usual rules of scientific inquiry are in effect for the functional part of our quest. It's easy to reverse-engineer a few isolated pieces of our functional capacity, and there are many different ways to do it, but as the functional chunk we take on gets bigger and bigger, the number of different ways it can be successfully generated gets smaller and smaller.

Weak. I respond with one of my favorite articles, specifically calling out the hard problem, which Stevan just spent several thousand words describing.

(Also, since I clearly took a very different tack in solving the problem, the dude is clearly wrong. I'm not sure why he has such a problem distinguishing the practical from the conceivable.)

This is ordinary scientific underdetermination: You can always predict and explain a small body of data in lots of ways, most or all of which have nothing to do with reality. But as you predict and explain more and more data, your degrees of freedom shrink and your theory gets more powerful and general. The hope, in all areas of science, is that when it is complete, and predicts and explains all observable data, then your theory will have converged on reality; it will be the true theory of the way things are. It might not be. Perhaps there will be another theory that explains it all too, and there won't be any way to know which one's true. (Even picking the simpler theory, if one of them is simpler than the other, may not be the right choice, because the world may simply not happen to be the simplest one it might have been, while still preserving all appearances.)

All of which have nothing to do with reality? How much science do you know that describes nothing real? (I ask because you might be able to answer.) Where I come from, science that describes non-real things is called fantasy. Entertaining, but not science.

Still, nice job in calling out physical equivalence. If two theories make the same predictions, it's impossible to decide between them. For instance, determinism vs. free will: at this point they make exactly the same predictions; they are the same theory; pick whichever one you like best.

Still, in the case of physics, the theory is getting so complicated and precise that equivalent theories are getting very, very hard to come by. It's likely to converge on a single theory at some point. I often wonder what will happen if physics actually ends. All I can say is that I hope the government isn't still paying for it, because it's not likely to stop just because physicists have nothing left to do. Destroying wealth like that is the government's job, after all.

Hey...isn't Stevan Harnad taking government money?
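Harnad's underdetermination point a few paragraphs up can be made concrete with a toy of my own (the theories and numbers are invented purely for illustration):

```python
# Toy underdetermination: two theories agree on all observed data,
# so the data cannot decide between them -- until new data arrives.

# Three observations, generated by the "true" law y = x^2.
observations = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]

def theory_a(x):
    # The true quadratic law.
    return x ** 2

def theory_b(x):
    # A cubic that also fits every observation exactly: the extra
    # term x(x-1)(x-2) vanishes at each observed point.
    return x ** 2 + x * (x - 1.0) * (x - 2.0)

# Both theories predict the observed data perfectly...
for x, y in observations:
    assert theory_a(x) == y and theory_b(x) == y

# ...so only a new observation can tell them apart.
print(theory_a(3.0), theory_b(3.0))  # 9.0 15.0
```

As the body of data grows, fewer rival theories survive; that shrinking of the degrees of freedom is exactly what Harnad means by the theory getting "more powerful and general."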

This is very much the way I think it will be at the end of the day (or at the end of the road, rather, if we stick to our previous metaphor), when we have reverse-engineered a complete Zombie, functionally equivalent to and functionally indistinguishable from us in any way. There is of course the possibility that there will be several, radically different, but equally successful Zombie designs. Cutting them (and ourselves) up, at that point, may be the only remaining way to narrow down the differences. We could insist that in the case of the reverse engineering of the mind, "all the observable data" means not only all the behavioral data, but all the neural data too, and we may want to put our money only on the Zombie that is indistinguishable from us in both respects.

Again, Stevan confuses the conceivable with the practical. Obviously, in principle, a Zombie's brain would have to look pretty much indistinguishable from a human's brain as well. Cutting them up is not a worthwhile endeavor. This is how we know that Zombies are only possible if consciousness doesn't actually do anything.

In other words, the last half of that paragraph logically eliminates the need for the first half. We'll call the first half the government half. This, unfortunately, is almost certainly an apt name.

I somehow doubt that will be necessary though. I really think that the task of generating our full Zombie capacity probably already narrows the degrees of freedom enough to exclude all nonconscious candidates. I draw some solace, for example, from the fact to which I have already drawn attention, namely, that the "forward engineer" (the Blind Watchmaker) whose work we are reverse engineering had nothing stronger to go by either. But does this mean that the mind/body problem is really just another example of scientific underdetermination that will be settled by whatever candidate makes it to the home stretch at the end of the day?

I feel no embarrassment when I admit that I have no idea what the above paragraph is supposed to mean. With reference to my proof it might mean something, but without such, it's probably insane.

Not quite. For that would be all there was to it if consciousness were like quarks, that other example of an unobservable that I mentioned earlier. One can, without too much loss of sleep, accept that if the winning theory says there are quarks -- because with quarks it can predict and explain all the observable data, whereas without them it can't -- then it's safe to accept that there are indeed quarks.

But I have to remind you that our complete reverse engineering theory, the one that generates our full Zombie capacity, will be entirely mute about consciousness, and will be just as capable of predicting and explaining all the observable data with or without the supposition that the Zombie is conscious.

Perhaps another way of putting it is that the complete Zombie theory will explain all the data except one: The fact of the existence of consciousness itself. This fact is at the heart (or rather the mind) of the very idea of "observation," and it's a fact that each of us can "observe" to be true in his own particular case.

Again, hard problem of consciousness vs. the easy problem. Yeah, we know. What we want to know is the actual logical relationship between the concept Zombie and the concept consciousness, and to know if such assertions are bold, or merely illogical.

Good job in pointing out how we know quarks exist, and why it has no particular ramifications that we didn't already know. Yes, we can't observe them directly. No, that doesn't mean all those other things we can't observe directly are any more plausible.

Also, good job pointing out that objectivity is only experienced through subjectivity. In other words, consciousness exists even if nothing else does. (Other things exist, I'm just saying.) However, the only people who I've seen run through the ramifications of this got it completely wrong. Still, Stevan doesn't do so either.

So clearly the Zombie theory has left something out. Hence there is still something different here, something special about the mind/body problem, and something that eludes a scientific theory of mind unlike anything analogous in a scientific theory of matter. Maybe it's safe to assume that consciousness will somehow piggyback on Zombie capacity; maybe not. It might be some consolation that if it doesn't, we can never hope to be the wiser. But I think it's nothing to lose sleep about, at least not for a long time to come.

Long time to come, eh? Ha! Eat it, Stevan Harnad of 1995.
