Sunday, July 24, 2011

Comment Consciousness

Wherein I test myself by explicitly writing down my answers and checking for integrity.


An open thread developed a decidedly philosophical tone. The commentary was pretty clueless. I wonder if I've been fooled - if humans don't much care about these issues, and the topic is actually just a vehicle for some subtext I'm missing entirely. (Bonus topic: for delusion as a political resource, ctrl-F 'exhibit C.')

That would be lonely for me. I care about the formal issue a great deal.

It seems many threads are prone to developing a philosophy growth. Do you see it a lot too? Yet, intentionally opening the topic is like pulling teeth. Could that be because it gets opened a lot, and strong feelings fly everywhere, but it doesn't get resolved, teaching the wisdom of forbearance?



Justin:
"OTOH, what if the computer were physically integrated into your brain? Then it seems reasonable that you would share consciousness."
Yes, consciousness is shareable. Experiments show that cutting the corpus callosum results in twin consciousnesses. Any reverse procedure will unify consciousnesses. Presumably even unconscious hardware can be made conscious by hooking it into an existing consciousness.


daedalus2u replies:
"Consciousness is an illusion, albeit a persistent one. The entity that is “you” is not self-identical with the “you” of a week ago or a year ago. "
If consciousness is an illusion, who is being deluded? The second sentence is the real illusion here; it has nothing to do with the first, let alone proving it.


PeterW:
"What I think this proves is that we have really crappy intuitions and ways of thinking about consciousness"
Indeed true.

If you could define consciousness, it would be possible to prove that it is an illusion. You could list the properties it has, and then show that humans think they have these properties but in fact don't.

However, there's a very good reason it is impossible to define consciousness. Consciousness is definition itself, concepts, ideas - information. It is impossible to recognize unless you already have some. If this doesn't make sense to you - try to define language in a way understandable to someone who cannot use language.

Try this for scaffolding: language is any transmittable encoding. The purpose is to symbolize a thought and give it away, so that the thought can be decoded from the symbol and duplicated. Now, try to transmit that thought to someone who can't already use language. Have fun!
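
A toy sketch of the bootstrapping problem, in code. Everything here is my own illustration - thoughts are crudely modeled as strings, and the codebook stands in for the whole apparatus of a shared language:

```python
# Sketch: language as a transmittable encoding. All names are
# illustrative; 'thoughts' are crudely modeled as strings.

# The shared mapping IS the language. Both parties must already
# possess it before any symbol can carry a thought.
CODEBOOK = {"the-fire-is-hot": "symbol-A", "run-away": "symbol-B"}
REVERSE = {symbol: thought for thought, symbol in CODEBOOK.items()}

def encode(thought: str) -> str:
    """Symbolize a thought so it can be given away."""
    return CODEBOOK[thought]

def decode(symbol: str) -> str:
    """Duplicate the thought from the symbol - possible only for a
    receiver who already holds the codebook."""
    return REVERSE[symbol]

# Works between two codebook-holders:
assert decode(encode("run-away")) == "run-away"

# To a receiver with no codebook, 'symbol-B' is just noise - and the
# codebook itself could only be transmitted encoded in some language.
```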

I could also attack this from the angle of the nature of subjectivity, but daedalus provides a better opportunity, immediately below.

As for the second sentence, it's pretty well true that I'm not the same me I was a moment ago. Both of us were conscious, though. Whether I'm confused about being meaningfully the same or not is irrelevant, because there was a me there to get confused about it.


daedalus2u continues:
"You could of course program a computer to “think” it was you, and to respond every time you asked it who it was to blurt out your name. "
Here I get the disturbing impression that daedalus is a real live philosophical zombie. To respond by saying I'm conscious is not the same thing as to be conscious. To give the right answer is not the same as to understand the right answer.

I don't merely produce symbols which you can decode into meanings. Those symbols correspond to actual sensations I'm experiencing. Consciousness is subjectivity itself. Those symbols mean something to me whether I'm uttering them or not. Try programming your computer not to emit the symbols, but to mean them anyway.
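
For contrast, here is roughly all that daedalus's programmed computer amounts to - a sketch of mine, with a hypothetical placeholder name:

```python
# Sketch of the computer daedalus describes: it emits the right
# symbols on cue. The name below is a placeholder, not anyone real.

def blurt(question: str) -> str:
    """Return correct symbols. No sensation corresponds to them; the
    string means something to the reader, not to this function."""
    if "who are you" in question.lower():
        return "I am YOUR_NAME_HERE."
    return "I did not understand the question."

print(blurt("Who are you?"))  # the right answer; nobody home
```

The emitting half is trivially programmable. The meaning half is the entire problem, and it appears nowhere in the code.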

(Question that interests me - hereafter, QTIM: can the symbols I'm thinking be objectively picked out of my brain, or are they epistemically unavailable until I utter them?)

Ultimately, consciousness can't be an illusion because there's an unexplained phenomenon which I experience - namely, that I experience anything at all. You don't get to explain the phenomenon of sensation by denying the phenomenon of sensation:

"Hmm, that's weird. What is this experience stuff, qua experience?"

"Oh, it isn't. You're not really experiencing anything. You only experience yourself as experiencing things."


IVV:
"Thus, consciousness must instead be an algorithm, independent of a physical basis. And if consciousness is an algorithm, the computer you is no less you than the meat you."
A valiant attempt. Doomed by monism, but quite valiant.
This is an epiphenomenal philosophy, and thus not actually naturalist at all, but there's a side quest here I find interesting...
If consciousness is an algorithm, everything is conscious. Indeed, everything is multiple consciousnesses.
Math can't tell the difference between parts of itself, because there are no qualitative differences between one equation and another. If one algorithm is conscious, then all of them are, to some degree. All I have to do is set the variables in your putative consciousness function such that it evaluates to one, and I can divide it out of any other equation.
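
Spelled out in symbols - my notation, not IVV's, with C as the putative consciousness function and v an assignment of its variables chosen so that C(v) = 1:

$$C(v) = 1 \implies E = E \cdot C(v) = \frac{E}{C(v)} \quad \text{for every expression } E.$$

Every expression factors through an evaluation of C, so if evaluating C is consciousness, every equation is conscious.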

Second, while trying so very hard to be a naturalist philosophy, it fails and ends up being property dualism. The math resolves to be what it resolves to be regardless of whether the equations are conscious. Therefore, this consciousness hypothesis predicts nothing at all. It is epiphenomenal.

It gets worse. If everything is conscious, then the universe as a whole is conscious. So - the consciousness of the entire universe...you don't mind if I just call it 'God,' do you?


Justin:
"I think a more reasonable alternative is that consciousness is an emergent property that supervenes on the physical."
(Another QTIM: These opinions are presented as rationally reached by the presenter, instead of being received; is there any significance here? Yet, the incarnations are identical across instances...)

There are two kinds of relationships in the world - identity and causation. Supervenience is just a vague mash up of particular causal interactions, which muddles perception rather than refining it.

Emergent properties are not real properties. Mereological nihilism is the only reasonable viewpoint.

Assume you have a model of a brain and its behaviour, fully accurate and vetted. It clearly shows what we consider and agree to be evidence for conscious behaviour.

I take your model. I simply say, 'no it isn't.' I believe all your predictions except the idea that it is conscious. I wouldn't use the word, I would simply describe the behaviours you call 'consciousness.' Can you prove me wrong? What prediction am I contradicting with my stance? I have all the observable facts that you do, but one less entity. Emergence yields to Ockham like butter to a knife.

If you actually had an omniscient brain model, you'd find that consciousness is a fundamental property. (Sort of - ask if you want my explanation.) You'd find that without inserting it as an axiom, you can't get your model brain to function like a real brain.


Justin continues:
"But I think Constant is correct below – once you really start to think about identity you must recognize that if naturalism is true, that a persistent identity of any type is an illusion."
I check on Constant:
"The appeal is that a lot of people think you’re wrong about the persistence of personal identity. So your opinion (that it would not actually be you) is simply not shared. I’m among those, but the topic simply can’t be done justice in blog comments."
I guess Justin's reference was an error on his part.

Actually serious is Justin's error about identity. While it's pretty well true I'm not the same me as I was, it is only trueish in a sense. Sure, I react differently now than I used to. Elements have been added and removed.

But whether I'm essentially still me depends only on how I define identity. Identity isn't an actual physical thing, it is an idea. To be me is just to closely follow the idea of myself.

For most it is easy - by 'me' they just mean 'the body that uttered this symbol.' This folk definition has been good enough ever since there was a folk to have definitions. Shockingly, it isn't complete, but it is a consequence of the complete idea.

This is one of the many puzzles substance dualism instantly solves. I am my consciousness, its properties and contents. It feels like the same consciousness as before, therefore it is - sensation is the basis of existence in consciousness terms. If I can't feel it, it isn't me.

Hopefully, working out specifically what I mean temporally will make this clearer. A good opportunity is next.


Stephen R. Diamond:
"We consider ourselves the same entity tomorrow as today because we care about tomorrow’s version. We don’t care about it because we’re the same entity, about which there’s no fact of the matter."
So close. I will feel myself tomorrow, therefore it is still me.
No fact of the matter? Ouch, that sounds like a painful philosophical knot.

It is shockingly simple. (This issue is explored again, below.) A time ago, a consciousness contained some sensations. Later, that consciousness contained the sensations I'm feeling now. In the future, if my body is cut, that consciousness will contain the sensation of bleeding. What makes it the same? Simply the fact that it is. Sheer law of identity. What makes 'I' different from 'other' is simply this: if a cut happens in the future and it's me that's cut, I'll feel it; if anything else is cut, I won't.

And that's it. That's the whole issue.

One of the reasons I would like to see dualism taken seriously is because consciousness need not respect physical time, which may make the entire question moot. Your consciousness disappears when you sleep? Perhaps it doesn't - consciousness may not care one whit about physical time periods it isn't connected to. (This is one reason I'm a presentist - otherwise, consciousness could easily travel through time. Luckily, the physical past and future don't exist for it to connect to.)

Though ironically this definition shows that I'm not really my body. I am instead closely synchronized with my body - I am the conscious representation of it.


Stephen continues:
"It’s an interesting empirical question then whether we care about this uploaded entity"
If you prick it, will I feel bleeding? This further question was answered by the corpus callosum data, above.


Stephen continues:
"I do not find that I care in the least, and I wonder whether those who do are merely telling themselves they ought to care, on account of their (metaphysical) theory of identity."
Stephen's knot is apparently in a thorny vine.
So his empirical question is whether you care about an uploaded entity, not whether you in fact feel that entity's pleasures and pains.

Again, ouch. That sounds like some seriously painful confusion to be in.


daedalus2u replies:
"I can care about an entity without thinking it is self-identical to a version of me. Actually I care more about entities that I know are not self-identical to me at any time, my children."
If caring about a consciousness were enough to be that consciousness, we'd be joining and leaving all these quasi-gestalts all the time as our sympathies ebbed and flowed.

As clarified below, Stephen means 'care' in the same sense you care about yourself now. Or, monist epicycles, exhibit A.

(daedalus repeated this kind of misunderstanding with me about the word 'voluntary.') 


janos replies:
"You care more about the person you’ll wake up as tomorrow than the brainscan-cloned copy of you because you’ve learned to; every day of your life you start a day sharply influenced by a you-before-going-to-sleep, so you learn that you’re better off being nicer to you-waking-up when you’re you-before-going-to-sleep."
For janos' analysis to work, the person you woke up as today has to care about today. Seems obvious, yeah? Only, that means you have to actually feel the events of today. You have to be conscious. I can easily extrapolate that I'll feel the events of tomorrow too, which is why I plan for it.

Or, monist epicycles, exhibit B.

Consciousness seems irreducible, therefore, it is. The path of accepting your own observations is much easier than trying to fight them.


janos:
"Interesting empirical data might come from creating an experimental society of people who never sleep, and then telling them that they will be forced to sleep and seeing whether they identify with the post-sleep versions of themselves, and feel that they should care about them. Perhaps some variation of this could be tried with animal subjects?"
Monist epicycles, exhibit C.

You might care to consider the idea that when you start rejecting your own observations, you tie yourself in knots, and it never stops. Those knots are cancerous corruptions in your beliefs, and like cancer, they spread. Truths further and further away from the original knot will conflict with conclusions spreading from it...and lose, unless the knot itself is dealt with.

Then, typically, an unscrupulous person takes advantage of that spreading corruption to manipulate you to serve their goals above your own. For example, I could apply for a grant to do that useless animal study, use janos as support, and thereby con a sinecure out of the taxpayers, including janos. I would fudge the data to give the answer janos would find most enthralling. (Remember that 'thrall' means servant or slave.)

If you reject your own observations rather than explaining them, you give others the opportunity to substitute false observations for your own. By now, this is widely known and I regularly see propagandists attempting to get a target to reject their own observations. These propagandists are often called 'teachers.' They're the ones who do the grunt work of breaking down the resistance to being fooled, so that twats like the original 'consciousness is an illusion' guy don't have to work so hard at it.

If you tie a child in enough knots, they lose the ability to untangle them. And indeed, most public school victims never do.

One of them is named janos.


Stephen replies:
"Perhaps I should have been more precise. The point is that you care for your future self in the same way as your present self. One of the distinguishing characteristics of that kind of caring is its ineliminability: in principle, you could stop loving your children; you could never stop caring about yourself (even if that caring consists in wanting to die)."
Ineliminability? Stephen admits to one of the properties of consciousness, without actually admitting to consciousness itself. While ignorance must be accepted in general, any knowledge in specific should, I think, be celebrated.


Stephen continues:
"I wouldn’t [upload]. It just doesn’t seem like it would be me (and what it seems like is all there is to the question of whether it is)."
It wouldn't be you. Unless the reverse corpus callosum thing was performed.
Stephen gets exactly the right description of perception. I'm not sure if Stephen got the context right, but that's definitely the right track.

(QTIM: Necessarily, I have to evaluate 'Stephen is right' equivalently to 'Stephen agrees with me.' Us both rationally explaining the same data should look similar to us both receiving ideas from the same sources...but perhaps not identical? I can check that my own ideas are the more consistent ones, but I can't easily run that test on two not-me thinkers.)

Also, it wouldn't be conscious, because consciousness isn't an algorithm. Basically, Descartes was right: there's an organ in the brain that tethers consciousness to it. Though not the pineal gland - more probably an organelle that lies in every neuron, or possibly a particular subtype of glial cell. Since we don't yet know what it is or how it works, it can't possibly be placed in a computer to which consciousness is supposed to be uploaded.

Though I should stop giving this whole uploading thing the benefit of the doubt. It's absurd, even tiresome.

Consciousness isn't physical, but it is tethered to particular chunks of physics. It won't ever be possible to un-tether it and re-tether it elsewhere, because that physical presence is intimately connected to, and affects, the consciousness itself. The un-tethering process is physically indistinguishable from dying, and will be just as irrevocable. Consciousness will one day be copyable, though, if neither our energy nor our computational capacities hit a wall. Each copy will either continue a separate existence - better than pure annihilation, but it won't seem like part of you - or else be, in effect, a computer Bluetooth-linked to your brain, in which case you'll still object to your brain dying the same way you object to having digits cut off.

'Course if we can link brains with Bluetooth we'll probably end up in a hive mind anyway. Well...multiple competing hive minds.


daedalus2u replies:
"When I see an optical illusion, I know the unrealistic object is due to a defect in my visual perception. I don’t change my perception of reality because I see an optical illusion. Why would I change my perception of my conscious reality if I observed a consciousness illusion of the type you describe?"
An optical
illusion is a representation in consciousness which reflects
a physical object
that does not exist.

A consciousness
illusion is a representation in consciousness which reflects
an aspect of consciousness
that does...not...exist?

No, wait...something's...wrong...

A logical
illusion is a representation in consciousness which reflects
a logical structure
that proves something that is false.

The reality is that meta-cognition can misrepresent cognition, but you cannot mistake cognition itself. If you perceive a red ball, that is 100% reliable evidence that you perceive a red ball, regardless of what you 'should' be perceiving. This is simple law of identity stuff, folks. ("What it seems like is all there is to the question of what it is.") Once again: if consciousness is an illusion, who is being fooled?


Justin replies:
"The correct response is to cease caring for your future self at all, except to the degree that evolution has bestowed some irrational, non-truth seeking sense of regard for one’s future self."
Remember what I said about delusions spreading?
It's irrational to care whether you're cut tomorrow, apparently. It's not true that you'll feel bleeding...I- I guess?


Constant helpfully piles on:
"I don’t think that the persistence of self, either in one’s own body or as an upload, is a delusion, because in order for it to be a delusion it must be a false belief, and in order for it to be a false belief there must be truth conditions which are not satisfied, and I don’t think it’s the sort of thing that has truth conditions, at least not in the usual sense, at least not in this situation."
No truth conditions! What an elegant evasion; naturalism yields nonsense, therefore we'd better give up. What a gorgeous example of craftsmanship. The craft in question is sophistry, but still...beautiful. Just beautiful.


Constant continues:
"But once we introduce science fictional elements such as uploads or star trek transporter accidents (where two Kirks are produced instead of one) and so on, then our ordinary methods of checking produce conflicting results."
Conflicting results? May I suggest that when your methods give conflicting results, most likely your methods are broken?


daedalus2u replies:
"A delusion is a belief in the face of overwhelming evidence that the belief is wrong."
Does reality check whether you've seen overwhelming evidence before it decides whether to kick you in the teeth? Every strong false belief is a delusion.


daedalus2u continues:
"I don’t have a delusion about the persistence of identity because I don’t think that identity does persist. I don’t have a false belief about it. My belief follows the evidence. There is no evidence that self-identity persists, I don’t believe that self-identity does persist."
The irony. It burns.

The deluded: believes they have a true belief, or a false belief? The deluded: can tell they've seen overwhelming evidence, or cannot?


daedalus2u continues:
"There is gigantic evidence that there is no continuity of self-identity. Why would anyone think that there is such a thing if not for some pretty strong illusions?"
"Well, according to my delusions, there's tons of evidence for what I think you should think. All evidence to the contrary must be due to illusions."

Apparently daedalus feels no need to share any of this gigantic evidence. (TV Tropes calls it an 'informed ability.')

The sad thing is that not everyone recognizes how absurd such an argument is. Indeed, if your goal is mere persuasion it can even be effective. I don't think that is due to ignorance or illusions; I would bet it is due to training.

Every strongly held false belief is an indication of epistemic failure. The self-deluding parts of the brain take energy, after all, and they are not perfect. Most delusions save energy by being weakly held, which is why the deluded feel the need to lash out and insulate themselves against evidence.

This can be used to recognize delusion in the first person. If a belief is weak and contrary evidence inspires nervousness, it may be a case of not accepting an ignorance. Can you reject "I don't know," the ultimate null hypothesis?

(QTIM: Have you ever felt a mellow confidence in a belief that turned out to be wrong to the point of uselessness? Because I haven't, as far as I can tell. Indeed, I find it so reliable that if my confidence is not mellow, I assume I'm trying to talk myself into it.)


The question that interests me most is why all the commenters, including me, can't come to agreement.

I know I'm biased toward blaming the other guy, but at some point, I should accept it just is the other guy. For example, I'm writing this months after the thread - I still care, but if I posted any of this in the actual thread, it would get no reply.

Is this enough evidence? What else do I need to come to agreement in good faith? At what point must I stop giving the benefit of the doubt about good faith?

Either I'm more right, or they are. If they are, I want to change my mind. (This sounds like bragging, but I mainly mean to expose the idea to explicit falsification.) I want to hear them out and recognize better reason. But if I am, what can I do to change theirs? Do they even want or need their minds changed?


(My answers seem to hang together. I can't find any inconsistencies. If you do, I'm eager to know.)

Thursday, July 7, 2011

The Dynamic Mind; With Open Borders

First, a question: dear reader, can you point me to examples of bloggers or other writers openly and confidently changing their minds? I will demonstrate what I'm looking for.


Two weeks ago, I lost almost all sympathy for libertarian views. Legalize marijuana? Well, of course. Restrict immigration? On what grounds? Then I came to understand anarchism better.

The government is not legitimate. No decision it makes is moral. It has the right neither to condone nor condemn immigration controls.

True freedom is having the acknowledged right to make any decision whatsoever, so long as it is about your own property. It is no more immoral to bar settlement on your property than it is to have rules against bubble gum.

Who exactly can claim legitimate title to the G8 borders in dispute? As far as I can tell, nobody. The ones in control took the borders by force, negating their own claims, and the ones they wronged are dead. But, if someone did perform the necessary trades to own an entire country-sized land mass...well, morality isn't scale-dependent. If they say you can't cross, you can't legitimately cross, end of story.

Even assuming I'm wrong, and that the mythical 'public good' of security is necessarily or inevitably supplied by a state, the simple fact that libertarians find such a minarchist state legitimate gives it the right to secure the citizen against any harm it sees fit to prevent. If its evidence shows that immigration is harmful or has notable downsides, it can secure the citizen against immigrants. If its evidence shows marijuana causes externalities strong enough to justify the cost of prohibition, it can secure the citizen against potheads. Security can't be neatly chopped up into military and non-military. Negative externalities recognize no such distinction.

(This is one reason I buy the argument that minarchism is unsustainable. The night-watchman state has both an interest in and the ability to exploit negative externalities, the way USG4 uses the commerce clause to justify unlimited expansion.)

While I happen to think pot (and most drugs) can't possibly cost more than the war on drugs, in a minarchy the question is one of cost and benefit, not of morality.