I found a hole in my last post, so I'm going to pick at it to see if I can destroy my own logic. If you would rather skip the preamble, it's contained in 1.x. The conclusion is 2.3.1.
I wonder if I can reasonably summarize. Consciousness, the unexplained phenomenon, is inherently epistemically subjective - the explanandum is epistemic subjectivity. Epistemology implies ontology, which means consciousness is also ontologically subjective. As physics is ontologically objective, consciousness cannot be physical.
I'm concerned that if your mind is objective relative to me, I might be able to transform the viewpoint so that it is objective relative to you, without losing features from the description.
(To avoid the flinch away from being wrong, I note that even if I disprove myself this way, I've still solved the problem.)
1.1 Time to double check. Is that really the explanandum?
I think I learn that by attempting the proposed transformation: if it works, then it must not be the explanandum. If I'm truly begging the question, then I should run into a contradiction when I try to enter the logic complex by the other doorway.
For now, there are four entities: the red lamp, the photons the lamp emits and that excite my eye, my perception of the red lamp, and my interpretation of the perception.
If I am dreaming, there may not be a red lamp at all.
In a photon vacuum, the lamp emits only infrared, which I cannot see.
When I close my eyes, it shuts off the perception of the red lamp even in a lit room.
If I do not pay attention to the sensation of a red shape, I am unable to conclude, "I have a red lamp."
(Technically I can break it down further, but not into independent bits. For example, I can tease apart the photon-eye-visual cortex causal chain, but, as a sufficiently healthy human, I cannot shut off the visual cortex except by shutting off the eye.)
Consciousness is the third entity.
I can be dreaming, I can fail to see the lamp, I can screw up the logic and fail to conclude I have a red lamp. However, if I am dreaming of or seeing a red lamp shape, I cannot also be failing to see a red lamp shape. This is epistemic subjectivity, by definition: if I conclude I see a red lamp shape, I cannot be wrong.
1.2 What is a 'you'? What constitutes a perspective?
Already answered. A you is a set of epistemically subjective entities.
1.2.1 Can subjects partially overlap?
The homunculus fallacy is indeed a fallacy. Perceiver and perception don't have independent existences; consciousness is fully constituted by the subjective sensations. As a result, there would be a synchronization issue if consciousnesses tried to overlap: if the non-overlapping parts had any causal influence, then the overlapping parts, having no way to know what the non-overlapping bits were doing, would diverge instantly, contradicting the presumption of unity.
It's of a piece to assume subjective entities get entangled (red + lamp shape) or to assume a single consciousness is a single subjective experience. (Red-lamp-shape.)
1.2.2 Is that what is really bothering me about the idea of perspective?
That plus 1.1, I think so. (I had to try a few times to get it right.) I need to know what I'm going to try to transform, especially as I'm pivoting across a second perspective.
If I'm wrong about 1.1, then 1.2 will topple like a domino. This is good - it means that I don't feel like my supposedly dependent clauses will survive the death of their superiors. If I so felt, it would indicate that I was lying to myself about my justification.
1.3 For my purposes, the key apparent feature of the subjective ontology is control of the properties of the entities. Is this really key or even relevant?
I think it's key because it makes the ontology clear. If you can change the perception by will - stop thinking about red lamps and move onto blue mugs - then to prove the epistemic premise, the experiment is simply to switch back and forth a couple times. External opinions go from true to false and back, while internal opinion remains true.
1.3.1 The control may be determinism from the environment.
Ultimately this is irrelevant, because if the subject is indeed inherently subjective, it will remain inherently ontologically subjective. Therefore, 1.3.1 must also be irrelevant, however handy it is as a thought experiment.
2 Given that I can't find a problem with the foundation, can I pivot the subjective into the objective?
First, I should figure out what that would mean. New hypothesis: consciousness is objective.
2.1 Could consciousness be cloaked, like the black hole's singularity?
Not if it is causally linked, since we could then measure its downstream effects. The cloak hypothesis reduces to non-physical consciousness.
2.1.1 Consciousness is objective.
I can confirm the contents of your consciousness, in principle, by measuring its effects on your brain. The contents, whether decided by will or by determinism, are nonetheless knowable and mistakable by me. Therefore, they are similarly related to you. My observation that my own thoughts are epistemically subjective must be mistaken.
2.1.2 I can state that the brain and mind are different, but that is begging the question.
It is begging the question to say that I can't know what is in your consciousness without comparing it to mine, and matching your brain measurements against mine. By assumption, consciousness is objectively knowable.
2.2.2 This means when you're thinking of a red lamp, there is only one way the red lamp can causally influence your brain.
Since I don't require a conscious comparison, an unconscious observer can (and therefore I can) work out that you're thinking of a red lamp, because a brain with certain correlates can be thinking about a red lamp and only a red lamp; the red lamp is the only possible explanation.
2.3.1 Unfortunately, to clearly state the case is to disprove it. While begging the question in this context, 2.1.2 is true - I can only investigate your consciousness because I have a consciousness.
2.3.2 Encoding is arbitrary.
For a brain, like any computer, to enact the action of reaching out and turning on the lamp, the only requirement is that the input code causes that action. As a basic fact about wiring, any code can be converted to any other code, and arbitrary input codes can lead to arbitrary output codes.
So, for example, imagine a 'real' red-lamp code that, when fed into the motor cortex, causes lamp-turning-on. Imagine it must first be converted to a different, arbitrary code to interface properly with the motor cortex. Now imagine the visual cortex simply produces that different code in the first place. (Standardization across computers is hard. Standardization across brains is much harder.)
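The wiring point can be sketched in code. This is my illustrative analogy, not something from the post itself - the bit strings, the translation table, and the function names are all invented for the sketch. Two 'brains' carry different internal red-lamp codes, yet behave identically, because an arbitrary translation stage converts one code into the other before it reaches the motor stage.

```python
# Minimal analogy (invented names and codes): encoding is arbitrary.

def motor_cortex(code):
    """Turns the lamp on iff it receives the code it happens to be wired for."""
    return "lamp on" if code == "1010" else "no action"

# An arbitrary bijection between codes: any code can be converted to any other.
recode = {"0111": "1010"}  # brain B's visual code -> brain B's motor code

def brain_a(percept_code):
    # 'Canonical' wiring: the visual stage already emits the motor code.
    return motor_cortex(percept_code)

def brain_b(percept_code):
    # Different wiring: an interface stage translates first.
    return motor_cortex(recode[percept_code])

# Behaviorally identical, though the internal 'red-lamp code' differs.
print(brain_a("1010"))  # lamp on
print(brain_b("0111"))  # lamp on
```

From the outside, nothing picks out which internal bit string "is" the red-lamp thought; only the input-output behavior is fixed.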
2.3.2.1 Put another way, can thought-codes be absolute? Can I wire up a red-lamp circuit and have it continually think of a red lamp based on a constant input?
Objective consciousness can be removed from the description without loss of information. You conclude the chunk of silicon is thinking of a red lamp. I reply by simply describing it at the electronic level. You say, "But that is a red lamp!" I reply, "No it isn't." However, my electronic description fully describes and predicts the circuit - the postulate of consciousness is in fact meaningless in this case.
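The removability claim can also be sketched, again as my own analogy with invented names: describe a constant-input circuit purely electronically, then describe the same circuit with the label "thinking of a red lamp" attached. The two descriptions make identical predictions for every state, so the label carries no information and can be dropped.

```python
# Illustrative sketch: the postulate of consciousness as an inert label.

def circuit(state, constant_input=1):
    # Full electronic description: next state and output from current state.
    next_state = (state + constant_input) % 2
    output = "lamp on" if next_state == 1 else "lamp off"
    return next_state, output

def circuit_with_postulate(state, constant_input=1):
    # Same circuit, plus "it is thinking of a red lamp" bolted on.
    thought = "red lamp"  # the postulate; used in no computation below
    return circuit(state, constant_input)

# Identical predictions for every state: removing the postulate loses nothing.
for s in (0, 1):
    assert circuit(s) == circuit_with_postulate(s)
```

The electronic description alone fully describes and predicts the circuit, which is exactly the sense in which the consciousness postulate is meaningless here.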
2.3.3 The conscious sections of human brains would have to be wired identically.
Were someone's brain wired differently, I would have to conclude they are unconscious, or perhaps insane - even if they acted identically due to having properly-adjusted unconscious sections. To assume that the mind is the same as the brain is indeed to assume away the explanandum.
2.4.1 To check: is it indeed impossible to mistakenly observe epistemic subjectivity?
The assumed facts: I perceive epistemic subjectivity, and I am wrong.
2.4.2 Am I perceiving that I perceive epistemic subjectivity? Can I be wrong about that?
Suppose I think I'm perceiving that I perceive subjectivity, but I'm in fact perceiving that I perceive objectivity.
Am I in fact thinking that I perceive that I perceive subjectivity, or am I wrong? Etc...
In other words, it is indeed a contradiction. The chain either terminates at the epistemic subjective level, or runs off into an infinity of mistakenness, meaning nobody is in fact thinking the supposed thought.