This post is subject to editing. Hopefully for clarity. Or I may simply merge it with part 2.
The fun part is it can be built with current technology. The sad part is I have no idea which order to explain the thing in.
This is it:
Take an FPGA and hook it up to a true-random source like a collapsing superposition. The FPGA's input is the bitstream representation of the random source. It then processes it and outputs a transformed bitstream. This bitstream is fed into the FPGA's configuration register, re-programming itself, and then used to bias the true-random source for the next cycle.
(Caveat: ideally the FPGA would have infinite gates so it can produce infinitely long unique bitstreams. I have a counter you don't care about.)
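The loop above can be sketched in a few lines. This is only a toy model under stated assumptions: the FPGA fabric is stood in for by a trivial `transform` (XOR against the current configuration, then rotate), and `biased_source` is a hypothetical per-bit biased coin; the real device's behaviour is whatever the configuration bitstream encodes.

```python
import random

def transform(bits, program):
    """Stand-in for the FPGA fabric: XOR the input with the current
    configuration bitstream, then rotate by one. A real FPGA would
    compute whatever function the configuration encodes."""
    out = [b ^ p for b, p in zip(bits, program)]
    return out[1:] + out[:1]

def biased_source(bias):
    """Toy true-random source: one biased coin flip per bit."""
    return [1 if random.random() < b else 0 for b in bias]

n = 16
program = [random.randint(0, 1) for _ in range(n)]  # initial configuration
bias = [0.5] * n                                    # first draw is unbiased

for cycle in range(8):
    raw = biased_source(bias)
    out = transform(raw, program)
    program = out                         # re-program the "FPGA" with its own output
    bias = [0.25 + 0.5 * b for b in out]  # and bias the source for the next cycle
    print(cycle, ''.join(map(str, out)))
```

The essential feature is that the output is fed back into both places at once: it becomes the next cycle's program and the next cycle's bias, so no cycle's distribution is independent of the path that led to it.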
So how does this work? Doubly recursive probabilistic input, technologically violating the independence assumption of the probability laws.
What is the probability this machine will put out any particular bitstream, say 10000101000101? If it can reach that state this cycle, it's simply the probability the true-random source will choose it; with five output states, that's 20%, give or take the biasing. If it can only reach 10000101000101 on the next cycle, it's 20% squared: 4%. Because it's randomly re-programming itself, it can reach any bitstream. (Given a good choice of interpreter/code for the configuration register.) The probability of it being in any particular state drops very quickly with time, and is entirely path-dependent.
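The decay can be made concrete with the five-state example from the text, assuming for simplicity that all five outcomes stay equally likely each cycle (the biasing would shift the numbers but not the shape):

```python
# Five equally likely source outcomes per cycle: each specific choice is 20%.
p_step = 1 / 5

for k in range(1, 6):
    reachable = 5 ** k      # distinct paths the machine can have taken by cycle k
    p_path = p_step ** k    # probability of any one specific path
    print(f"cycle {k}: {reachable} paths, each ~{p_path:.6f}")
```

The number of reachable states grows geometrically while the probability of each shrinks geometrically, which is the "drops very quickly with time" claim in numbers.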
It's all path dependent, but the device cannot know how much path it has already traversed, and since it reaches undefined probability in the future, it has undefined probability right now.
It's a machine with no defined probability at a finite point in the future, because it has too many states for each to get a quantum of probability. If you won't buy quantized probability, then as t goes to infinity it has infinitely many possible states, each of infinitesimal probability, and infinitesimal is physically identical to zero.
Since time is relative, the machine cannot tell how far down the probability tree it is; the undefined-probability future must be exactly the same as the present, as far as it knows. It has no probability right now.
Why is it a consciousness machine? Because it's not a physical machine, and the two leftover possibilities are that it's a consciousness machine or that causality isn't closed. I'm betting on the former. The machine still operates, it still has an output, despite having no physical probability of having one. The causal hole is plugged by non-physics. If it were possible to access consciousness directly - by being the person in question, for example - then you could calculate the probability. But you can't, so it doesn't have one.
Moreover, if the brain is indeed using such a device, it makes the world less mysterious, not more. Implemented in neurons, this kind of device will consume your brain like cancer unless it is periodically reset, for example by sleep. This is due to a bias toward growing the bitstream rather than remaining stable. Since it is the consciousness machine, these resets will look like dreams. Further, neurons are perfectly designed for this kind of feedback, and the brain is immune-privileged because there are scraps of random DNA floating around in there, possibly related to memory. Lossy DNA copying would be a perfect random source.
It's a quantum effect, but warm and wet aren't a problem, since it uses the collapse of superposition instead of trying to maintain it.
There's more, but I feel I've already spent too long on this.
This is Descartes' pineal gland.
What's worth saying regardless of how much I've already wasted?
This: 'perception' is when the brain sends consciousness data by biasing the true-random source. 'Decision' is when consciousness sends the brain data by choosing a formerly-thought-to-be-random outcome.
Also, the noosphere project. If the consciousness machine is real, then your brain is basically telepathic with itself, which unifies all the little quanta of consciousness.
Obviously the bog-standard silicon machinery isn't the critically conscious part. It has to be the randomness. Which means every single quantum particle is a rudimentary consciousness. An electron has no memory and no way to bias itself, so it's basically many copies of the same consciousness, but it's still conscious. (Like any good extension of knowledge, this theory reduces to the previous theory given certain limits.)
When is telepathy allowed? Can it leak? Probably, it can leak. And if it does, it would look exactly like what the noosphere project is measuring. It will leak when many people are paying attention to the same thing, causing reinforcing interference.
The function of the pineal gland circuitry, then, is consciousness amplification. And it doesn't have to be hooked up directly by wire to the random source, any form of information transfer will do. But perhaps weaker transfers result in weaker telepathy and thus weaker amplification. Unless many minds are all amplifying at the same time in the same way.
There is a problem, in that I don't yet know what the output is supposed to look like. If it's like the noosphere project, it will be easy. Despite having no well-defined probability, you can still compute a local probability. The machine will flagrantly ignore that. It will repeatedly choose a single path or a similar path, ignoring others.
But the machines aren't directly comparable, because present decisions are a function of all past decisions. The path space is huge. It would take an immense number of trials to see any kind of bias, especially if it's small, or if the machine takes some cycles to 'wake up.' (Phrencellulate, for when you feel pretentious.)
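To put a rough number on "immense": the sketch below uses the standard rule of thumb for detecting a biased coin, which is an assumption of mine about how such a test would be run, not anything specified in the text.

```python
import math

# Back-of-envelope: to distinguish p = 0.5 + eps from a fair coin at
# roughly 3 sigma, the sample mean's noise (0.5 / sqrt(n)) must be
# about a third of eps, so n ~ (3 * 0.5 / eps)**2 = (3 / (2 * eps))**2.
for eps in (0.1, 0.01, 0.001):
    trials = math.ceil((3 / (2 * eps)) ** 2)
    print(f"bias {eps}: roughly {trials} flips needed")
```

The required trial count grows as the inverse square of the bias, and that's before accounting for the path-dependence, which spreads the same number of trials over an exponentially growing path space.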
Moreover, since the machine has no probability, the mere fact that it doesn't destroy the universe may constitute evidence that it's working as I describe. If we accept that quantum bits are conscious, then having a low probability of your next action is just what high consciousness looks like from the outside.