Saturday, November 22, 2008

Mind Nodes: A Loophole

Or possibly just a caveat.

Also Planck probabilities, Restricted Boltzmann Networks, a technique for the indecisive, and a tentative solution to the mind-body problem.

It may be impossible in practice to build a mind node, because exceeding the wake-up threshold may not be achievable. (The machine would remain a non-determinism simulator.) I'm going to backtrack to considering the universe, but first I'm going to refine the terms for the threshold.

Planck
Because probability is conserved, there must be a minimum probability, by the NIP. Because there's also a minimum meaningful time and displacement, I'm going to name it after them: the Planck Minimum Probability, or PMP. (Someone actually well versed in the conservation of probability could give it a better name.) The number of possible states, which is 1/PMP, I shall call the Planck entropy, in that the Planck entropy is the maximum number of possible next states a particle can have without running out of probability to assign to them. (It corresponds to a possible theoretical maximum for entropy; there may also be stricter constraints.)
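To make the definition concrete, here's a tiny Python sketch of the relation between the PMP and the Planck entropy. The PMP value is an arbitrary placeholder (a power of two, so the float arithmetic is exact); nothing above fixes its actual magnitude.

```python
# Sketch of the definitions above. The PMP value is a made-up placeholder.
PMP = 2.0 ** -200  # hypothetical Planck Minimum Probability

def max_states(pmp):
    """Planck entropy: the most next states a particle can have
    before it runs out of probability to assign to them."""
    return 1 / pmp

planck_entropy = max_states(PMP)

# Each of N next states needs at least the PMP, and total probability
# is conserved at 1, so N * PMP <= 1, i.e. N <= 1 / PMP.
assert planck_entropy * PMP == 1.0
```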

Universe
Notice I said "a particle"; now consider the universe. The entropy of the entire universe considered as a whole is easily many orders of magnitude above the Planck entropy. However, we're talking about conservation of probability: given two electrons, the total probability of finding an electron is greater than one. As a result, I have to divide the entropy of the whole by the number of fundamental particles, and the resulting per-particle entropy is doubtless orders of magnitude under the Planck entropy.
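The arithmetic above can be sketched as follows. Both big numbers are illustrative stand-ins (the particle count is the commonly quoted rough order); only the comparison against the Planck entropy matters.

```python
# Illustrative arithmetic for the division argument above.
universe_states = 10 ** 120   # hypothetical entropy of the whole universe
particle_count = 10 ** 80     # rough order of the fundamental particle count
planck_entropy = 10 ** 60     # hypothetical 1/PMP

per_particle_entropy = universe_states // particle_count

# Dividing the whole by the particle count brings the per-particle
# figure back under the Planck entropy, dissolving the paradox.
assert per_particle_entropy < planck_entropy
```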

I realized this because the idea of the universe as a whole being physically impossible struck me as ridiculous. (Basically my options were: solve this problem, or discard the NIP.)

So unfortunately I've learned that mind nodes may need many more possible states than I thought. Also, I've realized that the pentagon must have enough states to outpace the growth in fundamental elements as the interpreter grows to accommodate more possible outputs.

The relevant factor here is the number of degrees of freedom of the output, not necessarily the number of particles. This means the critical number is the ratio between the dimensionality (the degrees of freedom) of the output and the number of possible states of the pentagon. If the ratio is too high, the rate of increase of elements will outpace the rate of increase of possible states.
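A minimal sketch of this constraint, with made-up growth rules: the output's degrees of freedom grow linearly as the interpreter accommodates more outputs, while the pentagon's state count stays fixed.

```python
# Toy model of the ratio constraint described above. All numbers
# and growth rules are illustrative assumptions.

def ratio(degrees_of_freedom, pentagon_states):
    return degrees_of_freedom / pentagon_states

def outpaced(dof_growth, pentagon_states, steps):
    """True if, under linear growth, the output's dimensionality
    eventually exceeds the pentagon's possible states."""
    dof = 1
    for _ in range(steps):
        dof += dof_growth
        if ratio(dof, pentagon_states) > 1:
            return True
    return False

assert not outpaced(dof_growth=1, pentagon_states=1000, steps=10)
assert outpaced(dof_growth=200, pentagon_states=1000, steps=10)
```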

It doesn't actually matter what's going on inside the mind node because, given an effect with two possible causes, there is no way to distinguish which caused it without measuring the cause directly. I don't have to consider subatomic particles directly because the mind node isn't truly physical and the proof of non-determinism is entirely abstract.

Of course I may have missed something again.

Practical Considerations
There are further problems. Actually implementing f_r may be technically infeasible, or outright impossible. Creating a function that can meaningfully change itself repeatedly as a result of random input, without crashing or failing to halt, is not easy. A similar problem exists in making the pentagon accept arbitrary inputs and turn them into meaningful probabilities. In this second case, the problem is compounded by the fact that it is easy to accidentally create a trivial solution, where many inputs lead to identical outputs. If only some inputs lead to identical outputs, it becomes very difficult to tell that any of them do. (Similarly, undebugged halting problems may persist long after a working f_r is found. This is not a problem for neuron-based computing, though; the computations tend to damp down naturally.)
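Here's a toy sketch of the f_r problem: a function that rewrites its own behaviour in response to random input, with explicit guards so it neither crashes nor fails to halt. The representation (a parameter vector mutated in place, with an iteration cap) is entirely my assumption; nothing above specifies how f_r should be implemented.

```python
import random

def make_f_r(n_params=8, max_steps=1000):
    # Internal state that the function repeatedly rewrites.
    params = [random.random() for _ in range(n_params)]

    def f_r(random_input):
        # Self-modification: nudge one parameter by the random input,
        # clamped so the state can never become degenerate (a crash).
        i = int(random_input * n_params) % n_params
        params[i] = min(1.0, max(0.0, params[i] + random_input - 0.5))
        # Halting guard: iterate toward a fixed point, but give up
        # after max_steps rather than risk non-termination.
        mean = sum(params) / n_params
        x, steps = random_input, 0
        while steps < max_steps and abs(x - mean) > 1e-6:
            x = (x + mean) / 2
            steps += 1
        return x

    return f_r

f = make_f_r()
out = f(random.random())
assert 0.0 <= out <= 1.0
```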

However, if my hypothesis of consciousness is true, then any inputs with identical outputs effectively blind the consciousness, resulting in a partially insane mind node. (It means there are at least two stimuli that it cannot distinguish. Perhaps this will be familiar from people in your own life... Chalk up another point for 'things suddenly explained if the mind node hypothesis is true.')

Now, the mind node in its simplest form is not a very useful form of consciousness. It has no memory, nor is it affected by the outside world. Making an f_r that could handle not only the pentagon's output but the world's input as well is something at least as hard as the Easy Problem of Consciousness.

Instead, use Restricted Boltzmann Networks. The RBN will, because of its logical nature, encode an abstract model of the world it is in contact with. Also add a memory that records the outputs the mind node has generated and that the mind node can feel out: say, certain outputs of the pentagon replace the interpreter's output with a memory call. So the mind node can repeat itself if it can learn how, plus it has this automatically-abstracted world that it can actually interact with. (Also, the RBN can output anything it has found as an input, which means the mind node can now imagine things if it wants, by getting the RBN to output to the pentagon.)
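The loop described above can be sketched in miniature. Every component here is a stand-in of my own devising: the pentagon is a random-number stub, the interpreter a trivial mapping, and the "RBN" is reduced to a one-line abstraction stub, since a real Restricted Boltzmann network is beyond a sketch like this.

```python
import random

MEMORY_CALL = 0  # pentagon output reserved for "replay a memory"

def pentagon():
    return random.randint(0, 9)   # non-deterministic stand-in

def interpreter(state, sense):
    return (state + sense) % 10   # maps pentagon state + senses to an output

def rbn_stub(world_input):
    return world_input % 10       # stands in for the RBN's abstraction

memory = []

def step(world_input):
    sense = rbn_stub(world_input)
    state = pentagon()
    if state == MEMORY_CALL and memory:
        out = memory[-1]          # memory call: repeat a previous output
    else:
        out = interpreter(state, sense)
        memory.append(out)        # record the fresh output
    return out

for t in range(20):
    step(world_input=t)

assert all(0 <= m <= 9 for m in memory)
```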

(Incidentally, in the brain the hippocampus repeats new memories over and over again... exactly as is needed to get an RBN to learn nicely. Also, earworms are most likely leakage: the music's nature resonates somehow with neurons outside your hippocampus, so instead of the usual silent repetition you get noticeable repetition.)

Here's a pretty picture:

Memory usually just records the output and sends it through, but can also repeat previous output. The RBN links the system to the senses. If you watched the video you can see why I've drawn the RBN as two circles. Also, I wanted to represent that this is like a broken-open mind node. It's not a closed system anymore; it has grown a pair of arms and embraced the world.

I've numbered the steps for clarity. The time these steps take will set the clock speed of the mind node, generating a conscious temporal scale. The ghostly grey lines are a representation of a memory call, where the output is a repeat of a previous output instead of a new one. They're ghostly because the pentagon cannot actually call memory directly; it must figure out how to get the interpreter to call memory for it.

You can also see why I called it the 'interpreter.' It is the tool the mind node uses to interpret the world, and the tool the world uses to interpret the mind node.

There's some arbitrariness to the connections. As long as something feeds the output of the interpreter to itself - altered or not - then the mind node should still work. Also, it may be handy to have two RBNs - one to encode the mind node's behaviour, and the other to encode the world. These would be questions for experimental research.

Other Objections
Consider a particular electron in the universe. The probability of its exact history is far below the PMP.

The first pin for this balloon is that it doesn't have an infinite canvas.
The second pin, which is really the same pin from another angle, is that the past doesn't exist.

It doesn't matter that this electron's exact history is hugely improbable, because its history doesn't exist. There are actually numerous ways that electron could have gotten to that same state. (This is related to entropy, though I'm tired at the moment and can't say exactly how.) Enough, in fact, that since we can't pin down a particular history for it, the probability of it being here, now, and with this energy, is more than the PMP. Because it doesn't have an infinite canvas, the probability of it getting to any present moment will always be more than the PMP. (Also, its mathematics don't match those of spontaneity.) I don't even need to prove this; breaking physics is not something that physics does, only consciousness.

Creativity
Having addressed that, I now propose a solution to the mind-body problem that the mind node has suggested to me.

Preferences are a quale. They are not irrational, they are arational. And further, if I compare any two things, I can find that I prefer one over the other, though indeed often it is not worth doing any work to achieve one over the other. For instance, I mildly prefer 'i' over 'm.' But if I were in a marathon denominated by letter instead of number, I would not do very much to get 'i' instead of 'm.' Nevertheless, I can basically rank all the letters in order of preference. How's that for irrelevant!

I suspect everyone can rank everything by preference like this.

(Situation: "I honestly don't care which restaurant we go to." Most likely you mildly anti-prefer both of them, but don't want to go to the work of finding out which one you dislike the least. [You're going anyway because of some ancillary benefit.] Sometimes, though, someone will insist that you choose one.

(If someone's on your case like this, [or indeed anytime you find yourself indecisive] try just starting a pro vs con list, but only compare the first comparable pair of features that pop into your head. "I prefer the sign Papa Mori's has over Nelther's. Let's go to Mori's."

(Note that in this situation, the problem isn't anymore which restaurant to go to, but rather someone is being an ignorant ass and you want them off your back quickly and easily.

(Similar: you can't decide which car you want; despite making an extensive pro/con list, you found each pro got checked off against a parallel pro, and each con had a partner. "Well, from this angle, I prefer the green one's wing mirrors. Let's go." Whatever jumps into your head first.)
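The tie-breaking technique in the parentheticals above can be sketched as a function: when the full pro/con lists cancel out, decide by the first feature pair where any preference exists at all. The feature names and the snap-judgment table are illustrative assumptions.

```python
# Sketch of the "first comparable feature" tie-breaker described above.

def first_feature_choice(options, features, prefer):
    """Pick between two options using the first feature where a
    preference exists, ignoring everything after it."""
    a, b = options
    for f in features:
        winner = prefer(f, a, b)  # returns a, b, or None (no preference)
        if winner is not None:
            return winner
    return a  # truly indifferent: just take the first

restaurants = ("Papa Mori's", "Nelther's")

def prefer(feature, a, b):
    # Hypothetical snap judgments, in the order they "pop into your head".
    snap = {"sign": a, "parking": None}
    return snap.get(feature)

assert first_feature_choice(restaurants, ["parking", "sign"], prefer) == "Papa Mori's"
```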

It appears that preferences are universal to consciousnesses. As such, a mind node in contact with a sensory arena (VR or regular R) will creatively order it.

As such, I propose a test of consciousness; consciousnesses creatively order their surroundings.

Because all consciousnesses have a system of (physically arbitrary) preferences, they will naturally gravitate to those things they prefer, and order the world, to the best of their ability, to create more of those things, and of course the opposite for the opposite.

Because these preferences cannot be predicted in advance, this order will be unique, and thus creative. No one will program these preferences, nor give them some kind of list to choose among;* instead each creates a new order, or at least attempts to.

*(Would just invoke preferences of preferences anyway.)

Technically, because consciousness is direct, this doesn't fully solve the mind-body problem: to truly measure another mind you have to know it directly, which basically means you didn't measure it, you became it. But if the mind node hypothesis is true, you can just jack into someone else's output to see what their input is. While this doesn't stop you from just being them, and as such doesn't solve the problem philosophically, you're allowed to stop there and at least solve the problem empirically.

Preferences; The Problem
I could not find a single example of preferences being truly arbitrary. Take, for example, the preference for bilateral symmetry: your legs have to be the same length, which eliminates the other options.

So it seems that aside from a bit of noise, human preferences are entirely formed by evolution. As such, building a mind node in a lab may be completely pointless, because it will lack any kind of preference-forming mechanism, and will output hypothesis-neutral noise and only such noise.

Incidentally, this is why I like to write my conclusions down in a public forum. I take them much more seriously once I've done so.