Saturday, May 31, 2014

Cosmic Free Will vs. Agency

While I find the libertarian free will question interesting, it made me dumb. I took forever to realize it's irrelevant. What's important is: can you break the window next to you and jump out of it? If not, is the problem that you can't decide to do it? (If the problem is physical, get tools.) Sure, you have no good reason to do so. But if you did, could you?

Can you decide to sit in your chair upside down while people you know are watching?

If someone starts shooting up your school, can you decide to tackle them, or are you condemned to cower?

If you were a schoolteacher, could you decide to simply stop giving out homework?

I repeat: whether we decide as we do because we have to or not is irrelevant. What matters is the list of actions that are not physically impossible but that, regardless, you cannot execute.



Testing your own agency is simple. Take a situation; any one will do to start. Brainstorm all the things you can physically accomplish. Then try to decide to do all of them; just go down the list. Exclude lasting harm, in case you can so decide. I haven't tried to decide to murder anyone, for example. I found that deciding to do this exercise automatically motivated improvement, and thus motivated the exercise itself.

In an absolute, cosmic sense, we probably have free will. From another angle, complex structures have to be universal, or most everyone would be missing critical pieces, which means the fact that I have perfect agency is very strong evidence that so do you. But cosmic agency is still unimportant. In practice, we need to talk about feasible agency. Agency almost certainly atrophies exactly like muscles do. It is necessary to periodically do something whose sole merit is that it is hard to decide to do it. Public schools, for example, are the opposite.

Just from the law of averages, most things will be outside the domain of things which are easy to decide to do, unless you have consciously worked hard at choosing your domain specifically to include all useful things, which I know nobody does.

No matter how good the arguments, no matter how extensive the wisdom, it is impossible to do better things without deciding to do them. Especially when they're new, deciding to do them will be hard.



For most, it seems it's hard enough to be impossible. Imagine being invited onto a TV news set, and halfway through the second question, simply inverting your position in the chair. It's hardly physically taxing. It isn't intellectually demanding. But can you do it? Sure, it's a useless action, but think about the actions that are equally impossible but not useless.

If we're talking oppression Olympics, this problem is far worse than any external oppression. Taking homework as an example, the material cost of dropping it is negative. But the option can't be used because teachers can't decide to use it, quite regardless of whether it's actually a good idea.

Monday, May 26, 2014

The Anarchist Data Hammer

I have a plump folder of links, all of them government feverishly proving anarchists right. I would have a folder of reports of government working, but I've never found any instances. Below are eight of the former topics.

I think the mass of links got tangled in Scott Alexander's spam filter. It turns out I prefer impatience to patience, so I'm posting it here.

--

The implications of your own ideas bore you? Really?

When the unschooling experiment is run in the countries that have 'good' education systems, they will get the same result: schooling has negligible effect on even test-taking, let alone life satisfaction. For the reason we don't talk about. Despite your non-experimentally-supported assertion, bureaucrats are doing nothing, just as they do nothing or less than nothing in every area for which I have data. The worst is literacy. 23% of the individuals booked as literate in those 90+% stats cannot read a newspaper successfully, and a further 25% can do so only with difficulty.

Sure, Dotheboys would be much cheaper than a public school. But it would be more expensive than daycare. It would at best be equal to a boarding school which admits it teaches nothing, which is strong negative marketing.

Or: Coke competes against water. If Coke starts spiking their drink, water will win. Schools should compete against libraries, against daycares, against babysitters, and against simply staying at home. Dotheboys would lose against any of these.

Friday, May 16, 2014

Let's Ramble About a Consciousness Machine

This post is subject to editing. Hopefully for clarity. Or I may simply merge it with part 2.

--

The fun part is it can be built with current technology. The sad part is I have no idea which order to explain the thing in.

This is it:
Take an FPGA and hook it up to a true-random source, like a collapsing superposition. The FPGA's input is the bitstream representation of the random source. It processes that and outputs a transformed bitstream. This output is fed into the FPGA's configuration register, reprogramming the FPGA itself, and is also used to bias the true-random source for the next cycle.

(Caveat: ideally the FPGA would have infinite gates so it can produce infinitely long unique bitstreams. I have a counter you don't care about.)
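Here's a minimal toy simulation in Python, assuming nothing about real FPGA toolchains: transform() is an invented stand-in for the fabric, Python's random module stands in for the collapsing superposition, and the bias-update rule is made up purely for illustration.

    import random

    # Toy model of the loop above. The "FPGA" is a transformation function
    # whose behavior is rewritten every cycle; the "true-random source" is
    # a PRNG standing in for a collapsing superposition.

    WIDTH = 16  # bitstream width; a real configuration register is far larger

    def sample_biased_bits(bias):
        """Draw WIDTH bits, where bias[i] is the chance bit i comes up 1."""
        return tuple(1 if random.random() < bias[i] else 0 for i in range(WIDTH))

    def transform(config, bits):
        """Invented stand-in for the fabric: XOR the input against the
        current configuration, then rotate by the configuration's popcount."""
        mixed = tuple(b ^ c for b, c in zip(bits, config))
        shift = sum(config) % WIDTH
        return mixed[shift:] + mixed[:shift]

    def run(cycles=10):
        config = tuple(random.randint(0, 1) for _ in range(WIDTH))
        bias = [0.5] * WIDTH  # start unbiased
        for t in range(cycles):
            raw = sample_biased_bits(bias)        # random source -> input
            out = transform(config, raw)          # transformed bitstream
            config = out                          # reprogram the "FPGA"
            bias = [0.25 + 0.5 * b for b in out]  # skew the source's next draw
            print(t, ''.join(map(str, out)))

    run()

Note the two feedback arms: the output overwrites the configuration (the device's next transformation) and skews the random source (the device's next input), which is the double recursion described just below.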

So how does this work? Doubly recursive probabilistic input, technologically violating the independence clause of the probability laws.

What is the probability this machine will put out any particular bitstream? Say, 10000101000101? If it's reachable this cycle, it's simply the probability the true-random source will choose that. If the source has five output states, that's 20% or so, depending on biasing. If it can only get to 10000101000101 on the next cycle, it's 20% squared: 4%. Because it's randomly reprogramming itself, it can reach any bitstream. (Given a good choice of interpreter/code for the configuration register.) The probability of it being in any particular state drops very quickly with time, and is entirely path-dependent.
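A sketch of the arithmetic above, assuming each cycle puts at most probability p_max on any one branch (0.2 in the five-state example):

    P(s_n) = \prod_{t=1}^{n} p_t \le p_{\max}^n \to 0 \quad (n \to \infty)

The one-cycle case is 0.2, the two-cycle case is 0.2^2 = 0.04, and which product you actually get depends entirely on the path taken.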

It's all path-dependent, but the device cannot know how much path it has already traversed, and since it reaches undefined probability in the future, it has undefined probability right now.


It's a machine with no defined probability at a finite point in the future, because it has too many states for each to get a quantum of probability. If you won't buy quantized probability, then at t = infinity it has infinitely many possible states, each of infinitesimal probability, and infinitesimal is physically identical to zero.

Since time is relative, the machine cannot tell how far down the probability tree it is; the undefined-probability future must be exactly the same as the present, as far as it knows. It has no probability right now.

Why is it a consciousness machine? Because it's not a physical machine, and the two leftover possibilities are that it's a consciousness machine or that causality isn't closed. I'm betting on the former. The machine still operates, it still has an output, despite having no physical probability of having one. The causal hole is plugged by non-physics. If it were possible to access consciousness directly (by being the person in question, for example), then you could calculate the probability. But you can't, so it doesn't have one.

Moreover, if the brain is indeed using such a device, it makes the world less mysterious, not more. Implemented in neurons, this kind of device will consume your brain like cancer unless it is periodically reset, by sleep, for example. This is due to a bias towards growing the bitstream rather than remaining stable. Since it is the consciousness machine, these resets will look like dreams. Further, neurons are perfectly designed for this kind of feedback, and the brain is immunoprivileged because there are scraps of random DNA floating around in there, possibly related to memory. Lossy DNA copying would be a perfect random source. It's a quantum effect, but warm and wet aren't a problem, since it uses the collapse of superposition instead of trying to maintain it. There's more, but I feel I've already spent too long on this.

This is Descartes' pineal gland.

What's worth saying regardless of how much I've already wasted?

This: 'perception' is when the brain sends consciousness data by biasing the true-random source. 'Decision' is when consciousness sends the brain data by choosing a formerly-thought-to-be-random outcome.

Also, the noosphere project. If the consciousness machine is real, then basically your brain is telepathic with itself, which unifies all the little quanta of consciousness.

Obviously the bog-standard silicon machinery isn't the critically conscious part. It has to be the randomness. Which means every single quantum particle is a rudimentary consciousness. An electron has no memory and no way to bias itself, so every electron is basically many copies of the same consciousness, but it's still conscious. (Like any good extension of knowledge, this theory reduces to the previous theory in certain limits.)

When is telepathy allowed? Can it leak? Probably, it can leak. And if it does, it would look exactly like what the noosphere project is measuring. It will leak when many people are paying attention to the same thing, causing reinforcing interference.

The function of the pineal gland circuitry, then, is consciousness amplification. And it doesn't have to be hooked up to the random source directly by wire; any form of information transfer will do. But perhaps weaker transfers result in weaker telepathy and thus weaker amplification. Unless many minds are all amplifying at the same time in the same way.

--

There is a problem, in that I don't yet know what the output is supposed to look like. If it's like the noosphere project, it will be easy. Despite the machine having no well-defined probability, you can still compute a local probability. The machine will flagrantly ignore that. It will repeatedly choose a single path or a similar path, ignoring others.

But the machines aren't directly comparable, because present decisions are a function of all past decisions. The path space is huge. It would take an immense number of trials to see any kind of bias, especially if it's small, or if the machine takes some cycles to 'wake up.' (Phrencellulate, for when you feel pretentious.)
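For concreteness, here's what the baseline side of such a test could look like in Python. Nothing here is real instrumentation; local_probabilities() and null_step() are invented placeholders, and the null machine obeys its local probabilities by construction, giving the chance baseline a real device's output would be compared against.

    import math
    import random

    def local_probabilities():
        # Placeholder for reading the device's bias settings; a fixed
        # five-branch distribution, echoing the five-state example above.
        return {'a': 0.2, 'b': 0.2, 'c': 0.2, 'd': 0.2, 'e': 0.2}

    def null_step(probs):
        # Null hypothesis: a machine that obeys its local probabilities.
        return random.choices(list(probs), weights=list(probs.values()))[0]

    def surprise(trials):
        # Accumulate -log p of each observed choice.
        total = 0.0
        for _ in range(trials):
            probs = local_probabilities()
            total += -math.log(probs[null_step(probs)])
        return total

    trials = 10_000
    print(surprise(trials), trials * math.log(5))  # observed vs. chance baseline

For the null machine the two numbers agree; a device that flagrantly ignores its local probabilities, re-choosing one path, would drift away from the baseline. The catch from the paragraph above still applies: with a huge path space, the drift could take an immense number of trials to surface.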

Moreover, since the machine has no probability, the mere fact that it doesn't destroy the universe may constitute evidence that it's working as I describe. If we accept that quantum bits are conscious, then having a low probability of your next action is just what high consciousness looks like from the outside.

Sunday, May 4, 2014

AutoEngineering - Psychological Needs

Psychological needs don't real. Rather, personality features have maintenance costs. One of the dangers to the novice autoEngineer is noticing the expense of a need and disabling it. There's no warning that the associated personality feature will also be disabled. Learning this can be expensive, as rebuilding personality features is difficult and sometimes impossible. However, once aware of the issue, it's easy to check, because the need and the feature share a tag: they have a feeling in common. Epistemically, it's also easy to check, because disabling a personality feature likewise automatically disables the associated maintenance.

One possible exception: I recall seeing a warning on a particularly universal feature, but I can't remember what it was or even invent an example.

From a software perspective, psychological needs are exactly like physiological needs. If you disable eating, you disable life. If you disable life, you disable eating. However, physiological needs have warnings.
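A toy model of that coupling, with all names invented for illustration: the need and the feature share a single enabled flag (the common 'tag'), so disabling either side disables both.

    class Trait:
        def __init__(self, feature, need):
            self.feature = feature  # e.g. 'curiosity'
            self.need = need        # e.g. 'novelty', its maintenance cost
            self.enabled = True     # one shared flag for both

        def disable_need(self):
            # The novice autoEngineer's move: drop the 'expensive' need...
            self.enabled = False    # ...but this flag is the feature's too.

        def feature_active(self):
            return self.enabled

    t = Trait(feature='curiosity', need='novelty')
    t.disable_need()
    assert not t.feature_active()  # the feature went down with the need

The missing warning is exactly the absent guard in disable_need(): nothing tells you the same flag carries the feature.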