Wednesday, June 4, 2008

Cognitive Scientists on Machine Intelligence


(People who are adamant about their physical consciousness may want to skip to about halfway down, where I start butchering some of their more specific ignorances. Specifically, start with the bit about Repo Man.)

Although... in a comment on my apparently unique theory ("...considering that at the moment, nobody really knows exactly what consciousness is."), Matt Norwood questioned the existence of specific structures underlying consciousness, so it's nice that this article repeats the evidence for such structure.
"In humans and animals, we know that the specific content of any conscious experience—the deep blue of an alpine sky, say, or the fragrance of jasmine redolent in the night air—is furnished by parts of the cerebral cortex, the outer layer of gray matter associated with thought, action, and other higher brain functions. If a sector of the cortex is destroyed by stroke or some other calamity, the person will no longer be conscious of whatever aspect of the world that part of the brain represents."
In other words, without that specific module of the cerebral cortex, you would have no consciousness at all. It is a structure, requiring resources, and is thus under adaptive pressure to shrink; that pressure can only be countered by an opposing adaptive advantage.

Similarly, can you function without your cerebral cortex? Are you unconscious because you fell asleep, or asleep because you fell unconscious? Is consciousness inevitable in complex brains and the apparent phenomenon of decision, or can it be bypassed? (Not that it matters - either way we have nonphysical consciousness.) In my upcoming post about alien-hand syndrome vs the experiment that predicted decisions, I'll show that yes, you can function. You are unconscious because you fell asleep - you don't have to be asleep to be unconscious. Consider sleepwalking, for instance.

Regardless, while they are, as usual, supremely proficient in their profession - data gathering - they are, as usual, equally deficient in logical skill.
"Consciousness is part of the natural world. It depends, we believe,"
'We believe' is a euphemism for 'we assume without evidence.' It is faith and nothing more, as we can see in the rest of the paragraph:
"only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality. That's good news, because it means there's no reason why consciousness can't be reproduced in a machine—in theory, anyway."
However, consciousness must both depend on some otherworldly quality and be reproducible in a machine. This is the only way to reconcile the existence of consciousness with the mathematical nature of physics. My proof, linked above, plus the mind node, shows how this is both necessary and possible.

Consciousness is, as it so obviously appears to be, nonphysical. However, to access the physical world, it must have a consistent set of rules for interaction with the physical world - in other words, have its own kind of physics. Similarly, part of this interaction must be mathematical, or it would be unable to interface, or the interaction would violate causality and destroy our universe. Without this upload-download system, consciousness would be eternally unreachable, relegated to upload only, or completely separate. Without being nonphysical, consciousness would serve no function and would never have evolved.
"Nevertheless, some in the singularity crowd are confident that we are within a few decades of building a computer, a simulacrum, that can experience the color red, savor the smell of a rose, feel pain and pleasure, and fall in love. It might be a robot with a “body.” Or it might just be software—a huge, ever-changing cloud of bits that inhabit an immensely complicated and elaborately constructed virtual domain."
I once saw the statement, paraphrased: 'It's the philosophy that quantum mechanics is weird, and consciousness is weird, so they must be the same.' This is simply a socially respectable version of the same thing - we don't understand emergent software, and we don't understand consciousness, so they might be the same thing. The fact is that there is no qualitative difference that can single out one equation as 'conscious' while leaving the rest unconscious. The only way we could determine that a given equation is conscious is by reference to our only existing test for consciousness - experience itself. It is impossible to rule out other formulae as conscious.

Similarly, even if we did find such an equation, analogous to the wave equation, we would have explained nothing of consciousness. Symbolize the equation with the simple y=mx+b. One of the questions we could ask it is, 'which y corresponds to blue?' And yet, encoding is arbitrary - a simple change in the optical system, perhaps inserting a NOT gate into the output of blue cones, could still encode reality correctly, but rewire the equation completely, having a different y for blue. Imagine we inserted this NOT gate into you. Would your sensation of blue change? Would your sensation of other colours change? Sure. But how?

Here's the key: what is the relationship between the math and your experience? How does the math determine the blueness of blue? Which kind of equation describes this relationship?

None. And no equation can. Encoding is arbitrary. This is the endpoint of determinist logic: 'this y corresponds to blue in most people.' It is purely descriptive and postdictive; it can never be predictive, because the mechanisms are purely nonphysical and can't be described by our mathematical physics. (They may still be somehow mathematical.) There is no possible physical reason one y corresponds to blue and not another.
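The arbitrariness of encoding is easy to demonstrate in a toy model - this is my own sketch, assuming a one-bit 'blue cone,' not anything from the article. Inserting a NOT gate on the cone's output flips every signal value, yet the inverted channel describes the scene exactly as well once the decoder knows the convention:

```python
# Toy model: a "blue cone" outputs 1 when blue light is present.
# Inserting a NOT gate inverts the signal, yet the inverted channel
# carries exactly the same information about the scene -- only the
# convention mapping signal values to "blue" has changed.

scenes = ["blue", "not-blue", "blue", "blue", "not-blue"]

def cone(scene):                 # original encoding
    return 1 if scene == "blue" else 0

def cone_with_not(scene):        # NOT gate inserted on the output
    return 1 - cone(scene)

# A decoder that knows the convention recovers the scene either way.
decode_original = {1: "blue", 0: "not-blue"}
decode_inverted = {0: "blue", 1: "not-blue"}

a = [decode_original[cone(s)] for s in scenes]
b = [decode_inverted[cone_with_not(s)] for s in scenes]
assert a == b == scenes   # both encodings describe reality equally well
```

Physics alone cannot say whether 1 or 0 is 'blue' - that mapping is pure convention.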

The only way we can, with physics, find out objectively which y corresponds to each sensation is to experience it ourselves - to actually be the person we are supposedly investigating objectively.

For this reason, equations can never explain qualia, because there's no logical connection between the equation and the sensation. In physics, things with no apparent logical connection cannot have a logical connection. They are manifested either as randomness or non-interaction, depending on exactly how unrelated they are. The sensation of blue is neither random, nor is it unrelated to blue photons.

(Also consider: a shark, with an electromagnetic sense, must have a very different equation of consciousness than we do, and yet we must be able to say that this equation is also conscious, without having to actually be a shark. We have to be able to tell which y corresponds to a strong electromagnetic pulse and which to a weak one, and moreover, for it to be truly objective, we would have to somehow determine what each would feel like.

Also consider: extra colour receptors. Do they squeeze the existing set of colour qualia or do they add new qualia? Encoding is arbitrary - each person could have a completely different set, actually. What is my red is your up? My black is your hungry? My purple is your Republican? It's impossible to verify without actually becoming one another.)
"We are still a very long way from being able to use this knowledge to build a conscious machine."
The specification for a mind node is pretty short. Plus the work of verifying the math is moments long. Wish I could somehow get it to them.
"How about emotions? Does a conscious being need to feel and display them? No: being conscious does not require emotion. People who've suffered damage to the frontal area of the brain, for instance, may exhibit a flat, emotionless affect; they are as dispassionate about their own predicament as they are about the problems of people around them. But even though their behavior is impaired and their judgment may be unsound, they still experience the sights and sounds of the world much the way normal people do."
That's a pretty fuzzy definition of emotion. Let me examine my emotions and my sense of touch, right now.

It appears I can find no a priori way to delineate one from the other. My emotions are located in roughly my stomach region, and appear to be similar but slightly more complicated than my sense of touch. In fact, when I'm hungry it can be difficult to differentiate the two, because the sensations appear in the same place and interfere. I can overwhelm both, except in extreme cases, by clenching my abs.

There isn't, as of yet, any coherent way to categorize sensation, except secondarily by position. Yet the experience of sight seems to be located outside our bodies - as a result, this categorization is clearly a bit weird, and needs more work.
"Shown this frame from the cult classic Repo Man [top], a conscious machine should be able to home in on the key elements [bottom]—a man with a gun, another man with raised arms, bottles on shelves—and conclude that it depicts a liquor-store robbery."
Really? How does it do that? How do we know consciousness is required to do this? It appears 'consciousness' is defined as 'a computer that can recognize things relevant to the researchers.'
"People can attend to events or objects—that is, their brains can preferentially process them—without consciously perceiving them. This fact suggests that being conscious does not require attention."
I didn't know this.

"Primal emotions like anger, fear, surprise, and joy are useful and perhaps even essential for the survival of a conscious organism. Likewise, a conscious machine might rely on emotions to make choices and deal with the complexities of the world. But it could be just a cold, calculating engine—and yet still be conscious."
In other words, being conscious of our emotions is not necessary. We could process them unconsciously (a hard-scripted instinct to yell and wave our arms, for example), but there is evidently some advantage to doing it consciously.

To prove that consciousness doesn't require language: "And infants, monkeys, dogs, and mice cannot speak, but they are conscious and can report their experiences in other ways. "

Really? That means we must be homing in on a definition of consciousness. How does a mouse report its experience in a way that could not be simulated without experience?

Not that I doubt that mice are conscious, but this assertion requires some extraordinary support, which I do not see, nor have I ever seen.
"We're going to assume that a machine does not require anything to be conscious that a naturally evolved organism—you or me, for example—doesn't require. "
Since magic doesn't exist, I'd say that's a pretty safe assumption. Atoms can't tell if they're part of a machine as opposed to an organism. Those are arbitrary concepts defined by human consciousness.
"If that's the case, then, to be conscious a machine does not need to engage with its environment, nor does it need long-term memory or working memory; it does not require attention, self-reflection, language, or emotion. Those things may help the machine survive in the real world. But to simply have subjective experience—being pleased at the sight of wispy white clouds scurrying across a perfectly blue sky—those traits are probably not necessary."
We will eventually find that anything we wish to point to as an objective outward measure of consciousness isn't necessary for consciousness.

An interesting question; how exactly can we tell that anything is conscious, then? We have exactly no tests for it.
"The key difference between you and the photodiode has to do with how much information is generated when the differentiation between light and dark is made. Information is classically defined as the reduction of uncertainty that occurs when one among many possible outcomes is chosen. So when the screen turns dark, the photodiode enters one of its two possible states; here, a state corresponds to one bit of information. But when you see the screen turn dark, you enter one out of a huge number of states: seeing a dark screen means you aren't seeing a blue, red, or green screen, the Statue of Liberty, a picture of your child's piano recital, or any of the other uncountable things that you have ever seen or could ever see. To you, “dark” means not just the opposite of light but also, and simultaneously, something different from colors, shapes, sounds, smells, or any mixture of the above."
And, uh, which part of that exactly can't I reproduce by adding parts to the photodiode? I'm going to install a circuit that detects piano recitals, and then the diode will be conscious! Yay! (Despite the fact that it can't use the diode to detect such - but we've already established that outside input is unnecessary for consciousness.) Looking at the examples already eliminated in the article, we can also see this isn't qualitatively different from any of the traits already ruled out. It's just another guess based on things our particular consciousness obviously does.
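For reference, the definition of information they're quoting is just the standard Shannon measure: choosing one outcome among n equally likely possibilities yields log2(n) bits. A minimal sketch:

```python
import math

# Bits of information from choosing one outcome among n equally
# likely possibilities (the classical "reduction of uncertainty").
def bits(n_states):
    return math.log2(n_states)

print(bits(2))      # photodiode, light vs. dark: 1.0 bit
print(bits(2**20))  # a system with ~a million states: 20.0 bits
```

Nothing in this measure distinguishes a brain from any other device with a large state space, which is exactly the problem.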

Luckily for them, there's more.
"However, the 1-megapixel sensor chip isn't a single integrated system but rather a collection of one million individual, completely independent photodiodes, each with a repertoire of two states. And a million photodiodes are collectively no smarter than one photodiode."
I don't see, then, why it's so hard to make something conscious. Just make those photodiodes dependent and whoosh! Consciousness! Yay! Seems like a simple enough procedure...why hasn't it been done yet?
"By contrast, the repertoire of states available to you cannot be subdivided. You know this from experience: when you consciously see a certain image, you experience that image as an integrated whole. No matter how hard you try, you cannot divvy it up into smaller thumbprint images, and you cannot experience its colors independently of the shapes, or the left half of your field of view independently of the right half."
Well as long as we're admitting 'know this from experience' I 'know from experience' that consciousness can be divided from physics. Wow, that proof was a lot shorter than my original...

The flaw here is that you also can't perceive an image independently of its spatial arrangement of photons. To talk about a photon independently of its wavelength is meaningless, as is talking about a colour with no spatial extent, or its 'shape.' To figure out a colour, you have to measure a wavelength, and to do that you must have some kind of detector of finite size. These things aren't logically independent, so it seems unlikely we'd somehow be able to make them consciously independent.
"To be conscious, then, you need to be a single integrated entity with a large repertoire of states."
Um, the weather is made up of a bunch of multi-state particles that interact with each other in an integrated whole...

In fact, everything is made up of interacting multi-state particles. Whee, everything is conscious!
"Let's take this one step further: your level of consciousness has to do with how much integrated information you can generate."
Oh, I see. There's some kind of output, like a factory of consciousness. But, um, where do we output this 'integrated' information? We can't output it to consciousness itself, as that definition would be circular. We can't output it to memory, because we've established that memory isn't necessary. Outputting it to any I/O device, such as our hands, could be completely simulated by a photodiode that triggers a device that paints a Dali when it sees dark and a Rembrandt when it sees light.

The flaw here is that information isn't real. It's an arbitrary abstraction, like 'machine' and 'organism.' There are, by their assumption, only atoms (well, particles) and their interactions. These interactions cannot tell that they are part of a machine, or part of an organism, or supposed to be computing information. They are the same interactions hooked up in different ways, and that is all.
"According to IIT, consciousness implies the availability of a large repertoire of states belonging to a single integrated system. To be useful, those internal states should also be highly informative about the world."
Again, I don't see how we are supposed to be "...a very long way from being able to use this knowledge to build a conscious machine." I can do it right now if IIT is true.

Build a huge database of truths about the world. Hook it up to whatever it is that 'generates' this information. Poof! Many states, highly informative, integrated (presumably by this 'generator' thing) and we have a consciousness!

Do note that if we don't know how the information is 'generated' then IIT is completely useless. It depends on the definition of a thing that cannot be described.
"One test would be to ask the machine to describe a scene in a way that efficiently differentiates the scene's key features from the immense range of other possible scenes. Humans are fantastically good at this: presented with a photo, a painting, or a frame from a movie, a normal adult can describe what's going on, no matter how bizarre or novel the image is."
Those are two different tests. In the first, they talk about a scene's fingerprint, the things about it that are unique. This test would be passed, barely, if there are even two scenes that the tested system can differentiate. In the second, they talk about diversity and flexibility, the ability to interpret novel input. This isn't a definitive test, and would have to be repeated many times to reduce uncertainty.

Neither test, when put clearly like this, shows consciousness. The fingerprint could be done with completely human-irrelevant factors, like single pixels in the corner. Dealing with novel input is more complicated, but a simple set of rules can do this - a computer can solve literally any solvable numerical equation if given a few tools.
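As an illustration of the 'few tools' point - my own sketch, not the article's - bisection alone handles any continuous one-variable equation with a sign change, however novel the equation is to the machine:

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # root lies in the left half
            hi = mid
        else:                     # root lies in the right half
            lo = mid
    return (lo + hi) / 2

# A "novel" equation the machine has never seen: x**3 - x - 2 = 0 on [1, 2]
root = bisect_root(lambda x: x**3 - x - 2, 1.0, 2.0)
```

No one would call this rule conscious, yet it flexibly interprets arbitrary novel input of its kind.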

By their assumption, everything is physical, and everything can thus be represented as a numerical equation. Thus, computers are already conscious, even though they "clearly" aren't.

The first flaw is the use of the word 'clearly.'

The second is that I believe they are trying for the use of abstraction. As I'm fond of saying, life is an arbitrary abstraction that isn't necessary to describe the universe, and doesn't actually exist. If an entity could create the concept 'life,' it would most likely seem conscious to us. Similarly, they are trying to say in their example that a conscious machine would be able to abstract several things from the photo, such as the concept 'gun,' the concept 'robbery,' and the causal relationship between the concepts 'robber pointing gun' and 'man with hands in air.'

Nevertheless, if memory isn't necessary for consciousness, I don't see how abstraction has any particular weight behind it, other than that it hasn't been ruled out, yet. Such a thing could easily be handled by a Bayesian filter, and indeed it's well-known that much of the brain's behavior is Bayesian in nature.
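A minimal naive-Bayes sketch of that kind of filter - the features, labels, and counts here are entirely made up for illustration:

```python
from collections import Counter

# Toy naive Bayes "scene classifier." Each scene is a set of features;
# labels are "robbery" or "normal." Training data is invented.
train = [
    ({"gun", "raised-arms", "store"},  "robbery"),
    ({"gun", "mask", "store"},         "robbery"),
    ({"bottles", "store", "customer"}, "normal"),
    ({"customer", "store"},            "normal"),
]

labels = Counter(lab for _, lab in train)
feature_counts = {lab: Counter() for lab in labels}
for feats, lab in train:
    feature_counts[lab].update(feats)

def score(feats, lab):
    # P(label) times product of P(feature | label), with add-one smoothing
    p = labels[lab] / len(train)
    for f in feats:
        p *= (feature_counts[lab][f] + 1) / (labels[lab] + 2)
    return p

def classify(feats):
    return max(labels, key=lambda lab: score(feats, lab))

print(classify({"gun", "raised-arms"}))   # -> robbery
```

A handful of conditional-probability counts 'abstracts' gun-plus-raised-arms into robbery with no experience involved.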
"Unless the program is explicitly written to conclude that the combination of man, gun, building, and terrified customer implies “robbery,” the program won't realize that something dangerous is going on."
Would such a program be 'conscious?' If so, since you can explicitly write a program to do literally anything, we could make AI in principle just by explicitly programming in everything a human can do.

Alternatively, what's the difference between such a hard-scripted program and one that does the exact same thing but through emergence? The second is, quite literally, just a compression of the first.
"And even if it were so written, it might sound a false alarm if a 5‑year-old boy walked into view holding a toy pistol. A sufficiently conscious machine would not make such a mistake."
The difference between a conscious machine and an unconscious one is that the first doesn't make mistakes. Got it. So you don't make mistakes.

Second reading. The difference between a conscious machine and an unconscious one is that the unconscious one hasn't been programmed to exclude five-year-old boys yet.

Yes, I could extract what they're trying to get at. Nevertheless, they aren't getting at it, and since they're supposed to be respected scientists, they need to learn to do it without my help.
"Caenorhabditis elegans is a tiny creature whose brain has 302 nerve cells. Back in 1986, scientists used electron microscopy to painstakingly map its roughly 6000 chemical synapses and its complete wiring diagram. Yet more than two decades later, there is still no working model of how this minimal nervous system functions."
Hint: mind node? A mind node would use quantum decoherence, which isn't included in neural wiring diagrams.

For those of you who like to harp at me about falsifiability; exhibit A, Caenorhabditis elegans. If the mind node makes the model work, then we have a confirmed theory. If it breaks the model even further, we have a falsified theory. Now please learn to logic properly instead of wasting my time.

That goes for these so-called 'experts' as well.
