A computer carries out the tasks you set for it faster than you can. That's it. Basically a car for thinking. Gets you from A to B faster than your own legs/brain can carry you.
Note that you have to be able to fully teach the computer how to do the task. Perhaps a train rather than a car, as the rails have to be laid down in advance. The computer can't figure anything out on its own. If it's figuring anything out, it's because you told it how to figure things out, and it's simply following your instructions quickly. (More later: semantic problem.)
There's nothing special about following instructions especially quickly. A "bullet" train isn't meaningfully different from a regular train, it's just faster. If you run/think really really really fast, it doesn't stop being running or thinking.
How does an "AI" identify faces? Someone painstakingly instructed it how to identify faces. It does so very awkwardly and inefficiently, because the job had to be broken into tasks the instructor verbally understood well enough to write down accurately, but it makes up for the awkwardness by doing many different tasks, and for the inefficiency by doing them many, many times.
Ironically it's 100% phrenology. I haven't checked, but I don't need to. AIs identify faces by busting out the calipers and reducing the face to a series of measurements. With enough measurements, faces are identified uniquely. Imagine having to do that yourself, lol. Say you have face blindness: you get out the micrometer, spend a half-hour writing down numbers, and check them against your list of known associates. Finally, you can say hi...
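For the record, here's roughly what the caliper method amounts to. A minimal sketch, with made-up measurement names and a made-up threshold; real systems use learned features instead, but the shape of the procedure is the same: reduce the face to a list of numbers, then find the nearest known list of numbers.

```python
# A toy version of the caliper method: each face is a vector of
# measurements, and identification is just nearest-match lookup.
# The measurement names, values, and threshold are invented for illustration.
import math

known_faces = {
    "Alice": [62.0, 34.5, 11.2],  # e.g. eye spacing, nose length, jaw width (mm)
    "Bob":   [58.3, 36.1, 12.8],
}

def identify(measurements, threshold=2.0):
    best_name, best_dist = None, float("inf")
    for name, ref in known_faces.items():
        dist = math.dist(measurements, ref)  # straight-line distance between vectors
        if dist < best_dist:
            best_name, best_dist = name, dist
    # With enough measurements, the nearest match is unique... hopefully.
    return best_name if best_dist < threshold else "stranger"

print(identify([62.1, 34.4, 11.3]))  # -> Alice
```

Note there is no recognizing going on anywhere in there. It's arithmetic on a list.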
From writing this down, I now verbally understand how to explain why no AI will ever be able to do art. Art's purpose is subjective. AI can generate any artwork, it is true. Ultimately all art is nothing but a series of mechanical brush strokes. Literally in the case of paintings, but abstract/metaphorical brush strokes in all cases. However, the artist has to not only make brush strokes, but choose the right ones. Since the point is a subjective impression, the artist needs subjective impressions to check the stroke options against, to figure out which strokes produce the desired result. The AI doesn't have them and can only choose at random, incoherently. Perhaps a ludicrously effort-intensive AI can copy things which match previously-used subjective impressions, but this kind of machine can't ever make anything new.
This [choose the right ones] is the general purpose of subjective processing, as far as evolution is concerned. The consciousness transceiver is extremely expensive. It is used because it is that much better than mechanical computation at choosing the right option.
E.g. consciousness is used to identify faces, because similarity comparison in consciousness is automatic. If you hold two faces in mind at the same time, you can't not know how they are similar and different. This is a property of the mind, not the brain. Hence, conscious recognition is extremely fast, efficient, and reliable, as compared to trying the caliper method. So much faster that it's worth downloading the face to a mind and uploading the result rather than attempting to do it unconsciously.
It's still worthwhile even when the mind sometimes deliberately messes with the upload. Yeah, you make bad decisions, and your brain knows better. Even so, it's worth it to get face recognition. And everything-else recognition, too.
Mind the phrase, [desired result]. Without a mind, AIs can't have desires. Even if explicitly instructed on goals, they can't recognize outcomes as similar to their goals. They can only identify inputs as precisely identical to some instructed standard. Heed the fact this includes tolerances, e.g. they can recognize that 8 is within 25% of 10, because it is precisely identical to the condition [between 7.5 and 12.5]. If the instructor forgets to mention that something is similar, the computer can't figure it out on its own. This is the semantic problem: is 5+5 like 10? ¯\_(ツ)_/¯ The computer doesn't know, unless the instructor explicitly told it to do the addition. The instructor has to separately tell it to do multiplication, 2*5, if they want the computer to recognize that too.
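In code, as a deliberately dumb toy: the machine only "recognizes" what was spelled out in advance as an exact, checkable condition.

```python
# "Within 25% of 10" works only because someone translated it into the
# precise condition [between 7.5 and 12.5] ahead of time.
def within_25_percent_of_10(x):
    return 7.5 <= x <= 12.5

print(within_25_percent_of_10(8))      # True
# Is 5+5 like 10? Only if the instructor explicitly wrote the addition:
print(within_25_percent_of_10(5 + 5))  # True, but only because we typed the "+"
print(within_25_percent_of_10(2 * 5))  # the "*" also has to be separately typed
```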
The semantic problem: a computer cannot recognize things are similar if they are not identical. You can try to hack this by perturbing the input and trying to measure the perturbation. Extremely laborious. Is 7.4 close enough to within 25% of 10? The computer has to try all perturbations within some bound and check if one of them is identical to 7.4. Ultimately a computer can never know the meaning of the symbols it processes.
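The perturbation hack, sketched below. The step size and search bound are arbitrary choices I've made up; the point is the sheer labour of it.

```python
# How close is 7.4 to being within 25% of 10? Brute-force ever-larger
# nudges until one lands inside the band [7.5, 12.5].
def distance_to_band(x, lo=7.5, hi=12.5, step=0.01, bound=10.0):
    for k in range(int(bound / step) + 1):
        d = k * step  # size of the trial perturbation
        if lo <= x + d <= hi or lo <= x - d <= hi:
            return d  # smallest nudge that made the input "identical"
    return None  # gave up within the bound

print(distance_to_band(7.4))  # 0.1: a lot of work to learn "pretty close"
```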
Physically, initiative looks like free will. The machine starts doing things for almost no physical reason. Mechanically the consciousness transceiver looks like a random number generator. If atheism were true it would produce random behaviour, not directed behaviour.
(P.S. this is why Atheists can't produce artificial consciousness; it turns out religion is empirical, and it would disprove their religion. Notably Christians can't produce artificial consciousness either, because it would disprove the specialness of humanity. Ref: grandiose narcissism. In both religions, any attempt that is likely to succeed will be sabotaged by the person attempting it, ideally in such a way as to "prove" it is impossible to do the thing they almost did.)
Unlike a computer or brain, a mind can see similarities and produce its own instructions. A mind can not only recognize that multiplication is like addition without being told; a mind can come up with multiplication. Without being presented with multiplication, it can generate, on initiative, all thoughts similar to adding, and the transceiver can convert them to similar physical computations. A computer would have to try every single possible operation and see that one of them produces 10. Then someone would have to tell it to look for the cheapest kind of operation that produces 10, because the computer has no desires and doesn't care about wasting time. Then, on a different set of numbers, 3*3.33, it would have to try every operation again, to see which one produces ten.
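The brute-force alternative looks something like this sketch (the operation list and the tolerance are my own stand-ins): exhaustively try everything, and start over from scratch for each new pair of numbers.

```python
# With no sense of similarity, the machine tries every operation it was
# given to see which ones produce 10. It finds answers by exhaustion,
# not insight, and repeats the whole search for every new input.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def which_ops_give(a, b, target=10, tol=0.01):
    # Even "equals 10" needs an explicit tolerance, per the semantic problem.
    return [name for name, op in OPS.items() if abs(op(a, b) - target) <= tol]

print(which_ops_give(5, 5))     # ['+']
print(which_ops_give(2, 5))     # ['*']
print(which_ops_give(3, 3.33))  # ['*'], and the search starts over from scratch
```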
Put another way, an AI recognizes two paintings as similar because the pixels at the same addresses have nearly the same colours, or the pixels with similar colours sit at nearly the same addresses.
Subjective similarity is much more like logical similarity. To an AI, a black cat and a white tiger look very different. To a mind, they both look like cats, and thus are likely to do catlike things.
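To make the pixel point concrete, here's a toy version of that kind of similarity (tiny stand-in "images" and Manhattan colour distance as the metric, both my own choices). By this measure the black cat and the white tiger are about as far apart as two things can get; the "they're both cats" part has no representation anywhere in it.

```python
# Mean per-pixel colour difference between two same-sized images,
# where each image is a nested list of (r, g, b) tuples.
def pixel_distance(img_a, img_b):
    total, count = 0, 0
    for row_a, row_b in zip(img_a, img_b):
        for (r1, g1, b1), (r2, g2, b2) in zip(row_a, row_b):
            total += abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
            count += 1
    return total / count  # small = "similar", as far as the machine knows

black_cat   = [[(10, 10, 10)] * 4] * 4      # stand-in 4x4 images
white_tiger = [[(240, 235, 230)] * 4] * 4
print(pixel_distance(black_cat, white_tiger))  # 675.0: "very different"
```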
Has subjective similarity been deliberately arranged so that logically related things look similar? (At least, more often than they do to a computer. Ref: adversarial patterns.) Has logic been deliberately arranged so that things that look subjectively similar happen to be logically related?
Language, certainly, has been arranged so that subjectively similar things are linguistically similar. We can easily imagine a different language which deliberately groups together things that are similar as far as binary silicon is concerned. Society has been deliberately arranged so that signs and symbols pointing to similar things look subjectively similar. A computer without subjectivity will never be able to deftly navigate these things.
You can't make a driving robot because it can't deal with any situation not anticipated by its designer. You can't even make a dishwashing robot that won't lose a dish when the plate exits its design envelope.
I suppose that's the killphrase: exiting the design envelope. That's understanding and/or creativity. Produce meaningful art (that is, not deep dream noise) that isn't a linear recombination of training data. Make a dish that the dishwashing robot understands as a dish and a human doesn't. Make a dishwashing robot that can spontaneously understand not to fight with other dishwashing robots over a dish. E.g. imagine one drops a dish and both reach to pick it up.
How many ways can you break a dishwashing robot? Give it a broken dirty dish that you don't want stinking up the garbage. Give it a broken dish that's already clean. Give it a dish with a picture of stale food on it. Give it a dish with a different picture of stale food. Give it a dish that's not conventionally dish-shaped, but is still a ceramic with a liquid-holding side, e.g. a squashed vase. Give it a dish with a hologram of some other shape on it. Give it a porous dish that can't reasonably be cleaned. Give it a sieve dish that can't hold liquid. Give it a dish with a hole that's supposed to be used as a set with another. Try for a tactile illusion so it looks right but doesn't feel like a dish.
Try to get the dishwashing robot to self-clean or self-repair.
You know, the brain clearly does use 3D modelling, but doesn't use polygons. It's clearly far more efficient than a graphics card, since the brain uses at most 20 watts. Maybe try copying the brain's basic functions before trying to copy the consciousness transceiver without resorting to consciousness.