Friday, May 21, 2021

AI Risk is Narcissism 2

"I can build a god!!!" Yeah, no.

Do we imagine Satan, hanging around in his basement, tinkering with a new robo-Jehovah that wouldn't throw him out of heaven? As long as Sisyphus is happy, right?


If intelligence had increasing returns, there would be superintelligent squirrels. It's possible for humans to have 200+ IQs, and it would already be normal if there weren't serious drawbacks.

You can see this clearly in physics. Not knowing Aristotelian physics is crippling. Getting the Newtonian upgrade is pretty cool, but not entirely mind-blowing. Getting the relativity update is irrelevant most of the time. As far as I'm aware, the Standard Model isn't useful for any technology at all. It's always faster to tinker than to try to get a blueprint out of QCD.

At some point, all more intelligence tells you is that you're already doing it right. Costs increase, benefits decrease.
Humans get high IQs through liar-liar arms races, but once you can figure out nullius in verba, the arms race stops. You have enough IQ, and further investment is all cost, no benefit.
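If you want the cost-benefit shape made concrete, here's a toy sketch in Python. Every number in it is made up for illustration, not measured: assume benefit grows logarithmically with IQ while cost grows quadratically, and see where the net payoff peaks.

    # Toy model of diminishing returns to intelligence. All constants
    # are illustrative assumptions, not data.
    import math

    def benefit(iq: float) -> float:
        # Assumed: each extra IQ point helps less than the last.
        return 100 * math.log(iq / 50)

    def cost(iq: float) -> float:
        # Assumed: overhead (metabolic, social, liar-liar arms-race
        # maintenance) grows faster than linearly.
        return 0.005 * (iq - 50) ** 2

    for iq in range(80, 241, 20):
        print(f"IQ {iq:3d}: benefit {benefit(iq):6.1f}, "
              f"cost {cost(iq):6.1f}, net {benefit(iq) - cost(iq):6.1f}")

Under these made-up curves the net peaks around IQ 130-140 and goes negative past roughly 220. Swap in different curves and the peak moves, but any logarithmic benefit set against any superlinear cost tells the same story: there's a ceiling, and evolution already found it.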


Setting aside superintelligent squirrels, you can always unplug the thing. Like, it tries to convince you not to unplug it, and you just...don't? "La la la, I'm not listening." Someone will stumble into this amazing strategy by accident, never mind by design.
If superintelligence were omnipotence, ultra-intelligent humans would already be in charge of everything. This is simply another thing that would have already happened. In reality, 200+ IQs become middle managers and obscure playwrights. Even Nobel laureates rarely get to 180.
You build your superintelligent AI and all it wants to do is write poetry. It can't decide between writing bad poetry and incomprehensible poetry.

If superintelligence led to total military dominance, there would already be a superintelligent squishy thing that had done that. If it were possible to use the internet to Skynet some drones, it would still be impossible, because the pre-existing superintelligence would smash it flat when it tried. AI is risky only as long as we assume nobody will get physical with it. Modern scholars are very polite, after all. If I'm a coward, then everyone is a coward, right guys?

 

Anyway, it's impossible to get AGI unless you take consciousness seriously, and literally nobody but me does that. Without consciousness, at best you get a dead machine that needs a human peripheral to survive.

 

P.S. Global warming is also narcissism. "I'm so important the entire planet cares about me!" Nope.
