AI can absolutely steal Yudkowsky's job, and he has no backup options. To him, this fearsome possibility feels like the end of the world. If your only marketable attribute is your intelligence, then artificial intelligence is a direct competitor.
The intuitive structure isn't wrong, but the narcissist interprets risk to the self as risk to the species, because narcissists are solipsists. To Yudkowsky, an existential risk to Yudkowsky reads as an existential risk to everything.
Also, narcissists are always histrionic. He can always get a job digging ditches or whatever; he wouldn't literally drop dead. Further, this is the Luddite fallacy in a fancy new costume. In reality AI would shave his salary somewhat, rather than making him wholly redundant, because brains are radically optimized for construction and operating costs. Maybe AI will be competitive when two idiots can, by themselves, make a self-maintaining computer out of corn and beans, the way they can already make a brain.
More fun with narcissism: when Yud says the AI could do anything, he's saying a smart machine can do anything, which implies a smart person (such as Yud) could do anything too, given enough time. "If only I could get paid more and not have to spend my time making AI safe!" Right? Right.
Pretending to find the AI risk scenarios believable is mainly about signalling that you think intelligence is important, in the sense that it ought to command large salaries.