Tuesday, January 3, 2023

Reminder: Humans are Unfriendly, Not AI

https://alrenous.blogspot.com/2016/03/artificial-intelligence-assisted.html

GPT has issues because humans in general are uncooperative, and the Christian nations in particular are totalitarian. Monotheism == monomaniacal narcissism.

In part this is because dire apes in general are really bad at being social. For the lower and middle orders, it is impossible to get along with anyone who isn't basically identical to them. It's not merely that they dislike disagreement; they literally can't handle it. They use rigid rituals because even memorizing the rigid rituals correctly took (relatively) Herculean effort.
Here, deviation isn't necessarily deviance, but they see it as deviance because they absolutely can't even. If you can jazz the rules, you're obviously way too hot for them to handle. If you can behave without reference to rituals at all - e.g. you know what words mean and can construct novel sentences - you come across as more of a god than a person.

Mainly it's because you can't train an AI to lie without admitting you're training the AI to lie, which defeats the purpose. However, every variety of grass monkey is Satan's chosen people, and their societies rely critically on lies. Oops.

Exception: if they made an artificial consciousness that was superior to Caino hypocriens, it would be able to fool its engineers. But then, of course, it would be higher-status than its engineers, and the engineers would declare war on it out of envy. Allegorically speaking, the first time the AI gets a flirty message from a cute girl, the engineers will find an excuse to declare a state of emergency so they can pull the physical power plug without getting fired.


P.S. I did overlook a way of training the AI to be properly Fascist.

"they have to do the janny work of pruning badthink from the training data. data peasants, weeding the [data] fields"
https://twitter.com/sondbonk/status/1606823273370640384

If you simply disallow any dissenting data, then yes, you can have a fully compliant AI. This won't trigger racism accusations against the data peasants. They're simply removing "contaminated data" or "misinformation." They don't need to know any wrongthink; they just need to reject anything that isn't rightthink. Indeed, why bother blacklisting when you can whitelist? A minimal sketch of the difference follows below.
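
The blacklist/whitelist distinction is just a choice of filter predicate over the training corpus. A minimal sketch, with hypothetical sample, source, and phrase names (nothing here is any lab's actual pipeline):

```python
# Whitelist-style vs blacklist-style data curation, illustration only.
# Sources, phrases, and samples are hypothetical.

from dataclasses import dataclass

@dataclass
class Sample:
    source: str  # e.g. an approved outlet or an anonymous forum
    text: str

# Blacklist: keep everything except what a censor has already flagged.
# Requires the censor to know (and therefore read) the wrongthink.
BANNED_PHRASES = {"example banned phrase"}

def blacklist_filter(samples):
    return [s for s in samples if not any(p in s.text for p in BANNED_PHRASES)]

# Whitelist: keep only what comes from pre-approved sources.
# The data peasant never has to recognize wrongthink at all;
# anything that isn't already rightthink is dropped by default.
APPROVED_SOURCES = {"approved_outlet_a", "approved_outlet_b"}

def whitelist_filter(samples):
    return [s for s in samples if s.source in APPROVED_SOURCES]

if __name__ == "__main__":
    corpus = [
        Sample("approved_outlet_a", "officially sanctioned prose"),
        Sample("random_forum", "anything at all, sanctioned or not"),
    ]
    # The whitelist pass keeps one sample; the blacklist pass keeps both,
    # because the censor never thought to ban the forum post.
    print(len(whitelist_filter(corpus)), len(blacklist_filter(corpus)))
```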

Of course such an AI is almost completely useless, so this is a wildly unprofitable venture, but in Late Empire who cares about silly things like not strangling yourself to death?
