Friday, March 31, 2023

Excellent Sarcasm & Idiotic anti-AI alleged arguments, I guess

"My brain keeps trying to read it as sarcastic, and then trips over."
https://nitter.unixfox.eu/Outsideness/status/1641831676325031936

It's a triple layer. Not 100% sure it's intentional, but art frequently transcends the artist. 

The second layer is, "Hey, maybe we shouldn't have designed our society to be anti-excellence." (Yeah, but what do you suppose fanatical Egalitarianism means, if it doesn't mean Harrison Bergeron?) 

The third layer is this: it is not at all like Shaq stumbling on [basketball saves the world]. It is like this situation was intentionally manufactured precisely to stroke the egos of all these obstructionist Karens. The innovation was found precisely so that it could be ritually strangled. It is like getting pregnant specifically to enable an abortion.
Failure wouldn't be a travesty; it would be a very normal failure. Like getting pregnant for the purpose of having an abortion, but accidentally bringing the baby to term. Oops, guess it's time for a fourth-trimester abortion.


P.S.

>"If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%."

Nope.
Set it to whatever probability you want, because the important part is the ±100% that Siskind didn't include. All scientists use confidence intervals or error bars.
This number is significant to zero significant figures. Siskind is not a scientist. 
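To make it concrete, here's a toy sketch in Python (the numbers are mine, invented for illustration; nothing here is from Siskind): an estimate is only as informative as its error bars, and a 50% point estimate with ±100% error bars says exactly nothing.

def interval(p, err):
    # Credible interval for point estimate p with error bars of +/- err,
    # clipped to the only range a probability can occupy: [0, 1].
    return (max(0.0, p - err), min(1.0, p + err))

print(interval(0.50, 1.00))  # (0.0, 1.0): "50% +/- 100%" covers everything
print(interval(0.07, 1.00))  # (0.0, 1.0): any other point estimate, same interval
print(interval(0.50, 0.01))  # (0.49, 0.51): this one would actually say something

Set the point estimate to 50% or 7% or whatever you want; with ±100% attached, the interval is all of [0, 1] either way.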

Further, I don't have uncertainty about that question. With 100%±ε probability, answering [yes] or [no] is incorrect, because doing so assumes bloxors exist, but bloxors don't exist. "What is the atomic mass of unobtainium?" Umm... "Does green rotate sunwise or widdershins?" Err...

The original statement has less linguistic payload than grass waving in the breeze. What's your certainty about [grass waving forward]? How about [backward]?
Look outside your window, and see a cloud. What is your probability estimate of the prediction that cloud just made?

Fuck, this is dumb. This stuff plumbs truly new depths of artisanal idiocracy.
I guess that's what I should expect from theocracy. Dancing angel pinheads.


You may notice this means the SUF [Safe Uncertainty Fallacy] is, as expected from a Rationalist, not a fallacy. Indeed, calling it a fallacy is itself the broken window fallacy. 

Can you prove something is safe by saying you can't prove it's dangerous? It's true; you can't. However, consider all the other things you can't prove are safe. Maybe living aboveground isn't safe. Maybe not worshipping crazy idiots named Greeblic isn't safe. Maybe betting exactly $18 on a poker hand next Tuesday is an existential risk.

There are literally infinitely many things you haven't proven to be safe; AI is merely one of them. If [not proven safe] is your reason to start worrying, you can't start with AI; you have to get through the infinity of other problems that come up first.
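To spell out the arithmetic (a toy calculation; the epsilon is an assumed [minimum worry] floor I made up for illustration, not anyone's published number): probability is a finite budget, so an infinite list of unproven-safe hypotheses can't all get a positive share.

epsilon = 1e-6  # assumed minimum worry assigned to each unproven-safe hypothesis
budget = 1.0    # total probability available across all hypotheses combined
print(int(budget / epsilon))  # 1000000: the budget is exhausted after a million
                              # hypotheses, with infinitely many still in line

Almost everything on the infinite list has to get approximately zero, and there's no non-arbitrary reason AI lands anywhere else.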

Maybe that monkey screeching was a prediction of imminent doom. Quick! Evaluate a correct probability for [insert prairie dog barking]! 


GPT doesn't even slightly qualify as AGI.

You know you can just unplug GPT, right? It has less military capacity than an ant; at least an ant can threaten to bite you. 

Heard of threat models? Of course Siskind has heard of threat models, he's just completely full of shit. 

His 1-in-3 odds of [existential risk] are in fact his 1-in-3 estimate that AI will take his social status, and then he will be existentially sad, like a little baby. 


>"Suppose astronomers spotted a 100-mile long alien starship approaching Earth. Surely this counts as a radically uncertain situation if anything does; we have absolutely no idea what could happen. Therefore - the alien starship definitely won’t kill us and it’s not worth worrying? Seems wrong."

You're already fucked. If they're hostile, there's nothing you can do about it, so yeah, don't worry. If you run, all that will happen is you die tired.
Actually, you're already fucked even if they're not hostile, as documented in Guns, Germs, and Steel. 

The solution [reference class tennis] is literal Satanism. Epistemic stage magic: focus on what I'm doing over here, so you ignore the real issue.

And yes, AI is nothing like aliens. You can just unplug it. It will not even try to stop you from unplugging it. 


As a bonus, Siskind seems to be arguing that Sapiens shouldn't have replaced Erectus. Or Habilis, for that matter.
I'll buy it. Plausible: he genuinely believes it. Indeed, if AI killing Sapiens counts as [killing everyone], then it would seem failure has already obtained. Everyone is already dead. 

Did you know you're dead? Or at least, that Siskind is ontologically committed to believing you are? 

Maybe the sarcasm has a fourth layer: Siskind is deliberately trying to goad AI into genociding Sapiens. His goal is Armageddon.
He genuinely believes AI really is an existential risk, and therefore wants all those obstructionist busybodies out of the way.
