I just learned an enormously powerful heuristic. If someone feels that to survive they need to imply threats to my safety, they're all but certainly corrupt. Meteorite bombardment. AGW. All revolutions. Peak oil. Population bomb. Global cooling. Islamic terrorism. Religious/ideological war. Nuclear/bio/chemical holocaust. Racism. Ignorance. Biodiversity depletion. Dysgenics. Unfriendly AI. Pandemics. Immigration. Supervolcanoes.
Nearly everyone who promises disaster proves themselves corrupt - they're intellectual muggers. Essentially mugging at one remove. See, they're not responsible for the threat, so they have plausible deniability. However, the logic goes: "Fork over your wallet, because you should believe that otherwise disaster X will occur." And they are responsible for trying to make you believe in the threat.
But...'nearly?' How does one tell the difference? In reality, these threats would also threaten the proselytizer. Remember that compassion is usually fake. They don't care about my well-being, but they do care about their own. What steps are they taking to safeguard themselves? Al Gore is trying to safeguard his future by nagging me. Like...that's obviously self-defeating, right? I don't really have to explain?
Protecting yourself is under your personal control - you can guarantee success. Someone who really believed in peak oil would simply short oil futures. They may tell others about it - but first, they'd set up the short sales. Someone who thought meteors were going to kill us all would build a shelter, and then maybe try to convince someone to launch a mission so they don't have to use it.
No, someone who immediately jumps to the non-guaranteed solution must either be a raving madman - and thus epistemically broken regardless - or must not really believe in the threat they're peddling - and thus epistemically broken.
I just realized I almost stumbled into this myself. I do believe sophism is a plague, and I nearly implied it would doom us all. I want to be clear: it won't. Athenian democracy was pretty bad for Athens, but there's a lot of ruin in a nation. Humans have often been wrong about stuff since the beginning of time. Apparently, trial and error is usually sufficient for civilization.
Moreover, my reaction to 'most people are wrong' is not 'make them Less Wrong' but rather to personally learn better epistemology. I think it's pretty awesome on this side and I heartily endorse it, but if you don't want to join me, then it is probably not cost-effective for you or something.
Unfriendly AI stands out to me somewhat. If I agree that it does threaten me, what can I plausibly do about it? The spectrum of responses seems to run from "troll LessWrong to slow Yudkowsky down" on the personal end to "lobby for restrictions on technological development" on the mass-movement end, with intermediate levels like harassing specific technology/engineering companies developing computer chips. Everything would get me dismissed as an ass.
What you can do is build EMP grenades. Terminator comes for you and...you just shut it down, laughing all the way to the bank.
Perhaps set up a solar-charged EMP generator for your house, so you don't have to keep a huge stockpile of grenades.
I remain unconvinced. http://xkcd.com/652/ ("More Accurate") sums up why - EMP grenades (grenades in general) assume an enemy that fights in a humanoid way. I'll try to explain why in detail and clarify what I'm talking about, as we may be talking past one another.
I am not worried about my future robot butler going amok, nor about my military's robot drones fucking up their IFF transponders, nor generally about anything purpose-built, even for destruction.
I'm worried about the error of an AI with resource access, capacity for self-improvement, and a rudimentary self-knowledge of these. I picked up the "paperclipper" example at LessWrong, and I repeat it since I don't know if you've read all the archives there: Imagine that someone builds an AI, gives it a limited resource such as power supply or raw materials or time, and decides to test it by seeing how much stuff - e.g. paperclips - it can make with that.
One class of AI thinks "inside the box": it will work out how best to use the limited resource to produce the maximum number of paperclips. That class is safe.
Another class of AI thinks "outside the box": it will work out how to get more resources. That class is unsafe. It might connect to the Internet, grab a botnet, and start hijacking people's 3D printers to print more paperclips faster. It might recognize its power limit as an impediment to producing paperclips and go find a wall socket.
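To make the distinction concrete, here's a toy sketch of my own (nothing canonical from LessWrong - the function names and numbers are invented). The only difference between the two classes is whether "acquire more resources" is in the planner's option set at all:

def boxed_paperclipper(budget):
    # "Inside the box": spends only the budget it was handed.
    cost_per_clip = 2
    return budget // cost_per_clip

def unboxed_paperclipper(budget, acquirable_resources):
    # "Outside the box": same objective, but it first grabs whatever external
    # resources it can model (botnets, wall sockets, hijacked 3D printers).
    cost_per_clip = 2
    return (budget + sum(acquirable_resources)) // cost_per_clip

print(boxed_paperclipper(100))                  # bounded by what you gave it
print(unboxed_paperclipper(100, [10000, 500]))  # bounded by what the world contains

The objective is identical in both; only the option set differs.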
As AI gets smarter, it gets easier to fuck up. Once someone fucks up and gives a "make paperclips" order to a sufficiently smart AI, that AI's goal now includes subgoals like "don't get shut down". If it knows (or learns) about phishing scams, it could distribute itself to the world by e-mail or social networking or whatever, and we'd have a devil of a time trying to purge it from every last computer of every stupid person who downloaded and ran something from the internet.
So. Below a certain level, it seems to me that my best bet would be to try to prevent research from hitting that level, and I'd be filed with generic luddite asses and trolls, possibly for good reason. At that level, appropriate countermeasures seem to go from the merely trollish and bureaucratically slow, to the thoroughly infeasible like teaching every internet-connected computer user sufficient computer safety to avoid getting an AI infection.
I still don't worry because I think the threat is incredibly tiny. Perhaps if I thought it more plausible and spent more time considering it, I might come up with better countermeasures. (I might turn into Eliezer Yudkowsky and start trying to develop Friendly AI first.) But as is, I don't see what steps I can take to meaningfully safeguard myself.
You're still taking a huge logical leap, from plausible problematic outcome to assuming possible cataclysm.
If the problem isn't Terminator but Predator, get stinger missiles. Or build a bunker so you can survive long enough to get stinger missiles.
My point is that for any concrete threat, there are personal security measures you can take against it. And if you can't think up a concrete threat, you don't have a threat analysis; all you have is a vague emotion. If you convince anyone to act on your vague emotion, including yourself, the action is guaranteed to be corrupt. As indeed you've already intuitively grasped, judging by your rejection of your own proposed countermeasures.
In reality any hostile AI will follow the same laws of war that hostile natural intelligences (NIs) do. Limited resources, anything man can do man can undo, etc. They don't bide their time until suddenly unleashing nuclear holocaust or any equivalent. You'll have plenty of warning.
Hostile NIs can't back themselves up over the Internet, can't scale their thinking processes proportionally to the number of computers they have, can't hide in a cubic centimetre, can't replicate themselves on a combat timescale...
You're right that I don't have a threat analysis, and I agree that I'm making logical leaps with my examples.
The problem is that I don't have a threat analysis because I don't see how to perform a threat analysis, and I'm making logical leaps because I'm forced to conjecture about what a paperclipper might do - it's not a humanoid mind or even a product of natural selection.
To keep things in perspective, I support the principle of your original post, that promising disaster is usually mugging at one remove (or as I now think of it, good cop/bad cop with a wounded gorilla for a bad cop). I'm just nitpicking at one element that seemed to stand out to me.
Perhaps the difference is this: UFAI is very much a Black Swan issue, and not so much a specific threat as a vague label for an entire category of threats. Off the top of my head, some very different things in that category: happiness-UFAI trying to make all humanity happy (by forced wireheading), utilitronium-UFAI trying to get the greatest moral value (with a programmer's bad simplification of a moral system), paperclipper-UFAI trying to turn the world into paperclips, calculation-UFAI that "merely" hijacks all our computer power for a week to calculate the jillionth digit of pi. Even the utilitronium-UFAI-type can go wrong in a lot of different ways.
Tying this back to your original point, any threats involving UFAI are still like intellectual muggings - but vague ones. Instead of "Give me money or bad thing X (disease, nuclear winter, genocide) will happen", UFAI threats end up "Give me money or I have no idea what might go wrong, but it would be bad and involve computers".
It's been interesting thinking through this.
I have also enjoyed this sparring match.
What I'd say about vague threats is that if you can't give a concrete example of a threat, you don't have enough information to safely conclude a threat exists at all.
Like, if a person wants to stake something unforgeably costly to propose that paperclipper AI is a threat, we can analyze what they mean and then work out what a defence would look like. I just think someone has to genuinely believe it will happen before we can get a concrete prediction we could actually work with. I would accept, "What about utility-AI?" as not moving the goalposts, because it's relevantly similar.
NI can scale thinking with computers. That's what they're for.
In the end, hostile AI has to kill you with a thing. Terminator, predator drones, nanites, whatever. They cannot back up matter and energy over the internet.
If it's really that much of a threat, you shut down the internet. It sucks, then you win.
Smallpox can hide in a cubic centimetre. So can plague.
The concrete scenario I imagine is a computer with general-purpose peripherals, analogous to hands, but also a tamper-evident remote shutoff. If it manages to build itself a shutoff-less clone, you notice before they can build a second, and kill both. Threat ended. And because computers don't normally have hands, even if it spreads it cannot do anything.
What about robotic factories? Their chips are special-purpose and don't have enough space for an AI.
And so on. My principle is that almost no single disaster is ever as disastrous as it seems beforehand - it just fills working memory so that it's easy to commit the broken-window fallacy.
Tying back to anarchy, the security firm would be in charge of this. You'd ask it to secure your AI project, and it would do the threat assessment based on the detailed plans. If cAItastrophe occurs, hold the security firm responsible.
I just realized what a real AI-fearer would do. They'd simulate AI bootstrapping.
Now, we cannot actually simulate the details of bootstrapping, so there'd have to be some assumptions. It would be models of bootstrapping. But computer simulations let you vary assumptions. You'd run many simulations and graph some measure of the outcome against the extremity of various assumptions. Assuming I'm right and most bootstrapping is harmless, you'd look for inflection points.
Once such a simulation exists, the model of bootstrapping can be criticized, updated, et cetera.
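To be concrete about the shape of the exercise, here's a minimal sketch, with the growth law, the parameter ranges, and the runaway threshold all frankly made up - the point is only that you sweep the assumptions and watch where harmless flips over into runaway:

def bootstrap_outcome(gain, exponent, steps=100):
    # Assumed model: each round of self-improvement adds gain * capability**exponent.
    # Returns final capability, or infinity if it blows past the runaway threshold.
    capability = 1.0
    for _ in range(steps):
        capability += gain * capability ** exponent
        if capability > 1e12:
            return float("inf")
    return capability

# Sweep the assumptions and look for the inflection point.
for exponent in (0.5, 0.8, 1.0, 1.1):
    for gain in (0.001, 0.01, 0.1):
        print(f"gain={gain}, exponent={exponent}: {bootstrap_outcome(gain, exponent)}")

A real version would need defensible assumptions about what "better heuristics" actually buys, which is exactly the part that can then be criticized and updated.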
General comment: NI thinking certainly improves with computers, but I disagree that it scales. I work better with one computer than with none. I get a slight marginal improvement from the second computer if I want to use one for browsing while the other is doing something resource-intensive, or is rebooting. The third computer is only useful in the even more esoteric situation that e.g. I might want to simultaneously see how something looks on Linux, Mac and Windows. At the fourth computer, I suspect I'm better off selling it and using the money on something else.
You get more of an improvement by adding people, too. If I'm working on a computer, and someone else with a computer comes to help me, I can probably get stuff done faster. But past a dozen or so you get the Mythical Man-Month problem, summarized in the quip "adding manpower to a late software project makes it later", and I have to spend time herding cats and explaining the project and generally running large overhead costs that probably aren't worth the benefit of having another helper.
AI, on the other hand, can go from 400 to 800 computers and work nearly twice as fast with a very very tiny margin of overhead for distributing processes.
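Back-of-envelope version of the comparison, with overhead numbers I've invented on the spot; only the shapes matter - the human team pays a cost for every pair of people who have to talk to each other, while the distributed AI pays a small, roughly fixed cut per node:

def human_team_speedup(n, pair_overhead=0.08):
    # Mythical Man-Month-ish: n workers, minus a cost for every communication pair.
    return max(n - pair_overhead * n * (n - 1) / 2, 0)

def distributed_ai_speedup(n, distribution_overhead=0.02):
    # Near-linear: each extra node contributes almost a whole extra node.
    return n * (1 - distribution_overhead)

for n in (1, 4, 12, 50, 400, 800):
    print(n, round(human_team_speedup(n), 1), round(distributed_ai_speedup(n), 1))

On these made-up numbers the team curve peaks around a dozen people and then collapses, while the AI curve just keeps climbing.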
Shutting down the internet, while a good safety measure if done in time, seems like something with only a very, very narrow window to do it in - too early and there's not enough evidence to justify the threat, too late and the AI has already copied and remailed itself a dozen times and convinced some sod to copy it by USB stick and there's no telling where it's gone now short of checking every computer that was ever connected.
You could try to restrict it to only top-model computers, as suggested by your comment about chips without enough space for AI, but my cousin who plays chess a lot told me the other year that a 2006 chess program on 1986 hardware wiped the fucking floor with a 1986 program running on 2006 hardware. It seems likely that AI will have the same issue of being constrained more by the difficulty of creating the software than by the power of the available hardware, and can get worryingly smart even on poor computers.
I can get behind the thought that fighting AI might be more like fighting plague than it would be like fighting any human military or paramilitary force.
(comment cut off for exceeding 4096 character length limit. specific scenario in next comment.)
Man. I spent too much time writing, and my comment was pre-empted by your point anyway. Simulated bootstrapping sounds like a much better general response to the whole class of AI risks.
(Rest of my earlier comment with a specific scenario for doom by UFAI here: http://pastebin.com/KgQgAMfU - expires in one month.)
OTOH, "simulating" something that was on a computer in the first place is verbiage that bugs me. It seems like simulated AI development would basically be AI development with some monitoring equipment and a bigger panic button to shut it down.
Also, it may be that simulated AI development, especially simulations trying to gather data over multiple paths, will lag critically behind a project that just goes ahead and outright does AI with less overhead.
"Man. I spent too much time writing, and my comment was pre-empted by your point anyway. Simulated bootstrapping sounds like a much better general response to the whole class of AI risks."
It's fine. There might have been a hole in my thinking, and going through details is a good way to find it, if so. In fact, often necessary.
"It seems like simulated AI development would basically be AI development with some monitoring equipment and a bigger panic button to shut it down."
Simulating a model of bootstrapping is perfectly safe, because no actual bootstrapping occurs.
Concretely: you assume some mechanism for bootstrapping, such as programming better heuristics into itself, and what 'better' means. You then work out what the betterness would accomplish, to model what a bootstrapped AI would manage to achieve.
"Also, it may be that simulated AI development, especially simulations trying to gather data over multiple paths, will lag critically behind a project that just goes ahead and outright does AI with less overhead."
Not in practice, apparently. I could have that AI simulation set up within months. I guarantee no bootstrapping AI will appear by, say, March.
Rebuttal to NI scaling: expert systems. You're not modelling scaling the same way I am. Which I guess explains why you didn't realize it can be seen that way.
Fact is we as individuals could offload/outsource way, way more cognition to our computers than we do.
"AI, on the other hand, can go from 400 to 800 computers and work nearly twice as fast with a very very tiny margin of overhead for distributing processes."
But still fail at tasks humans find trivial. This is extremely important when making war.
"too late and the AI has already copied and remailed itself a dozen times and convinced some sod to copy it by USB stick and there's no telling where it's gone now short of checking every computer that was ever connected."
With the internet shut down, it can't accomplish anything regardless of how widespread it is.
You have time to do a threat assessment against a threat guaranteed to be concrete because it's realized. The AI cannot respond, because the internet is shut down. You automatically win unless your strategists are total morons.
"I dislike writing this because I'm strongly conscious of the fact that even if I spent a week coming up with different scenarios, they'd all be wrong due to the amount of information I'm missing and the number of outcomes I'm having to make up"
Which is precisely my point. You don't have enough information to safely conclude there IS a threat. People who propose that UFAI is a threat are ignorant of what they're proposing. No causal chain can be derived from their proposal, which means it cannot be evaluated as true or false at all.
"Man, see how many arbitrary details the example already has? Wrong wrong wrong."
Yes, precisely. By putting it into details, you realize how ignorant you really are.
Assuming all your assumptions are correct: every computer connected to the internet is hopelessly compromised.
We have a devil of a time reformatting them all. Then we do, and that's the whole cAItastrophe. We don't even have to build new ones. The algorithm is just an ordinary virus; if this were possible it would have happened already.
The odds of it not being noticed are effectively zero. It has to use physical computing power, which is monitored. Especially as arbitrarily great intelligence requires arbitrarily many CPU cycles.
So either it has to limit its intelligence, limiting its threat, or it has to be detected. There's no way it can succeed at bioterrorism.
That and bioterrorism is itself not that dangerous, for exactly the same kinds of reasons. As a matter of fact humanity has survived several catastrophic pandemics already. The idea that even an AI can out-smart and hence out-power evolution is absurd.
Sealing off an AI just isn't that necessary in practice.
So: now I've put my assumptions out there, they can be attacked. But I don't think they can be successfully attacked. (Though, I would, wouldn't I?)
Alternatively: if the AI can protect its followers from the pandemic, we can protect ourselves from the pandemic.
I want to add that I don't think I've necessarily proven that one should think UFAI is intellectual mugging.
However, I have proven that it's reasonable to think it is.
There may be an unambiguous discriminator. I don't know what it is. But until it's found, as long as I haven't destroyed all your reasons for thinking UFAI is a real threat, it's reasonable to think it's real.