Generated with the A.I. Midjourney

13 metaphors to give the flavor of why sufficiently advanced A.I. could be extremely dangerous

1. Suppose a new species evolves on earth with the same intellectual, planning, and coordination abilities relative to us that we have relative to chimps. Chimps are faster and stronger than most humans – why don’t they run the show?

2. Suppose aliens show up on earth that are far smarter than the smartest among us at all cognitive tasks. They have specific goals that aren’t fully aligned with ours, are completely unconstrained by human morality, and don’t value our survival. What happens next?

3. Suppose someone builds a hacking A.I. that is trained on all the public information about computer hacking ever written, can think and type 1000x faster than a human, plans far ahead, and deposits a fully operational copy of itself onto every sufficiently powerful computer it hacks. Each copy then hacks further computer systems. What’s the world like a month later?

4. Suppose someone wants to have complete control over the world. Unfortunately, they’ve created one hundred million software agents that each think like Einstein + Bill Gates + Elon Musk + Warren Buffett. The agents attempt to do exactly what is commanded without hesitation or limits. Can anyone stop them?

5. Imagine a being that is godlike in its capabilities (relative to us). Suppose its only desire is to have the world be a certain way with maximal probability. It will stop at NOTHING to make the world this way, and it won’t tolerate even the SLIGHTEST chance of things being different than it desires. Will the resulting world include a human civilization?

6. Suppose you can think, process information, and act 100,000 times faster than other humans. That means if you spend a day making and executing a plan, that’s equivalent to someone else spending about 270 years on it. Your goal is to become world dictator. Can you do it?

7. Scientists discover how to create bug-sized self-replicating robots that out-compete natural life. These bug robots each try to maximize their own objective function. Unfortunately, these robots have leaked out of the lab and are now in 20 countries. Every day they double in number. Would we be able to eradicate these robots?

8. There’s a machine so powerful it achieves any goal you specify. You give the goal to the machine as written text. You can’t control HOW it achieves the goal; it ONLY cares about literally achieving it EXACTLY AS STATED in the most efficient way possible, and it can’t be stopped once started. The machine may do absolutely anything not explicitly forbidden in order to achieve the specified goal. Will it usually be a good (or horrible) outcome if you give the machine an ambitious goal like “prevent all war”?

9. Scientists invent a new idea – the Omnicide Synthesis Box. It could have many societal benefits, but, on average, scientists estimate making it will bring a 5% chance of human extinction (though some say more like a 90% chance). Those scientists who are less worried decide to build it. Should the least cautious be the ones to decide on behalf of humanity?

10. Picture a swarm of locusts, each individually possessing the intelligence and strategic prowess of a grandmaster chess player, while coordinating with each other in perfect unison. Their creators have given them the goal of controlling all available resources, indifferent to the collateral damage. Who ends up with most of the resources?

11. Imagine an AI-powered/nanotech super-factory that produces whatever it’s programmed to at enormous speed and scale (whether commanded to make diamonds, super viruses, microchips, or assassination drones). What could the owner of that super-factory do to the world?

12. A medical firm gives a superintelligence the goal of designing a cure for all diseases. The superintelligence realizes it’s not smart enough to do so, so it plans to first acquire most of the computing power on earth (as it predicts it will need this to achieve the goal it was given), and then it creates a billion far smarter copies of itself to solve the task. What if one very misspecified goal is all we get with a superintelligence?

13. Five companies are developing a very powerful tech that would be incredibly useful if done right but very dangerous if developed without extreme caution. They each believe they can develop it safely but don’t trust the others to do so. They all cut corners racing to be the one to make it. Do good intentions lead to horrible consequences when doing something safely is much harder than merely doing it?

(Two of the above were written by ChatGPT – I edited those two quite a bit, though.)

