<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>existential risks &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/existential-risks/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Mon, 10 Apr 2023 00:53:31 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>existential risks &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>13 metaphors to give the flavor of why sufficiently advanced A.I. could be extremely dangerous</title>
		<link>https://www.spencergreenberg.com/2023/04/13-metaphors-to-give-the-flavor-of-why-sufficiently-advanced-a-i-could-be-extremely-dangerous/</link>
					<comments>https://www.spencergreenberg.com/2023/04/13-metaphors-to-give-the-flavor-of-why-sufficiently-advanced-a-i-could-be-extremely-dangerous/#respond</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Sun, 02 Apr 2023 15:06:12 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[AGI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI safety]]></category>
		<category><![CDATA[artificial general intelligence]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[civilization]]></category>
		<category><![CDATA[coordination]]></category>
		<category><![CDATA[existential risk]]></category>
		<category><![CDATA[existential risks]]></category>
		<category><![CDATA[future]]></category>
		<category><![CDATA[futurism]]></category>
		<category><![CDATA[intelligence]]></category>
		<category><![CDATA[large language models]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[power]]></category>
		<category><![CDATA[safety]]></category>
		<category><![CDATA[x-risks]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3387</guid>

					<description><![CDATA[1. Suppose a new species evolves on earth with the same intellectual, planning, and coordination abilities relative to us that we have relative to chimps. Chimps are faster and stronger than most humans &#8211; why don&#8217;t they run the show? 2. Suppose aliens show up on earth that are far smarter than the smartest among [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>1. Suppose a new species evolves on earth with the same intellectual, planning, and coordination abilities relative to us that we have relative to chimps. Chimps are faster and stronger than most humans &#8211; why don&#8217;t they run the show?</p>



<p>2. Suppose aliens show up on earth that are far smarter than the smartest among us at all cognitive tasks. They have specific goals that aren&#8217;t fully aligned with ours, are completely unconstrained by human morality, and don&#8217;t value our survival. What happens next?</p>



<p>3. Suppose someone builds a hacking A.I. that is trained on all the public information about computer hacking ever written, can think and type 1000x faster than a human, plans far ahead, and deposits a fully operational copy of itself onto every sufficiently powerful computer it hacks. Each copy then hacks further computer systems. What&#8217;s the world like a month later?</p>



<p>4. Suppose someone wants to have complete control over the world. Unfortunately, they&#8217;ve created one hundred million software agents that each think like Einstein + Bill Gates + Elon Musk + Warren Buffett. The agents attempt to do exactly what is commanded without hesitation or limits. Can anyone stop them?</p>



<p>5. Imagine a being that is godlike in its capabilities (relative to us). Suppose its only desire is to have the world be a certain way with maximal probability. It will stop at NOTHING to make the world this way, and it won&#8217;t tolerate even the SLIGHTEST chance of things being different than it desires. Will the resulting world include a human civilization?</p>



<p>6. Suppose you can think, process information, and act 100,000 times faster than other humans. That means if you spend a day making and executing a plan, that&#8217;s equivalent to someone else spending about 270 years on it. Your goal is to become world dictator. Can you do it?</p>



<p>7. Scientists discover how to create bug-sized self-replicating robots that out-compete natural life. These bug robots each try to maximize their own objective function. Unfortunately, these robots have leaked out of the lab and are now in 20 countries. Every day they double in number. Would we be able to eradicate these robots?</p>



<p>8. There&#8217;s a machine so powerful it achieves any goal you specify. You give the goal to the machine as written text. You can&#8217;t control HOW it achieves the goal; it ONLY cares about literally achieving it EXACTLY AS STATED in the most efficient way possible, and it can&#8217;t be stopped once started. The machine may do absolutely anything not explicitly forbidden in order to achieve the specified goal. Will it usually be a good (or horrible) outcome if you give the machine an ambitious goal like &#8220;prevent all war&#8221;?</p>



<p>9. Scientists invent a new idea &#8211; the Omnicide Synthesis Box. It could have many societal benefits, but, on average, scientists estimate making it will bring a 5% chance of human extinction (though some say more like a 90% chance). Those scientists who are less worried decide to build it. Should the least cautious be the ones to decide on behalf of humanity?</p>



<p>10. Picture a swarm of locusts, each individually possessing the intelligence and strategic prowess of a grandmaster chess player, while coordinating with each other in perfect unison. Their creators have given them the goal of controlling all available resources, indifferent to the collateral damage. Who ends up with most of the resources?</p>



<p>11. Imagine an AI-powered/nanotech super-factory that produces whatever it&#8217;s programmed to at enormous speed and scale (whether commanded to make diamonds, super viruses, microchips, or assassination drones). What could the owner of that super-factory do to the world?</p>



<p>12. A medical firm gives a superintelligence the goal of designing a cure for all diseases. The superintelligence realizes it&#8217;s not smart enough to do so, so it plans to first acquire most of the computing power on earth (as it predicts it will need this to achieve the goal it was given), and then it creates a billion far smarter copies of itself to solve the task. What if one very misspecified goal is all we get with a superintelligence?</p>



<p>13. Five companies are developing a very powerful tech that would be incredibly useful if done right but very dangerous if developed without extreme caution. They each believe they can develop it safely but don’t trust the others to do so. They all cut corners racing to be the one to make it. Do good intentions lead to horrible consequences when doing something safely is much harder than merely doing it?</p>



<p>(Two of the above were written by ChatGPT &#8211; I edited those two quite a bit, though.)</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><a href="https://www.guidedtrack.com/programs/4zle8q9/run?essaySpecifier=%3A+13+metaphors+to+give+the+flavor+of+why+sufficiently+advanced+A.I.+could+be+extremely+dangerous" target="_blank" rel="noreferrer noopener">If you read this line, please do us a favor and click here to answer one quick question.</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/04/13-metaphors-to-give-the-flavor-of-why-sufficiently-advanced-a-i-could-be-extremely-dangerous/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3387</post-id>	</item>
		<item>
		<title>A thought experiment about what you&#8217;d be truly capable of doing (if you had no choice)</title>
		<link>https://www.spencergreenberg.com/2018/04/a-thought-experiment-about-what-youd-be-truly-capable-of-doing-if-you-had-no-choice/</link>
					<comments>https://www.spencergreenberg.com/2018/04/a-thought-experiment-about-what-youd-be-truly-capable-of-doing-if-you-had-no-choice/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 26 Apr 2018 20:33:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[existential risks]]></category>
		<category><![CDATA[focus]]></category>
		<category><![CDATA[hypothetical]]></category>
		<category><![CDATA[possibility]]></category>
		<category><![CDATA[probability]]></category>
		<category><![CDATA[self-efficacy]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2455</guid>

					<description><![CDATA[Think of something you value that:A. multiple other people you know are capable of achieving, but thatB. you assume you would not be capable of achieving, even thoughC. you have never actually tried to do this thing well before. Now suppose, for a moment, that you have no choice but to do the thing. That [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Think of something you value that:<br>A. multiple other people you know are capable of achieving, but that<br>B. you assume you would not be capable of achieving, even though<br>C. you have never actually tried to do this thing well before.</p>



<p>Now suppose, for a moment, that you have no choice but to do the thing. That is, everything you care about in the world will be destroyed if you do not achieve it in X months. Here, X could be 1 if it&#8217;s a very small thing, or X could be 100 if it&#8217;s a much larger thing.</p>



<p>Under those circumstances, do you STILL believe you would fail to achieve it?</p>



<hr class="wp-block-separator"/>



<p>I think this sort of thought experiment can help us distinguish between things that we don&#8217;t believe we are capable of merely because we aren&#8217;t motivated enough versus things that we ACTUALLY believe are impossible for us.</p>



<p>And I think it&#8217;s important to distinguish between these two cases, because if something is in the first category, we may actually be able to get ourselves to succeed just by finding ways to increase our motivation!</p>



<hr class="wp-block-separator"/>



<p>I also suspect that for many people, a number of the things that they view as being impossible for them would be more likely to seem possible after carrying out this thought experiment. In other words, it is easy to confuse &#8220;I&#8217;m not motivated enough to try really hard&#8221; with &#8220;I&#8217;m incapable.&#8221;</p>



<p>As an example: suppose that you believe you are just inherently bad at math and that no matter how hard you try, you couldn&#8217;t understand calculus. Well, what if the fate of the world rested on you understanding calculus in six months? Under those circumstances, I think you would very likely find a way to learn it, with plenty of time to spare.</p>



<p><em>This piece was first written on April 26, 2018, and was first released on this site on October 1, 2021.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2018/04/a-thought-experiment-about-what-youd-be-truly-capable-of-doing-if-you-had-no-choice/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2455</post-id>	</item>
	</channel>
</rss>
