<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>large language models &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/large-language-models/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Mon, 10 Apr 2023 00:53:31 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>large language models &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>13 metaphors to give the flavor of why sufficiently advanced A.I. could be extremely dangerous</title>
		<link>https://www.spencergreenberg.com/2023/04/13-metaphors-to-give-the-flavor-of-why-sufficiently-advanced-a-i-could-be-extremely-dangerous/</link>
					<comments>https://www.spencergreenberg.com/2023/04/13-metaphors-to-give-the-flavor-of-why-sufficiently-advanced-a-i-could-be-extremely-dangerous/#respond</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Sun, 02 Apr 2023 15:06:12 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[AGI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI safety]]></category>
		<category><![CDATA[artificial general intelligence]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[civilization]]></category>
		<category><![CDATA[coordination]]></category>
		<category><![CDATA[existential risk]]></category>
		<category><![CDATA[existential risks]]></category>
		<category><![CDATA[future]]></category>
		<category><![CDATA[futurism]]></category>
		<category><![CDATA[intelligence]]></category>
		<category><![CDATA[large language models]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[power]]></category>
		<category><![CDATA[safety]]></category>
		<category><![CDATA[x-risks]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3387</guid>

					<description><![CDATA[1. Suppose a new species evolves on earth with the same intellectual, planning, and coordination abilities relative to us that we have relative to chimps. Chimps are faster and stronger than most humans &#8211; why don&#8217;t they run the show? 2. Suppose aliens show up on earth that are far smarter than the smartest among [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>1. Suppose a new species evolves on earth with the same intellectual, planning, and coordination abilities relative to us that we have relative to chimps. Chimps are faster and stronger than most humans &#8211; why don&#8217;t they run the show?</p>



<p>2. Suppose aliens show up on earth that are far smarter than the smartest among us at all cognitive tasks. They have specific goals that aren&#8217;t fully aligned with ours, are completely unconstrained by human morality, and don&#8217;t value our survival. What happens next?</p>



<p>3. Suppose someone builds a hacking A.I. that is trained on all the public information about computer hacking ever written, can think and type 1000x faster than a human, plans far ahead, and deposits a fully operational copy of itself onto every sufficiently powerful computer it hacks. Each copy then hacks further computer systems. What&#8217;s the world like a month later?</p>



<p>4. Suppose someone wants to have complete control over the world. Unfortunately, they&#8217;ve created one hundred million software agents that each think like Einstein + Bill Gates + Elon Musk + Warren Buffett. The agents attempt to do exactly what is commanded without hesitation or limits. Can anyone stop them?</p>



<p>5. Imagine a being that is godlike in its capabilities (relative to us). Suppose its only desire is to have the world be a certain way with maximal probability. It will stop at NOTHING to make the world this way, and it won&#8217;t tolerate even the SLIGHTEST chance of things being different than it desires. Will the resulting world include a human civilization?</p>



<p>6. Suppose you can think, process information, and act 100,000 times faster than other humans. That means if you spend a day making and executing a plan, that&#8217;s equivalent to someone else spending about 274 years on it. Your goal is to become world dictator. Can you do it?</p>



<p>7. Scientists discover how to create bug-sized self-replicating robots that out-compete natural life. These bug robots each try to maximize their own objective function. Unfortunately, these robots have leaked out of the lab and are now in 20 countries. Every day they double in number. Would we be able to eradicate these robots?</p>



<p>8. There&#8217;s a machine so powerful it achieves any goal you specify. You give the goal to the machine as written text. You can&#8217;t control HOW it achieves the goal; it ONLY cares about literally achieving it EXACTLY AS STATED in the most efficient way possible, and it can&#8217;t be stopped once started. The machine may do absolutely anything not explicitly forbidden in order to achieve the specified goal. Will it usually be a good (or horrible) outcome if you give the machine an ambitious goal like &#8220;prevent all war&#8221;?</p>



<p>9. Scientists invent a new idea &#8211; the Omnicide Synthesis Box. It could have many societal benefits, but, on average, scientists estimate making it will bring a 5% chance of human extinction (though some say more like a 90% chance). Those scientists who are less worried decide to build it. Should the least cautious be the ones to decide on behalf of humanity?</p>



<p>10. Picture a swarm of locusts, each individually possessing the intelligence and strategic prowess of a grandmaster chess player, while coordinating with each other in perfect unison. Their creators have given them the goal of controlling all available resources, indifferent to the collateral damage. Who ends up with most of the resources?</p>



<p>11. Imagine an AI-powered/nanotech super-factory that produces whatever it&#8217;s programmed to at enormous speed and scale (whether commanded to make diamonds, super viruses, microchips, or assassination drones). What could the owner of that super factory do to the world?</p>



<p>12. A medical firm gives a superintelligence the goal of designing a cure for all diseases. The superintelligence realizes it&#8217;s not smart enough to do so, so it plans to first acquire most of the computing power on earth (as it predicts it will need this to achieve the goal it was given), and then it creates a billion far smarter copies of itself to solve the task. What if one very misspecified goal is all we get with a superintelligence?</p>



<p>13. Five companies are developing a very powerful tech that would be incredibly useful if done right but very dangerous if developed without extreme caution. They each believe they can develop it safely but don’t trust the others to do so. They all cut corners racing to be the one to make it. Do good intentions lead to horrible consequences when doing something safely is much harder than merely doing it?</p>



<p>(Two of the above were written by ChatGPT &#8211; I edited those two quite a bit, though.)</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><a href="https://www.guidedtrack.com/programs/4zle8q9/run?essaySpecifier=%3A+13+metaphors+to+give+the+flavor+of+why+sufficiently+advanced+A.I.+could+be+extremely+dangerous" target="_blank" rel="noreferrer noopener">If you read this line, please do us a favor and click here to answer one quick question.</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/04/13-metaphors-to-give-the-flavor-of-why-sufficiently-advanced-a-i-could-be-extremely-dangerous/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3387</post-id>	</item>
		<item>
		<title>Nine ways that text-generating AIs will probably change the world in the next ten years</title>
		<link>https://www.spencergreenberg.com/2022/12/nine-ways-that-text-generating-ais-will-probably-change-the-world-in-the-next-ten-years/</link>
					<comments>https://www.spencergreenberg.com/2022/12/nine-ways-that-text-generating-ais-will-probably-change-the-world-in-the-next-ten-years/#comments</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sun, 04 Dec 2022 01:32:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI-generated]]></category>
		<category><![CDATA[Bing chatbot]]></category>
		<category><![CDATA[chatbot]]></category>
		<category><![CDATA[cheating]]></category>
		<category><![CDATA[education]]></category>
		<category><![CDATA[email]]></category>
		<category><![CDATA[fraud]]></category>
		<category><![CDATA[GPT-3]]></category>
		<category><![CDATA[GPT-4]]></category>
		<category><![CDATA[hard takeoff]]></category>
		<category><![CDATA[large language models]]></category>
		<category><![CDATA[misinformation]]></category>
		<category><![CDATA[polarization]]></category>
		<category><![CDATA[predictive text]]></category>
		<category><![CDATA[propaganda]]></category>
		<category><![CDATA[soft takeoff]]></category>
		<category><![CDATA[spam]]></category>
		<category><![CDATA[text generation]]></category>
		<category><![CDATA[training data]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3365</guid>

					<description><![CDATA[Note (March 26, 2023): I first wrote this list on December 3, 2022. Since then, GPT-4 has come out, and several of the points in this list are closer to happening. For example, point #2 is partly true already, thanks to Bing Chat (which runs on GPT-4). Here are nine ways I think that AIs that generate text [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p></p>



<p><em>Note (March 26, 2023): I first wrote this list on December 3, 2022. Since then, </em><a rel="noreferrer noopener" href="https://openai.com/research/gpt-4" target="_blank"><em>GPT-4</em></a><em> has come out, and several of the points in this list are closer to happening. For example, point #2 is partly true already, thanks to Bing Chat (which </em><a rel="noreferrer noopener" href="https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4" target="_blank"><em>runs on GPT-4</em></a><em>).</em></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Here are nine ways I think that AIs that generate text (like GPT-3) have a &gt;50% chance of changing the world, for better and for worse, in the next ten years:</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>#1: The internet will get flooded with AI-written articles, and you often won&#8217;t know if you&#8217;re reading something written by a human.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>#2: Search engines will generate answers to your questions on the fly (from scratch) instead of just showing a list of websites to you and instead of using pre-extracted answers. Google will have to adapt, or it may finally lose its dominance.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>#3: Cheating on school essays will become rampant, as AIs will be able to get students good grades in many classes (at negligible cost), and it will be very hard to detect such cheating since each essay will be unique.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>#4: You will be able to train an AI on samples of your own writing, give it a new essay title and a bulleted list of points you want to make in the essay, and it will write a pretty high-quality essay covering all the points you listed, in a style that matches your own writing.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>#5: Spam messages (and text-based phishing attacks) will become unique. Rather than sending the same message to each person, spam will be unique for each recipient. And it may even have its style adapted to what is known about each recipient (e.g., demographics).</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>#6: Propaganda on social media will start to become automated. Rather than bad actors having hundreds of people on their payroll to promote a viewpoint, they&#8217;ll replace them with larger swarms of human-seeming bots that each act uniquely.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>#7: AIs will be fine-tuned on your own personal email corpus, and then (much of the time) you&#8217;ll be able to start with an automatically generated first draft of email replies rather than having to write emails from scratch or receiving mere sentence-level suggestions.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>#8: The text of ads will get automatically edited/rewritten by AI to be fine-tuned to different audiences to help maximize clicks.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>#9: AIs will start being used in education as digital private tutors to explain concepts to students, re-explain things and simplify explanations when a student is confused, point out mistakes made by students, etc.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>We&#8217;re entering a wild time when it comes to AI. Its effects on our lives will be felt more and more, to say the least.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><a href="https://www.guidedtrack.com/programs/4zle8q9/run?essaySpecifier=%3A+Nine+ways+that+text-generating+AIs+will+probably+change+the+world+in+the+next+ten+years" target="_blank" rel="noreferrer noopener">If you read this line, please do us a favor and click here to answer&nbsp;one&nbsp;quick question.</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2022/12/nine-ways-that-text-generating-ais-will-probably-change-the-world-in-the-next-ten-years/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3365</post-id>	</item>
	</channel>
</rss>
