<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>existential risk &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/existential-risk/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Thu, 05 Sep 2024 16:09:53 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>existential risk &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>Understanding the Landscape of Viewpoints on the Risks and Benefits of AI</title>
		<link>https://www.spencergreenberg.com/2024/07/understanding-the-landscape-of-viewpoints-on-the-risks-and-benefits-of-ai/</link>
					<comments>https://www.spencergreenberg.com/2024/07/understanding-the-landscape-of-viewpoints-on-the-risks-and-benefits-of-ai/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sun, 28 Jul 2024 00:16:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[existential risk]]></category>
		<category><![CDATA[good governance]]></category>
		<category><![CDATA[hype]]></category>
		<category><![CDATA[long-term risk]]></category>
		<category><![CDATA[longtermism]]></category>
		<category><![CDATA[near-term risk]]></category>
		<category><![CDATA[neartermism]]></category>
		<category><![CDATA[new technologies]]></category>
		<category><![CDATA[predictions]]></category>
		<category><![CDATA[societal risks]]></category>
		<category><![CDATA[stewardship]]></category>
		<category><![CDATA[superintelligence]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=4057</guid>

					<description><![CDATA[I&#8217;ve seen seven main viewpoints on AI and the future from those who spend a lot of time thinking about it: (1) Superintelligence Doomers &#8211; they believe we are likely to build AI that&#8217;s superintelligent (i.e., that surpasses human intelligence in all respects) and that once we do, it will kill or enslave humanity. See: [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p>I&#8217;ve seen seven main viewpoints on AI and the future from those who spend a lot of time thinking about it:</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(1) Superintelligence Doomers &#8211; they believe we are likely to build AI that&#8217;s superintelligent (i.e., that surpasses human intelligence in all respects) and that once we do, it will kill or enslave humanity.</p>



<p>See: Eliezer Yudkowsky</p>



<p>&#8220;The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(2) AI Corrosionists &#8211; they believe AI, while it could be beneficial, has a very substantial likelihood of making things far worse. Unlike doomers, they see most of the risk coming from gradual processes rather than a sudden AI takeover or annihilation: for instance, humans might lose control over the future slowly (by ceding more and more control to AIs that are optimizing for things different from human values), or AI might destabilize aspects of society in ways that make other large risks (like nuclear war) more likely.</p>



<p>See: Paul Christiano</p>



<p>&#8220;If you imagine a society in which almost all of the work is being done by these inhuman systems who want something that&#8217;s significantly at cross purposes, it&#8217;s possible to have social arrangements in which their desires are thwarted, but you&#8217;ve kind of set up a really bad position. And I think the best guess would be that what happens will not be what the humans want to happen, but what the systems who greatly outnumber us want to happen.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(3) Near Risk Doomers &#8211; they believe AI will have very bad effects (e.g., increasing unfairness, authoritarianism, climate change, unemployment, or concentration of power) but that we&#8217;re not going to build superintelligence anytime soon.</p>



<p>See: Cathy O&#8217;Neil</p>



<p>&#8220;The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(4) Current Paradigm Doubters &#8211; they believe the current AI paradigm is overhyped and isn&#8217;t going to change things all that much (some of them see the current paradigm as net negative; others see it as positive but merely one useful, overhyped technology among many). Some, like Marcus, hope that future paradigms might be more trustworthy and beneficial.</p>



<p>See: Gary Marcus</p>



<p>&#8220;What a strange world… All the major AI companies spending billions producing almost exactly the same results using almost exactly the same data using almost exactly the same technology, all flawed in almost exactly the same ways. Historians gonna be scratching their heads&#8230;The only way we will move significantly forward is to develop new architectures—likely neurosymbolic—that are less tied to the idiosyncrasies of specific training set.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(5) AI Stewardists &#8211; they believe AI advancement is important to humanity&#8217;s future, but it&#8217;s not necessarily going to be good or bad: it will be what we make of it, so it should be developed thoughtfully based on what we want to achieve.</p>



<p>See: Kevin Kelly</p>



<p>&#8220;AI could just as well stand for &#8216;alien intelligence.&#8217; We have no certainty we&#8217;ll contact extraterrestrial beings in the next 200 years, but we have almost 100 percent certainty that we&#8217;ll manufacture an alien intelligence by then. When we face these synthetic aliens, we&#8217;ll encounter the same benefits and challenges that we expect from contact with ET. They will force us to reevaluate our roles, our beliefs, our goals, our identity. What are humans for?&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(6) Near Benefit Boosters &#8211; they believe AI will be very useful, impactful, and important as a technology, and they also believe superintelligence is a ridiculous thing to worry about.</p>



<p>See: Yann LeCun</p>



<p>&#8220;AI is intrinsically good, because the effect of AI is to make people smarter&#8230;.AI is an amplifier of human intelligence and when people are smarter, better things happen: people are more productive, happier and the economy thrives.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(7) Superintelligence Boosters &#8211; they believe we are likely to build superintelligent AI and that it will usher in an incredible and positive new era.</p>



<p>See: Ray Kurzweil</p>



<p>&#8220;By the time of the Singularity, there won&#8217;t be a distinction between humans and technology. This is not because humans will have become what we think of as machines today, but rather machines will have progressed to be like humans and beyond. Technology will be the metaphorical opposable thumb that enables our next step in evolution.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>How do we best organize the positions on risks from AI?</p>






<p>I think the two most informative axes to think about are</p>



<p>(1) How *substantial* the near-term impacts of AI are expected to be </p>



<p>and</p>



<p>(2) Whether the effects of AI are likely to be *good* or *bad*</p>



<p>The image shown is my attempt to place these different thinkers who talk about AI on this two-axis system. Note, though, that this reflects only my best guesses: some of these thinkers have very nuanced views, and I&#8217;m not certain that I&#8217;m placing each of them in the right place.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on July 27, 2024, and first appeared on my website on August 4, 2024.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2024/07/understanding-the-landscape-of-viewpoints-on-the-risks-and-benefits-of-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4057</post-id>	</item>
		<item>
		<title>The benefits and soul-crushing downsides of A.I. progress</title>
		<link>https://www.spencergreenberg.com/2024/04/the-benefits-and-soul-crushing-downsides-of-a-i-progress/</link>
					<comments>https://www.spencergreenberg.com/2024/04/the-benefits-and-soul-crushing-downsides-of-a-i-progress/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 11 Apr 2024 16:07:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI safety]]></category>
		<category><![CDATA[art]]></category>
		<category><![CDATA[creativity]]></category>
		<category><![CDATA[existential risk]]></category>
		<category><![CDATA[human labor]]></category>
		<category><![CDATA[humanity]]></category>
		<category><![CDATA[job loss]]></category>
		<category><![CDATA[redundancy]]></category>
		<category><![CDATA[transformative]]></category>
		<category><![CDATA[transformative AI (TAI)]]></category>
		<category><![CDATA[uncertainty]]></category>
		<category><![CDATA[uniqueness]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3898</guid>

					<description><![CDATA[There are many benefits to A.I., such as being able to generate beautiful art, inspiring music, captivating writing, and mesmerizing videos. It democratizes creation (people can now create what’s in their minds), lowers costs (replacing human labor with algorithms), and enables hyper-personalization (works can be made just for you). The benefits are big and important. [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>There are many benefits to A.I., such as being able to generate beautiful art, inspiring music, captivating writing, and mesmerizing videos. It democratizes creation (people can now create what’s in their minds), lowers costs (replacing human labor with algorithms), and enables hyper-personalization (works can be made just for you). The benefits are big and important.</p>



<p>But there is also something soul-crushing about it. People spend decades learning a craft and then see an A.I. make something in a few seconds or minutes that other people see as comparable to their own work.</p>



<p>And it will be widely misused: to create ripoffs of copyrighted works without giving credit, to generate billions of pages of low-quality content in order to game search engines, to make hyper-personalized spam and commit phishing and fraud, to mislead the public with misinformation and customized persuasion and outrage bait, and to create even more intense social media and video addiction.</p>



<p>Then, there are the longer-term consequences of this kind of technology, which are hard to predict but may be truly catastrophic (even according to some of the leaders who run these very companies).</p>



<p>Humanity sometimes creeps along cautiously &#8211; as with the development of new medicines, where we arguably allow millions to die out of fear of releasing something unsafe (which is, to a meaningful extent, a legitimate fear &#8211; getting the tradeoffs right is hard), or with nuclear power, where regulation impedes development due to (mostly) ungrounded fears.</p>



<p>Yet with A.I., humanity plunges forward into the abyss at full speed, with no brakes, with the train conductors announcing to the passengers that we can&#8217;t predict where the trains are going, that this all may end in disaster, perhaps cataclysm, as they shovel exponentially more coal into their white-hot furnaces.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on April 11, 2024, and first appeared on my website on April 16, 2024.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2024/04/the-benefits-and-soul-crushing-downsides-of-a-i-progress/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3898</post-id>	</item>
		<item>
		<title>Valuism and X: how Valuism sheds light on other domains &#8211; Part 5 of the sequence on Valuism</title>
		<link>https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/</link>
					<comments>https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/#comments</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Wed, 19 Jul 2023 13:09:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[anxiety]]></category>
		<category><![CDATA[depression]]></category>
		<category><![CDATA[economics]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[existential risk]]></category>
		<category><![CDATA[GDP]]></category>
		<category><![CDATA[happiness]]></category>
		<category><![CDATA[human development index]]></category>
		<category><![CDATA[intrinsic values]]></category>
		<category><![CDATA[life philosophy]]></category>
		<category><![CDATA[mental health]]></category>
		<category><![CDATA[philosophies]]></category>
		<category><![CDATA[suffering]]></category>
		<category><![CDATA[truth-seeking]]></category>
		<category><![CDATA[utopias]]></category>
		<category><![CDATA[valuism]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3084</guid>

					<description><![CDATA[By Spencer Greenberg and Amber Dawn Ace&#160; This is the fifth and final part in my sequence of essays about my life philosophy, Valuism &#8211; here are the first, second, third, and fourth parts. In previous posts, I&#8217;ve described Valuism &#8211; my life philosophy. I&#8217;ve also discussed how it could serve as a life philosophy [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>By Spencer Greenberg and Amber Dawn Ace&nbsp;</em></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="750" height="375" data-attachment-id="3167" data-permalink="https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/dall%c2%b7e-2023-02-05-15-50-14-a-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?fit=2048%2C1024&amp;ssl=1" data-orig-size="2048,1024" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="DALL·E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?fit=750%2C375&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?resize=750%2C375&#038;ssl=1" alt="" class="wp-image-3167" 
srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?resize=1024%2C512&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?resize=300%2C150&amp;ssl=1 300w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?resize=768%2C384&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?resize=1536%2C768&amp;ssl=1 1536w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?w=2048&amp;ssl=1 2048w" sizes="(max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption"><em>Image created using the A.I. DALL•E 2</em></figcaption></figure>



<p style="font-size:15px"><em>This is the fifth and final part in my sequence of essays about my life philosophy, Valuism &#8211; here are the <a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">first</a>, <a href="https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/">second</a>, <a href="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/">third</a>, and <a href="https://www.spencergreenberg.com/2023/02/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/">fourth</a> parts.</em></p>



<p>In previous posts, I&#8217;ve described Valuism &#8211; my life philosophy. I&#8217;ve also discussed how it could serve as a life philosophy for others. In this post, I discuss how a Valuist lens can help shed light on various fields and areas of inquiry.</p>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and Effective Altruism</h3>



<p>Effective Altruism is a community and social movement <a href="https://www.centreforeffectivealtruism.org/ceas-guiding-principles">about</a> &#8220;using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.&#8221;</p>



<p>Effective Altruists often operate from a hedonic utilitarian framework (trying to increase happiness and reduce suffering for all conscious beings). But Effective Altruism can alternatively be approached from a Valuist framework. </p>



<p>You can think of Valuist Effective Altruism as addressing the question of how to effectively increase the production of your altruistic intrinsic values within the time, effort, and focus you give to those values (as opposed to your other intrinsic values). If you&#8217;re an Effective Altruist, chances are two of your strongest intrinsic values are related to reducing suffering (or increasing happiness) and seeking truth.</p>



<p>For people with certain intrinsic values, Effective Altruism is a natural consequence of Valuism. To see this, consider a Valuist whose two strongest values are the happiness (and/or lack of suffering) of conscious beings and truth-seeking. Such a Valuist would naturally want to increase global happiness (and/or reduce global suffering) in highly effective ways while seeing the world impartially (e.g., by using reason and evidence to guide their understanding). This is extremely aligned with (and similar to) the mission of Effective Altruism.</p>



<p>For more on the relationship between Effective Altruism and Valuism, see <a href="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/">this post</a>.</p>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and existential risk</h3>



<p>Potential existential risks (such as threats from nuclear war, bioterrorism, and advanced A.I.) are a major area of focus for many Effective Altruists. According to most people&#8217;s intrinsic values, existential risk is also incredibly bad. Existential risks threaten many of the things that humans value (happiness, pleasure, learning, achievement, freedom, longevity, legacy, virtue, and so on). So for most people&#8217;s intrinsic values, Valuism is compatible with caring about existential risk reduction (depending on one&#8217;s estimates of the relevant probabilities).</p>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and utopias</h3>



<p>Utopias <a href="https://www.spencergreenberg.com/2017/11/16-potentially-dystopic-utopias/">are hard to construct</a>. Sure, we pretty much all want a world without poverty and disease, but it&#8217;s hard to agree on the specific details beyond avoiding bad things. If we go all-in on one intrinsic value, we end up with a world that seems like a dystopia to many. For instance, a utopia, according to hedonic utilitarianism, might look like attaching each of our brains to a bliss-generating machine while we do nothing for the rest of our lives, or it might look like filling the universe with tiny algorithms that experience maximal bliss per unit of energy. Of course, these are horrifying outcomes for many people.</p>



<p>If we maximize utopia according to one or a small set of intrinsic values, it will very likely seem like a dystopia according to someone with other intrinsic values. To construct a utopia that is not a dystopia to many, we should <strong>make sure that it includes high levels of a wide range of intrinsic values</strong>, keeping these in balance rather than going all-in on a small set of values.</p>






<p>Put another way, if we preserve a wide range of different intrinsic values in our construction of potential utopias, we protect ourselves against various failure modes.&nbsp;For instance:</p>



<ul class="wp-block-list">
<li>The intrinsic value of avoidance of suffering protects us from a world where there is a lot of pain and suffering.</li>



<li>The intrinsic value of freedom helps protect us from a failure mode of a world of forced <a href="https://en.wikipedia.org/wiki/Wirehead_(science_fiction)">wireheading</a>.&nbsp;</li>



<li>An intrinsic value of truth helps protect us from a failure mode where we&#8217;re all unknowingly in the Matrix (e.g., being used for a purpose unknown to us) or living under an authoritarian world government that tries to keep the populace happy through delusion.</li>
</ul>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and worldviews</h3>



<p>Worldviews usually come with a set of shared intrinsic values. These are the strong intrinsic values that most (though not all) people with that worldview have in common. Of course, in most cases, in addition to these shared intrinsic values, each individual will also have other intrinsic values that are not shared by most people with their worldview. You can learn more about the interface between worldviews and intrinsic values in <a href="https://www.clearerthinking.org/post/understand-how-other-people-think-a-theory-of-worldviews">our essay on worldviews here</a>.</p>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and mental health&nbsp;</h3>



<p>Mental health may have interesting connections to intrinsic values. For instance, here&#8217;s <a href="https://www.clearerthinking.org/post/understanding-the-two-most-common-mental-health-problems-in-the-world">an oversimplified model of anxiety and depression</a> that I find usefully predictive (I developed this in collaboration with my colleague Amanda Metskas):</p>



<p><strong>Anxiety</strong> occurs when you think there is a chance that something you intrinsically value may be lost. Anxiety tends to be worse when you perceive the chance of this happening as higher, when you perceive the intrinsic values as more important, or when the potential loss is nearer in time. </p>






<p><strong>Depression</strong> occurs when you&#8217;re convinced you can&#8217;t create sufficient intrinsic value in your future. This could be because you think the things you value most are lost forever, because you see yourself as useless at achieving what you value, or for other reasons.</p>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and animals</h3>



<p>What do animals care about? While some animals (e.g., some insects) may not be conscious (i.e., there may be nothing it is like to be them), and therefore it may not matter what they care about, for conscious animals it may be important to understand what they intrinsically value so that we know how to treat them ethically.</p>



<p>An intrinsic value perspective on animal ethics is that we should not deprive animals of the things they intrinsically value (and we should help them get the things they intrinsically value, at least when they are easy to provide). So, for instance, we can ask how much a chicken that lives almost its whole life in a small cage (as many chickens raised for food in the U.S. do) is able to have its intrinsic values met. The answer is probably very little.</p>



<p>But what are the sorts of things that animals may intrinsically value? I suspect there are a wide variety of animal intrinsic values and that they depend on species, but here are a few that may be especially common in mammals:</p>



<ul class="wp-block-list">
<li>Pleasure</li>



<li>Not suffering</li>



<li>Not experiencing large amounts of fear, stress, and anxiety</li>



<li>Surviving</li>



<li>Agency (e.g., the ability to choose)</li>



<li>Bonding with other animals</li>



<li>Protection of their offspring</li>
</ul>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and economics</h3>



<p>Economics often operates under the assumption that each person has a &#8220;utility function&#8221;: i.e., a function that maps states of the world into how good or bad the person thinks those states are and that describes the choices people make. According to this frame, if a person chooses A over B, that means that their utility function assigns a higher value to A than B. For example, if I buy a Mac rather than a PC, and they are the same price, this must mean that I predict the Mac gives me more utility (according to my utility function). </p>



<p>Valuism, on the other hand, says that when A is more intrinsically valuable to us than B (and equivalent along other dimensions, such as price), we will often choose A over B because A produces more of what we intrinsically value. Sometimes, however, we choose B instead &#8211; because we confuse instrumental value with intrinsic value, because we have a habit of doing B, because we feel social pressure to do B, and so on.</p>



<p>In other words, <strong>choosing something is not the same as intrinsically valuing something</strong>, <strong>and ideally, we want to construct a society where people get more of what they intrinsically value</strong>, rather than merely giving people more of what they would <em>choose</em>.</p>



<p>A classic example where intrinsic value and choice come apart is addictive products like cigarettes or video games with upsells: people sometimes choose to pay for them and use them way past the point of benefit, according to their own intrinsic values. </p>



<p>A similar issue comes up when people slip into treating every dollar of GDP or each unit reduction of &#8220;<a href="https://en.wikipedia.org/wiki/Deadweight_loss">deadweight loss</a>&#8221; as though they are equally valuable. Imagine that an influencer gets all the hottest celebrities to start wearing the hair of a rare species of sloth and that buzz convinces millions of people that it’s really cool, so consumers spend billions of dollars buying these sloth hair pieces. Unfortunately, the sloth hair is really aesthetically ugly, uncomfortable, and expensive, and making clothes out of it requires torturing the sloths. This will probably increase GDP, yet (on net) intrinsic value will almost certainly have been destroyed. There is no good reason to care about GDP for its own sake, but intrinsic values are precisely the things we care about for their own sake. While increasing GDP may often be aligned with producing more of what people intrinsically value (both now and potentially in the future), in cases when GDP and the long-term production of intrinsic values are out of alignment, I would argue that GDP is no longer a good measure of societal benefit.</p>



<p>Going back to the sloth hair example, having a free market for this sloth hair would, according to simple economic theory, reduce &#8220;deadweight loss&#8221; (relative to having restrictions on its sale). And yet, the production of this sloth hair will likely be net destructive to what people intrinsically value. We can imagine a multi-faceted accounting of how society is doing that takes productivity and wealth into account but goes beyond them to consider the extent to which people are creating their intrinsic values; productivity and wealth would be viewed as being in the service of intrinsic value production.</p>



<p>As a complement to GDP, we can think about measuring how well the people of a society get the things that they intrinsically value. For instance, attempting to measure:</p>



<ul class="wp-block-list">
<li>How happy are they?&nbsp;</li>



<li>To what extent are they accomplishing their goals?&nbsp;</li>



<li>How free are they?</li>



<li>How meaningful are their relationships?&nbsp;</li>



<li>How much are they suffering?</li>
</ul>



<p>This is related to the <a href="https://hdr.undp.org/data-center/human-development-index">Human Development Index</a>, though that index includes items that are not intrinsic values, and it doesn&#8217;t cover all intrinsic values.</p>



<p>If we had such an accounting, different people would naturally rank societies differently (in terms of how good they are overall) because they value these intrinsic values to different extents.</p>
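<p>To make the idea concrete, here is a minimal sketch (in Python) of how such an accounting could work. All of the scores and weights below are invented for illustration; the point is only that the same measured scores, aggregated under different personal weightings of the intrinsic values, can rank societies differently.</p>

```python
# Hypothetical sketch: two societies scored 0-10 on the intrinsic-value
# measures listed above. All numbers are invented for illustration.

MEASURES = ["happiness", "goal_achievement", "freedom",
            "relationship_meaning", "low_suffering"]

societies = {
    "A": [8, 6, 5, 7, 6],
    "B": [6, 8, 8, 5, 7],
}

def value_index(scores, weights):
    """Weighted average of intrinsic-value scores (weights sum to 1)."""
    return sum(s * w for s, w in zip(scores, weights))

# Two people who weight the same intrinsic values differently:
happiness_first = [0.5, 0.1, 0.1, 0.2, 0.1]
freedom_first   = [0.1, 0.2, 0.5, 0.1, 0.1]

for label, w in [("happiness-first", happiness_first),
                 ("freedom-first", freedom_first)]:
    ranking = sorted(societies,
                     key=lambda s: value_index(societies[s], w),
                     reverse=True)
    print(label, ranking)  # the two weightings rank A and B oppositely
```

<p>Under the happiness-first weighting, society A scores higher; under the freedom-first weighting, society B does, which is exactly the point made above about rankings depending on how much each person values each intrinsic value.</p>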



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<p>As you can see in this post, a Valuist perspective may have something to say about many other topic areas, giving us a different way to look at topics like Effective Altruism, utopia, animal ethics, worldviews, mental health, and economics.</p>






<p><em>You&#8217;ve just finished the fifth and final part in my sequence of essays on my life philosophy, Valuism &#8211;</em> <em>here are the <a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">first</a>, <a href="https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/">second</a>, <a href="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/">third</a>, and <a href="https://www.spencergreenberg.com/2023/02/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/">fourth</a> parts. </em></p>



]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3084</post-id>	</item>
		<item>
		<title>13 metaphors to give the flavor of why sufficiently advanced A.I. could be extremely dangerous</title>
		<link>https://www.spencergreenberg.com/2023/04/13-metaphors-to-give-the-flavor-of-why-sufficiently-advanced-a-i-could-be-extremely-dangerous/</link>
					<comments>https://www.spencergreenberg.com/2023/04/13-metaphors-to-give-the-flavor-of-why-sufficiently-advanced-a-i-could-be-extremely-dangerous/#respond</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Sun, 02 Apr 2023 15:06:12 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[AGI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI safety]]></category>
		<category><![CDATA[artificial general intelligence]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[civilization]]></category>
		<category><![CDATA[coordination]]></category>
		<category><![CDATA[existential risk]]></category>
		<category><![CDATA[existential risks]]></category>
		<category><![CDATA[future]]></category>
		<category><![CDATA[futurism]]></category>
		<category><![CDATA[intelligence]]></category>
		<category><![CDATA[large language models]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[power]]></category>
		<category><![CDATA[safety]]></category>
		<category><![CDATA[x-risks]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3387</guid>

					<description><![CDATA[1. Suppose a new species evolves on earth with the same intellectual, planning, and coordination abilities relative to us that we have relative to chimps. Chimps are faster and stronger than most humans &#8211; why don&#8217;t they run the show? 2. Suppose aliens show up on earth that are far smarter than the smartest among [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>1. Suppose a new species evolves on earth with the same intellectual, planning, and coordination abilities relative to us that we have relative to chimps. Chimps are faster and stronger than most humans &#8211; why don&#8217;t they run the show?</p>



<p>2. Suppose aliens show up on earth that are far smarter than the smartest among us at all cognitive tasks. They have specific goals that aren&#8217;t fully aligned with ours, are completely unconstrained by human morality, and don&#8217;t value our survival. What happens next?</p>



<p>3. Suppose someone builds a hacking A.I. that is trained on all the public information about computer hacking ever written, can think and type 1000x faster than a human, plans far ahead, and deposits a fully operational copy of itself onto every sufficiently powerful computer it hacks. Each copy then hacks further computer systems. What&#8217;s the world like a month later?</p>



<p>4. Suppose someone wants to have complete control over the world. Unfortunately, they&#8217;ve created one hundred million software agents that each think like Einstein + Bill Gates + Elon Musk + Warren Buffett. The agents attempt to do exactly what is commanded without hesitation or limits. Can anyone stop them?</p>



<p>5. Imagine a being that is godlike in its capabilities (relative to us). Suppose its only desire is to have the world be a certain way with maximal probability. It will stop at NOTHING to make the world this way, and it won&#8217;t tolerate even the SLIGHTEST chance of things being different than it desires. Will the resulting world include a human civilization?</p>



<p>6. Suppose you can think, process information and act 100,000 times faster than other humans. That means if you spend a day making and executing a plan, that&#8217;s equivalent to someone else spending roughly 274 years on it. Your goal is to become world dictator. Can you do it?</p>



<p>7. Scientists discover how to create bug-sized self-replicating robots that out-compete natural life. These bug robots each try to maximize their own objective function. Unfortunately, these robots have leaked out of the lab and are now in 20 countries. Every day they double in number. Would we be able to eradicate these robots?</p>



<p>8. There&#8217;s a machine so powerful it achieves any goal you specify. You give the goal to the machine as written text. You can&#8217;t control HOW it achieves the goal; it ONLY cares about literally achieving it EXACTLY AS STATED in the most efficient way possible, and it can&#8217;t be stopped once started. The machine may do absolutely anything not explicitly forbidden in order to achieve the specified goal. Will it usually be a good (or horrible) outcome if you give the machine an ambitious goal like &#8220;prevent all war&#8221;?</p>



<p>9. Scientists invent a new idea &#8211; the Omnicide Synthesis Box. It could have many societal benefits, but, on average, scientists estimate making it will bring a 5% chance of human extinction (though some say more like a 90% chance). Those scientists who are less worried decide to build it. Should the least cautious be the ones to decide on behalf of humanity?</p>



<p>10. Picture a swarm of locusts, each individually possessing the intelligence and strategic prowess of a grandmaster chess player, while coordinating with each other in perfect unison. Their creators have given them the goal of controlling all available resources, indifferent to the collateral damage. Who ends up with most of the resources?</p>



<p>11. Imagine an AI-powered/nanotech super-factory that produces whatever it&#8217;s programmed to at enormous speed and scale (whether commanded to make diamonds, super viruses, microchips, or assassination drones). What could the owner of that super factory do to the world?</p>



<p>12. A medical firm gives a superintelligence the goal of designing a cure for all diseases. The superintelligence realizes it&#8217;s not smart enough to do so, so it plans to first acquire most of the computing power on earth (as it predicts it will need this to achieve the goal it was given), and then it creates a billion far smarter copies of itself to solve the task. What if one very misspecified goal is all we get with a superintelligence?</p>



<p>13. Five companies are developing a very powerful tech that would be incredibly useful if done right but very dangerous if developed without extreme caution. They each believe they can develop it safely but don’t trust the others to do so. They all cut corners racing to be the one to make it. Do good intentions lead to horrible consequences when doing something safely is much harder than merely doing it?</p>



<p>(Two of the above were written by ChatGPT &#8211; I edited those two quite a bit, though.)</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><a href="https://www.guidedtrack.com/programs/4zle8q9/run?essaySpecifier=%3A+13+metaphors+to+give+the+flavor+of+why+sufficiently+advanced+A.I.+could+be+extremely+dangerous" target="_blank" rel="noreferrer noopener">If you read this line, please do us a favor and click here to answer one quick question.</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/04/13-metaphors-to-give-the-flavor-of-why-sufficiently-advanced-a-i-could-be-extremely-dangerous/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3387</post-id>	</item>
	</channel>
</rss>
