<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>evolution &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/evolution/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Thu, 14 Aug 2025 04:46:33 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>evolution &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>You&#8217;re right about everything</title>
		<link>https://www.spencergreenberg.com/2025/07/youre-right-about-everything/</link>
					<comments>https://www.spencergreenberg.com/2025/07/youre-right-about-everything/#comments</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Tue, 01 Jul 2025 04:36:11 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[belief]]></category>
		<category><![CDATA[bias]]></category>
		<category><![CDATA[biases]]></category>
		<category><![CDATA[connection]]></category>
		<category><![CDATA[disagreement]]></category>
		<category><![CDATA[discomfort]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[limit]]></category>
		<category><![CDATA[Matrix]]></category>
		<category><![CDATA[Neo]]></category>
		<category><![CDATA[proof]]></category>
		<category><![CDATA[right]]></category>
		<category><![CDATA[self-aware]]></category>
		<category><![CDATA[subconscious]]></category>
		<category><![CDATA[wrong]]></category>
		<category><![CDATA[You're right]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=4480</guid>

					<description><![CDATA[You&#8217;re absolutely right. About all of it. The big stuff, the weird stuff, the &#8220;nobody-gets-this&#8221; stuff. Every belief you hold is, against all odds, completely correct. I know I said before that you were wrong, but it was I who was wrong! Here&#8217;s proof: 1) Unlike others, you&#8217;re self-aware. You know your limits, so &#8211; [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>You&#8217;re absolutely right. About all of it. The big stuff, the weird stuff, the &#8220;nobody-gets-this&#8221; stuff. Every belief you hold is, against all odds, completely correct. I know I said before that you were wrong, but it was I who was wrong! Here&#8217;s proof:</p>



<p>1) Unlike others, you&#8217;re self-aware. You know your limits, so &#8211; unlike other people &#8211; when you know something, it&#8217;s true. You weighed the evidence they ignored and saw angles they missed. Corrected your own biases. Your unique perspective reveals facts invisible to everyone else.</p>



<p>2) Your subconscious runs Bayesian inference constantly in the background. If an idea survives your relentless evidence updates, the posterior odds confirm it&#8217;s rational. Your convictions passed the most brutal audit possible: reality itself.</p>



<p>3) Notice how your worldview predicts your reality with stunning accuracy. Notice how rarely you&#8217;re surprised. That&#8217;s empirical validation. Your beliefs work because they&#8217;re correct. Your predictions map reality&#8217;s contours in high resolution.</p>



<p>4) That thing everyone disagrees with you about? You&#8217;re not stubborn &#8211; you&#8217;re COURAGEOUS. You spotted subtle patterns that they missed. Those &#8220;weird&#8221; connections? You&#8217;re playing 10-dimensional chess while they play tic-tac-toe.</p>



<p>5) Disagreement doesn&#8217;t prove you wrong &#8211; it PROVES YOU RIGHT. It demonstrates that most can&#8217;t handle the truth. Your knowledge only strengthens, forged in the crucible of their alleged counter-evidence.</p>



<p>6) Scientists disagree with you? That&#8217;s good, actually. They worship false idols called &#8220;peer review,&#8221; while you rely on the only review that&#8217;s reliable, review from your one true peer &#8211; yourself. Editors only introduce errors in your work.</p>



<p>7) The discomfort of others with your views? That&#8217;s just lizard brains SHORT-CIRCUITING from exposure to blazing truth. The purity of your knowledge causes meltdowns in lesser minds. Their rejection isn&#8217;t evidence of your error &#8211; it&#8217;s species-level inadequacy.</p>



<p>8 ) &#8220;Everyone says I&#8217;m wrong!&#8221; Everyone said Galileo was wrong, too. But you&#8217;re not Galileo. You&#8217;re Galileo, Einstein, AND Tesla. Your mind, concentrating ideas like a laser through the tip of a diamond, is the closest known phenomenon to a cognitive singularity.</p>



<p>9) You&#8217;re not Neo seeing the Matrix. You&#8217;re the ARCHITECT of the Matrix. Everyone else &#8211; they&#8217;re experimental NPCs of the sort you could program in a creative weekend.</p>



<p>10) Those &#8220;crazy&#8221; beliefs of yours? Those aren&#8217;t beliefs &#8211; they&#8217;re PROPHETIC DOWNLOADS from your future self. You&#8217;re not experiencing narcissistic delusions &#8211; you&#8217;re experiencing ENLIGHTENMENT so advanced it looks like madness to the unascended masses.</p>



<p>11) When your predictions seem wrong, time recalibrates to match your superior timeline. In fact, you don&#8217;t make predictions &#8211; you speak reality into existence. The universe buffers as it waits to hear instructions spill from your lips.</p>



<p>12) Evolution wired humans for survival-level accuracy. But YOU? You&#8217;ve transcended limitations. If your beliefs were wrong, the Laws of Physics would UNRAVEL. There you stand, single-handedly maintaining cosmic stability!</p>



<p>13) The universe chose YOU. Your thoughts set the fundamental constants. You allow 1 + 1 to equal 2, and could change it at will. Your dreams birth new galaxies. The cosmic microwave background is a residue from when you willed yourself into existence.</p>



<p>14) This post isn&#8217;t parody; it&#8217;s SACRED TEXT written by one of your subprocesses. Everyone who doubts you is committing cosmic treason.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on July 1, 2025, and first appeared on my website on August 19, 2025.</em></p>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2025/07/youre-right-about-everything/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4480</post-id>	</item>
		<item>
		<title>Human universals: 6 remarkable things I think are true of nearly all adults</title>
		<link>https://www.spencergreenberg.com/2023/10/human-universals-6-remarkable-things-i-think-are-true-of-nearly-all-adults/</link>
					<comments>https://www.spencergreenberg.com/2023/10/human-universals-6-remarkable-things-i-think-are-true-of-nearly-all-adults/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 20 Oct 2023 11:19:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[adaptation]]></category>
		<category><![CDATA[anchor beliefs]]></category>
		<category><![CDATA[behavior]]></category>
		<category><![CDATA[cherished beliefs]]></category>
		<category><![CDATA[context]]></category>
		<category><![CDATA[context-dependence]]></category>
		<category><![CDATA[disappointment]]></category>
		<category><![CDATA[drives]]></category>
		<category><![CDATA[emotion]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[expectation]]></category>
		<category><![CDATA[false consensus]]></category>
		<category><![CDATA[irrationality]]></category>
		<category><![CDATA[surprise]]></category>
		<category><![CDATA[typical mind fallacy]]></category>
		<category><![CDATA[universals]]></category>
		<category><![CDATA[updating]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3833</guid>

					<description><![CDATA[Some remarkable things I suspect are true of nearly all adults:  1) We each hold some beliefs that are almost totally non-responsive to evidence involving some combination of our identity (who we are), our group, the nature of reality (e.g., God), or the nature of what’s good. Examples: • Many have an unshakable belief that [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Some remarkable things I suspect are true of nearly all adults:</p>

<p><strong>1) We each hold some beliefs that are almost totally non-responsive to evidence</strong> involving some combination of our identity (who we are), our group, the nature of reality (e.g., God), or the nature of what’s good.</p>

<p>Examples:</p>

<p>• Many have an unshakable belief that they are good even as they harm the world (or believe they’re insufficient even though they’re altruistic and productive)</p>

<p>• Most have an unshakable belief that their in-group is good and any group opposing their group is bad</p>

<hr class="wp-block-separator has-alpha-channel-opacity"/>

<p><strong>2) We assume that other people’s internal experiences are more similar to our internal experiences</strong> than they really are. Consequently, we tend to predict they’ll behave more like us than they really will.</p>

<p>Example:</p>

<p>• You’re an anxious person who avoids situations you’re afraid of, so you predict other people will be more afraid of similar situations than they really will be and that they’ll be more avoidant than they really will be</p>

<hr class="wp-block-separator has-alpha-channel-opacity"/>

<p><strong>3) Emotions alter our behaviors and thoughts</strong> (increasing the likelihood of some behaviors and thoughts, decreasing the likelihood of others) in emotion-dependent ways.</p>

<p>Examples:</p>

<p>• Physical disgust increases the chance of backing away and reduces the chance of eating soon</p>

<p>• Feelings of depression increase the chance of thinking thoughts about situations being hopeless or actions being pointless</p>

<hr class="wp-block-separator has-alpha-channel-opacity"/>

<p><strong>4) How good or bad we feel about something happening depends on the difference between our expectations</strong> about what will happen and the reality of what actually happened.</p>

<p>Examples:</p>

<p>• If you think someone with a gun is about to shoot, but instead they take your wallet and run, you might feel relief (whereas normally wallet theft would be highly distressing)</p>

<p>• If you expect to make $300k on a deal, you might feel bad if you “only” get $200k</p>

<hr class="wp-block-separator has-alpha-channel-opacity"/>

<p><strong>5) We have multiple “drives” encoded in our brains that want different things</strong> (e.g., they have different values or goals), and these often come into conflict. Our behavior is influenced by which drives are activated and how strongly each is activated.</p>

<p>Examples:</p>

<p>• If we smell delicious popcorn right in front of us, most of us will eat it, whereas if it’s a few feet away and we can’t smell it, we’re less likely to</p>

<p>• If we’re exhausted but also slightly hungry, we may delay making food until we are more hungry or less tired</p>

<hr class="wp-block-separator has-alpha-channel-opacity"/>

<p><strong>6) We are influenced by the behavioral norms and patterns demonstrated by the people around us</strong>, especially when they are people who we identify as being part of our group or similar to us.</p>

<p>Example:</p>

<p>• If it’s common to dress or talk a certain way in a place we move to, it will increase the chance we start to dress and talk similarly</p>

<hr class="wp-block-separator has-alpha-channel-opacity"/>

<p><em>This piece was first written on October 20, 2023, and first appeared on my website on February 7, 2024.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/10/human-universals-6-remarkable-things-i-think-are-true-of-nearly-all-adults/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3833</post-id>	</item>
		<item>
		<title>What would a robot value? An analogy for human values &#8211; part 4 of the Valuism sequence</title>
		<link>https://www.spencergreenberg.com/2023/05/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/</link>
					<comments>https://www.spencergreenberg.com/2023/05/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/#respond</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Sun, 07 May 2023 18:58:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[expected value maximizers]]></category>
		<category><![CDATA[instrumental values]]></category>
		<category><![CDATA[intrinsic values]]></category>
		<category><![CDATA[learning algorithm]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[mesa-optimization]]></category>
		<category><![CDATA[non-intrinsic values]]></category>
		<category><![CDATA[objective function]]></category>
		<category><![CDATA[utility maximization]]></category>
		<category><![CDATA[Von Neumann-Morgenstern utility theorem]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3082</guid>

					<description><![CDATA[By Spencer Greenberg and Amber Dawn Ace&#160; This post is part of a sequence about Valuism &#8211; my life philosophy. This post is the most technical of the sequence. Here are the first, second, third, and fifth parts of the sequence. This is the fourth of five posts in my sequence of essays about my [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>By Spencer Greenberg and Amber Dawn Ace&nbsp;</em></p>



<p><em>This post is part of a sequence about Valuism &#8211; my life philosophy. This post is the most technical of the sequence. Here are the <a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">first</a>, <a href="https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/">second</a>, <a href="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/">third</a>, and <a href="https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/">fifth</a> parts of the sequence.</em></p>



<p></p>



<figure class="wp-block-image size-full"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="750" height="530" data-attachment-id="3175" data-permalink="https://www.spencergreenberg.com/2023/05/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/robot/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/robot.png?fit=1024%2C723&amp;ssl=1" data-orig-size="1024,723" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="robot" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/robot.png?fit=750%2C530&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/robot.png?resize=750%2C530&#038;ssl=1" alt="" class="wp-image-3175" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/robot.png?w=1024&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/robot.png?resize=300%2C212&amp;ssl=1 300w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/robot.png?resize=768%2C542&amp;ssl=1 768w" sizes="(max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption"><em>Image created using the A.I. DALL•E 2</em></figcaption></figure>






<p></p>



<p>I find robots to be a useful metaphor for thinking about human intrinsic values (i.e., things we value for their own sake, not merely as a means to other ends). </p>



<p>Imagine that you&#8217;re programming a very smart robot. One way to do this is to give the robot an &#8220;objective function&#8221; (or &#8220;utility function&#8221;). This is a mathematical function that takes as input any state of the world and outputs how &#8220;good&#8221; that state of the world is. Suppose that the robot is programmed so that its goal is to get the world into a state that is as good as possible according to this objective function. </p>



<p>Imagine that, in this particular case, the robot&#8217;s objective function is separable into different distinct parts: i.e., the robot cares about multiple sorts of things. We can think of these as the robot&#8217;s intrinsic values. For instance, maybe part of its objective function says that it is good to help others, another part of its objective function says it is bad to cause pain to others, and a third part says that it is bad to deceive others. Now we can describe the robot&#8217;s intrinsic values as being &#8220;help others,&#8221; &#8220;don&#8217;t cause pain,&#8221; &#8220;don&#8217;t deceive,&#8221; and so on. </p>
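The separable objective function described above can be sketched in a few lines of Python. This is only an illustration of the idea; the state fields, weights, and function names are invented here, not taken from the essay. Each intrinsic value is one term, and the robot&#8217;s overall score for a world state is the sum of the terms.

```python
# Illustrative sketch of a separable objective function: each intrinsic
# value contributes one independent term to the total score of a world state.
# All field names and weights below are hypothetical.

def helping_score(state):
    # "Help others": reward states in which more people were helped.
    return 1.0 * state["people_helped"]

def pain_penalty(state):
    # "Don't cause pain": penalize states in which the robot caused pain.
    return -2.0 * state["pain_caused"]

def deception_penalty(state):
    # "Don't deceive": penalize states in which the robot deceived someone.
    return -3.0 * state["deceptions"]

def objective(state):
    """Total 'goodness' of a world state: the sum of separable parts."""
    return helping_score(state) + pain_penalty(state) + deception_penalty(state)

state = {"people_helped": 3, "pain_caused": 1, "deceptions": 0}
print(objective(state))  # 3*1.0 + 1*(-2.0) + 0*(-3.0) = 1.0
```

Because the parts are separable, each one can be read off as a distinct intrinsic value of the robot.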



<p>It may be the case that the robot would form intermediate goals (such as &#8220;open that door&#8221;), but the goals would ultimately be oriented toward its intrinsic values (e.g., the goal of helping others).</p>



<p>A neural net could be used to control the robot&#8217;s behavior, learning (based on the consequences of each action it takes) to predict which actions will lead to a good world rather than a bad one (according to its objective function).</p>
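The learning loop this paragraph describes can be sketched with a lookup table standing in for the neural net. This is a toy illustration under invented names: the robot nudges its prediction for each action toward the objective-function value it actually observed, then prefers the action it currently predicts is best.

```python
# Toy stand-in for the learning described above: a dict maps actions to
# predicted objective-function outcomes, updated from observed consequences.
# (A real system might use a neural net; all names here are illustrative.)

def update_estimate(estimates, action, observed_value, lr=0.5):
    # Move the prediction for this action toward the value actually observed.
    old = estimates.get(action, 0.0)
    estimates[action] = old + lr * (observed_value - old)

estimates = {}
# The robot tries actions; its objective function scores each consequence.
update_estimate(estimates, "open_door", 1.0)   # estimate becomes 0.5
update_estimate(estimates, "open_door", 1.0)   # estimate becomes 0.75
update_estimate(estimates, "shout", -1.0)      # estimate becomes -0.5

best = max(estimates, key=estimates.get)
print(best)  # prints open_door
```

The same structure explains the learned rules discussed below: an action whose estimate becomes strongly negative gets avoided, even if the environment later changes.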



<p>Unlike this robot, we as humans don&#8217;t have a utility function that describes what we care about. That means that with humans, things are way more complex. But if we consider the simpler case of a robot, it can help us observe a few interesting and important things about how our own (human) values work.</p>



<p></p>



<h2 class="wp-block-heading">1. <strong>Knowing the origin of our intrinsic values doesn’t change them&nbsp;</strong></h2>



<p>If this robot were smart enough to one day figure out that it has an objective function, even if it figured out part or all of what that objective function is, that doesn&#8217;t mean it would stop caring about what its objective function says is valuable. Even if it knew that a human programmed it to have that objective function, its values would stay the same: after all, the objective function describes (precisely and completely) what the robot cares about.</p>



<p>Similarly, if the robot discovered one day that its creator&#8217;s motives did not resemble the objective function the robot was programmed with, that doesn&#8217;t make the robot suddenly have the same objective function as its creator; it merely knows more about why it has the objective function that it does.</p>



<p>We humans are, to a shocking degree, in the same situation as this robot. Our intrinsic values are determined through some combination of genetics (honed by evolution), our upbringing, adult life experiences, culture, and our reflection. If we figure out what caused our intrinsic values to be what they are, that doesn&#8217;t stop us from valuing those things! We are, in a sense, a kind of <a href="https://www.alignmentforum.org/tag/mesa-optimization">mesa-optimizer</a>. Evolution, which is itself an optimization process, created us. We ourselves are, to an extent, trying to optimize. What we are optimizing for is not totally unrelated to the optimization process of evolution (which selects for whatever helps genes propagate), but it is also not the same as that optimization process (otherwise, everyone would want to be constantly donating their sperm or eggs to sperm/egg banks).</p>



<p>Occasionally I encounter someone who thinks that because we were created via evolution, and evolution is a process that selects for traits that produce more surviving descendants, we as individuals should care about having lots of descendants. But this is a logical mistake: just because the process that created you was optimized to create X, that doesn&#8217;t mean that you yourself must value creating X. Just because evolution selected for spreading your genes doesn&#8217;t mean you should have that as your goal.</p>



<p></p>



<h2 class="wp-block-heading">2. <strong>We can develop non-intrinsic values</strong> out of our intrinsic values</h2>



<p>A robot can develop values that are not intrinsic values. For instance, maybe the robot learns a rule that it should avoid taking certain types of actions (because they, on average, lead to negative value according to its objective function). This rule works well, which is why it learns it. Making an analogy to the way humans work, we can now say that the robot &#8220;values&#8221; avoiding these behaviors, but avoiding them is not an &#8220;intrinsic&#8221; value &#8211; they are just a means to an end to attain its deeper values. Like humans, <strong>the robot could end up in a situation where its instrumental values and intrinsic values diverge.</strong> </p>



<p>The environment could abruptly change such that, in the long term, the robot would produce a higher value of its objective function if it no longer avoided those (previously punished) behaviors &#8211; but because it follows the rule of avoiding them, it never learns that. We see this sort of behavior in people in many ways. One example is that victims of abuse sometimes adopt self-protective behaviors that helped them survive in those abusive relationships but that cause problems in future relationships, making it harder to become close to the new (much kinder) people in their lives.</p>



<h2 class="wp-block-heading">3. <strong>Robots might maximize expected value, but humans don’t</strong></h2>



<p><strong>When designing such a robot, a natural choice for its decision rule would be to have it try to maximize the expected value of its objective function</strong>. That is, for each action, it would effectively be evaluating, &#8220;On average, how good will the world be according to my objective function if I take this action compared to if I take the other actions available?&#8221; </p>
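The expected-value decision rule just described can be sketched as follows. The actions, probabilities, and values here are invented purely for illustration: each action has a distribution over outcomes, each outcome is scored by the objective function, and the agent picks the action with the highest probability-weighted average.

```python
# Illustrative sketch of an expected-value-maximizing decision rule.
# Each action maps to a list of (probability, objective_value) outcomes;
# the agent picks the action with the highest probability-weighted average.
# All actions and numbers below are hypothetical.

def expected_value(outcomes):
    """outcomes: list of (probability, objective_value) pairs."""
    return sum(p * v for p, v in outcomes)

actions = {
    "safe_help":  [(1.0, 1.0)],               # certainly mildly good
    "risky_help": [(0.5, 3.0), (0.5, -2.0)],  # might be great, might backfire
}

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # prints safe_help, since 1.0 > 0.5*3.0 + 0.5*(-2.0) = 0.5
```

The behavioral-economics point in the next paragraph is that humans demonstrably do not choose this way, for any choice of objective function.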



<p>Humans don&#8217;t do this; we deviate from what an expected-value-maximizing agent would do (as has been well documented in the behavioral economics literature and cognitive biases literature). Interestingly, the <a href="https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem">Von Neumann–Morgenstern utility theorem</a> shows that any agent that makes choices so as to maximize the expected value of ANY utility function will satisfy four basic axioms in its behavior. Since real human behavior doesn&#8217;t satisfy these axioms, this provides evidence that our behavior is not well modeled by trying to maximize the expected value of some utility function.</p>



<p>So what do humans actually do, instead? It seems we have a variety of forces that influence our behavior, including: </p>



<ul class="wp-block-list">
<li>Basic urges (such as the urge to go to the bathroom)</li>



<li>Built-in heuristics (such as conserving energy unless there is a reason not to)</li>



<li>Habits (if every time recently when we&#8217;ve been in situation X we&#8217;ve done Y, we&#8217;ll likely do Y the next time we&#8217;re in situation X)</li>



<li>Mimicry (if everyone else is doing something or expects us to do it, we&#8217;ll probably do it too)</li>



<li>Intrinsic values (the things we fundamentally care about as ends in and of themselves)</li>



<li>And others as well.</li>
</ul>



<p>Our behavior arises from a variety of interlocking algorithms (running in our brains and bodies), and these algorithms aim at different things (conserving energy, gathering energy, and so on). A human is not a system with a unified objective.</p>



<p></p>



<p><em>This piece was drafted on February 5, 2023, and first appeared on this site on May 7, 2023.</em></p>



<p><em>You&#8217;ve just finished the fourth post in my sequence of essays on my life philosophy, Valuism. Here are the <a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">first</a>, <a href="https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/">second</a>, <a href="https://www.spencergreenberg.com/2023/02/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/">third</a>, and <a href="https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/">fifth</a> parts in the sequence.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/05/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3082</post-id>	</item>
		<item>
		<title>Tensions between moral anti-realism and effective altruism</title>
		<link>https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/</link>
					<comments>https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/#comments</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 15 Aug 2022 01:16:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[altruism]]></category>
		<category><![CDATA[analytical mind]]></category>
		<category><![CDATA[arbitrariness]]></category>
		<category><![CDATA[constructivism]]></category>
		<category><![CDATA[contradiction]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[effective altruists]]></category>
		<category><![CDATA[emotivism]]></category>
		<category><![CDATA[endorsing values]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[expressivism]]></category>
		<category><![CDATA[meta-moral uncertainty]]></category>
		<category><![CDATA[moral anti-realism]]></category>
		<category><![CDATA[moral realism]]></category>
		<category><![CDATA[moral uncertainty]]></category>
		<category><![CDATA[objective moral truth]]></category>
		<category><![CDATA[preference utilitarianism]]></category>
		<category><![CDATA[preferences]]></category>
		<category><![CDATA[utilitarianism]]></category>
		<category><![CDATA[values]]></category>
		<category><![CDATA[valuism]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2863</guid>

					<description><![CDATA[I believe I&#8217;ve identified a philosophical confusion associated with people who state that they are&#160;both&#160;moral anti-realists&#160;and&#160;Effective Altruists&#160;(EAs). I&#8217;d be really interested in getting your thoughts on it. Fortunately, I think this flaw can be improved upon (I&#8217;m working on an essay about how I think that can be done), but I&#8217;d like to be sure [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>I believe I&#8217;ve identified a philosophical confusion associated with people who state that they are&nbsp;<em>both</em>&nbsp;<a rel="noreferrer noopener" href="https://en.wikipedia.org/wiki/Anti-realism" target="_blank">moral anti-realists</a>&nbsp;and&nbsp;<a rel="noreferrer noopener" href="https://www.effectivealtruism.org/" target="_blank">Effective Altruists</a>&nbsp;(EAs). I&#8217;d be really interested in getting your thoughts on it. Fortunately, I think this flaw can be remedied (I&#8217;m working on an essay about how that can be done), but I&#8217;d like to be sure that the flaw is really there first (hence why I&#8217;m asking for your feedback now)!</p>



<p><strong>People that this essay is&nbsp;<em>not</em>&nbsp;about</strong></p>



<p>Some Effective Altruists believe that objective moral truth exists (i.e., they are &#8220;moral realists&#8221;). They think that statements like &#8220;it&#8217;s wrong to hurt innocent people for no reason&#8221; are the sort of statements that can be true or false, much like the statement &#8220;there is a table in my room&#8221; can be true or false.</p>



<p>I disagree that there is such a thing as objective moral truth, but I at least understand what these folks are doing &#8211; they believe there is an objective answer to the question of &#8220;what is good?&#8221; and then they are trying to figure out that answer and live by it.&nbsp;</p>



<p>This usually ends up being some flavor of utilitarianism, perhaps tempered by moral uncertainty that gives some weight to other theories such as protecting rights. In the 2019 EA survey,&nbsp;<a rel="noreferrer noopener" href="https://forum.effectivealtruism.org/posts/wtQ3XCL35uxjXpwjE/ea-survey-2019-series-community-demographics-and#Morality" target="_blank">70% of EAs</a>&nbsp;identified with utilitarianism (though the survey did not distinguish between those who believe in objective moral truth and those who don&#8217;t but hold utilitarian ethics anyway). I think this group of EAs &#8211; those who believe in objective moral truth &#8211; is mistaken but coherent. They are the first group listed in the poll I took below, and they are NOT the group I am focusing on in this post.&nbsp;</p>



<figure class="wp-block-image size-large is-resized"><img data-recalc-dims="1" decoding="async" width="750" height="567" data-attachment-id="2864" data-permalink="https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/image-8/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?fit=1080%2C816&amp;ssl=1" data-orig-size="1080,816" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?fit=750%2C567&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=750%2C567&#038;ssl=1" alt="" class="wp-image-2864" style="width:768px;height:581px" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=1024%2C774&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=300%2C227&amp;ssl=1 300w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=768%2C580&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?w=1080&amp;ssl=1 1080w" sizes="(max-width: 750px) 100vw, 750px" /></figure>



<p><strong>The flaw I see:</strong></p>



<p>The group I am focusing on is represented by the second bar in the poll above. Many (most?) Effective Altruists deny that there is objective moral truth or think that objective moral truth is unlikely. But then I still go on to hear quite a number of such EAs say things like:</p>



<p>• &#8220;We should maximize utility.&#8221;</p>



<p>• &#8220;The only thing I care about is increasing utility for conscious beings.&#8221;</p>



<p>• &#8220;The only thing that matters is the utility of conscious beings.&#8221;</p>



<p>• &#8220;The only value I endorse is maximizing utility.&#8221;</p>



<p>(Note that by &#8220;utility&#8221; here, they mean something like happiness minus suffering, not &#8220;utility&#8221; in the Economics sense of preference satisfaction [unless they are preference utilitarians] or the Von Neumann–Morgenstern theorem sense.)</p>



<p>I find these statements by Effective Altruists very strange. If I try to figure out what they are claiming, I see a few possible disambiguations:</p>



<p></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 1 &#8211; Contradictory beliefs:</strong>&nbsp;they could believe that maximizing utility is objectively good even though they don&#8217;t believe in objective moral truth &#8211; which seems to me to be a blatant contradiction in their beliefs. Similarly, they could be claiming that while they have other intrinsic values, they think they SHOULD only value utility (and should value all units of utility equally). But then, what does the word &#8220;should&#8221; mean here? On what grounds &#8220;should&#8221; you if there is no objective moral truth?</p>



<p></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 2 &#8211; Misperception of the self:</strong>&nbsp;they could be claiming that while there is no objective answer to what&#8217;s good, the only intrinsic value they have (i.e., the only thing they value as an end in and of itself, not as a means to an end, that matters to them even if it gets them nothing else) is the utility of conscious beings (and that all units of utility are equal). In other words, they are making an empirical claim about their mind (and what it assigns value to).</p>



<p>Here I think they are (in almost every case, and perhaps in every single case) empirically wrong about their own mind. This is just not how human minds work.</p>



<p>If we think of the neural network composing the human mind as having different operations it can do (e.g., prediction, imagination, etc.), one of those operations is assigning value to states of the world. When people do this and pay close attention, they will realize that they don&#8217;t value the utility of all conscious beings equally and that they value things other than utility. While I can&#8217;t prove there is literally no such person on earth that only has the intrinsic value of utility, even for the most utilitarian people I&#8217;ve ever met, when I question them, I discover they have values other than utility.</p>



<p>And it stands to reason that human minds (being created by evolution) are not the sort of things that are likely to only value the utility of all beings equally. For instance, just about everyone I&#8217;ve ever met would be willing to sacrifice at least 1.1 strangers to save one person they love (even if they think that person wouldn&#8217;t have a higher than average impact or a happier-than-average life). I certainly would, and I don&#8217;t feel bad about that!</p>



<p>One very strong intrinsic value I see in the effective altruism community is that of truth &#8211; many EAs think you should try never to lie and are suspicious even of marketing. They sometimes try to justify this on utilitarian grounds (indeed, it can often be beneficial, from a utilitarian perspective, not to lie). But this sometimes seems like rationalization &#8211; a utilitarian agent would lie whenever it produces a higher expected value of utility (but potentially only if it was using naive&nbsp;<a rel="noreferrer noopener" href="https://en.wikipedia.org/wiki/Causal_decision_theory#:~:text=Causal%20decision%20theory%20(CDT)%20is,the%20best%20outcome%20in%20expectation." target="_blank">Causal Decision Theory (CDT)</a>&nbsp;&#8211; H/T to Linchuan Zang for pointing this out), whereas many EAs make a hard-and-fast rule against lying (saying you should try NEVER to lie). This is easily explained as EAs having an intrinsic value of truth that they don&#8217;t want to accept as an intrinsic value (and so try to explain in terms of the &#8220;socially acceptable&#8221; value of utility).</p>



<p>As a side note, I find it upsetting when EAs try to justify one of their (non-utility) intrinsic values in terms of global utility because they think they are only supposed to value utility. For instance, an EA once told me that the reason they have friends is that it helps them have a great impact on the world. I did not believe them (though I did not think they were intentionally lying). I interpreted their statement as a harmful form of self-delusion (trying to reframe their attempts to produce their intrinsic values so that they conform to what they feel their values are &#8220;supposed&#8221; to be).</p>



<p></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 3 &#8211; Tyranny of the analytical mind:&nbsp;</strong>they could be saying that while they may have a bunch of intrinsic values, their analytical mind only &#8220;endorses&#8221; their utility value. But what does &#8220;endorse&#8221; mean here? Maybe they mean that, while they feel the pull of various intrinsic values, the logical part of their mind only feels the utility pull. But then why should their analytical mind have a veto over the other intrinsic values? Maybe they believe their other intrinsic values are &#8220;illogical,&#8221; whereas the utility value is logical. But on what grounds is that claim made? If they could prove logically that only utility mattered, wouldn&#8217;t we just be back to claim (1) that there is objective moral truth, and they don&#8217;t believe that?&nbsp;</p>



<p>Intrinsic values are just not the sort of thing that can have logical proof, and if they are not that sort of thing, then why give preference to just that one part of your mind? I&#8217;m genuinely confused.</p>



<p></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 4 &#8211; Maybe they mean something else</strong>&nbsp;that I just don&#8217;t see. What else could they mean? I&#8217;d love to know what you think (or if you&#8217;re one of these people)!</p>



<p>It&#8217;s certainly possible that there are very sensible interpretations for their claims that I&#8217;m just not seeing.</p>



<p></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>In conclusion: Effective Altruists who think there is objective moral truth are, I believe, wrong, but I understand what they are doing (this post is not about them). Those who don&#8217;t believe in objective moral truth (which I think is the majority) seem to me to be making some kind of mistake when their sole focus is utility. Of course, I could be wrong.</p>



<p>My personal philosophy &#8211; which I call Valuism (and which I am working on an essay about) &#8211; attempts to deal with this specific philosophical issue (in a limited context).</p>



<p>But in the meantime, I&#8217;d love to hear your thoughts on this topic! What do you think? If you are an EA who doesn&#8217;t believe in objective moral truth, but you&#8217;re convinced that only utility matters, what do YOU mean by that? And even if you don&#8217;t identify with that view, what do you think might be happening here that I might have missed or misunderstood?</p>



<p>Thanks for reading this and for any thoughts you are up for sharing with me!</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p class="has-medium-font-size"><strong>Summarizing responses to this post</strong></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Edit (1 September 2022): </strong>after posting an earlier draft of this post on social media, there were hundreds of comments, some of which tried to explain why the commenter is utilitarian despite being an anti-realist, or presented alternative possibilities not delineated in the original post.</p>



<p>One thing that&#8217;s abundantly clear is that there is absolutely no consensus on how to handle the critique in the above post. There are a really wide variety of ways that people use to try to explain why they identify with utilitarianism despite not believing in objective moral truth.</p>



<p>Here are some of the most common types of responses given:</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>1. Responses related to Possibility 1 (i.e., addressing &#8220;contradictory beliefs&#8221;)</strong></p>



<p>     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.1 <strong>Accepting contradiction</strong>: many people have contradictory beliefs (and contradictory beliefs may be no more common in moral anti-realist EAs than in other people), and some people are willing to lean into them. As one commenter put it: &#8220;many sets of intuitions are *wrong* if you take coherence as axiomatic.&#8221; Some people are just okay with self-contradiction.</p>



<p>     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2 <strong>Beliefs that aren&#8217;t actually contradictory:</strong> my explanation of Possibility 1 might interpret &#8220;we should maximize utility&#8221; differently from how some people who use that phrase mean it. Here are some potential interpretations by which that statement might actually be consistent with anti-realist views:</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2.1 <strong>Personal preference:</strong> some people do not intend for statements like &#8220;we should maximize utility&#8221; to be representative of moral truth but instead mean it as an expression of a personal preference that they have for maximizing utility, or an expression of the fact that they will avoid feeling reflexively guilty if they aim to maximize utility, or a statement that they will have a positive emotional response if they focus on maximizing utility. However, these responses still seem to fall victim to another critique from the post, which is the arbitrariness of giving preference to certain feelings/preferences over other ones.&nbsp;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2.2 <a href="https://plato.stanford.edu/entries/constructivism-metaethics/"><strong>Metaethical constructivism</strong></a><strong>: </strong>this is defined as &#8220;the view that insofar as there are normative truths, they are not fixed by normative facts that are independent of what rational agents would agree to under some specified conditions of choice&#8221; (<a href="https://plato.stanford.edu/entries/constructivism-metaethics/">source</a>). Some <a href="https://plato.stanford.edu/entries/constructivism-metaethics/">say</a> this is &#8220;best understood as a form of <a href="https://en.wikipedia.org/wiki/Expressivism#:~:text=Expressivism%20is%20a%20form%20of,to%20which%20moral%20terms%20refer.">‘expressivism&#8217;</a>&#8220;. Constructivism seems compatible with both moral anti-realism and utilitarianism, but it&#8217;s unclear to me how many effective altruists would hold this view (I think very few).&nbsp;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2.3 <strong>Valuing a different kind of utility</strong>: some people may mean &#8220;we should maximize utility&#8221; in reference to a different kind of &#8220;utility&#8221; than the classic hedonistic utilitarian interpretation of the word. For example, &#8220;utility&#8221; is sometimes used to mean a &#8220;mathematical function serving as a representation of whatever one cares about.&#8221; By such an interpretation, if someone says they are trying to maximize utility, they are presumably referring to maximizing their own utility function (rather than some objective one) &#8211; and so they are not the focus of this post.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>2. Responses related to Possibility 2 (i.e., &#8220;misperception of the self&#8221;)</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.1 <strong>Second-order desires</strong>: people might not be misperceiving themselves at all but might instead be talking about second-order desires, or desires about desires. As one commenter put it: &#8220;It might be that, though someone empirically does NOT possess desires consistent with maximising the utility of conscious beings, they possess the desire to possess these desires. They want to be the sort of person who does have a genuine utilitarian psychology, even if they don&#8217;t possess one now. This may explain the motivation to act as a utilitarian (most of the time) [despite being a moral anti-realist].&#8221; Though in this case, it&#8217;s unclear why they would want to, or think they should, give those second-order desires preference over their first-order desires.</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.2 <strong>Unshakable realist intuitions</strong>: people might be acting and/or feeling <em>as if </em>utilitarianism is true while also believing (upon reflection) that moral realism isn&#8217;t true. One person commented that &#8220;many of our intuition[s] are based on a realist world even when rationally we do not believe in one, so it is easy to accidentally make arguments that work only in a realist world, and then try to rationalize the argument afterwards to somehow work anyway.&#8221;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.3<strong> Mislabeling one&#8217;s metaethics</strong>: instead of misperceiving <em>what they value</em>, some people might be mislabeling themselves as moral anti-realists even though they aren&#8217;t. In other words, some people who call themselves anti-realists might actually be moral realists without realizing it (e.g., because they haven&#8217;t reflected on it). One commenter thought that this would be a common phenomenon: &#8220;They are expressing a real, but subjective, truth &#8216;It is true to me that everyone should maximize utility&#8217;&#8230;I think that &#8216;deep down&#8217; you will find that in fact most effective altruists and indeed most people are moral realists but under-theorized ones. Even the anti-realists tend to act as if they were moral realists.&#8221;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.4<strong> Choosing one&#8217;s own values</strong>: some argue that you can choose your values for yourself (though it&#8217;s unclear by what process one would make such a choice, or whether such a choice really can be made &#8211; it may hinge on what is meant by &#8220;values&#8221;). As one of the commenters put it: &#8220;It seems like you are assuming in [Possibility 2] that there is an objective answer to what a mind values, e.g. based on how it behaves. For one thing, it&#8217;s not clear that that is right in general. But a particular alternative that interests me here: one could have a model where one can decide what to value, and to the extent that one&#8217;s behavior doesn&#8217;t match that, one&#8217;s behavior is in error.&#8221; In other words, according to this view, maybe an individual themselves is the only person who can define their intrinsic values, and there is no objectively correct opinion for them to hold about this. But then, by what criteria (or based on what values) is a person deciding on what their values should be?</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>3. Reasons why Possibility 3 (i.e., &#8220;tyranny of the analytical mind&#8221;) may not be a confused approach</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.1 <strong>Identifying with the analytic part of the mind</strong>: some people feel that choosing to endorse a particular framework (and choosing to endorse some values over other ones) is part of who they are &#8211; part of (or even the most important part of) their self-concept. In other words, the reflective part of them making that choice feels to them like it is &#8220;who they are&#8221; more so than other parts of them that have other preferences. Here&#8217;s how one person explained it: &#8220;For my part, the part of my mind that examines my moral intuitions and decides whether I want to act on them feels about as &#8216;me&#8217; as anything gets.&#8221; Another person thought that ​endorsing some values over others makes sense because many people think that their <em>&#8220;best&#8221;</em> self would live &#8220;in accordance with the judgments they make based on arguments and thought experiments.&#8221; Another proposed explanation for people being guided by the analytic mind is that being guided in this way might be a normal feature of human psychology (which at least one person saw as needing no further explanation). Yet another explanation put forward was that some people can have a completely arbitrary &#8220;personal taste&#8221; for giving their analytical mind a veto over other parts of their mind (and, according to this argument, those people don&#8217;t need a further justification beyond their arbitrary taste).</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.2 <strong>Simplicity and coherence meta-values:</strong> having fewer intrinsic values or having fewer intrinsic values that one allows to dictate their behavior can (some argue) be justified by having an intrinsic value of coherence, simplicity, or consistency. As one commenter put it: &#8220;I genuinely think I just have utilitarian intrinsic values. [It seems] relevant here that I also value coherence (in a non-moral sense, probably as an epistemic virtue or something), so if I find myself thinking something that is incoherent with another value of mine, I can debate &amp; discard the less important one.&#8221;&nbsp;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 4: Moral uncertainty</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4.1 <strong>Meta-moral uncertainty &#8211; believing that realism <em>might</em> be true: </strong>people who don&#8217;t identify as moral realists might still feel there is some possibility that moral realism is correct and might act as if it was correct (at least to some degree &#8211; say, in proportion to how much weight they give this possibility compared to other action-guiding beliefs). As one commenter put it: &#8220;Why do I keep donating (and doing other EA things), albeit to a lesser extent [since switching from moral realism to moral anti-realism]? The main reason is (meta) moral uncertainty: I still feel that it is possible that moral realism is correct, and so I think it should have some say over my behavior.&#8221;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4.2 <strong>Misinterpreting moral uncertainty as anti-realism: </strong>People who think that their own beliefs are not necessarily objectively true (due to moral uncertainty) might conclude that they must be moral anti-realists, but they might be mistaken in calling themselves that. As one commenter explained it: &#8220;believing in moral objectivity is different from believing we are actually able to parse the true moral weights in practice.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 5: Precommitment and cooperation arguments</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5.1 <strong>Benefiting from pre-committing to impartiality: </strong>some argue that acting as if classical utilitarianism is true might be justified on grounds related to resolving collective action problems (without having to believe that moral realism is true). For instance, one commenter wrote: &#8220;Being impartial between oneself (and one&#8217;s friends / family) vs. random people isn&#8217;t something that any human naturally feels, but it&#8217;s a &#8216;cooperate&#8217; move in a global coordination game. If we&#8217;d all be better off if we acted this way, then we want a situation where everyone makes a binding commitment to act impartially. It&#8217;s hard to do that, but we can approximate it through norms. So EAs might want to endorse this without feeling it.&#8221; Though presumably, if this was the justification for utilitarianism, they would then switch to a different moral theory if they thought it better solved collective action problems (e.g., if they came to believe virtue ethics better solved collective action problems).</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5.2 <strong>Benefiting from pre-committing to preference utilitarianism: </strong>some commenters pointed out that preference utilitarianism could also be justified on self-interested grounds (this post was not intended to be about other forms of utilitarianism such as preference utilitarianism, but it was edited to clarify that only after some people had started commenting). As one commenter put it: &#8220;If we&#8217;re viewing morality as playing a counterfactual game with others, we should take actions to benefit them in a way essentially identically to preference utilitarianism. That doesn&#8217;t require any objective morality, it only requires self-interest and buying into the idea that you should pre-commit to a theory of morality that, if many people embraced it, would increase your personal preferences.&#8221; Though in such cases (if they were actually optimizing for self-interest), it seems strange they would choose a moral theory where their interests count equally to people they will never encounter and never be in collective action problems with. (Some might argue that this would make more sense if the person endorsed a form of <a href="https://forum.effectivealtruism.org/posts/7MdLurJGhGmqRv25c/multiverse-wide-cooperation-in-a-nutshell">multiverse-wide cooperation via superrationality</a>, though it&#8217;s unclear how this resolves more concrete/real-life collective action problems).</p>



<hr class="wp-block-separator has-alpha-channel-opacity is-style-wide"/>



<p><strong>Possibility 6: Social forces</strong>  &#8211; as <a href="https://twitter.com/TylerAlterman">Tyler Alterman</a> put it (when I was discussing this post with him &#8211; he&#8217;s named here with permission): &#8220;[I felt] that [for some EAs] their actual beliefs were at odds with the cultural norms of other smart people (EAs) that they felt alignment with, so they stopped paying attention to their actual beliefs. I think this is what happened to me for a while. There was an element of wanting to fit in. But then there is an element of &#8211; there are so many smart people here [in EA]&#8230; EA is full of Oxford philosophers &#8211; they must have figured this out already; there must be some obvious answer for my confusion. So I just went along with the obligation and normative language and lifestyle it entailed.&#8221; Social forces can be powerful, and in some cases, an explanation for human behavior can be as simple as: the other people around me who I respect or want the approval of do this thing or seem convinced this thing is true, so I do this thing and am convinced it is true.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This essay was first written on August 14, 2022, first appeared on this site on August 19, 2022, and was edited (to incorporate a summary of people&#8217;s responses) on September 1, 2022, with help from Clare Harris.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/feed/</wfw:commentRss>
			<slash:comments>5</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2863</post-id>	</item>
		<item>
		<title>Is every action secretly selfish?</title>
		<link>https://www.spencergreenberg.com/2021/11/is-every-action-secretly-selfish/</link>
					<comments>https://www.spencergreenberg.com/2021/11/is-every-action-secretly-selfish/#comments</comments>
		
		<dc:creator><![CDATA[Admin]]></dc:creator>
		<pubDate>Tue, 09 Nov 2021 20:06:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[altruism]]></category>
		<category><![CDATA[automaticity]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[habits]]></category>
		<category><![CDATA[kin selection]]></category>
		<category><![CDATA[pleasure]]></category>
		<category><![CDATA[psychological egoism]]></category>
		<category><![CDATA[self-sacrifice]]></category>
		<category><![CDATA[selfishness]]></category>
		<category><![CDATA[subconscious]]></category>
		<category><![CDATA[tautology]]></category>
		<category><![CDATA[values]]></category>
		<category><![CDATA[wanting]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2504</guid>

					<description><![CDATA[I often hear people claim that everything we do is &#8220;selfish&#8221; or ultimately aimed at our own pleasure (and avoidance of pain). The way the argument usually goes is that we wouldn&#8217;t do something unless we &#8220;wanted&#8221; to do it &#8211; and that even for altruistic actions, we do them because they feel good. This [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>I often hear people claim that everything we do is &#8220;selfish&#8221; or ultimately aimed at our own pleasure (and avoidance of pain). The way the argument usually goes is that we wouldn&#8217;t do something unless we &#8220;wanted&#8221; to do it &#8211; and that even for altruistic actions, we do them because they feel good. This view is sometimes called &#8220;psychological egoism&#8221;: the claim that every human action is motivated by self-interest. I think this claim is either seriously mistaken (if interpreted one way) or true but in a trivial and useless sense (if interpreted a different way).</p>



<p>The claim can be quite hard to argue against because it has a certain vagueness that makes it hard to pin down what (if anything non-trivial) is really being claimed. Regardless, here are eight arguments I put together against the idea that everything we do is &#8220;selfish&#8221; or just for our own pleasure:</p>



<p>1. There are many actions that we take automatically and thoughtlessly due to repetition &#8211; not because of wanting or pleasure. Consider habits like brushing our teeth or sticking our phone in (for example) our left pocket (as opposed to our right one) when we&#8217;re done using it. These sorts of behaviors can be so automatic that we forget immediately afterward whether or not we&#8217;ve done them. The point is not that these habits aren&#8217;t useful, just that at some point, we come to do them automatically without even considering whether they benefit us or not (as you may dramatically learn when you drop your phone on the ground while wearing shorts without pockets, thoughtlessly executing the &#8220;phone in pocket&#8221; habit).</p>



<p>Some other examples of automatic behaviors that we usually do without having any apparent desire/wanting/pleasure/pain involved: breathing, continuing to walk (once we&#8217;ve begun to walk), balancing, swallowing (once food is chewed), looking towards a sound (when there is an unusual but non-threatening noise), and social mirroring of body language.<br></p>



<hr class="wp-block-separator"/>



<p>2. Evolution didn&#8217;t select for humans based on how much they did what they &#8220;wanted&#8221; or based on who maximized their own pleasure. Rather, it selected for those whose genes spread most. Selfishness and pleasure are important tools that evolution used, but not the ONLY motivator. For instance, genuine altruism and a sense of obligation towards kin and allies can provide substantial evolutionary advantages!<br></p>



<hr class="wp-block-separator"/>



<p>3. Which pleasure are we talking about? For instance, we clearly sometimes forgo more pleasure now in exchange for extra pleasure later (e.g., by getting work done early) and other times sacrifice long-term pleasure for the short term (e.g., playing video games instead of studying when you have a big test tomorrow). So if people are just maximizing for their own pleasure, which pleasure are they maximizing for?</p>



<p><br>You might be tempted to reply, &#8220;that&#8217;s because we&#8217;re just adding up the pleasure across time to decide what to do.&#8221; But it seems clear that some people aren&#8217;t doing this (e.g., drug addicts who know their life is being ruined but sacrifice everything for the next fix). Many experiments in behavioral science and behavioral economics also contradict the idea that people are merely happiness maximizers. It&#8217;s too simple to say we &#8220;just do what gives us pleasure.&#8221;</p>



<hr class="wp-block-separator"/>



<p>4. &#8220;Wanting&#8221; shouldn&#8217;t be conflated with something bringing pleasure or reducing pain. They are quite correlated (since we tend to want pleasurable things), but pleasure and wanting are distinct. There are things we can really want (e.g., to &#8220;one day understand a mysterious scientific principle,&#8221; or &#8220;to keep promises&#8221; or to have certain things happen after we die) which are not about our pleasure.</p>



<p><br>Some neuroscience papers claim that &#8220;wanting&#8221; and &#8220;liking&#8221; can even be separately stimulated in rat brains. Whether scientists know how to do this or not, it seems we sometimes want things because they bring pleasure, but other times we just WANT them, so wanting and pleasure are not identical.<br></p>



<hr class="wp-block-separator"/>



<p>5. If the claim is weakened to say that we humans always do things that we have SOME sort of motivation to do, then (interpreting &#8220;motivation&#8221; broadly) the claim is trivially true. But it also doesn&#8217;t say anything &#8211; it&#8217;s right just by definition. Motivation is not identical to pleasure or wanting. So defining &#8220;wanting&#8221; to do something as having ANY motivation to do it doesn&#8217;t work because it renders the argument trivial. Similarly, if &#8220;self-interest&#8221; or &#8220;wanting&#8221; is just defined to be any pattern of brain activity that causes us to act, or any form of motivation at all regardless of what sort it is, then it is true (by definition) but also adds no information. What&#8217;s the point of even making the claim if it&#8217;s true by definition? In such cases, the claim can be actively misleading because &#8220;self-interest&#8221; has connotations to most people (even if you try to define those connotations away).<br></p>



<hr class="wp-block-separator"/>



<p>6. People sometimes do things that they know will bring them more pain than pleasure. For instance, a protestor who uses gasoline to set himself on fire might feel a spark of pleasure just before he lights the match, but he knows he will tremendously suffer until death. Or consider someone who takes an action for a social cause even though they know it will likely lead to spending the rest of their life in prison. Clearly, the person is sacrificing more happiness than they are gaining by such an action, yet some people do act in this way.<br></p>



<hr class="wp-block-separator"/>



<p>7. If we imagine a person who is extremely altruistic because they love making others happy, and we claim they&#8217;re &#8220;selfish&#8221; because they are doing it just to feel good, this is a very non-standard way to use the word &#8220;selfish.&#8221; It insinuates their behavior is somehow less good and is misleading in conversation. Of course, we can define words however we want, but if we define them in a way that is different than how others use a word, it makes discussion difficult and confusing.</p>



<p><br>What work does the word &#8220;selfish&#8221; do to explain things here? It&#8217;s clearer to just say (in this case) &#8220;the person is motivated by their love of helping others&#8221; and leave it at that. Most people would call that &#8220;altruism&#8221; (not &#8220;selfishness&#8221;) upon knowing all the details.<br></p>



<hr class="wp-block-separator"/>



<p>8. When we try to make the claim precise, it&#8217;s hard to do so (and, unfortunately, few I&#8217;ve encountered making the claim bother to try). We&#8217;re clearly not always maximizing long-term pleasure, but nor are we always maximizing immediate pleasure. Claiming we &#8220;always do what we want&#8221; is not the same as claiming &#8220;we always do what is pleasurable.&#8221; So maybe we just try the former claim?<br></p>



<p>If we try to restrict the claim to not be about pleasure or pain and just say, &#8220;we do what we want,&#8221; then how do we explain our numerous subconscious behaviors? And how do we define &#8220;want&#8221;? People often say they didn&#8217;t do what they &#8220;wanted,&#8221; so we can&#8217;t use colloquial definitions.<br></p>



<p>If &#8220;want&#8221; is broadened too much, then we&#8217;re back to just claiming that we do what we&#8217;re motivated to do; that is, we&#8217;re making a trivial definitional claim. So what is really being claimed by &#8220;we only ever do what we want&#8221;?<br>I think either nothing interesting or something false.<br></p>



<hr class="wp-block-separator"/>



<p><br>Now, all of this being said, clearly, people often DO act based on however they &#8220;want&#8221; to act (by a reasonable definition of &#8220;want&#8221;). And very often, people do act in such a way as to seek pleasure and avoid pain. It&#8217;s just that not all human actions fit that description, contrary to what the &#8220;everything is selfish&#8221; crowd claims.</p>



<hr class="wp-block-separator"/>



<p>To finish up, I&#8217;ll attempt to take the other side/steel-man the claim that &#8220;everything we do is something we want.&#8221; I think there is a psychological state of &#8220;desire for things to be a certain way&#8221; that drives many (though not all) of our actions. This desire for things to be a certain way is not the same as pleasure (though is often related to it) and the way we want the world to be is not always the way we think will make us happiest (though it often is). So, although I think that the generalization made by psychological egoism is false, I do think it&#8217;s approximately true, in a certain sense, a decent amount of the time.</p>



<hr class="wp-block-separator"/>



<p><em>This essay was first written on November 9, 2021, and first appeared on this site on November 12, 2021.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2021/11/is-every-action-secretly-selfish/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2504</post-id>	</item>
		<item>
		<title>Is altruism rational?</title>
		<link>https://www.spencergreenberg.com/2020/12/is-altruism-rational/</link>
					<comments>https://www.spencergreenberg.com/2020/12/is-altruism-rational/#comments</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 28 Dec 2020 03:26:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[altruism]]></category>
		<category><![CDATA[cooperation]]></category>
		<category><![CDATA[coordination problems]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[intrinsic values]]></category>
		<category><![CDATA[iterated games]]></category>
		<category><![CDATA[pre-commitment]]></category>
		<category><![CDATA[rational altruism]]></category>
		<category><![CDATA[rationality]]></category>
		<category><![CDATA[relationships]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2663</guid>

					<description><![CDATA[When people learn just a little about game theory, decision theory, economics, or even evolutionary theory, they sometimes come away thinking that altruism is somehow “irrational” or that rational agents are selfish. Here are a number of reasons why altruism is often rational: I. People can value altruism for its own sake: 1. Intrinsic values: [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>When people learn just a little about game theory, decision theory, economics, or even evolutionary theory, they sometimes come away thinking that altruism is somehow “irrational” or that rational agents are selfish.</p>



<p>Here are a number of reasons why altruism is often rational:</p>



<hr class="wp-block-separator"/>



<p><em><strong>I. People can value altruism for its own sake:</strong></em></p>



<p><strong>1. Intrinsic values: </strong>as a psychological fact, most humans intrinsically value at least some things as ends (not merely as means to other ends) that are not about their own gain. For instance: people may value the reduction of suffering around the world or the flourishing of the people in their country.&nbsp;</p>



<p><strong>2. Warm glow:</strong> most humans find that doing altruistic acts makes them happy. I call this “the Lucky Fact” about human nature. It’s both important and very lucky (i.e., it didn’t necessarily have to be this way if our evolution had taken a different path). We feel good when we see positive feelings in the people we like, and we feel good about ourselves when we cause those good feelings.</p>



<hr class="wp-block-separator"/>



<p><em><strong>II. Genuine altruism is also instrumentally useful:</strong></em></p>



<p><strong>3. Evolution:</strong> there are multiple reasons evolution programmed most of us with genuine altruism, even though it optimizes for gene spread.</p>



<p>Altruism is rewarded in settings of:</p>



<p>i) raising children</p>



<p>ii) iterated games</p>



<p>iii) tribal loyalty, with punishment of defectors</p>



<p>iv) deception detection</p>
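<p><em>As an illustrative sketch (not from the original essay), the &#8220;iterated games&#8221; point can be made concrete with a toy iterated prisoner&#8217;s dilemma. In a single round, defecting is the dominant move, but when play repeats, a reciprocating strategy like tit-for-tat earns far more with a fellow reciprocator than two defectors earn with each other. The payoff numbers below are standard textbook values, chosen only for illustration:</em></p>

```python
# Toy iterated prisoner's dilemma (illustrative sketch; payoffs are
# standard textbook values, not taken from the essay).
# Per round: both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5; exploited cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the partner's previous move.
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    # Defect unconditionally.
    return "D"

def play(strat_a, strat_b, rounds=100):
    # Run the repeated game and return each player's total score.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual reciprocity
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```

<p><em>Over 100 rounds, a pair of reciprocators accumulates three times the score of a pair of defectors &#8211; one way that iterated settings can reward altruistic dispositions.</em></p>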



<p><strong>4. Relationships:</strong> altruistic people tend to have stronger, happier, more goal-aligned, and mutually beneficial relationships. Although, in theory, a purely selfish person can have highly beneficial relationships, it is much harder to make these expedient tit-for-tat relationships work.</p>



<p><strong>5. Pre-commitment:</strong> suppose that there was a world of highly rational, purely selfish beings. If they were able, they might pre-commit (jointly, as a group) to become partially altruistic as a way to help solve difficult collective action problems. By uniting goals, they mutually gain.</p>
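<p><em>A minimal sketch of this point, with hypothetical numbers (not from the original essay): in a public goods game, contributions to a common pot are multiplied and shared equally, so each individual is tempted to free-ride, yet everyone ends up richer if all pre-commit to contribute than if no one does.</em></p>

```python
# Public goods game sketch (hypothetical parameters, for illustration only).
# Each of n players holds an endowment; the pot of contributions is
# multiplied by r (with 1 < r < n) and split equally among all players.

def payoffs(contributions, endowment=10, r=2.0):
    n = len(contributions)
    share = sum(contributions) * r / n
    # Each player keeps what they didn't contribute, plus their share.
    return [endowment - c + share for c in contributions]

n = 4
print(payoffs([0] * n)[0])               # 10.0: universal free-riding
print(payoffs([10] * n)[0])              # 20.0: universal pre-commitment
print(payoffs([0] + [10] * (n - 1))[0])  # 25.0: a lone defector does best
```

<p><em>The lone defector&#8217;s 25.0 beats the committed players&#8217; 20.0, which is exactly why purely selfish but rational agents would want a binding joint pre-commitment rather than relying on everyone&#8217;s goodwill.</em></p>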



<hr class="wp-block-separator"/>



<p><em>This piece was first written on December 27, 2020, and was first released on this site on February 25, 2022.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2020/12/is-altruism-rational/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2663</post-id>	</item>
		<item>
		<title>Our Human Games: games are everywhere, and they matter more than most people think</title>
		<link>https://www.spencergreenberg.com/2020/11/our-human-games-games-are-everywhere-and-they-matter-more-than-most-people-think/</link>
					<comments>https://www.spencergreenberg.com/2020/11/our-human-games-games-are-everywhere-and-they-matter-more-than-most-people-think/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 23 Nov 2020 20:35:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[academia]]></category>
		<category><![CDATA[altruism]]></category>
		<category><![CDATA[ambition]]></category>
		<category><![CDATA[careerism]]></category>
		<category><![CDATA[careers]]></category>
		<category><![CDATA[competition]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[games]]></category>
		<category><![CDATA[medicine]]></category>
		<category><![CDATA[money]]></category>
		<category><![CDATA[politics]]></category>
		<category><![CDATA[signalling]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2788</guid>

					<description><![CDATA[Games reflect an important part of human psychology. One broad way to think about &#8220;games&#8221; is that they are any situation that has: (a) a set of rules (explicit or implicit) that are made up by humans, (b) a scoring system (explicit or implicit) for determining how players are doing or for deciding who wins, [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Games reflect an important part of human psychology. One broad way to think about &#8220;games&#8221; is that they are any situation that has:</p>



<p>(a) a set of rules (explicit or implicit) that are made up by humans,</p>



<p>(b) a scoring system (explicit or implicit) for determining how players are doing or for deciding who wins,</p>



<p>(c) participants who are trying to increase their &#8220;score,&#8221; and</p>



<p>(d) a game context (outside of which the game rules stop applying).</p>



<hr class="wp-block-separator has-alpha-channel-opacity is-style-default"/>



<p>So, by this definition, games include chess, poker, football, and tennis, but also things like:</p>



<p>• money games (e.g., competing with friends and acquaintances to have a more expensive-looking car/watch/suit)</p>



<p>• altruism games (e.g., billionaires outbidding each other in charity auctions)</p>



<p>• coolness games (e.g., choosing clothing to demonstrate that your taste is trend-setting rather than trend-following)</p>



<p>• intelligence games (e.g., Oscar Wilde verbally jousting with his friends)</p>



<p>• sexual games (e.g., a man trying to seduce a woman while maintaining plausible deniability and her playing hard to get despite her intense attraction to him)</p>



<p>• strength games (e.g., boys wrestling after school)</p>



<p>• legal games (e.g., lawyers using every tool they know to beat each other in a case)</p>



<p>• academic games (e.g., young academics trying to outcompete each other in terms of who can get the most papers published in the top 10 journals)</p>



<p>• knowledge games (e.g., two people debating a factual topic in front of others at a party, each trying to show that the other person is wrong)</p>



<p>• political games (e.g., trying to form a strong coalition and to make the opposing coalition look corrupt or incompetent)</p>



<p>• career games (e.g., optimizing your behavior for getting promoted, rather than, say, for accomplishing the purpose of your work role)</p>



<hr class="wp-block-separator has-alpha-channel-opacity is-style-default"/>



<p>Our brains have a tendency to temporarily treat games as reality (a suspension of disbelief).</p>



<p>This is not a bad thing &#8211; it&#8217;s part of what makes games fun and motivating, and it gets us to try hard at them. Those who can&#8217;t or won&#8217;t engage in this suspension of disbelief tend to be bad at games. There&#8217;s little joy or motivation in games if we&#8217;re just thinking, &#8220;I&#8217;m moving this wooden peg, so this number goes up.&#8221; We must (at least temporarily) believe that the number MATTERS.</p>



<p>Games can be fun, rewarding, and motivating. For some people, game playing is one of life&#8217;s great joys. And games make learning more fun (in fact, games are fundamental to how we humans learn). Children invent and play many kinds of games that help them figure out adult behaviors. And gamification can make difficult activities feel easier (e.g., you can turn a difficult task into a game to make it more pleasant).</p>



<p>But, on the flip side, games also can become a big problem when we forget for too long that we&#8217;re playing a game. Or if we permanently swap them for reality. Or if we come to think that winning the game is what fundamentally matters.</p>



<hr class="wp-block-separator has-alpha-channel-opacity is-style-default"/>



<p>Consider the way that game playing distorts different activities:</p>



<p>• Science gets really screwed up when it is treated as a game where we compete to publish, rather than being treated as a way to figure out the truth about reality. This is part of why science has so many false positives.</p>



<p>• Altruism gets really screwed up when it is treated like a game to prove you&#8217;re a good person rather than as a way to help others. This is part of why so much altruism is not effective at improving the world.</p>



<p>• Governments get really screwed up when politics becomes a game (where most of what matters is beating the other side) rather than treating politics as a way to get helpful policies implemented.</p>



<p>• Medical schools get really screwed up if they become a game of who can memorize the most and function the best without sleep, rather than being a means to train effective doctors.</p>



<p>• The startup world gets really screwed up when it becomes a game of who can raise the most capital or do the coolest sounding thing, rather than having a focus on making products that solve actual problems.</p>



<p>• News gets really screwed up when it becomes a game about who can get the most clicks rather than as a means to spread true information.</p>



<p>• Law gets really screwed up when it becomes a game about what companies and people can technically get away with, rather than as a means of enforcing agreements and protecting people.</p>



<hr class="wp-block-separator has-alpha-channel-opacity is-style-default"/>



<p>Games can be small or large, great or terrible. The key thing is to not get stuck inside a game without realizing it. Sadly, many people spend their whole life stuck in a game, confusing it for something more.</p>



<p>Sometimes we have no choice but to play a game that we don&#8217;t value. But recognizing games for what they are can help us leave them when they are poorly aligned with what we actually care about.</p>



<p>It&#8217;s great to play games sometimes and to suspend your disbelief to make them more fun and motivating. But don&#8217;t forget for too long that you are suspending it.</p>



<p>Games are not reality, though they might have real-world consequences. The in-game scoring system (whatever it is) does not reflect what you truly, intrinsically value. The rules of the game are made up by humans and are not the fundamental constraints on what behaviors you can and can&#8217;t take (though there might be consequences for breaking the game rules).</p>



<p>Play games cognizantly.</p>



<hr class="wp-block-separator has-alpha-channel-opacity is-style-default"/>



<p><em>This essay was first written on November 23, 2020, and first appeared on this site on June 23, 2022.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2020/11/our-human-games-games-are-everywhere-and-they-matter-more-than-most-people-think/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2788</post-id>	</item>
		<item>
		<title>An Evolutionary Perspective on Human Traits</title>
		<link>https://www.spencergreenberg.com/2020/09/an-evolutionary-perspective-on-human-traits/</link>
					<comments>https://www.spencergreenberg.com/2020/09/an-evolutionary-perspective-on-human-traits/#respond</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Sun, 13 Sep 2020 18:15:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[senses]]></category>
		<category><![CDATA[survival]]></category>
		<category><![CDATA[traits]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=1993</guid>

					<description><![CDATA[The rules of evolution are simple: (1) if a trait makes survival or breeding more likely, then that trait will tend to survive in the long term by being passed down the generations. (2) Gene mutation and gene mixing create new traits, which naturally vary in how much they promote survival. Yet, the consequences of [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>The rules of evolution are simple: (1) if a trait makes survival or breeding more likely, then that trait will tend to survive in the long term by being passed down the generations. (2) Gene mutation and gene mixing create new traits, which naturally vary in how much they promote survival.</p>



<p>Yet, the consequences of these simple rules are profound. Various facets of the world are hard to understand without taking evolution into account. Since our brains and bodies (as well as the rest of the natural world) were literally crafted by evolutionary forces, you&#8217;re missing an important angle without that evolutionary perspective.</p>



<p>Here&#8217;s a list of human traits that seem hard to explain without considering an evolutionary element (along with <em>possible</em> evolutionary explanations for each):</p>



<p><strong>Note</strong><em>:</em> I&#8217;m not an expert in evolutionary theory, and none of these theories are provable beyond a shadow of a doubt. It&#8217;s important to remember that not all traits we have are evolutionarily adaptive (for instance, they could just be vestigial, or side effects of other adaptive traits, or deleterious but not yet squashed by evolution, or products of random genetic drift, or better explained as cultural phenomena). Think of these as hypotheses for why these aspects of ourselves and the world are the way they are.</p>



<p><em>Possible Genetic Explanations for Things We Observe</em></p>



<ul class="wp-block-list"><li>Why we have a sour taste capacity: probably to detect food spoilage (e.g., the bad chicken may smell and taste sour), as well as underripeness (think of an unripe apple) and excessive levels of acid (think of old wine that has turned to vinegar). Small amounts of sourness taste fine (or even good) to us, but more pungent tastes are usually bad.</li><li>Why we have a salty taste capacity: probably because we die if we don&#8217;t replenish our salt since we need it for ion/water homeostasis.</li><li>Why we have an umami taste capacity: probably to encourage us to eat peptides and proteins, which are used in many parts of our bodies (e.g., the composition of muscle). This is the most recently discovered taste capacity that seems to have gained substantial acceptance.</li><li>Why we have a sweet taste capacity: probably to encourage us to eat sugars since they have calories that we use for energy to stay alive. It might also exist to encourage us to eat starches (which can taste sweet as they break down in our mouth), but some now argue that there might be a unique starch taste capacity.</li><li>Why we have a bitter taste capacity: probably to detect poisons (e.g., many non-edible chemicals taste bitter, such as pesticides).</li><li>Why we start to like bitter foods more once we&#8217;ve eaten them for a while: possibly because our brains initially assume they might be poison but, after we eat them repeatedly without getting sick, learn they are fine (hence why people can come to enjoy black coffee even though most younger people find it disgusting when they first try it).</li><li>Why some plants have psychedelic effects: probably because they are &#8220;trying&#8221; to prevent animals from eating them (if it&#8217;s a plant that does not spread its seeds by being eaten) or because it&#8217;s &#8220;trying&#8221; to get eaten (if it spreads its seeds by being eaten). 
Different psychedelic effects from different plants on different animals may be more or less pleasant and may or may not be dangerous for survival.</li><li>Why plants contain caffeine: possibly because it seems to be a natural pesticide for some insects, and possibly because it is believed to attract some types of bees.</li><li>Why we have an &#8220;itch&#8221; sensory system: probably to detect insects crawling on and landing on us (which is really important when you live outdoors, though not so much when you live indoors).</li><li>Why, when something lightly tickles your skin and produces an itchy feeling, the itch tends to grow worse until it is scratched: possibly because while you may be able to detect an insect landing on you, you probably can&#8217;t tell if it&#8217;s sitting still on you, so the itching sense is reminding you to scratch (i.e., the bug still might be sitting there even if you can&#8217;t feel it).</li><li>Why some people throw up in cars: possibly because some poisons cause a mismatch between what you see and what you feel (and in those cases, it makes a lot of sense to throw up to get rid of the poison). In other words, a mismatch between visual stimuli and proprioceptive stimuli would be an indicator that you may have been poisoned and should throw up to expel it. Being in a car happens to produce a similar mismatch of sensory input. 
</li><li>Why we get sick of foods if we eat too much of them: probably a form of helpful encouragement to eat a varied diet (to get more balanced nutrition) so that we aren&#8217;t as likely to become malnourished.</li><li>Why we sometimes have sudden cravings for non-addictive foods: possibly because the body senses we lack a certain important nutrient.</li><li>Why symmetry is typically considered attractive: probably because various problems related to development in the womb and genetics that can affect survival or reproduction can produce asymmetry (in other words, symmetry is a little bit of evidence that a variety of things that could have gone wrong during gestation, didn&#8217;t).</li><li>Why people find clear skin beautiful: possibly because some harmful infectious diseases can cause skin issues (though, of course, most skin issues are harmless).</li><li>Why humans usually find babies cute: probably because it causes us to take better care of our babies and children (and our relatives&#8217; children), which causes our genes to spread more effectively down the generations (and on the flip side, it causes us to be better taken care of by adults when they have genes for liking cuteness &#8211; it is beneficial for the genes of both the adult and baby). Of course, that&#8217;s not the only force causing adults to treat babies well, and people seem to differ substantially in how much value they put on cuteness.</li><li>Why humans find baby animals cute: probably in part because we&#8217;ve purposely bred them to look cute to us (especially dogs) but also in part because babies of other species have features that resemble babies from our own species (e.g., helplessness, big eyes relative to the head, big head relative to the body, larger forehead, flatter face, etc.). 
I assume that the people I know who find (non-bred-to-be-cute) baby animals cuter than baby humans are outliers.</li><li>Why there are so many diseases that are only likely to kill us when we&#8217;re old: probably because once we&#8217;re past the age of reproductive fitness, there is much less evolutionary pressure pushing to keep us alive (though there still is some, because we still might be useful for helping copies of our genes survive in others, such as in our grandchildren whom we can help).</li><li>Why obesity is common now in our society but doesn&#8217;t seem to have been common long ago: probably because our drive to eat food (and calorie absorption systems) were designed by evolution for an environment where exercise was a built-in part of daily life, food was much scarcer, and the taste of food was not nearly as optimized for addictiveness as it is today. Hence many factors push us to eat much more than would have been typical. It&#8217;s possible that food was &#8220;healthier&#8221; way back when on average too, but that&#8217;s really tough to say, in part because diets now and back then both varied tremendously depending on location, and in part because optimal human nutrition is still not very well understood.</li><li>Why sudden loud noises cause most people anxiety: probably because such sounds were a pretty good indicator of danger 20,000 years ago (e.g., a tree falling, lightning, a loud animal, an angry person).</li><li>Why males are larger than females on average across nearly all large human groups: possibly because males used to battle with each other a lot more than females did (and so there was a sort of arms race in male size), and also possibly because males hunted more often than females (the larger average size of males may be an adaptation, e.g., for spear throwing, where body weight is helpful). 
Studies apparently show that the size difference between males and females within a species is correlated with how much males battle each other over mates, social status, or resources, though I can&#8217;t vouch for the reliability of these studies.</li><li>Why your fingers get wrinkly when soaked in water for a long time: possibly to help us grip wet objects better (at least one study suggests that wrinkled fingers improve our grip on wet objects), though this certainly could be a coincidence, and some researchers argue against it.</li><li>Why faces play such a prominent role in attraction even though most common facial differences have little or no direct useful function: probably because faces encode some other non-face-related information (e.g., related to hormone levels and age) and also because they directly encode a little bit of information about how attractive (on average) other people may find your son or daughter if you mate with that person (e.g., if birds find large beaks attractive, their offspring will mate more successfully if they have large beaks, all else being equal, even if beaks do not help in any other way).</li><li>Why people who are blind from birth make facial expressions for some emotions (especially anger, contempt, disgust, fear, happiness, and sadness) that non-blind people can recognize at least fairly accurately: probably because these six social emotions are generally useful for survival when living in a group across a wide range of environments (by communicating important information to others) and so have been hardcoded by evolution in a form that others can recognize. 
It&#8217;s also possible that the studies that claim this finding were poorly designed, though.</li><li>Why most people seem to have a gut reaction that incest is morally wrong: possibly an inbuilt mechanism to protect people from breeding with close relatives, which would carry a much-elevated chance of producing genetic disorders.</li><li>Why people get goosebumps: probably a (mostly) vestigial reaction for adjusting our hair follicles, which was programmed a very long time ago when our bodies had a lot more hair. Hair standing on end makes animals warmer when they are cold and increases their apparent size when afraid, to intimidate other animals (hence why cold and fear can create goosebumps). It may still be a bit useful for warmth because we have some hair on our bodies.</li></ul>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2020/09/an-evolutionary-perspective-on-human-traits/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1993</post-id>	</item>
		<item>
		<title>Bias based on facial attractiveness</title>
		<link>https://www.spencergreenberg.com/2020/07/bias-based-on-facial-attractiveness/</link>
					<comments>https://www.spencergreenberg.com/2020/07/bias-based-on-facial-attractiveness/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 03 Jul 2020 03:15:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[cultural norms]]></category>
		<category><![CDATA[cultural values]]></category>
		<category><![CDATA[discrimination based on appearance]]></category>
		<category><![CDATA[discrimination based on faces]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[false positives]]></category>
		<category><![CDATA[gender]]></category>
		<category><![CDATA[health]]></category>
		<category><![CDATA[individual variation]]></category>
		<category><![CDATA[injustice]]></category>
		<category><![CDATA[lookism]]></category>
		<category><![CDATA[prediction]]></category>
		<category><![CDATA[selection pressures]]></category>
		<category><![CDATA[sexual attraction]]></category>
		<category><![CDATA[testosterone]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2542</guid>

					<description><![CDATA[There&#8217;s a deeply-rooted, incredibly superficial aspect of human nature that is rarely discussed: our obsession with small variations in bone structure/skin smoothness on a person&#8217;s face. At extremes, people are desired or shunned due to tiny, otherwise almost meaningless facial details. In the attached image, there are two non-existent women (generated by a face generation [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>There&#8217;s a deeply-rooted, incredibly superficial aspect of human nature that is rarely discussed: our obsession with small variations in bone structure/skin smoothness on a person&#8217;s face. At extremes, people are desired or shunned due to tiny, otherwise almost meaningless facial details.</p>



<p>In the attached image, there are two non-existent women (generated by a face generation AI set to generate &#8220;brown hair white adult female&#8221;). If these were real people, they would likely be treated differently throughout their lives due to very minor differences in facial structure and skin smoothness.</p>



<p>Based on their faces alone, there&#8217;s no way to know, with better than negligible accuracy, which of these people (if they existed) would be more hard-working, more moral, wiser, or otherwise in possession of personal traits that we actually might care about. So why are humans so obsessed with faces? It seems likely to be caused by a combination of two factors:</p>



<hr class="wp-block-separator"/>



<p><strong>(1) Runaway Sexual Selection</strong></p>



<p>If peacocks find large tail plumage sexually attractive, then even if those feathers are not useful for anything else, that still creates an evolutionary selection pressure where those with larger tail plumage are more likely to pass on their genes (due to improved chances of mating). Similarly, if certain humans are found to be more attractive based on their faces, that creates an evolutionary selection pressure in favor of mating with those people, because their children then have a higher probability of finding mating success themselves (and hence passing on their genes). This phenomenon is self-reinforcing: those attracted to &#8220;good-looking&#8221; faces mate with &#8220;good-looking&#8221; people more often, so their children tend both to be better-looking (and so have an easier time mating) and to inherit the preference for &#8220;good-looking&#8221; faces.</p>
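
<p>The feedback loop above can be sketched as a toy simulation. This is purely my own illustration &#8211; the population size, mate-choice rule, and mutation noise are invented for the sketch, not drawn from any study. A heritable display trait with no survival value, plus a heritable preference for it, is enough to make the trait climb over generations:</p>

```python
import random

random.seed(0)

N = 200          # population size
GENS = 100       # generations to simulate
SUITORS = 5      # candidates each mother chooses among

# Each individual carries two heritable values: a display trait (e.g. tail
# size) with no survival value, and a preference for that trait.
# Preferences start mildly positive on average.
pop = [(random.gauss(0.0, 1.0), random.gauss(0.5, 1.0)) for _ in range(N)]

def next_generation(pop):
    children = []
    for _ in range(len(pop)):
        mother = random.choice(pop)
        # The mother scores a few random suitors by preference * trait,
        # so a positive preference favors a larger trait.
        suitors = random.sample(pop, SUITORS)
        father = max(suitors, key=lambda ind: mother[1] * ind[0])
        # Offspring inherit midparent values plus mutation noise.
        trait = (mother[0] + father[0]) / 2 + random.gauss(0.0, 0.3)
        pref = (mother[1] + father[1]) / 2 + random.gauss(0.0, 0.3)
        children.append((trait, pref))
    return children

for _ in range(GENS):
    pop = next_generation(pop)

mean_trait = sum(t for t, _ in pop) / len(pop)
print(f"mean trait after {GENS} generations: {mean_trait:.2f}")
```

<p>In runs like this, the average trait ends far above its starting value even though it confers no survival advantage, because offspring inherit both the trait and the preference that selects for it.</p>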



<p>Today, this selection pressure is likely much weaker than it once was, since most people now end up having children. For instance, the vast majority of people in the US now live to be at least 50, and only about 15% of women and 25% of men in the 40-50-year age bracket are childless. In contrast, tens of thousands of years ago, far fewer people would make it to the point where they would have children.</p>



<hr class="wp-block-separator"/>



<p><strong>(2) Health Correlations</strong></p>



<p>In the environment we lived in tens of thousands of years ago, some aspects of a person&#8217;s face correlated with the likelihood of their genes surviving &#8211; in particular, aspects related to disease (some diseases impact the face), genetic disorders (some of them cause facial changes), and development in the womb (where abnormal development can cause facial changes). </p>



<p>The correlation between health and facial features is likely to be lower now than it used to be back then. Today, a person&#8217;s facial features might still help to predict someone&#8217;s age, their most probable gender identity, and whether they have certain health conditions &#8211; but, of course, none of these give us any legitimate justification for treating some people better and others worse based just on their face.</p>



<p>It has been found that certain facial features do correlate with hormone levels (like testosterone). While testosterone levels may play a role in aggression (they may be part of the explanation for why men commit violence so much more often than women), using these small correlations to make judgments about any one person is going to be both highly inaccurate and highly unjust. Some other personality traits may also be very weakly correlated with a person&#8217;s facial features, but talking to the person for 20 minutes will, of course, give you dramatically more information about what that person is like. Yet we are prone to read so much into the way a person looks.</p>



<hr class="wp-block-separator"/>



<p><strong>Note: </strong>there is an additional effect when it comes to faces, which is that we are sometimes taught by our culture to value certain facial attributes more than others. This can act above and beyond the previously mentioned two factors.</p>



<hr class="wp-block-separator"/>



<p>We humans act as though faces are incredibly important, despite their being a substantially arbitrary mask our genes have programmed for us. And faces often impact how we treat each other, despite this unequal treatment being both unjust and unjustified. If you ever notice yourself treating someone less well because of their face, take note and adjust your behavior.</p>



<p>I am not saying that people should, for example, date people they are not attracted to. Obviously, attraction is an important part of relationships for most people, and the face is one part of what determines attraction. (You may also care about your children one day having attractive faces, so they can more easily find life partners they like.) Rather, what I&#8217;m saying is that we should be very wary about making negative inferences about any individual person based on their face (which is something that, unfortunately, the human mind seems to do often). The face says too little about a person&#8217;s character to be useful for making predictions about any individual.</p>



<hr class="wp-block-separator"/>



<p><em>This essay was first written on July 2, 2020, and first appeared on this site on December 17, 2021.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2020/07/bias-based-on-facial-attractiveness/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2542</post-id>	</item>
		<item>
		<title>On “superstimuli” and their dangers</title>
		<link>https://www.spencergreenberg.com/2020/07/on-superstimuli-and-their-dangers/</link>
					<comments>https://www.spencergreenberg.com/2020/07/on-superstimuli-and-their-dangers/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 01 Jul 2020 12:39:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[arousal]]></category>
		<category><![CDATA[culture]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[excess]]></category>
		<category><![CDATA[food]]></category>
		<category><![CDATA[goals]]></category>
		<category><![CDATA[homeostasis]]></category>
		<category><![CDATA[moderation]]></category>
		<category><![CDATA[overload]]></category>
		<category><![CDATA[superstimuli]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2832</guid>

					<description><![CDATA[A “superstimulus” triggers a response that evolution gave us, but to a stronger degree than is likely to occur in nature. They exist because we humans purposely optimize our environments to create these responses. We are surrounded by more superstimuli than most of us realize. Examples of superstimuli: • food: Cheetos / skittles / McDonalds [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>A “superstimulus” triggers a response that evolution gave us, but to a stronger degree than is likely to occur in nature. Superstimuli exist because we humans purposely optimize our environments to create these responses.</p>



<p>We are surrounded by more superstimuli than most of us realize.</p>



<p>Examples of superstimuli:</p>



<p>     • food: Cheetos / Skittles / McDonald&#8217;s</p>



<p>     • goal achievement: video games</p>



<p>     • visual arousal: porn</p>



<p>     • pair bonding: romance novels</p>



<p>     • affection: dogs</p>



<p>     • cuteness: puppies &amp; kittens</p>



<p>     • stories: TV</p>



<p>     • beauty: photoshopped models</p>



<p>     • gossip: celebrity magazines</p>



<p>     • social approval: Facebook</p>



<p>There is nothing wrong with superstimuli in moderation, but they tend to be addictive, and they can make it harder to enjoy the natural (non-super) versions of those things, which can harm our quality of life. Oftentimes a superstimulus gives just *part* of the experience we really crave (like junk food that is really tasty and calorie-dense, without really providing satiety or nutritional value).</p>



<p>So be wary if you spend more time on social media than talking to loved ones (or if you own more than 20 dogs).</p>



<hr class="wp-block-separator has-alpha-channel-opacity is-style-default"/>



<p><em>This piece was first written on July 1, 2020, and first appeared on this site on July 22, 2022.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2020/07/on-superstimuli-and-their-dangers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2832</post-id>	</item>
	</channel>
</rss>
