<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>denial &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/denial/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Sat, 15 Nov 2025 22:15:22 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>denial &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>What happens when your beliefs can&#8217;t change?</title>
		<link>https://www.spencergreenberg.com/2024/08/what-happens-when-your-beliefs-cant-change/</link>
					<comments>https://www.spencergreenberg.com/2024/08/what-happens-when-your-beliefs-cant-change/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 13 Aug 2024 14:10:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[anchor beliefs]]></category>
		<category><![CDATA[beliefs]]></category>
		<category><![CDATA[biases]]></category>
		<category><![CDATA[cognitive distortions]]></category>
		<category><![CDATA[criticism]]></category>
		<category><![CDATA[dedication]]></category>
		<category><![CDATA[deluded]]></category>
		<category><![CDATA[delusions]]></category>
		<category><![CDATA[denial]]></category>
		<category><![CDATA[faulty thinking]]></category>
		<category><![CDATA[imposter syndrome]]></category>
		<category><![CDATA[ingroup bias]]></category>
		<category><![CDATA[ingroup loyalty]]></category>
		<category><![CDATA[insight]]></category>
		<category><![CDATA[self-denial]]></category>
		<category><![CDATA[sunk cost fallacy]]></category>
		<category><![CDATA[updating]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=4082</guid>

					<description><![CDATA[This is part 2 in my series about &#8220;anchor beliefs&#8221; &#8211; but you don&#8217;t need to read part 1 in order to understand it. I think that almost everyone has beliefs that are essentially unchangeable. These don&#8217;t feel to us like beliefs but like incontrovertible truths. Counter-evidence can&#8217;t touch them. They are beliefs we can&#8217;t [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>This is part 2 in my series about &#8220;anchor beliefs&#8221; &#8211; but you don&#8217;t need <a href="https://www.spencergreenberg.com/2021/11/human-behavior-makes-more-sense-when-you-understand-anchor-beliefs/">to read part 1</a> in order to understand it.</p>



<p>I think that almost everyone has beliefs that are essentially unchangeable. These don&#8217;t feel to us like beliefs but like incontrovertible truths. Counter-evidence can&#8217;t touch them. They are beliefs we can&#8217;t change our mind about. I call these &#8220;Anchor Beliefs.&#8221;</p>



<p>When Anchor Beliefs are false, we distort reality to fit them. So, what distortions do some reasonably common Anchor Beliefs cause?</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Anchor Belief 1: &#8220;I&#8217;m entirely good&#8221; or &#8220;I don&#8217;t do unethical things&#8221;</strong></p>



<p>What happens when someone with these Anchor Beliefs acts highly unethically? Well, since the Anchor Belief can&#8217;t change, that means the action must have been ethically okay to do, or else it was someone else&#8217;s fault or impossible to avoid. Victim blaming, denial, or shirking of responsibility ensues.</p>



<p>&#8220;My whole foundation, life, what I believed in, devotion to the company, was based on believing [Ramesh Balwani] was this person&#8230;He told me he didn&#8217;t know what I was doing in business, that my convictions were wrong&#8230;There was no way I could save our company if he was there…We were trying to do the right thing. We were trying to report results that we believed in and not report results if we thought there was any issue&#8221; -Elizabeth Holmes, who was found guilty on four counts of defrauding the investors in her company, Theranos</p>



<p>&#8220;All I ever wanted was to love women and, in turn, to be loved by them back. Their behavior towards me has only earned my hatred, and rightfully so! I am the true victim in all of this. I am the good guy.&#8221; -Elliot Rodger, in his manifesto about why he planned to commit murder before murdering six people.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Anchor Belief 2: &#8220;I&#8217;m not good enough&#8221;</strong></p>



<p>What happens when someone with this anchor belief gets a great job, performs really well, or achieves success? Well, it must have been a fluke or mistake; eventually, others will figure it out. Imposter syndrome ensues.</p>



<p>&#8220;No matter what we&#8217;ve done, there comes a point where you think, &#8216;How did I get here? When are they going to discover that I am, in fact, a fraud and take everything away from me?&#8217;&#8221; &#8211; Tom Hanks, winner of two consecutive Academy Awards for Best Actor</p>



<p>&#8220;I have written 11 books, but each time I think, &#8216;Uh oh, they&#8217;re going to find out now. I&#8217;ve run a game on everybody, and they&#8217;re going to find me out.&#8217;&#8221; &#8211; Maya Angelou, legendary poet and winner of the Presidential Medal of Freedom.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Anchor Belief 3: &#8220;This thing I&#8217;ve devoted a great deal of time/energy/identity into works and is good&#8221; [that doesn&#8217;t work or is harmful]</strong></p>



<p>What happens when it&#8217;s criticized? The criticism must be bad faith. Any imperfection in counter-evidence fully invalidates that evidence. Confirmation bias, cherry-picking, and motivated reasoning ensue.</p>



<p>&#8220;Those who have attacked my work on Vitamin C are scoundrels.&#8221; &#8211; Linus Pauling, two-time Nobel prize winner, defending his theory that vitamin C cures cancer and heart disease.</p>



<p>&#8220;We do not find critics of Scientology who do not have criminal pasts…Politician A stands up on his hind legs in a Parliament and brays for a condemnation of Scientology. When we look him over we find crimes &#8211; embezzled funds, moral lapses, a thirst for young boys &#8211; sordid stuff. Wife B howls at her husband for attending a Scientology group. We look her up and find she had a baby he didn&#8217;t know about.&#8221; &#8211; L. Ron Hubbard, founder of Scientology</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Anchor Belief 4: &#8220;My group is good.&#8221;</strong></p>



<p>What happens when their group does something really bad? The victims must be lying or have deserved it. Or acting badly must be justified in this case because it&#8217;s done for some more important greater good. Denial and justification of immoral actions ensue.</p>



<p>&#8220;When we show a statement by Donald Trump that&#8217;s not truthful, Republicans will say it&#8217;s okay if it&#8217;s not true because it sends the right message, whereas Democrats will say that a statement needs to be factual&#8230;With a statement from Joe Biden, Democrats will say it&#8217;s okay if it&#8217;s not based on evidence, that it supports a generally true message, while Republicans will then have a higher bar and say every statement needs to be based on facts.&#8221; &#8211; Ethan Poskanzer, based on his studies on moral flexibility</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>So, what are the takeaways here? I think that the following three things are important and true:</p>



<p>(1) Almost everyone has at least one Anchor Belief &#8211; a belief that is so sticky that it&#8217;s nearly impossible for it to change in the face of even extremely strong counter-evidence. Some people have more of these, and perhaps a small number of people have none, but I think Anchor Beliefs are near-universal among us humans.</p>



<p>(2) When our Anchor Beliefs are false (or partially false), because the beliefs won&#8217;t change, we distort reality when we get evidence against them in order to keep them intact while also somehow &#8220;making sense&#8221; of that counter-evidence.</p>



<p>(3) By looking at fairly common Anchor Beliefs people have, we can start to understand some recurring distortions in people&#8217;s thinking. Since people&#8217;s Anchor Beliefs are fixed but reality sometimes provides strong counter-evidence against them, predictable patterns of distortion emerge as people&#8217;s minds work to keep those beliefs intact.</p>



<p>In particular, I think that we find:</p>



<p>• Anchor Beliefs related to being good may lead to victim blaming and denial of responsibility.</p>



<p>• Anchor Beliefs about not being good enough may lead to imposter syndrome.</p>



<p>• Anchor Beliefs that something we&#8217;ve invested a lot of time/energy/identity into works and is good may lead to confirmation bias, cherry-picking, and motivated reasoning.</p>



<p>• Anchor Beliefs about our group being good may lead us to deny or justify immoral actions by our group.</p>



<p>There are no strong studies that I&#8217;m aware of that identify or map out anchor beliefs and their frequency in the population &#8211; I believe the points above are true based on my experiences and observations.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on August 13, 2024, and first appeared on my website on September 2, 2024.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2024/08/what-happens-when-your-beliefs-cant-change/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4082</post-id>	</item>
		<item>
		<title>Three motivations for believing </title>
		<link>https://www.spencergreenberg.com/2024/04/three-motivations-for-believing/</link>
					<comments>https://www.spencergreenberg.com/2024/04/three-motivations-for-believing/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 20 Apr 2024 14:04:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[addiction]]></category>
		<category><![CDATA[belief]]></category>
		<category><![CDATA[delusional]]></category>
		<category><![CDATA[delusions]]></category>
		<category><![CDATA[denial]]></category>
		<category><![CDATA[dopamine]]></category>
		<category><![CDATA[epistemics]]></category>
		<category><![CDATA[hedonism]]></category>
		<category><![CDATA[hope]]></category>
		<category><![CDATA[motivated reasoning]]></category>
		<category><![CDATA[optimism]]></category>
		<category><![CDATA[pragmatism]]></category>
		<category><![CDATA[present bias]]></category>
		<category><![CDATA[rationalism]]></category>
		<category><![CDATA[religion]]></category>
		<category><![CDATA[self-sabotage]]></category>
		<category><![CDATA[utility]]></category>
		<category><![CDATA[values]]></category>
		<category><![CDATA[wishful thinking]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3929</guid>

					<description><![CDATA[There are three different motivations for belief, and it&#8217;s important to distinguish between them.&#160; 1) Belief because you think something&#8217;s true. For instance, you may think that the evidence supports the idea that you will eventually find love, or you may feel convinced by logical arguments you&#8217;ve heard in favor of god&#8217;s existence. 2) Belief [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>There are three different motivations for belief, and it&#8217;s important to distinguish between them.&nbsp;</p>



<p><strong>1) Belief because you think something&#8217;s true.</strong></p>



<p>For instance, you may think that the evidence supports the idea that you will eventually find love, or you may feel convinced by logical arguments you&#8217;ve heard in favor of god&#8217;s existence.</p>



<p><strong>2) Belief because you think it&#8217;s useful to believe.&nbsp;</strong></p>



<p>Regardless of whether you predict something&#8217;s true, you can predict that believing it will be more helpful than harmful to you in the long term, and so be motivated to believe for that pragmatic benefit.</p>



<p>For instance, you may intuit that you&#8217;ll be better off long-term believing that you will eventually find love (because that will make love more likely) or perceive that you&#8217;ll be happier believing in god (even if it turns out there is no god).</p>



<p><strong>3) Belief because it feels good in the moment.&nbsp;</strong></p>



<p>Regardless of whether it&#8217;s true or helpful to you in the long term, you may be motivated to believe something because it feels good right now (or prevents you from feeling bad).&nbsp;</p>



<p>For instance, you may feel comforted right now by thinking you&#8217;ll eventually find love or feel good in the moment, believing a god is watching over you.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Rationalists&nbsp;</strong>typically recommend striving to have your beliefs be of type 1: believing based on what&#8217;s most likely to be true.</p>



<p><strong>Pragmatists</strong>&nbsp;often recommend aiming for type 2 beliefs: believing based on what&#8217;s ultimately most useful to you.</p>



<p>I favor striving to have type 1 beliefs rather than type 2 beliefs, in part because I intrinsically value truth, but also because I think that for beliefs in category 2 that are *not* actually true, there are typically some beliefs in category 1 that will help you just as much, but which&nbsp;have the advantage of&nbsp;also&nbsp;being true.&nbsp;So often (but not always), there is a low cost to replacing beliefs from 2 with beliefs from 1 that have the added benefit of being true.</p>



<p>I also think that if you allow yourself&nbsp;to indiscriminately hold type 2 beliefs, it makes it hard to suddenly switch to rigorous truth-oriented thinking when it&#8217;s important to figure out the truth (e.g.,&nbsp;when you have to make a very important decision based on evidence).</p>



<p>On the other hand, many people have lots of type 3 beliefs, and all of us, myself included, have some type 3 beliefs. Whether you think that type 1 or type 2 beliefs are ultimately preferable, I think a valuable aspiration is to replace some of our type 3 beliefs with either 1s or 2s.</p>



<p>It&#8217;s very, very easy for us humans to delude ourselves based on what it feels good to believe at the moment because the reward cycle is so fast. Type 3 beliefs are immediately rewarding, incentivizing more such beliefs. But they are like the social media addiction version of believing, where you pursue what gives the greatest instantaneous reward rather than what&#8217;s actually good for you.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on April 20, 2024, and first appeared on my website on May 7, 2024.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2024/04/three-motivations-for-believing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3929</post-id>	</item>
		<item>
		<title>Should Effective Altruists be Valuists instead of utilitarians? &#8211; part 3 in the Valuism sequence</title>
		<link>https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/</link>
					<comments>https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/#comments</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Fri, 10 Mar 2023 07:42:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[autonomy]]></category>
		<category><![CDATA[burnout]]></category>
		<category><![CDATA[choice]]></category>
		<category><![CDATA[contradictions]]></category>
		<category><![CDATA[denial]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[equity]]></category>
		<category><![CDATA[freedom]]></category>
		<category><![CDATA[group membership]]></category>
		<category><![CDATA[humility]]></category>
		<category><![CDATA[intrinsic values]]></category>
		<category><![CDATA[justice]]></category>
		<category><![CDATA[long-term success]]></category>
		<category><![CDATA[moral antirealism]]></category>
		<category><![CDATA[moral realism]]></category>
		<category><![CDATA[non-altruistic values]]></category>
		<category><![CDATA[self-care]]></category>
		<category><![CDATA[self-control]]></category>
		<category><![CDATA[shared values]]></category>
		<category><![CDATA[social groups]]></category>
		<category><![CDATA[social values]]></category>
		<category><![CDATA[sustainability]]></category>
		<category><![CDATA[truth-seeking]]></category>
		<category><![CDATA[utilitarianism]]></category>
		<category><![CDATA[utility]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3077</guid>

					<description><![CDATA[By Spencer Greenberg and Amber Dawn Ace&#160; This is the third of five posts in my sequence of essays about my life philosophy, Valuism &#8211; here are the first, second, fourth, and fifth parts (though the links won’t work until those other essays are released). Sometimes, people take an important value &#8211; maybe their most [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>By Spencer Greenberg and Amber Dawn Ace&nbsp;</em></p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="750" height="375" data-attachment-id="3168" data-permalink="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/dall%c2%b7e-2023-02-05-16-07-14-a-treasure-chest-full-of-rainbows/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?fit=2048%2C1024&amp;ssl=1" data-orig-size="2048,1024" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="DALL·E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?fit=750%2C375&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=750%2C375&#038;ssl=1" alt="" class="wp-image-3168" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=1024%2C512&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=300%2C150&amp;ssl=1 300w, 
https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=768%2C384&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=1536%2C768&amp;ssl=1 1536w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?w=2048&amp;ssl=1 2048w" sizes="(max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption"><em>Image created using the A.I. DALL•E 2</em></figcaption></figure>
</div>


<p style="font-size:14px"><em>This is the third of five posts in my sequence of essays about my life philosophy, Valuism &#8211; here are the <a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">first</a>, <a href="https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/">second</a>, <a href="https://www.spencergreenberg.com/2023/02/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/">fourth</a>, and <a href="https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/">fifth</a> parts (though the links won’t work until those other essays are released).</em></p>



<p>Sometimes, people take an important value &#8211; maybe their most important value &#8211; and decide to prioritize it above all other things. They neglect or ignore their other values in the process. In my experience, this often leaves people feeling unhappy. It also leads them to produce less total value (according to their own intrinsic values). I think people in the effective altruist community (i.e., EAs) are particularly prone to this mistake.</p>



<p><a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">In the first post in this sequence</a>, I introduce Valuism &#8211; my life philosophy &#8211; and offer some general arguments for its advantages. In this post, I talk about the interaction between Valuism and effective altruism. I argue that the way some EAs think about morality and value is (in my view) empirically false, potentially psychologically harmful, and (in some cases) incoherent.&nbsp;</p>



<p>EAs want to improve others’ lives in the most effective way possible. Many EAs identify as hedonic utilitarians (even the ones who reject objective moral truth). They say that impartially maximizing utility among all conscious beings &#8211; by which they usually mean the sum of all happiness minus the sum of all suffering &#8211; is the <em>only thing of value</em>, or the only thing that they feel they <em>should</em> value. I think this is not ideal for a few reasons.</p>



<p></p>



<h3 class="wp-block-heading">1. I think (in one sense) it&#8217;s empirically false</h3>



<p>Consider a person who claims that &#8220;only utility is valuable.&#8221;</p>



<p>If&nbsp;we interpret this as an empirical claim about the person’s own values &#8211; i.e., that the sum of happiness minus suffering for all conscious beings is the only thing that their brain assigns value to &#8211; I think that it&#8217;s very likely empirically false.&nbsp;</p>



<p>That is, I don&#8217;t think anyone <em>only</em> values (in the sense of what their brain assigns value to) maximizing utility, even if it&#8217;s a very important value of theirs. I can&#8217;t prove that literally nobody<em> </em>only values maximizing utility, but I argue that human brains aren&#8217;t built to only value one thing, nor would we expect evolution to converge on pure utilitarian psychology since evolution optimizes for survival (a purely utilitarian brain would get rapidly outcompeted by other brain types if they existed 50,000 years ago).&nbsp;</p>



<p>I think that even the most hard-core hedonic utilitarians <em>do</em> psychologically value some non-altruistic things deep down &#8211; for example, their own pleasure (more than the pleasure of everyone else), their family and friends, and truth. However, in my opinion, they sometimes deny this to themselves or feel guilty about it. If you are convinced that your only intrinsic value is utility (in a hedonistic, non-negative-leaning utilitarian sense), you may find it instructive to take a look <a href="https://twitter.com/SpencrGreenberg/status/1568595511522852871">at these philosophical scenarios</a> I assembled or check out <a href="https://www.youtube.com/watch?v=d_6i9uzsBuc&amp;ab_channel=CentreforEffectiveAltruism">the scenarios I give in this talk</a> about values.</p>



<p>For instance, does your brain actually tell you it&#8217;s a good trade (in terms of your intrinsic values) to let a loved one of yours suffer terribly in order to create a mere 1% chance of preventing 101 strangers from suffering the same way? Does your brain actually tell you that equality doesn&#8217;t matter one iota (i.e., it&#8217;s equally good for one person to have all the utility compared to spreading it more equally)? Does your brain actually value a world of microscopic, dumb orgasming micro-robots more than a world (of slightly less total happiness) where complex, intelligent, happy beings pursue their goals? Because taken at face value, hedonic utilitarianism doesn&#8217;t care about whether a person is your loved one or a stranger, doesn&#8217;t care about equality <em>at all</em>, and prefers microscopic orgasming robots to complex beings as long as the former are slightly happier. But, if you consider yourself a hedonic utilitarian, is that actually what your brain values?</p>



<p></p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh4.googleusercontent.com/u7FnrSutFnOMuG57YHHw9RGv-QCfrH2LMvMWsATOHkYrOpNy8mr9I46XublWGnhnnVc_vSjXkOIWXfG9-rRYQYrujHM5D6d8GylwPPRuv0ePebNF-Kha_P9_b9k3Vd63BVHaP5eMOb0QHj4MJLWZ4Yw" alt=""/><figcaption class="wp-element-caption"><em>Caption: it turns out very few people are willing to risk hell on earth for a somewhat higher expected utility!</em></figcaption></figure>
</div>


<p></p>



<h3 class="wp-block-heading">2. It can be psychologically harmful</h3>



<p>Additionally, I think the attitude that there is only one thing of value can lead to severe psychological burnout as people try to push away, minimize or deny their other intrinsic values and “selfish,” non-altruistic desires. I’ve seen this happen quite a few times. <a href="https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends">Here&#8217;s Tyler Alterman&#8217;s personal account</a> of this if you’d like to see an example. <a href="https://www.lesswrong.com/posts/pDzdb4smpzT3Lwbym/my-model-of-ea-burnout">And here&#8217;s a theory</a> of how this burnout happens.</p>



<p></p>



<h3 class="wp-block-heading">3. I think (in one sense) it&#8217;s incoherent</h3>



<p>When coupled with a view that there is no objective moral truth, I think it is, in most cases, <strong>philosophically incoherent</strong> to claim that total hedonic utility is all that matters<strong>.</strong></p>



<p>If you believe in objective moral truth, it may make sense to say, “I value many things, but I have a moral obligation to prioritize only some of them” (for example, you might be convinced by arguments that you are objectively morally obliged to promote utility impartially even though that’s not the only value you have).</p>



<p>However, many EAs, like me, don’t believe in objective moral truth. If you don’t think that things <em>can</em> be objectively right or wrong, it doesn’t make sense (I claim) to say that you “should” prioritize maximizing utility for all of humanity over other values – what does this “should” even mean? Well, there are some answers for what this “should” could mean that philosophers and lay people have proposed, but I find them pretty weak.</p>



<p>For a much more in-depth discussion of this point (including an analysis of different ways that EAs have responded to my critique of pairing utilitarianism with denial of objective moral truth), see <a href="https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/">this essay</a>. It collects many different objections (from EAs and from some philosophers) and discusses them. So if you are interested in whether it is or isn&#8217;t coherent to only value utility when you deny objective moral truth, and moreover, whether EAs and philosophers have good arguments for doing so, please see that essay.</p>



<p>I find that while many (perhaps the majority of) EAs deny objective moral truth, many still talk and think as though it exists.</p>



<p>I found it striking that, in my conversations with EAs about their moral beliefs, few had a clear explanation for how to combine a belief in utilitarianism with a lack of a belief in objective moral truth, and the approaches that they did put forward were usually quite different from each other (suggesting, at the very least, a lack of consensus in how to support such a perspective). Some philosophers I spoke to pointed to other ways one might defend such a position (mainly drawn from the philosophical literature), but I don&#8217;t recall ever seeing these approaches being used or referenced by non-philosopher EAs (so they don&#8217;t seem to be doing much work in the beliefs of EAs who hold this view).&nbsp;</p>



<p></p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh4.googleusercontent.com/9GJvXrOJAl0p6FFj2eUiqu6MQPftJRlFDeIG2D_mBMMmi1_ryaOh5N19YsBdG4BlkyJNHhSvogaR1CAdEE4EsUNH5xmQ8rdzZmT90qlbkL4oCQO4sehUFLUp7y5EdLBizKLKZNxD0UFj4J2aFj0QBgo" alt=""/><figcaption class="wp-element-caption"><em>A poll I ran on Twitter. More than half of EA respondents report not being moral realists</em>.</figcaption></figure>
</div>


<p>I suspect it would help many EAs if they took a more Valuist approach: rather than claiming to or aspiring to only value hedonic utility, they could accept that while they <em>do </em>intrinsically value this – very likely far more than the average person – they also have other intrinsic values, for example, truth (which I think is another very important psychological value for many EAs), their own happiness, and the happiness of their loved ones.</p>



<p>Valuism also avoids some of the most awkward bullets that EAs sometimes are tempted to bite. For instance, hedonic utilitarianism seems to imply that your own happiness and the happiness of your loved ones “shouldn’t” matter to you even a tiny bit more than the happiness of a stranger who is certain to be born 1,000,000 years from now. Valuism may explain why people who identify as hedonic utilitarians may feel a great deal of internal conflict about this – even if you value the happiness of all sentient beings a tremendous amount, you almost certainly have other intrinsic values too. That means that Valuism may help you avoid some of the awkward conundrums that arise from ethical monism (where you assume that there is only one thing of value).</p>



<p></p>



<h2 class="wp-block-heading">Valuism and the EA Community</h2>



<p>From a Valuist perspective,<strong> I see the EA community as a group of people who share a primary intrinsic value of hedonic utility</strong> (i.e., reducing suffering and increasing happiness impartially) <strong>with a secondary strong intrinsic value of truth-seeking.</strong> Oddly (from my point of view), EAs are very aware of their intrinsic value of impartial hedonic utility but seem much less aware of their truth-seeking intrinsic value. On a number of occasions, I&#8217;ve seen mental gymnastics used to justify truth-seeking in terms of increasing hedonic utility when (I claim) a much more natural explanation is that truth-seeking is an intrinsic value (not <em>just</em> an instrumental value that leads to greater hedonic utility). This helps explain why many EAs are so averse to <em>ever</em> lying and so averse even to persuasive marketing.</p>



<p>Each individual EA has other intrinsic values beyond impartial utility and truth-seeking, but in my view, those two values help define EA and make it unique. This is also a big part of why this community resonates with me: those are my top two universal intrinsic values as well.</p>



<p></p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh5.googleusercontent.com/caB2WlN1Mleqk9ZsApi7rokTC-KpCErd-t3GDKOIk5didxPnvdoHJp1bVOiCFNgBmzq9QLMFPgrya91zUY4vqUEEDAJ8juRiCo_07ikYFZZRwmqZBC7B5NOLeHr6KqFLciFtoqWok8rDjHtfqd2-r-k" alt=""/><figcaption class="wp-element-caption"><em>While these groups sometimes overlap (e.g., some effective altruists are libertarians, and </em><a href="https://clearerthinkingpodcast.com/episode/085"><em>some are social justice advocates</em></a><em>, etc.), we created this graphic to illustrate what we believe are the </em><strong><em>most common</em></strong><em> universal (i.e., not self-focused, not community-focused) intrinsic values shared among most members of each group.</em></figcaption></figure>
</div>


<p></p>



<p>If more EAs adopted Valuism, I think that they would almost all continue to devote a large fraction of their time and energy toward improving the world effectively. Maximizing global hedonic utility (i.e., the sum of happiness minus suffering for conscious beings) <em>is</em> the strongest universal intrinsic value of most community members, so it would still play the largest role in determining their goals and actions, even after much reflection.&nbsp;</p>



<p>However, they would also feel more comfortable investing in their own happiness and the happiness of their loved ones at the same time, which I predict would make them happier and reduce burnout. Additionally (I claim), they’d accept that, like many effective altruists,<strong> they also have a strong intrinsic value of truth</strong>. They’d strike a balance between their various intrinsic values, and not endorse annihilating all their intrinsic values except for one.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>I published this piece on this site on March 10, 2023.</em><br><br><a rel="noreferrer noopener" href="https://www.guidedtrack.com/programs/4zle8q9/run?essaySpecifier=%3A+Should+Effective+Altruists+be+Valuists+instead+of+utilitarians%3F%C2%A0+-+part+3+in+the+Valuism+sequence&amp;source=email" target="_blank">If you read this line, please do us a favor and click here to answer one quick question.</a></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>You&#8217;ve just finished the third post in my sequence of essays on my life philosophy, Valuism –</em>&nbsp;<em><a href="https://www.spencergreenberg.com/2023/05/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/">click here to go to the fourth post.</a></em></p>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3077</post-id>	</item>
		<item>
		<title>What to do when your values conflict? &#8211; part 2 in the Valuism sequence</title>
		<link>https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/</link>
					<comments>https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/#comments</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Fri, 24 Feb 2023 10:00:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[compromise]]></category>
		<category><![CDATA[conflicting values]]></category>
		<category><![CDATA[context]]></category>
		<category><![CDATA[decision-making]]></category>
		<category><![CDATA[denial]]></category>
		<category><![CDATA[dilemmas]]></category>
		<category><![CDATA[diminishing marginal returns]]></category>
		<category><![CDATA[long-term goals]]></category>
		<category><![CDATA[moral decisions]]></category>
		<category><![CDATA[tensions]]></category>
		<category><![CDATA[tradeoffs]]></category>
		<category><![CDATA[units of exchange]]></category>
		<category><![CDATA[win-win solutions]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3078</guid>

					<description><![CDATA[By Spencer Greenberg and Amber Dawn Ace&#160; This is the second of five posts in my sequence of essays about my life philosophy, Valuism &#8211; here are the first, third, fourth, and fifth parts. Pretty much all of us have multiple intrinsic values (things we value for their own sake, not merely as a means [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>By Spencer Greenberg and Amber Dawn Ace&nbsp;</em></p>



<figure class="wp-block-image"><img decoding="async" src="https://lh6.googleusercontent.com/dhNX8D0WScl1qxTEJcDlgXYd57dVt53k6PsTDLYsvxmHY9FEmbdElZrJzj4y6q0N0Vn68xcugm451hYCVlUDMEXA1H6b70i7cV2C2LJsytMn20atLOgUIPZEDBrEyF4pcyfyBQbGJs4mctL-Pu8eSUA" alt=""/><figcaption class="wp-element-caption"><em>Image created using the A.I. DALL•E 2</em></figcaption></figure>



<p style="font-size:14px"><em>This is the second of five posts in my sequence of essays about my life philosophy, Valuism &#8211; here are the <a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">first</a>, <a href="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/">third</a>, <a href="https://www.spencergreenberg.com/2023/02/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/">fourth</a>, and <a href="https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/">fifth</a> parts.</em></p>



<p>Pretty much all of us have multiple intrinsic values (things we value for their own sake, not merely as a means to an end). This means that sometimes our intrinsic values come into <em>conflict</em>. For example, you might value:</p>



<ul class="wp-block-list">
<li>Both achieving ambitious goals <em>and</em> experiencing pleasure&nbsp;</li>



<li>Both your family&#8217;s well-being <em>and</em> the well-being of all people on Earth</li>



<li>Both honesty <em>and</em> kindness</li>
</ul>



<p>In cases like these, it can be difficult to maximize <em>both</em> values because working on one takes away from the other. If you spend most of your free time pursuing fun hobbies that give you pleasure, it may be difficult to achieve your ambitious goals; if you spend all your money on nice things for your family, you won&#8217;t have anything left to give to strangers; if you seek to be honest in all your interactions, you will sometimes say things that are unkind.</p>



<p>In this post, I describe how I approach dilemmas like this (as seen through the lens of <a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">Valuism</a> &#8211; my life philosophy).</p>



<p></p>



<h2 class="wp-block-heading">Handling Conflicting Values</h2>



<p>When you&#8217;re in a situation where your intrinsic values conflict with each other, I think it is most helpful to avoid rejecting any of the values involved &#8211; yet we often do exactly that. We may try to dismiss (or act as if we do not have) one or more of our intrinsic values, especially if our social group or the culture around us respects some of our values but not others. For instance, if you value both ambitious achievement and having a pleasurable life, but the culture around you denigrates pleasure, you may want to assign pleasure a weight of zero (i.e., you may act as if you do not value pleasure) any time that it comes into conflict with ambition.</p>



<p></p>



<p>Rather than rejecting any of your values, I think it&#8217;s usually more helpful to carefully consider how your values trade off against each other in a given scenario.</p>



<p>To illustrate this with another example, many people value the happiness of other people but also value speaking the truth. Sometimes these collide. If a friend of yours writes a play and asks you if you like it, but you think it is terrible, you might feel conflicted between telling the truth and saying what you think will make your friend feel happy. If you value both truth-telling and your friend&#8217;s happiness, then both are worth taking into account in the decision. </p>



<p>Many people who strongly value both truth and happiness would find it worthwhile (according to their values) to sacrifice a little truth to produce a lot of happiness. They might, for example, be willing to tell a white lie to protect a friend from strong negative feelings. But they may not think it worthwhile to sacrifice a lot of truth to produce just a little happiness, for instance, by making up an elaborate lie just to make a friend feel slightly happier.</p>



<p>Intrinsic values can be difficult to compare &#8211; at first they may seem simply incommensurable, admitting no compromise. But in practice, we <em>do</em> often face value clashes like this in life, and so, whether implicitly or explicitly, we are forced to make tradeoffs between our values. I take the view that you should recognize when these conflicts arise and reflect carefully on which intrinsic values matter more to you in the given circumstance.</p>



<p>So when presented with such a scenario pitting the happiness of a friend against truth-telling, it can be useful to ask yourself: how much truth am I willing to give up for how much of a friend&#8217;s happiness? There is no logically correct answer to this question &#8211; finding the answer will involve paying close attention to your intuition (and, in particular, the part of your mind that assigns values to states of the world). Your intuition may be aided by thought experiments, such as:</p>



<ul class="wp-block-list">
<li>If I had to tell a much more severe lie, but doing so would give my friend only as much happiness as is involved in this situation, would it be worth it in that case?</li>



<li>If my friend were to be made much happier than they are in this scenario, would it be a no-brainer that it is worth it to tell this small lie?</li>
</ul>



<p>Pushing the boundaries of the scenario with thought experiments such as these can bring the relative strengths of your values to light.</p>



<p></p>



<h2 class="wp-block-heading">The Subtlety of Values</h2>



<p>The process of reflecting on the relative strength of your intrinsic values is subjective because you&#8217;re drawing on a subtle operation of your mind: the ability to assign value to different states of affairs. It&#8217;s unlikely that you’ll be able to work out precise &#8220;units of exchange&#8221; for your intrinsic values, such as: “I value one happy day for a family member the same as one happy year for a stranger.” That&#8217;s okay, because you can almost always make decisions without needing such precision. Furthermore, these units are unlikely to be constant anyway. And in cases where both sides of the equation seem nearly equally balanced from a values perspective, that may merely indicate that it doesn&#8217;t matter which option you choose (from the point of view of your intrinsic values).</p>



<p>Empirically, I’ve observed that people&#8217;s values often seem to obey a form of diminishing marginal returns: if they try to let one value dominate over the others, the pull of other values becomes stronger. For instance, imagine you intrinsically value working towards your long-term goals, but you also intrinsically value your own happiness. You push yourself really hard at work so as to achieve your goals, but this makes you unhappy. At this point, your intrinsic value of happiness may start to gain more strength when you reflect carefully on what you value. This quirky property of values is not necessarily how you’d design a value-seeking robot, but I think it&#8217;s how many of us humans seem to work.&nbsp;</p>



<p></p>



<h2 class="wp-block-heading">A Values-Informed Decision-Making Process</h2>



<p>I&#8217;ve observed that some of the most difficult decisions to make are ones where multiple values we care a lot about are pitted against each other (whether we realize that&#8217;s what is happening or not). In my experience, though, we can often make really hard decisions easier if we look at them through a values lens.</p>



<p>Here&#8217;s a step-by-step procedure you may find useful for decisions involving conflicts in your intrinsic values.</p>



<p><strong>Step 1: Identify</strong> which of your intrinsic values are at play in the decision. It may help to write down a list of those values, or to have a look at <a href="https://programs.clearerthinking.org/intrinsic_values_graphic/graphic.html">the intrinsic values wheel</a>.</p>



<p><strong>Step 2:</strong> <strong>Reflect</strong> on the relative importance of those intrinsic values to you, using thought experiments to tease out how they trade off against each other (e.g., &#8220;would this decision be easy if one of the values weren&#8217;t at stake?&#8221; or &#8220;would this decision be easy if one of the values were being sacrificed a bit more?&#8221;).</p>



<p><strong>Step 3: Brainstorm</strong> different actions that you could take in this scenario. During brainstorming, it&#8217;s usually best to withhold judgment &#8211; just get all the ideas out that you can. Some potentially useful brainstorming prompts to try are: &#8220;what is an action that would support just <em>one</em> of my values?&#8221; and &#8220;is there a win-win action that looks good from the point of view of all the values of mine that are at stake?&#8221; and &#8220;is there a compromise action I could take that is pretty good from the point of view of all my values even if it isn&#8217;t ideal from the point of view of any of them?&#8221;</p>



<p><strong>Step 4:</strong> <strong>Evaluate</strong> how good each action looks from the perspective of each of your relevant intrinsic values.</p>



<p><strong>Step 5: Select</strong> among the actions based on how well they achieve your intrinsic values overall, attempting to take into account all the relevant intrinsic values of yours and the relative importance of each of those values to you.</p>
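<p>For readers who like to see the evaluate-and-select steps made concrete, the procedure above can be sketched as a simple weighted scoring of options. (This sketch is an illustration added here, not part of the original essay: the value names, weights, and scores are hypothetical, and, as noted below, real value tradeoffs rarely admit precise numbers.)</p>

```python
# A minimal sketch of Steps 4-5 as a weighted decision matrix.
# All names, weights, and scores are hypothetical illustrations.

# Relative importance of each intrinsic value at stake (from Step 2),
# on an arbitrary scale.
weights = {"honesty": 3, "loyalty": 2, "long-term goals": 4}

# How well each brainstormed action (Step 3) serves each value,
# scored 0-10 (Step 4).
actions = {
    "option 1": {"honesty": 9, "loyalty": 8, "long-term goals": 2},
    "option 2": {"honesty": 3, "loyalty": 3, "long-term goals": 9},
    "compromise": {"honesty": 7, "loyalty": 6, "long-term goals": 6},
}

def overall_score(scores, weights):
    """Weighted sum across all relevant intrinsic values (Step 5)."""
    return sum(weights[value] * s for value, s in scores.items())

# Select the action that best serves the values overall.
best = max(actions, key=lambda a: overall_score(actions[a], weights))
# With these illustrative numbers, the compromise action wins overall
# even though it is not the top choice for any single value.
```

<p>Of course, the numbers here are only a thinking aid; the point of the sketch is the structure (values, weights, options, scores), not the arithmetic.</p>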



<p></p>



<p>As an example, recently, a friend came to me when stuck deciding between two options related to one of their relationships. After talking it through carefully, we decided that what made it so hard was that:</p>



<ul class="wp-block-list">
<li>Option 1 looked good from the perspective of their intrinsic values of honesty and loyalty, while</li>

<li>Option 2 was better for helping them achieve their own long-term goals.</li>
</ul>



<p>Once we had figured that out together, my friend reported feeling more clarity about the situation. Now, at least, they had a clear idea about what the tradeoffs involved were. </p>



<p>Thankfully, with some brainstorming, we were able to craft a third option for what to do that was able to preserve a substantial amount of all of their intrinsic values that were at stake.</p>



<p></p>



<h2 class="wp-block-heading">Common Pitfalls</h2>



<p>Here are some values-related mistakes I think are common during decision-making that it may be useful to be on the lookout for: </p>



<ul class="wp-block-list">
<li><strong>Not noticing which intrinsic values of yours are at stake</strong> in the situation. For example, it&#8217;s easy to anchor on figuring out which is the &#8220;right&#8221; or &#8220;good&#8221; choice rather than reflecting on the tradeoffs between the choices (according to your own values), or to focus on the values of those around you rather than your own. For instance, you might choose to cover up for a friend who has done a bad thing because of a heuristic that it&#8217;s the &#8220;right&#8221; thing to do, even though covering up for your friend, in this case, involves betraying other important intrinsic values you have.</li>



<li><strong>Completely dismissing one or more intrinsic values</strong> of yours that are at stake rather than balancing them based on careful consideration of how important they each are to you. For instance, you might assign no weight to your intrinsic value of having a pleasurable life.</li>



<li><strong>Not generating enough options</strong> for what choice to pick and <strong>anchoring</strong> on just the most obvious options or the ones you came up with first. For instance, you might frame a decision as &#8220;quit your job&#8221; or &#8220;stay in your role&#8221; without considering possibilities like &#8220;renegotiate your role at your job&#8221; or &#8220;transfer internally at the same company.&#8221;</li>



<li><strong>Choosing based only on what is instrumentally valuable</strong>, even when misaligned with your intrinsic values. For instance, you might choose based on what gets you the most money rather than based on what produces the most of what you intrinsically value.</li>
</ul>



<p>So remember: the next time you&#8217;re in a difficult decision-making scenario, you may find it useful to reflect on what intrinsic values of yours are at stake, and you may want to consider using a step-by-step process for incorporating your values into the decision, such as the one outlined above.</p>



<p></p>



<p><a rel="noreferrer noopener" href="https://www.guidedtrack.com/programs/4zle8q9/run?essaySpecifier=%3A+What+to+do+when+your+values+conflict%3F+-+part+2+in+the+Valuism+sequence" target="_blank">If you read this line, please do us a favor and click here to answer one quick question.</a></p>



<p></p>



<p><em>You’ve just finished the second post in my sequence of essays on my life philosophy, Valuism –</em> <em><a href="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/">click here to go to the third post.</a></em></p>



]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3078</post-id>	</item>
		<item>
		<title>Ideology Eats Itself When Truth Becomes Stigmatized</title>
		<link>https://www.spencergreenberg.com/2020/08/how-ideology-eats-itself-or-a-quick-primer-on-how-to-be-a-genuinely-good-person-who-harms-the-world/</link>
					<comments>https://www.spencergreenberg.com/2020/08/how-ideology-eats-itself-or-a-quick-primer-on-how-to-be-a-genuinely-good-person-who-harms-the-world/#comments</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 07 Aug 2020 14:15:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[arbitrariness]]></category>
		<category><![CDATA[beliefs]]></category>
		<category><![CDATA[bias]]></category>
		<category><![CDATA[binary thinking]]></category>
		<category><![CDATA[censorship]]></category>
		<category><![CDATA[confirmation bias]]></category>
		<category><![CDATA[defectors]]></category>
		<category><![CDATA[delusion]]></category>
		<category><![CDATA[denial]]></category>
		<category><![CDATA[dichotomies]]></category>
		<category><![CDATA[dichotomous thinking]]></category>
		<category><![CDATA[good intentions]]></category>
		<category><![CDATA[groupthink]]></category>
		<category><![CDATA[ideology]]></category>
		<category><![CDATA[ingroup bias]]></category>
		<category><![CDATA[mass delusion]]></category>
		<category><![CDATA[outgroup bias]]></category>
		<category><![CDATA[punishment]]></category>
		<category><![CDATA[rationality]]></category>
		<category><![CDATA[road to hell]]></category>
		<category><![CDATA[self-delusion]]></category>
		<category><![CDATA[self-sabotage]]></category>
		<category><![CDATA[teaching]]></category>
		<category><![CDATA[truth-seeking]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2723</guid>

					<description><![CDATA[A quick primer on how to be a genuinely good person who harms the world: 1: Start to think that one ideology you like &#8211; which contains genuine benefits, truths, and positive moral elements &#8211; might be the only valid perspective. 2: Surround yourself with believers until you&#8217;re convinced that your view is common and [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>A quick primer on how to be a genuinely good person who harms the world:</p>



<p></p>



<p>1: Start to think that one ideology you like &#8211; which contains genuine benefits, truths, and positive moral elements &#8211; might be the only valid perspective.</p>



<p>2: Surround yourself with believers until you&#8217;re convinced that your view is common and normal.</p>



<p>3: Ignore your own doubts so that you can fit in better. Join in on chastising (and eventually ostracizing) insiders who doubt too much. Punish slightly more harshly than you feel is fair in order to prove that you are one of the good guys.</p>



<p>4: Since challenging the ideology is punished, pretend to believe more than you really do &#8211; contributing to the sense that almost everyone else has no doubts &#8211; in a self-reinforcing cycle.</p>



<p>5: Assume that since your view is obviously correct, normal, and morally good, those who strongly oppose your view are bad people.</p>



<p>6: Since you are good and they are bad, conclude that you, as the good guys, should try to destroy them (figuratively, or in extreme cases, literally).</p>



<p>7: But how can you tell who is bad? Decide that beliefs that sound similar to the bad people&#8217;s beliefs are off-limits. Anyone who believes them is probably bad. In those cases, humane treatment is no longer necessary.</p>



<p>8: Even just spending too much time with one of the bad people, or speaking well of them, is morally suspect. Why would you do that if you weren&#8217;t bad too?</p>



<p>9: Unfortunately, some true beliefs were accidentally put on the &#8220;bad&#8221; side of the good/bad dividing line. Now there are true things that you would become a bad person for believing.</p>



<p>10: Because of that, you and your group must avoid looking at reality too closely, lest you become bad too.</p>



<p>11: If you start to notice something true that you&#8217;re not allowed to believe, look away quickly or contort reality to make it seem different than it is.</p>



<p>12: Intensify your self-delusion and your punishment of non-believers so that you can make sure that still more people in your group will delude themselves out of fear.</p>



<p>13: Start teaching children (before they are old enough to think for themselves) that your belief system is the only correct one, perpetuating the system for future generations.</p>



<p>14: Congratulations! You&#8217;ve succeeded at being a good person who harms the world. Your mostly good ideology has eaten itself and has become more bad than good.</p>



<hr class="wp-block-separator"/>



<p>This has happened many times throughout history, and it will happen many more times. Watch out for this pattern so that you (and the people you love) don&#8217;t end up as &#8220;true believers&#8221; who do harm by accident.</p>



<hr class="wp-block-separator"/>



<p><em>This piece was first written on August 7, 2020, and first appeared on this site on April 29, 2022.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2020/08/how-ideology-eats-itself-or-a-quick-primer-on-how-to-be-a-genuinely-good-person-who-harms-the-world/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2723</post-id>	</item>
	</channel>
</rss>
