<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>effective altruism &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/effective-altruism/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Thu, 08 Jan 2026 07:29:55 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>effective altruism &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>People May Value Universal Happiness And Reduction Of Suffering More Than They Realize</title>
		<link>https://www.spencergreenberg.com/2025/12/people-may-value-universal-happiness-and-reduction-of-suffering-more-than-they-realize/</link>
					<comments>https://www.spencergreenberg.com/2025/12/people-may-value-universal-happiness-and-reduction-of-suffering-more-than-they-realize/#respond</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Tue, 02 Dec 2025 19:07:39 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[altruism]]></category>
		<category><![CDATA[catastrophe aversion]]></category>
		<category><![CDATA[consensus ethics]]></category>
		<category><![CDATA[cooperation]]></category>
		<category><![CDATA[distance-neutrality]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[empathy]]></category>
		<category><![CDATA[extended empathy]]></category>
		<category><![CDATA[global suffering]]></category>
		<category><![CDATA[harm dominance]]></category>
		<category><![CDATA[impartiality]]></category>
		<category><![CDATA[low-cost aid]]></category>
		<category><![CDATA[moral common ground]]></category>
		<category><![CDATA[moral concern]]></category>
		<category><![CDATA[moral hierarchy]]></category>
		<category><![CDATA[moral networks]]></category>
		<category><![CDATA[moral obligation]]></category>
		<category><![CDATA[moral scaling]]></category>
		<category><![CDATA[prioritization]]></category>
		<category><![CDATA[relational values]]></category>
		<category><![CDATA[self-interest limits]]></category>
		<category><![CDATA[severity]]></category>
		<category><![CDATA[shared values]]></category>
		<category><![CDATA[strangers]]></category>
		<category><![CDATA[suffering]]></category>
		<category><![CDATA[suffering primacy]]></category>
		<category><![CDATA[universalism]]></category>
		<category><![CDATA[value tradeoffs]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=4618</guid>

					<description><![CDATA[I have a number of&#160;intrinsic&#160;values, but two of my most important intrinsic values are happiness and the lack of suffering for conscious beings. While these are fairly common intrinsic values, I suspect many people actually value them more than they realize. In other words, upon careful reflection, many people would realize that happiness&#160;and lack of [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>I have a number of&nbsp;<a target="_blank" href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/" rel="noreferrer noopener">intrinsic&nbsp;values</a>, but two of my most important intrinsic values are happiness and the lack of suffering for conscious beings. While these are fairly common intrinsic values, I suspect many people actually value them more than they realize. In other words, upon careful reflection, many people would realize that happiness&nbsp;and lack of suffering are stronger intrinsic values to them than they previously were aware of.</p>



<p>With that in mind, here&nbsp;are seven thought experiments related to&nbsp;happiness and&nbsp;suffering that&nbsp;might make you see your intrinsic values a bit differently:</p>



<p>— we don&#8217;t necessarily know our values —</p>



<p>Unfortunately, our deepest values are not something we automatically know about ourselves. The conscious side of our mind doesn&#8217;t have direct access to the rest of our mind. And much of what we care about lies in the subconscious, meaning that our explicit beliefs about our values may not be comprehensive or even accurate. So this at least opens the possibility that we might subconsciously value increasing strangers&#8217; well-being more than we realize.</p>



<p>— our values are affected by our beliefs —</p>



<p>Some of what we value hinges on our beliefs about what&#8217;s true. And so if some of our relevant beliefs are false, or we haven’t fully explored all the implications of those beliefs (e.g., two things we believe imply a third thing but we haven’t realized that), then what we think we value may be, in a certain sense, “wrong”. So this at least opens the possibility that we might hold beliefs that are false or that contradict each other, such that, once they are corrected or the contradictions are resolved, we may end up caring more about increasing the well-being of strangers than we think we do now.</p>



<p>— our understanding of our values evolves —</p>



<p>We figure out our own values over time as we carefully introspect, discuss our values with others, compare options, notice and resolve contradictions, refine our understanding of the truth, flesh out the implications of what we already think is true, and infer things about ourselves from our own reactions. Hence, it is not that strange to think that our understanding of our values may change as we engage in reflection.</p>



<p>— a growing ember of classical utilitarianism —</p>



<p>So we may not fully understand what we value.</p>



<p>And I am proposing that, through thought experiments about values, carefully considered and reflected upon, quite a lot of people may realize that they care more about working to increase happiness or reduce suffering than they had originally thought; that many people are <em>partly</em> classical utilitarians in their values, even if they haven&#8217;t realized it, and that thought experiments can expose this.</p>



<p>— the thought experiments —</p>



<p>Warning: references to intense suffering and very difficult tradeoffs</p>



<p>(1) Suffering is bad, and not just for me</p>



<p>Remember that time when you felt really intense physical suffering (e.g., maybe you had a really nasty stomach flu)? Don’t dwell on that time, because I don’t want you to suffer now, but remember it just for a moment. Remember how much that suffering sucked?</p>



<p>Now take a few seconds to imagine a stranger. Someone you’ve never met and never will meet, but perhaps you passed them on the street at some point in your life. Take a moment to picture their face.</p>



<p>Now, suppose that right now this stranger is suffering in that same exact way that you recalled yourself suffering a moment ago. Assume this person is not someone who has done something terrible to deserve that suffering.</p>



<p>How do you feel about a state of the world where this stranger is suffering? Contrast it to a state of the world where that person is happy. I bet you think the latter world is better than the former.</p>



<p>I ran a survey asking people about their intrinsic values, that is, those things they would continue to value even if no other consequences occurred as a result of them. In it, 49% of people (from the general U.S. Mechanical Turk population who seemed to understand the question) reported that “people I don&#8217;t know suffer less than they do normally” is an intrinsic value, and 50% reported that “people I don&#8217;t know feel happy” is an intrinsic value.</p>



<p>It’s tough to measure people’s intrinsic values, and this is not a population that is fully representative of the U.S. population, so the exact numbers should be taken with a grain of salt. But these results suggest to me that many people do care about the suffering of strangers.</p>



<p>But now, the next question is, what properties should your caring about strangers have?</p>



<p>—</p>



<p>(2) Your friends care about the suffering of their friends</p>



<p>You presumably want the world to contain more of what your friends value (and less of what they disvalue) insofar as these values don’t conflict with your own.</p>



<p>Well, there’s a very good chance that one of the things your friends value is that their friends don’t suffer. Another thing your friends probably value is that their own friends get the things they value too, which presumably includes not wanting the friends of their friends (who are the friends of your friends’ friends) to suffer.</p>



<p>In other words, just by caring about the values of your friends, you may also care about the suffering of a whole host of other strangers. Not necessarily all strangers, but a lot of people you will never meet.</p>



<p>—</p>



<p>(3) More suffering is worse (a.k.a. scope sensitivity)</p>



<p>Suppose that 1 innocent person experiences a painful electric shock for one hour. How bad do you feel that is? Now suppose that, instead of that, 100 innocent people each experience the same electric shock for one hour. How much worse does that seem to you? Take a moment to consider it.</p>



<p>Now 10,000 people. How bad is that? Now 1,000,000 people. How bad is that?</p>



<p>At first, you may feel on a raw gut level that 1,000,000 people suffering is not that much worse than 1 person suffering. But are you really taking into account how many people 1,000,000 is? That’s about the entire population of San Francisco.</p>



<p>Notice how, when you really think about it, and you really try to grasp the enormity of these large numbers, 1,000,000 innocent people each experiencing a painful electric shock for one hour is way, way, way worse than 1 person experiencing it. Not just, say, twice as bad. But MUCH worse.</p>



<p>That implies that, for instance, eradicating a common and horribly debilitating disease that ten million people would otherwise get is not just a little bit more valuable than helping, say, 1000 people live slightly easier lives. It’s way, way, way more valuable!</p>



<p>I’m not saying you necessarily value a reduction in 1 million units of suffering as being 1 million times more valuable than a reduction in one unit of suffering, just that you probably do think it’s MUCH more valuable.</p>



<p>—</p>



<p>(4) Selfishness does not dominate</p>



<p>What’s the thing you value most in the world? Your life, maybe? Or your happiness? Or maybe something involving another person? My guess is that no matter how much you value this, there is an amount of suffering you’d be willing to give this up to alleviate.</p>



<p>For instance, if you had to give up your life to prevent all future suffering on earth, I bet you would do it, as terrible and unfair a choice as it would be to make.</p>



<p>—</p>



<p>(5) We should help suffering strangers when it is easy (a version of the famous drowning child thought experiment that Peter Singer has popularized)</p>



<p>Suppose a stranger you’re walking behind suddenly teeters and then collapses in front of you. The person is now lying on the ground, clearly in tremendous pain. You are the only person nearby.</p>



<p>I think most of us feel that even though we didn’t cause this person to be ill, we still have a moral obligation to try to help them. That is, (a) not being the cause of suffering doesn’t let us entirely off the hook with regard to trying to relieve that suffering.</p>



<p>Furthermore, suppose that it would be a small inconvenience for us to help this person (e.g., we might have to show up 15 minutes late to a fairly important work meeting). I think most of us would still help this person (and would feel that it is the right thing to do). If true, that suggests that (b) if the potential reduction in another person’s suffering is much greater than our own loss from helping them, we probably should help.</p>



<p>Finally, suppose that instead of this being a stranger right in front of us, we imagine that this is a stranger whom we happened to have just Skype called by accident (by entering our friend’s user ID incorrectly). Assuming we don’t believe the person on the other end is faking, shouldn’t we still try to figure out some way to help this person (assuming it is feasible), even though they are far away? Of course, if we have no way to help them, we have no obligation to help. But if we can think of an easy way to help, shouldn’t we do it? This suggests that (c) our obligation to help doesn’t depend on how far away someone is, only on our ability to help that person.</p>



<p>We must then remember, of course, that there are people we could help around the world at little inconvenience to ourselves.</p>



<p>Even if you agree with (a), (b), and (c), that doesn’t mean that you think you should devote all your time and money to helping people who are suffering. But if you do agree with those points, then I suspect your value system tells you that you should expend at least some of your resources helping reduce suffering in others, if you have the means to do so without too much sacrifice.</p>



<p>—</p>



<p>(6) Other values may seem to diminish when happiness is even slightly reduced as a consequence of them</p>



<p>Suppose that you happen to have found out that (through no action on your part) certain people have a false belief about a certain topic. Furthermore, you know they would believe you if you corrected this belief.</p>



<p>The problem is that these people would all be slightly less happy if they knew the truth about this thing, and in fact, nobody would benefit in any way from this truth being known.</p>



<p>Would you tell these people? Well, you may think truth is important (I do too), but you may feel that it substantially takes the wind out of the sails of truth if all people involved are less happy because of it, and nobody benefits. I think in this case, some people will say, “What is the point of the truth if everyone suffers slightly more because of it?” In other words, they might feel the value of truth is reduced to almost nothing.</p>



<p>This isn’t just about truth. For instance, you can do a version of this thought experiment about equality (what if, in a particular group of people, you could make the group more equal in some dimension, but every single member of the group would be slightly less happy as a result?). Or you can do it for almost any other value.</p>



<p>My guess is that these other values seem quite a bit less valuable (and perhaps to some not even valuable at all) when everyone is slightly less happy as a consequence, highlighting the potential importance of happiness in your value system.</p>



<p>Note that you may not necessarily feel this property is symmetrical with other values. For instance, suppose that someone reduces suffering a significant amount, but in doing so causes the people involved in the situation to have slightly less accurate beliefs. You may not feel that the slight reduction in accurate beliefs makes the reduction in suffering itself any less valuable.</p>



<p>—</p>



<p>(7) We can at least agree on suffering</p>



<p>Some people like apples and others like oranges. Some want to spread atheism, and others want to spread theism. Some people think you should obey authorities, while others value freedom of thought. But one of the few dimensions we are just about all similar on is that we don’t want to suffer ourselves, and we don’t want the people we love to suffer.</p>



<p>Some people are perhaps exceptions (e.g., Christopher Hitchens claimed Mother Teresa believed suffering to be at least sometimes good, quoting her as saying “I think it is very beautiful for the poor to accept their lot, to share it with the passion of Christ. I think the world is being much helped by the suffering of the poor people.&#8221;) I’m not sure what she meant by that or whether she would apply that to her own suffering or that of her loved ones, but it’s a possible exception.</p>



<p>That being said, though, disagreement on the badness of suffering seems really rare. Nearly everyone seems to find suffering bad, at least when it happens to themselves or their loved ones.</p>



<p>So if we all had to work as a species to reduce one thing, suffering seems like a pretty good contender. It’s hard to think of another thing we all dislike more.</p>



<p>— final thoughts —</p>



<p>Taken together, these thought experiments suggest (insofar as you buy into them) that you may believe:</p>



<p>(1) Suffering is bad when it happens to strangers</p>



<p>(2) You at least somewhat care about the suffering of many strangers by virtue of caring about the values of those people you care about</p>



<p>(3) More suffering of strangers is worse than less, and way, way more suffering is much worse still</p>



<p>(4) Your own self-interest is not more valuable than the potential for reducing all the suffering in the world</p>



<p>(5) We should put at least a little effort into reducing the suffering of strangers if it’s not too costly for us to do so, and we should not care whether those strangers are far away or near</p>



<p>(6) Most other values don’t seem as great if the result of producing them is to cause everyone involved to suffer slightly more, with no one benefiting, and these other values may even seem to lose their value in these cases</p>



<p>(7) We can all at least agree that suffering is bad and work together to reduce it</p>



<p>These points are not the same as classic utilitarianism, but they point in roughly the same direction, I think. And anecdotally, some people’s ethical views seem to be quite affected by thought experiments like these (though of course we can’t be sure whether the experiments are revealing their deeper values or actually reshaping those values).</p>



<p>I don’t think that increasing the happiness of and/or reducing the suffering of conscious beings is the ONLY thing you care about. Nor do I think you SHOULD only care about those things.</p>



<p>But perhaps these thought experiments will make you realize that you care more about them than you thought you did, or that you’re more of a classic <a href="https://en.wikipedia.org/wiki/Utilitarianism">utilitarian</a> than you realized.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on June 2, 2018, and first appeared on my website on December 2, 2025.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2025/12/people-may-value-universal-happiness-and-reduction-of-suffering-more-than-they-realize/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4618</post-id>	</item>
		<item>
		<title>Valuism and X: how Valuism sheds light on other domains &#8211; Part 5 of the sequence on Valuism</title>
		<link>https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/</link>
					<comments>https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/#comments</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Wed, 19 Jul 2023 13:09:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[anxiety]]></category>
		<category><![CDATA[depression]]></category>
		<category><![CDATA[economics]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[existential risk]]></category>
		<category><![CDATA[GDP]]></category>
		<category><![CDATA[happiness]]></category>
		<category><![CDATA[human development index]]></category>
		<category><![CDATA[intrinsic values]]></category>
		<category><![CDATA[life philosophy]]></category>
		<category><![CDATA[mental health]]></category>
		<category><![CDATA[philosophies]]></category>
		<category><![CDATA[suffering]]></category>
		<category><![CDATA[truth-seeking]]></category>
		<category><![CDATA[utopias]]></category>
		<category><![CDATA[valuism]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3084</guid>

					<description><![CDATA[By Spencer Greenberg and Amber Dawn Ace&#160; This is the fifth and final part in my sequence of essays about my life philosophy, Valuism &#8211; here are the first, second, third, and fourth parts. In previous posts, I&#8217;ve described Valuism &#8211; my life philosophy. I&#8217;ve also discussed how it could serve as a life philosophy [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>By Spencer Greenberg and Amber Dawn Ace&nbsp;</em></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="750" height="375" data-attachment-id="3167" data-permalink="https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/dall%c2%b7e-2023-02-05-15-50-14-a-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?fit=2048%2C1024&amp;ssl=1" data-orig-size="2048,1024" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="DALL·E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?fit=750%2C375&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?resize=750%2C375&#038;ssl=1" alt="" class="wp-image-3167" 
srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?resize=1024%2C512&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?resize=300%2C150&amp;ssl=1 300w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?resize=768%2C384&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?resize=1536%2C768&amp;ssl=1 1536w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-15.50.14-A-crystal-acts-as-a-beam-splitter-a-beam-of-white-light-enters-the-crystal-and-the-light-exits-as-a-rainbow-digital-art-1.png?w=2048&amp;ssl=1 2048w" sizes="(max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption"><em>Image created using the A.I. DALL•E 2</em></figcaption></figure>



<p style="font-size:15px"><em>This is the fifth and final part in my sequence of essays about my life philosophy, Valuism &#8211; here are the <a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">first</a>, <a href="https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/">second</a>, <a href="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/">third</a>, and <a href="https://www.spencergreenberg.com/2023/02/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/">fourth</a> parts.</em></p>



<p>In previous posts, I&#8217;ve described Valuism &#8211; my life philosophy. I&#8217;ve also discussed how it could serve as a life philosophy for others. In this post, I discuss how a Valuist lens can help shed light on various fields and areas of inquiry.</p>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and Effective Altruism</h3>



<p>Effective Altruism is a community and social movement <a href="https://www.centreforeffectivealtruism.org/ceas-guiding-principles">about</a> &#8220;using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.&#8221;</p>



<p>Effective Altruists often operate from a hedonic utilitarian framework (trying to increase happiness and reduce suffering for all conscious beings). But Effective Altruism can alternatively be approached from a Valuist framework. </p>



<p>You can think of Valuist Effective Altruism as addressing the question of how to effectively increase the production of your altruistic intrinsic values within the time, effort, and focus you give to those values (as opposed to your other intrinsic values). If you&#8217;re an Effective Altruist, chances are two of your strongest intrinsic values are related to reducing suffering (or increasing happiness) and seeking truth.</p>



<p>For people with certain intrinsic values, Effective Altruism is a natural consequence of Valuism. To see this, consider a Valuist whose two strongest values are the happiness (and/or lack of suffering) of conscious beings and truth-seeking. Such a Valuist would naturally want to increase global happiness (and/or reduce global suffering) in highly effective ways while seeing the world impartially (e.g., by using reason and evidence to guide their understanding). This is extremely aligned with (and similar to) the mission of Effective Altruism.</p>



<p> For more on the relationship between Effective Altruism and Valuism, see <a href="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/">this post</a>.</p>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and existential risk</h3>



<p>Potential existential risks (such as threats from nuclear war, bioterrorism, and advanced A.I.) are a major area of focus for many Effective Altruists. According to most people&#8217;s intrinsic values, existential risk is also incredibly bad. Existential risks threaten many of the things that humans value (happiness, pleasure, learning, achievement, freedom, longevity, legacy, virtue, and so on). So for most people&#8217;s intrinsic values, Valuism is compatible with caring about existential risk reduction (depending on one&#8217;s estimates of the relevant probabilities).</p>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and utopias</h3>



<p>Utopias <a href="https://www.spencergreenberg.com/2017/11/16-potentially-dystopic-utopias/">are hard to construct</a>. Sure, we pretty much all want a world without poverty and disease, but it&#8217;s hard to agree on the specific details beyond avoiding bad things. If we go all-in on one intrinsic value, we end up with a world that seems like a dystopia to many. For instance, a utopia, according to hedonic utilitarianism, might look like attaching each of our brains to a bliss-generating machine while we do nothing for the rest of our lives, or it might look like filling the universe with tiny algorithms that experience maximal bliss per unit of energy. Of course, these are horrifying outcomes for many people.</p>



<p>If we build a utopia that maximizes one intrinsic value or a small set of them, it will very likely seem like a dystopia to someone with other intrinsic values. To construct a utopia that is not a dystopia to many, we should <strong>make sure that it includes high levels of a wide range of intrinsic values</strong>, keeping these in balance rather than going all-in on a small set of values.</p>






<p>Put another way, if we preserve a wide range of different intrinsic values in our construction of potential utopias, we protect ourselves against various failure modes.&nbsp;For instance:</p>



<ul class="wp-block-list">
<li>The intrinsic value of avoidance of suffering protects us from a world where there is a lot of pain and suffering.</li>



<li>The intrinsic value of freedom helps protect us from a failure mode of a world of forced <a href="https://en.wikipedia.org/wiki/Wirehead_(science_fiction)">wireheading</a>.&nbsp;</li>



<li>An intrinsic value of truth helps protect us from a failure mode where we&#8217;re all unknowingly in the matrix (e.g., being used for a purpose unknown to us) or living under an authoritarian world government that tries to keep the populace happy through delusion.</li>
</ul>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and worldviews</h3>



<p>Worldviews usually come with a set of shared intrinsic values. These are the strong intrinsic values that most (though not all) people with that worldview have in common. Of course, in most cases, in addition to these shared intrinsic values, each individual will also have other intrinsic values that are not shared by most people with their worldview. You can learn more about the interface between worldviews and intrinsic values in <a href="https://www.clearerthinking.org/post/understand-how-other-people-think-a-theory-of-worldviews">our essay on worldviews here</a>.</p>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and mental health&nbsp;</h3>



<p>Mental health may have interesting connections to intrinsic values. For instance, here&#8217;s <a href="https://www.clearerthinking.org/post/understanding-the-two-most-common-mental-health-problems-in-the-world">an oversimplified model of anxiety and depression</a> that I find usefully predictive (I developed this in collaboration with my colleague Amanda Metskas):</p>



<p><strong>Anxiety</strong> occurs when you think there is a chance that something you intrinsically value may be lost. Anxiety tends to be worse when you perceive the chance of this happening as higher, when you perceive the threatened value as more important, or when the potential loss is nearer in time. </p>



<p></p>



<p><strong>Depression</strong> occurs when you&#8217;re convinced you can&#8217;t create sufficient intrinsic value in your future. This could be because you think the things you value most are lost forever, because you see yourself as useless at achieving what you value, or for other reasons.</p>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and animals</h3>



<p>What do animals care about? Some animals (e.g., some insects) may not be conscious (i.e., there may be nothing that it&#8217;s like to be them), in which case it may not matter what they care about. For conscious animals, though, it may be important to understand what they intrinsically value so that we know how to treat them ethically. </p>



<p>An intrinsic value perspective on animal ethics is that we should not deprive animals of the things they intrinsically value (and we should help them get the things they intrinsically value, at least when they are easy to provide). So, for instance, we can ask how much a chicken that lives almost its whole life in a small cage (as many chickens raised for food in the U.S. do) is able to have its intrinsic values met. The answer is probably very little.</p>



<p>But what are the sorts of things that animals may intrinsically value? I suspect there are a wide variety of animal intrinsic values and that they depend on species, but here are a few that may be especially common in mammals:</p>



<ul class="wp-block-list">
<li>Pleasure</li>



<li>Not suffering</li>



<li>Not experiencing large amounts of fear, stress, and anxiety</li>



<li>Surviving</li>



<li>Agency (e.g., the ability to choose)</li>



<li>Bonding with other animals</li>



<li>Protection of their offspring</li>
</ul>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Valuism and economics</h3>



<p>Economics often operates under the assumption that each person has a &#8220;utility function&#8221;: i.e., a function that maps states of the world into how good or bad the person thinks those states are and that describes the choices people make. According to this frame, if a person chooses A over B, that means that their utility function assigns a higher value to A than B. For example, if I buy a Mac rather than a PC, and they are the same price, this must mean that I predict the Mac gives me more utility (according to my utility function). </p>
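As a toy illustration of this "revealed preference" frame (all numbers hypothetical, not drawn from the essay), the standard economic assumption can be sketched in a few lines of Python: the utility function scores each option, and choice is simply the highest-scoring option.

```python
# Toy sketch of the economic "utility function" frame.
# Hypothetical utility scores for two same-priced options.
utility = {"Mac": 7.2, "PC": 6.8}

def choose(options):
    """Standard assumption: the agent picks the option with the highest utility."""
    return max(options, key=utility.get)

print(choose(["Mac", "PC"]))  # prints Mac, since its assumed utility is higher
```

On this frame, observing the purchase of the Mac is taken as evidence that the buyer's utility function ranked it higher; the essay's point below is that real choices (habit, social pressure, addiction) can come apart from intrinsic value in ways this model ignores.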



<p>Valuism, on the other hand, says that when A is more intrinsically valuable to us than B (and equivalent along other dimensions, such as price), we often will choose A over B because A produces more of what we intrinsically value; however, sometimes we choose B over A instead because we confuse instrumental value with intrinsic value, because we have a habit of doing B, because we feel social pressure to do B, etc. </p>



<p>In other words, <strong>choosing something is not the same as intrinsically valuing something</strong>, <strong>and ideally, we want to construct a society where people get more of what they intrinsically value</strong>, not merely one that gives people more of what they would <em>choose</em>.&nbsp;</p>



<p>A classic example where intrinsic value and choice come apart is addictive products like cigarettes or video games with upsells: people sometimes choose to pay for them and use them way past the point of benefit, according to their own intrinsic values. </p>



<p>A similar issue comes up when people slip into treating every dollar of GDP or each unit reduction of &#8220;<a href="https://en.wikipedia.org/wiki/Deadweight_loss">deadweight loss</a>&#8221; as though they are equally valuable. Imagine that an influencer gets all the hottest celebrities to start wearing the hair of a rare species of sloth and that buzz convinces millions of people that it’s really cool, so consumers spend billions of dollars buying these sloth hair pieces. Unfortunately, the sloth hair is really aesthetically ugly, uncomfortable, and expensive, and making clothes out of it requires torturing the sloths. This will probably increase GDP, yet (on net) intrinsic value will almost certainly have been destroyed. There is no good reason to care about GDP for its own sake, but intrinsic values are precisely the things we care about for their own sake. While increasing GDP may often be aligned with producing more of what people intrinsically value (both now and potentially in the future), in cases when GDP and the long-term production of intrinsic values are out of alignment, I would argue that GDP is no longer a good measure of societal benefit.</p>



<p>Going back to the sloth hair example, having a free market for this sloth hair would, according to simple economic theory, reduce &#8220;deadweight loss&#8221; (relative to having restrictions on its sale). And yet, the production of this sloth hair will likely be net destructive to what people intrinsically value. We can imagine a multi-faceted accounting of how society is doing that takes into account productivity and wealth but goes beyond them to consider the extent to which people are creating their intrinsic values; productivity and wealth would be viewed as being in the service of intrinsic value production.</p>



<p>As a complement to GDP, we can think about measuring how well the people of a society get the things that they intrinsically value. For instance, attempting to measure:</p>



<ul class="wp-block-list">
<li>How happy are they?&nbsp;</li>



<li>To what extent are they accomplishing their goals?&nbsp;</li>



<li>How free are they?</li>



<li>How meaningful are their relationships?&nbsp;</li>



<li>How much are they suffering?</li>
</ul>



<p>This is related to the <a href="https://hdr.undp.org/data-center/human-development-index">Human Development Index</a>, though that index includes items that are not intrinsic values, and it doesn&#8217;t cover all intrinsic values.</p>



<p>If we had such an accounting, different people would naturally rank societies differently (in terms of how good they are overall) because they value these intrinsic values to different extents.</p>
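To make that concrete, here is a minimal sketch (with made-up measurements and weights, purely for illustration) of how two people who weight the same intrinsic-value measurements differently would rank the same two societies differently:

```python
# Hypothetical intrinsic-value measurements for two societies (0-10 scales).
societies = {
    "A": {"happiness": 8, "freedom": 4, "low_suffering": 7},
    "B": {"happiness": 6, "freedom": 9, "low_suffering": 6},
}

def score(name, weights):
    # Weighted sum of a society's measured intrinsic values.
    return sum(weights[k] * v for k, v in societies[name].items())

# Two people with different intrinsic-value weightings (weights sum to 1).
happiness_first = {"happiness": 0.6, "freedom": 0.1, "low_suffering": 0.3}
freedom_first = {"happiness": 0.2, "freedom": 0.6, "low_suffering": 0.2}

rank1 = sorted(societies, key=lambda s: -score(s, happiness_first))
rank2 = sorted(societies, key=lambda s: -score(s, freedom_first))
print(rank1)  # ['A', 'B'] - the happiness-weighted person prefers society A
print(rank2)  # ['B', 'A'] - the freedom-weighted person prefers society B
```

The same accounting yields opposite rankings, which is exactly the point: a multi-dimensional measure informs without dictating a single societal ordering.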



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>



<p>As you can see in this post, a Valuist perspective may have something to say about many other domains, giving us a different way to look at topics like Effective Altruism, utopia, animal ethics, worldviews, mental health, and economics.</p>



<p></p>



<p><em>You&#8217;ve just finished the fifth and final part in my sequence of essays on my life philosophy, Valuism &#8211;</em> <em>here are the <a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">first</a>, <a href="https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/">second</a>, <a href="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/">third</a>, and <a href="https://www.spencergreenberg.com/2023/02/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/">fourth</a> parts. </em></p>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3084</post-id>	</item>
		<item>
		<title>Should Effective Altruists be Valuists instead of utilitarians? &#8211; part 3 in the Valuism sequence</title>
		<link>https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/</link>
					<comments>https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/#comments</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Fri, 10 Mar 2023 07:42:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[autonomy]]></category>
		<category><![CDATA[burnout]]></category>
		<category><![CDATA[choice]]></category>
		<category><![CDATA[contradictions]]></category>
		<category><![CDATA[denial]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[equity]]></category>
		<category><![CDATA[freedom]]></category>
		<category><![CDATA[group membership]]></category>
		<category><![CDATA[humility]]></category>
		<category><![CDATA[intrinsic values]]></category>
		<category><![CDATA[justice]]></category>
		<category><![CDATA[long-term success]]></category>
		<category><![CDATA[moral antirealism]]></category>
		<category><![CDATA[moral realism]]></category>
		<category><![CDATA[non-altruistic values]]></category>
		<category><![CDATA[self-care]]></category>
		<category><![CDATA[self-control]]></category>
		<category><![CDATA[shared values]]></category>
		<category><![CDATA[social groups]]></category>
		<category><![CDATA[social values]]></category>
		<category><![CDATA[sustainability]]></category>
		<category><![CDATA[truth-seeking]]></category>
		<category><![CDATA[utilitarianism]]></category>
		<category><![CDATA[utility]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3077</guid>

					<description><![CDATA[By Spencer Greenberg and Amber Dawn Ace&#160; This is the third of five posts in my sequence of essays about my life philosophy, Valuism &#8211; here are the first, second, fourth, and fifth parts (though the links won’t work until those other essays are released). Sometimes, people take an important value &#8211; maybe their most [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>By Spencer Greenberg and Amber Dawn Ace&nbsp;</em></p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img data-recalc-dims="1" decoding="async" width="750" height="375" data-attachment-id="3168" data-permalink="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/dall%c2%b7e-2023-02-05-16-07-14-a-treasure-chest-full-of-rainbows/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?fit=2048%2C1024&amp;ssl=1" data-orig-size="2048,1024" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="DALL·E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?fit=750%2C375&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=750%2C375&#038;ssl=1" alt="" class="wp-image-3168" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=1024%2C512&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=300%2C150&amp;ssl=1 300w, 
https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=768%2C384&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=1536%2C768&amp;ssl=1 1536w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?w=2048&amp;ssl=1 2048w" sizes="(max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption"><em>Image created using the A.I. DALL•E 2</em></figcaption></figure>
</div>


<p style="font-size:14px"><em>This is the third of five posts in my sequence of essays about my life philosophy, Valuism &#8211; here are the <a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">first</a>, <a href="https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/">second</a>, <a href="https://www.spencergreenberg.com/2023/02/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/">fourth</a>, and <a href="https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/">fifth</a> parts (though the links won’t work until those other essays are released).</em></p>



<p>Sometimes, people take an important value &#8211; maybe their most important value &#8211; and decide to prioritize it above all other things. They neglect or ignore their other values in the process. In my experience, this often leaves people feeling unhappy. It also leads them to produce less total value (according to their own intrinsic values). I think people in the effective altruist community (i.e., EAs) are particularly prone to this mistake.</p>



<p><a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">In the first post in this sequence</a>, I introduce Valuism &#8211; my life philosophy &#8211; and offer some general arguments for its advantages. In this post, I talk about the interaction between Valuism and effective altruism. I argue that the way some EAs think about morality and value is (in my view) empirically false, potentially psychologically harmful, and (in some cases) incoherent.&nbsp;</p>



<p>EAs want to improve others’ lives in the most effective way possible. Many EAs identify as hedonic utilitarians (even the ones who reject objective moral truth). They say that impartially maximizing utility among all conscious beings &#8211; by which they usually mean the sum of all happiness minus the sum of all suffering &#8211; is the <em>only</em> thing of value, or the only thing that they feel they <em>should</em> value. I think this is not ideal for a few reasons.</p>



<p></p>



<h3 class="wp-block-heading">1. I think (in one sense) it&#8217;s empirically false</h3>



<p>Consider a person who claims that &#8220;only utility is valuable.&#8221;</p>



<p>If&nbsp;we interpret this as an empirical claim about the person’s own values &#8211; i.e., that the sum of happiness minus suffering for all conscious beings is the only thing that their brain assigns value to &#8211; I think that it&#8217;s very likely empirically false.&nbsp;</p>



<p>That is, I don&#8217;t think anyone <em>only</em> values (in the sense of what their brain assigns value to) maximizing utility, even if it&#8217;s a very important value of theirs. I can&#8217;t prove that literally nobody only values maximizing utility, but I argue that human brains aren&#8217;t built to only value one thing, nor would we expect evolution to converge on pure utilitarian psychology since evolution optimizes for survival (a purely utilitarian brain would have been rapidly outcompeted by other brain types had it existed 50,000 years ago).&nbsp;</p>



<p>I think that even the most hard-core hedonic utilitarians <em>do</em> psychologically value some non-altruistic things deep down &#8211; for example, their own pleasure (more than the pleasure of everyone else), their family and friends, and truth. However, in my opinion, they sometimes deny this to themselves or feel guilty about it. If you are convinced that your only intrinsic value is utility (in a hedonistic, non-negative-leaning utilitarian sense), you may find it instructive to take a look <a href="https://twitter.com/SpencrGreenberg/status/1568595511522852871">at these philosophical scenarios</a> I assembled or check out <a href="https://www.youtube.com/watch?v=d_6i9uzsBuc&amp;ab_channel=CentreforEffectiveAltruism">the scenarios I give in this talk</a> about values.</p>



<p>For instance, does your brain actually tell you it&#8217;s a good trade (in terms of your intrinsic values) to let a loved one of yours suffer terribly in order to create a mere 1% chance of sparing 101 strangers the same suffering? Does your brain actually tell you that equality doesn&#8217;t matter one iota (i.e., it&#8217;s equally good for one person to have all the utility compared to spreading it more equally)? Does your brain actually value a world of microscopic, dumb orgasming micro-robots more than a world (of slightly less total happiness) where complex, intelligent, happy beings pursue their goals? Because taken at face value, hedonic utilitarianism doesn&#8217;t care about whether a person is your loved one or a stranger, doesn&#8217;t care about equality <em>at all</em>, and prefers microscopic orgasming robots to complex beings as long as the former are slightly happier. But, if you consider yourself a hedonic utilitarian, is that actually what your brain values?</p>



<p></p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh4.googleusercontent.com/u7FnrSutFnOMuG57YHHw9RGv-QCfrH2LMvMWsATOHkYrOpNy8mr9I46XublWGnhnnVc_vSjXkOIWXfG9-rRYQYrujHM5D6d8GylwPPRuv0ePebNF-Kha_P9_b9k3Vd63BVHaP5eMOb0QHj4MJLWZ4Yw" alt=""/><figcaption class="wp-element-caption"><em>Caption: it turns out very few people are willing to risk hell on earth for a somewhat higher expected utility!</em></figcaption></figure>
</div>


<p></p>



<h3 class="wp-block-heading">2. It can be psychologically harmful</h3>



<p>Additionally, I think the attitude that there is only one thing of value can lead to severe psychological burnout as people try to push away, minimize or deny their other intrinsic values and “selfish,” non-altruistic desires. I’ve seen this happen quite a few times. <a href="https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends">Here&#8217;s Tyler Alterman&#8217;s personal account</a> of this if you’d like to see an example. <a href="https://www.lesswrong.com/posts/pDzdb4smpzT3Lwbym/my-model-of-ea-burnout">And here&#8217;s a theory</a> of how this burnout happens.</p>



<p></p>



<h3 class="wp-block-heading">3. I think (in one sense) it&#8217;s incoherent</h3>



<p>When coupled with a view that there is no objective moral truth, I think it is, in most cases, <strong>philosophically incoherent</strong> to claim that total hedonic utility is all that matters<strong>.</strong></p>



<p>If you believe in objective moral truth, it may make sense to say, “I value many things, but I have a moral obligation to prioritize only some of them” (for example, you might be convinced by arguments that you are objectively morally obliged to promote utility impartially even though that’s not the only value you have).</p>



<p>However, many EAs, like me, don’t believe in objective moral truth. If you don’t think that things <em>can</em> be objectively right or wrong, it doesn’t make sense (I claim) to say that you “should” prioritize maximizing utility for all of humanity over other values – what does this “should” even mean? Well, there are some answers for what this “should” could mean that philosophers and lay people have proposed, but I find them pretty weak.</p>



<p>For a much more in-depth discussion of this point (including an analysis of different ways that EAs have responded to my critique of pairing utilitarianism with denial of objective moral truth), see <a href="https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/">this essay</a>. It collects many different objections (from EAs and from some philosophers) and discusses them. So if you are interested in whether it is or isn&#8217;t coherent to only value utility when you deny objective moral truth, and moreover, whether EAs and philosophers have good arguments for doing so, please see that essay.</p>



<p>I find that while many (perhaps the majority of) EAs deny objective moral truth, many still talk and think as though there is objective moral truth.</p>



<p>I found it striking that, in my conversations with EAs about their moral beliefs, few had a clear explanation for how to combine a belief in utilitarianism with a lack of belief in objective moral truth, and the approaches they did put forward were usually quite different from each other (suggesting, at the very least, a lack of consensus in how to support such a perspective). Some philosophers I spoke to pointed to other ways one might defend such a position (mainly drawn from the philosophical literature), but I don&#8217;t recall ever seeing these approaches being used or referenced by non-philosopher EAs (so they don&#8217;t seem to be doing much work in the beliefs of EAs who hold this view).&nbsp;</p>



<p></p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh4.googleusercontent.com/9GJvXrOJAl0p6FFj2eUiqu6MQPftJRlFDeIG2D_mBMMmi1_ryaOh5N19YsBdG4BlkyJNHhSvogaR1CAdEE4EsUNH5xmQ8rdzZmT90qlbkL4oCQO4sehUFLUp7y5EdLBizKLKZNxD0UFj4J2aFj0QBgo" alt=""/><figcaption class="wp-element-caption"><em>A poll I ran on Twitter. More than half of EA respondents report not being moral realists</em>.</figcaption></figure>
</div>


<p>I suspect it would help many EAs if they took a more Valuist approach: rather than claiming to or aspiring to only value hedonic utility, they could accept that while they <em>do </em>intrinsically value this – very likely far more than the average person – they also have other intrinsic values, for example, truth (which I think is another very important psychological value for many EAs), their own happiness, and the happiness of their loved ones.</p>



<p>Valuism also avoids some of the most awkward bullets that EAs sometimes are tempted to bite. For instance, hedonic utilitarianism seems to imply that your own happiness and the happiness of your loved ones “shouldn’t” matter to you even a tiny bit more than the happiness of a stranger who is certain to be born 1,000,000 years from now. Valuism may explain why people who identify as hedonic utilitarians may feel a great deal of internal conflict about this – even if you value the happiness of all sentient beings a tremendous amount, you almost certainly have other intrinsic values too. That means that Valuism may help you avoid some of the awkward conundrums that arise from ethical monism (where you assume that there is only one thing of value).</p>



<p></p>



<h2 class="wp-block-heading">Valuism and the EA Community</h2>



<p>From a Valuist perspective,<strong> I see the EA community as a group of people who share a primary intrinsic value of hedonic utility</strong> (i.e., reducing suffering and increasing happiness impartially) <strong>with a secondary strong intrinsic value of truth-seeking.</strong> Oddly (from my point of view), EAs are very aware of their intrinsic value of impartial hedonic utility but seem much less aware of their truth-seeking intrinsic value. On a number of occasions, I&#8217;ve seen mental gymnastics used to justify truth-seeking in terms of increasing hedonic utility when (I claim) a much more natural explanation is that truth-seeking is an intrinsic value (not <em>just</em> an instrumental value that leads to greater hedonic utility). This helps explain why many EAs are so averse to <em>ever</em> lying and so averse even to persuasive marketing.</p>



<p>Each individual EA has other intrinsic values beyond impartial utility and truth-seeking, but in my view, those two values help define EA and make it unique. This is also a big part of why this community resonates with me: those are my top two universal intrinsic values as well.</p>



<p></p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh5.googleusercontent.com/caB2WlN1Mleqk9ZsApi7rokTC-KpCErd-t3GDKOIk5didxPnvdoHJp1bVOiCFNgBmzq9QLMFPgrya91zUY4vqUEEDAJ8juRiCo_07ikYFZZRwmqZBC7B5NOLeHr6KqFLciFtoqWok8rDjHtfqd2-r-k" alt=""/><figcaption class="wp-element-caption"><em>While these groups sometimes overlap (e.g., some effective altruists are libertarians, and </em><a href="https://clearerthinkingpodcast.com/episode/085"><em>some are social justice advocates</em></a><em>, etc.), we created this graphic to illustrate what we believe are the </em><strong><em>most common</em></strong><em> universal (i.e., not self-focused, not community-focused) intrinsic values shared among most members of each group.</em></figcaption></figure>
</div>


<p></p>



<p>If more EAs adopted Valuism, I think that they would almost all continue to devote a large fraction of their time and energy toward improving the world effectively. Maximizing global hedonic utility (i.e., the sum of happiness minus suffering for conscious beings) <em>is</em> the strongest universal intrinsic value of most community members, so it would still play the largest role in determining their goals and actions, even after much reflection.&nbsp;</p>



<p>However, they would also feel more comfortable investing in their own happiness and the happiness of their loved ones at the same time, which I predict would make them happier and reduce burnout. Additionally (I claim), they’d accept that, like many effective altruists,<strong> they also have a strong intrinsic value of truth</strong>. They’d strike a balance between their various intrinsic values, and not endorse annihilating all their intrinsic values except for one.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>I published this piece on this site on March 10, 2023.</em><br><br><a rel="noreferrer noopener" href="https://www.guidedtrack.com/programs/4zle8q9/run?essaySpecifier=%3A+Should+Effective+Altruists+be+Valuists+instead+of+utilitarians%3F%C2%A0+-+part+3+in+the+Valuism+sequence&amp;source=email" target="_blank">If you read this line, please do us a favor and click here to answer one quick question.</a></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>You&#8217;ve just finished the third post in my sequence of essays on my life philosophy, Valuism –</em>&nbsp;<em><a href="https://www.spencergreenberg.com/2023/05/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/">click here to go to the fourth post.</a></em></p>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3077</post-id>	</item>
		<item>
		<title>Arguments For and Against Longtermism</title>
		<link>https://www.spencergreenberg.com/2022/08/arguments-for-and-against-longtermism/</link>
					<comments>https://www.spencergreenberg.com/2022/08/arguments-for-and-against-longtermism/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 30 Aug 2022 17:12:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[big history]]></category>
		<category><![CDATA[discounting the future]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[ethics]]></category>
		<category><![CDATA[feedback loops]]></category>
		<category><![CDATA[future people]]></category>
		<category><![CDATA[hingeyness]]></category>
		<category><![CDATA[human history]]></category>
		<category><![CDATA[longtermism]]></category>
		<category><![CDATA[morality]]></category>
		<category><![CDATA[neglectedness]]></category>
		<category><![CDATA[pascalian]]></category>
		<category><![CDATA[pivotal times]]></category>
		<category><![CDATA[population ethics]]></category>
		<category><![CDATA[scope insensitivity]]></category>
		<category><![CDATA[size of the future]]></category>
		<category><![CDATA[technological progress]]></category>
		<category><![CDATA[tiny probabilities]]></category>
		<category><![CDATA[window of opportunity]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2896</guid>

					<description><![CDATA[Thanks to William MacAskill&#8217;s excellent new book on the topic (What We Owe the Future), lots of people are talking about longtermism right now. For those not familiar with the concept, &#8220;longtermism&#8221; is the ethical view that &#8220;positively influencing the long-term future should be a key moral priority of our time.&#8221; Below are some of [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p></p>



<p>Thanks to William MacAskill&#8217;s excellent new book on the topic (<a rel="noreferrer noopener" href="https://whatweowethefuture.com/" target="_blank">What We Owe the Future</a>), lots of people are talking about <a href="https://en.wikipedia.org/wiki/Longtermism">longtermism</a> right now. For those not familiar with the concept, &#8220;longtermism&#8221; is the ethical view that &#8220;positively influencing the long-term future should be a key moral priority of our time.&#8221;</p>



<p>Below are some of my favorite arguments for longtermism, followed by some of my favorite against it. Note that I borrow from Will&#8217;s book heavily here in the section on arguments in favor of longtermism.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Five of my favorite arguments FOR longtermism:</strong></p>



<p>1. We value future people. If someone creates a hidden nuke set to explode in 150 years, we can agree this is highly immoral, even if we&#8217;re sure none of the people harmed exist today. And we can agree it&#8217;s wrong to neglect the world so much that it&#8217;s a hell hole for our grandchildren.</p>



<p>2. There may be VAST numbers of future people. We can agree that harming more people is worse than harming fewer and that helping more people is better than helping fewer. So if our choices affect all future people, that puts a large moral weight on those actions.</p>



<p>3. We may live at a pivotal time to influence the future. Because of technologies that might cause existential catastrophes for humanity (e.g., nuclear bombs, human-engineered viruses, advanced A.I.), our choices today could have a massive impact on the future of our species.</p>



<p>4. It&#8217;s neglected &#8211; relatively little work goes into improving the far future. News cycles, political terms, profitability requirements, and many other factors push us towards thinking near-term. It seems likely that less than 1% of current resources go into trying to improve the long-term future.</p>



<p>5. Future people don&#8217;t have a say. When we take actions impacting the environment or our chance of extinction or technological development, we do so without input from the future people that may be affected. Historically, groups with no say are often ethically undervalued.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Six of my favorite arguments AGAINST prioritizing longtermism:</strong></p>



<p>1. The future may not be big. Much of the pull of longtermism comes from assuming the future could have FAR more people than alive today. But if there&#8217;s a lower bound on the yearly probability of extinction that you can&#8217;t change, the expected length of the future may not be long.</p>



<p>2. Humans need tight feedback loops. My opinion is that without such feedback toward a clear objective, human behavior usually spirals off into something weird (e.g., impracticality or status signaling). It can be VERY hard to tell if we&#8217;re positively influencing the long-term future.</p>



<p>3. There may be valid reasons to discount the future. Empirically, few people care about far future people nearly as much as those alive today (the further out, the less they care). If there&#8217;s no such thing as objective moral truth (a debatable question), they aren&#8217;t making a mistake.</p>



<p>4. Longtermism is less compelling than other arguments for similar goals. Most people really care about humanity not going extinct and want the world to be a good place for our grandchildren (even if they don&#8217;t have grandchildren yet). Bringing in considerations of humans in 20,000 AD makes these important topics less convincing.</p>



<p>5. YOUR probability of creating far future positive change may be tiny. Longtermism says if YOU have a 1 in a million chance of non-negligible far future influence, that&#8217;s incredibly valuable. And more generally, it can be very hard to figure out what to do now that will positively influence the world (with any significant degree of confidence) hundreds or thousands of years from now. I think it may be wiser to focus on a higher probability of, for instance, preventing calamity in 30 years (rather than, say, focusing on preventing it from occurring in the far distant future).</p>



<p>6. It&#8217;s debatable whether it&#8217;s better for more people to exist. Philosophers don&#8217;t agree whether a world with vastly more people is better than one with fewer (see discussions of &#8220;population ethics&#8221;). I think that most lay people also prefer a world with billions of very happy people to one with quadrillions of very slightly happy people (even if the latter sums up to far more total happiness). While we are not necessarily forced to choose between these options (maybe we could have a world of a huge number of very happy people), it&#8217;s not obvious we should be aiming for a vast future rather than one with very high levels of good things (e.g., only tens of billions of people but where everyone has very high well-being, freedom, self-actualization, etc.).</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This essay was first written on August 30, 2022, and first appeared on this site on September 1, 2022.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2022/08/arguments-for-and-against-longtermism/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2896</post-id>	</item>
		<item>
		<title>Tensions between moral anti-realism and effective altruism</title>
		<link>https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/</link>
					<comments>https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/#comments</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 15 Aug 2022 01:16:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[altruism]]></category>
		<category><![CDATA[analytical mind]]></category>
		<category><![CDATA[arbitrariness]]></category>
		<category><![CDATA[constructivism]]></category>
		<category><![CDATA[contradiction]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[effective altruists]]></category>
		<category><![CDATA[emotivism]]></category>
		<category><![CDATA[endorsing values]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[expressivism]]></category>
		<category><![CDATA[meta-moral uncertainty]]></category>
		<category><![CDATA[moral anti-realism]]></category>
		<category><![CDATA[moral realism]]></category>
		<category><![CDATA[moral uncertainty]]></category>
		<category><![CDATA[objective moral truth]]></category>
		<category><![CDATA[preference utilitarianism]]></category>
		<category><![CDATA[preferences]]></category>
		<category><![CDATA[utilitarianism]]></category>
		<category><![CDATA[values]]></category>
		<category><![CDATA[valuism]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2863</guid>

					<description><![CDATA[I believe I&#8217;ve identified a philosophical confusion associated with people who state that they are&#160;both&#160;moral anti-realists&#160;and&#160;Effective Altruists&#160;(EAs). I&#8217;d be really interested in getting your thoughts on it. Fortunately, I think this flaw can be improved upon (I&#8217;m working on an essay about how I think that can be done), but I&#8217;d like to be sure [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>I believe I&#8217;ve identified a philosophical confusion associated with people who state that they are&nbsp;<em>both</em>&nbsp;<a rel="noreferrer noopener" href="https://en.wikipedia.org/wiki/Anti-realism" target="_blank">moral anti-realists</a>&nbsp;and&nbsp;<a rel="noreferrer noopener" href="https://www.effectivealtruism.org/" target="_blank">Effective Altruists</a>&nbsp;(EAs). I&#8217;d be really interested in getting your thoughts on it. Fortunately, I think this flaw can be remedied (I&#8217;m working on an essay about how), but I&#8217;d like to be sure that the flaw is really there first (which is why I&#8217;m asking for your feedback now)!</p>



<p><strong>People that this essay is&nbsp;<em>not</em>&nbsp;about</strong></p>



<p>Some Effective Altruists believe that objective moral truth exists (i.e., they are &#8220;moral realists&#8221;). They think that statements like &#8220;it&#8217;s wrong to hurt innocent people for no reason&#8221; are the sort of statements that can be true or false, much like the statement &#8220;there is a table in my room&#8221; can be true or false.</p>



<p>I disagree that there is such a thing as objective moral truth, but I at least understand what these folks are doing &#8211; they believe there is an objective answer to the question of &#8220;what is good?&#8221; and then they are trying to figure out that answer and live by it.&nbsp;</p>



<p>This usually ends up being some flavor of utilitarianism, plus perhaps some moral uncertainty that gives weight to other theories, such as protecting rights. In the 2019 EA survey,&nbsp;<a rel="noreferrer noopener" href="https://forum.effectivealtruism.org/posts/wtQ3XCL35uxjXpwjE/ea-survey-2019-series-community-demographics-and#Morality" target="_blank">70% of EAs</a>&nbsp;identified with utilitarianism (though the survey did not distinguish between those who believe in objective moral truth and those who don&#8217;t but have utilitarian ethics anyway). I think this group of EAs who believe in objective moral truth is mistaken but coherent. They are the first group listed in the poll I took below, and they are NOT the group I am focusing on in this post.&nbsp;</p>



<figure class="wp-block-image size-large is-resized"><img data-recalc-dims="1" decoding="async" width="750" height="567" data-attachment-id="2864" data-permalink="https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/image-8/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?fit=1080%2C816&amp;ssl=1" data-orig-size="1080,816" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?fit=750%2C567&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=750%2C567&#038;ssl=1" alt="" class="wp-image-2864" style="width:768px;height:581px" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=1024%2C774&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=300%2C227&amp;ssl=1 300w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=768%2C580&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?w=1080&amp;ssl=1 1080w" sizes="(max-width: 750px) 100vw, 750px" /></figure>



<p><strong>The flaw I see:</strong></p>



<p>The group I am focusing on is represented by the second bar in the poll above. Many (most?) Effective Altruists deny that there is objective moral truth or think that objective moral truth is unlikely. But I still hear quite a number of such EAs say things like:</p>



<p>• &#8220;We should maximize utility.&#8221;</p>



<p>• &#8220;The only thing I care about is increasing utility for conscious beings.&#8221;</p>



<p>• &#8220;The only thing that matters is the utility of conscious beings.&#8221;</p>



<p>• &#8220;The only value I endorse is maximizing utility.&#8221;</p>



<p>(Note that by &#8220;utility&#8221; here, they mean something like happiness minus suffering, not &#8220;utility&#8221; in the economics sense of preference satisfaction [unless they are preference utilitarians] or the von Neumann&#8211;Morgenstern utility theorem sense.)</p>



<p>I find these statements by Effective Altruists very strange. If I try to figure out what they are claiming, I see a few possible disambiguations:</p>






<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 1 &#8211; Contradictory beliefs:</strong>&nbsp;they could believe that maximizing utility is objectively good even though they don&#8217;t believe in objective moral truth &#8211; which seems to me to be a blatant contradiction in their beliefs. Similarly, they could be claiming that while they have other intrinsic values, they think they SHOULD only value utility (and should value all units of utility equally). But then, what does the word &#8220;should&#8221; mean here? On what grounds &#8220;should&#8221; you if there is no objective moral truth?</p>






<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 2 &#8211; Misperception of the self:</strong>&nbsp;they could be claiming that while there is no objective answer to what&#8217;s good, the only intrinsic value they have (i.e., the only thing they value as an end in and of itself, not as a means to an end, that matters to them even if it gets them nothing else) is the utility of conscious beings (and that all units of utility are equal). In other words, they are making an empirical claim about their mind (and what it assigns value to).</p>



<p>Here I think they are (in almost every case, and perhaps in every single case) empirically wrong about their own mind. This is just not how human minds work.</p>



<p>If we think of the neural network composing the human mind as having different operations it can do (e.g., prediction, imagination, etc.), one of those operations is assigning value to states of the world. When people do this and pay close attention, they will realize that they don&#8217;t value the utility of all conscious beings equally and that they value things other than utility. While I can&#8217;t prove there is literally no such person on earth that only has the intrinsic value of utility, even for the most utilitarian people I&#8217;ve ever met, when I question them, I discover they have values other than utility.</p>



<p>And it stands to reason that human minds (being created by evolution) are not the sort of things that are likely to only value the utility of all beings equally. For instance, just about everyone I&#8217;ve ever met would be willing to sacrifice at least 1.1 strangers to save one person they love (even if they think that person wouldn&#8217;t have a higher than average impact or a happier-than-average life). I certainly would, and I don&#8217;t feel bad about that!</p>



<p>One very strong intrinsic value I see in the effective altruism community is that of truth &#8211; many EAs think you should try never to lie and are suspicious even of marketing. They sometimes try to justify this on utilitarian grounds (indeed, it can often be beneficial, from a utilitarian perspective, not to lie). But this sometimes seems like rationalization &#8211; a utilitarian agent would lie whenever it produces a higher expected value of utility (but potentially only if it was using naive&nbsp;<a rel="noreferrer noopener" href="https://en.wikipedia.org/wiki/Causal_decision_theory#:~:text=Causal%20decision%20theory%20(CDT)%20is,the%20best%20outcome%20in%20expectation." target="_blank">Causal Decision Theory (CDT)</a>&nbsp;&#8211; H/T to Linchuan Zang for pointing this out), whereas many EAs make a hard and fast rule against lying (saying you should try to NEVER lie). This is easily explained as EAs having an intrinsic value of truth that they don&#8217;t want to accept as an intrinsic value (and so try to explain in terms of the &#8220;socially acceptable&#8221; value of utility).</p>



<p>As a side note, I find it upsetting when EAs try to justify one of their (non-utility) intrinsic values in terms of global utility because they think they are only supposed to value utility. For instance, an EA once told me that the reason they have friends is that it helps them have a great impact on the world. I did not believe them (though I did not think they were intentionally lying). I interpreted their statement as a harmful form of self-delusion (trying to reframe their attempts to produce their intrinsic values so that they conform to what they feel their values are &#8220;supposed&#8221; to be).</p>






<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 3 &#8211; Tyranny of the analytical mind:&nbsp;</strong>they could be saying that while they may have a bunch of intrinsic values, their analytical mind only &#8220;endorses&#8221; their utility value. But what does &#8220;endorse&#8221; mean here? Maybe they mean that, while they feel the pull of various intrinsic values, the logical part of their mind only feels the utility pull. But then why should their analytical mind have a veto over the other intrinsic values? Maybe they believe their other intrinsic values are &#8220;illogical,&#8221; whereas the utility value is logical. But on what grounds is that claim made? If they could prove logically that only utility mattered, wouldn&#8217;t we just be back to claim (1) that there is objective moral truth, and they don&#8217;t believe that?&nbsp;</p>



<p>Intrinsic values are just not the sort of thing that can have logical proof, and if they are not that sort of thing, then why give preference to just that one part of your mind? I&#8217;m genuinely confused.</p>






<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 4 &#8211; Maybe they mean something else</strong>&nbsp;that I just don&#8217;t see. What else could they mean? I&#8217;d love to know what you think (or if you&#8217;re one of these people)!</p>



<p>It&#8217;s certainly possible that there are very sensible interpretations for their claims that I&#8217;m just not seeing.</p>






<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>In conclusion: Effective Altruists who think there is objective moral truth are, I believe, mistaken, but I understand what they are doing (this post is not about them). But those who don&#8217;t believe in objective moral truth (which I think is the majority?) seem to me to be making some kind of mistake when their sole focus is utility. Of course, I could be wrong.</p>



<p>My personal philosophy &#8211; which I call Valuism (and which I am working on an essay about) &#8211; attempts to deal with this specific philosophical issue (in a limited context).</p>



<p>But in the meantime, I&#8217;d love to hear your thoughts on this topic! What do you think? If you are an EA who doesn&#8217;t believe in objective moral truth, but you&#8217;re convinced that only utility matters, what do YOU mean by that? And even if you don&#8217;t identify with that view, what do you think might be happening here that I might have missed or misunderstood?</p>



<p>Thanks for reading this and for any thoughts you are up for sharing with me!</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p class="has-medium-font-size"><strong>Summarizing responses to this post</strong></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Edit (1 September 2022): </strong>after posting an earlier draft of this post on social media, there were hundreds of comments, some of which tried to explain why the commenter is utilitarian despite being an anti-realist, or presented alternative possibilities not delineated in the original post.</p>



<p>One thing that&#8217;s abundantly clear is that there is absolutely no consensus on how to handle the critique in the above post. People use a really wide variety of approaches to explain why they identify with utilitarianism despite not believing in objective moral truth.</p>



<p>Here are some of the most common types of responses given:</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>1. Responses related to Possibility 1 (i.e., addressing &#8220;contradictory beliefs&#8221;)</strong></p>



<p>     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.1 <strong>Accepting contradiction</strong>: many people have contradictory beliefs (and contradictory beliefs may be no more common in moral anti-realist EAs than in other people), and some people are willing to lean into them. As one commenter put it: &#8220;many sets of intuitions are *wrong* if you take coherence as axiomatic.&#8221; Some people are just okay with self-contradiction.</p>



<p>     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2 <strong>Beliefs that aren&#8217;t actually contradictory:</strong> my explanation of Possibility 1 might interpret &#8220;we should maximize utility&#8221; differently than some people who say that phrase mean it. Here are some potential interpretations by which that statement might actually be consistent with anti-realist views:</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2.1 <strong>Personal preference:</strong> some people do not intend for statements like &#8220;we should maximize utility&#8221; to be representative of moral truth but instead mean it as an expression of a personal preference that they have for maximizing utility, or an expression of the fact that they will avoid feeling reflexively guilty if they aim to maximize utility, or a statement that they will have a positive emotional response if they focus on maximizing utility. However, these responses still seem to fall victim to another critique from the post, which is the arbitrariness of giving preference to certain feelings/preferences over other ones.&nbsp;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2.2 <a href="https://plato.stanford.edu/entries/constructivism-metaethics/"><strong>Metaethical constructivism</strong></a><strong>: </strong>this is defined as &#8220;the view that insofar as there are normative truths, they are not fixed by normative facts that are independent of what rational agents would agree to under some specified conditions of choice&#8221; (<a href="https://plato.stanford.edu/entries/constructivism-metaethics/">source</a>). Some <a href="https://plato.stanford.edu/entries/constructivism-metaethics/">say</a> this is &#8220;best understood as a form of <a href="https://en.wikipedia.org/wiki/Expressivism#:~:text=Expressivism%20is%20a%20form%20of,to%20which%20moral%20terms%20refer.">&#8216;expressivism&#8217;</a>&#8221;. Constructivism seems compatible with both moral anti-realism and utilitarianism, but it&#8217;s unclear to me how many effective altruists would hold this view (I think very few).&nbsp;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2.3 <strong>Valuing a different kind of utility</strong>: some people may mean &#8220;we should maximize utility&#8221; in reference to a different kind of &#8220;utility&#8221; than the classic hedonistic utilitarian interpretation of the word. For example, &#8220;utility&#8221; is sometimes used to mean a &#8220;mathematical function serving as a representation of whatever one cares about.&#8221; By such an interpretation, if someone says they are trying to maximize utility, they are presumably referring to maximizing their own utility function (rather than some objective one) &#8211; and so they are not the focus of this post.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>2. Responses related to Possibility 2 (i.e., &#8220;misperception of the self&#8221;)</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.1 <strong>Second-order desires</strong>: people might not be misperceiving themselves at all but might instead be talking about second-order desires or desires about desires. As one commenter put it: &#8220;It might be that, though someone empirically does NOT possess desires consistent with maximising the utility of conscious beings, they possess the desire to possess these desires. They want to be the sort of person who does have a genuine utilitarian psychology, even if they don&#8217;t possess one now. This may explain the motivation to act as a utilitarian (most of the time) [despite being a moral anti-realist].&#8221; Though in this case, it&#8217;s unclear why they would want to (or think they should) give those second-order desires preference over their first-order desires.</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.2 <strong>Unshakable realist intuitions</strong>: people might be acting and/or feeling <em>as if </em>utilitarianism is true while also believing (upon reflection) that moral realism isn&#8217;t true. One person commented that &#8220;many of our intuition[s] are based on a realist world even when rationally we do not believe in one, so it is easy to accidentally make arguments that work only in a realist world, and then try to rationalize the argument afterwards to somehow work anyway.&#8221;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.3<strong> Mislabeling one&#8217;s metaethics</strong>: instead of misperceiving <em>what they value</em>, some people might be mislabeling themselves as moral anti-realists even though they aren&#8217;t. In other words, some people who call themselves anti-realists might actually be moral realists without realizing it (e.g., because they haven&#8217;t reflected on it). One commenter thought that this would be a common phenomenon: &#8220;They are expressing a real, but subjective, truth &#8216;It is true to me that everyone should maximize utility&#8217;&#8230;I think that &#8216;deep down&#8217; you will find that in fact most effective altruists and indeed most people are moral realists but under-theorized ones. Even the anti-realists tend to act as if they were moral realists.&#8221;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.4<strong> Choosing one&#8217;s own values</strong>: some argue that you can choose your values for yourself (though it&#8217;s unclear by what process one would make such a choice, or whether such a choice really can be made &#8211; it may hinge on what is meant by &#8220;values&#8221;). As one of the commenters put it: &#8220;It seems like you are assuming in [Possibility 2] that there is an objective answer to what a mind values, e.g. based on how it behaves. For one thing, it&#8217;s not clear that that is right in general. But a particular alternative that interests me here: one could have a model where one can decide what to value, and to the extent that one&#8217;s behavior doesn&#8217;t match that, one&#8217;s behavior is in error.&#8221; In other words, according to this view, maybe an individual themselves is the only person who can define their intrinsic values, and there is no objectively correct opinion for them to hold about this. But then, by what criteria (or based on what values) is a person deciding on what their values should be?</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>3. Reasons why Possibility 3 (i.e., &#8220;tyranny of the analytical mind&#8221;) may not be a confused approach</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.1 <strong>Identifying with the analytic part of the mind</strong>: some people feel that choosing to endorse a particular framework (and choosing to endorse some values over other ones) is part of who they are &#8211; part of (or even the most important part of) their self-concept. In other words, the reflective part of them making that choice feels to them like it is &#8220;who they are&#8221; more so than other parts of them that have other preferences. Here&#8217;s how one person explained it: &#8220;For my part, the part of my mind that examines my moral intuitions and decides whether I want to act on them feels about as &#8216;me&#8217; as anything gets.&#8221; Another person thought that ​endorsing some values over others makes sense because many people think that their <em>&#8220;best&#8221;</em> self would live &#8220;in accordance with the judgments they make based on arguments and thought experiments.&#8221; Another proposed explanation for people being guided by the analytic mind is that being guided in this way might be a normal feature of human psychology (which at least one person saw as needing no further explanation). Yet another explanation put forward was that some people can have a completely arbitrary &#8220;personal taste&#8221; for giving their analytical mind a veto over other parts of their mind (and, according to this argument, those people don&#8217;t need a further justification beyond their arbitrary taste).</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.2 <strong>Simplicity and coherence meta-values:</strong> having fewer intrinsic values or having fewer intrinsic values that one allows to dictate their behavior can (some argue) be justified by having an intrinsic value of coherence, simplicity, or consistency. As one commenter put it: &#8220;I genuinely think I just have utilitarian intrinsic values. [It seems] relevant here that I also value coherence (in a non-moral sense, probably as an epistemic virtue or something), so if I find myself thinking something that is incoherent with another value of mine, I can debate &amp; discard the less important one.&#8221;&nbsp;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 4: Moral uncertainty</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4.1 <strong>Meta-moral uncertainty &#8211; believing that realism <em>might</em> be true: </strong>people who don&#8217;t identify as moral realists might still feel there is some possibility that moral realism is correct and might act as if it was correct (at least to some degree &#8211; say, in proportion to how much weight they give this possibility compared to other action-guiding beliefs). As one commenter put it: &#8220;Why do I keep donating (and doing other EA things), albeit to a lesser extent [since switching from moral realism to moral anti-realism]? The main reason is (meta) moral uncertainty: I still feel that it is possible that moral realism is correct, and so I think it should have some say over my behavior.&#8221;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4.2 <strong>Misinterpreting moral uncertainty as anti-realism: </strong>People who think that their own beliefs are not necessarily objectively true (due to moral uncertainty) might conclude that they must be moral anti-realists, but they might be mistaken in calling themselves that. As one commenter explained it: &#8220;believing in moral objectivity is different from believing we are actually able to parse the true moral weights in practice.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 5: Precommitment and cooperation arguments</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5.1 <strong>Benefiting from pre-committing to impartiality: </strong>some argue that acting as if classical utilitarianism is true might be justified on grounds related to resolving collective action problems (without having to believe that moral realism is true). For instance, one commenter wrote: &#8220;Being impartial between oneself (and one&#8217;s friends / family) vs. random people isn&#8217;t something that any human naturally feels, but it&#8217;s a &#8216;cooperate&#8217; move in a global coordination game. If we&#8217;d all be better off if we acted this way, then we want a situation where everyone makes a binding commitment to act impartially. It&#8217;s hard to do that, but we can approximate it through norms. So EAs might want to endorse this without feeling it.&#8221; Though presumably, if this were the justification for utilitarianism, they would then switch to a different moral theory if they thought it better solved collective action problems (e.g., if they came to believe virtue ethics better solved collective action problems).</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5.2 <strong>Benefiting from pre-committing to preference utilitarianism: </strong>some commenters pointed out that preference utilitarianism could also be justified on self-interested grounds (this post was not intended to be about other forms of utilitarianism such as preference utilitarianism, but it was edited to clarify that only after some people had started commenting). As one commenter put it: &#8220;If we&#8217;re viewing morality as playing a counterfactual game with others, we should take actions to benefit them in a way essentially identically to preference utilitarianism. That doesn&#8217;t require any objective morality, it only requires self-interest and buying into the idea that you should pre-commit to a theory of morality that, if many people embraced it, would increase your personal preferences.&#8221; Though in such cases (if they were actually optimizing for self-interest), it seems strange they would choose a moral theory where their interests count equally to people they will never encounter and never be in collective action problems with. (Some might argue that this would make more sense if the person endorsed a form of <a href="https://forum.effectivealtruism.org/posts/7MdLurJGhGmqRv25c/multiverse-wide-cooperation-in-a-nutshell">multiverse-wide cooperation via superrationality</a>, though it&#8217;s unclear how this resolves more concrete/real-life collective action problems).</p>



<hr class="wp-block-separator has-alpha-channel-opacity is-style-wide"/>



<p><strong>Possibility 6: Social forces</strong>  &#8211; as <a href="https://twitter.com/TylerAlterman">Tyler Alterman</a> put it (when I was discussing this post with him &#8211; he&#8217;s named here with permission): &#8220;[I felt] that [for some EAs] their actual beliefs were at odds with the cultural norms of other smart people (EAs) that they felt alignment with, so they stopped paying attention to their actual beliefs. I think this is what happened to me for a while. There was an element of wanting to fit in. But then there is an element of &#8211; there are so many smart people here [in EA]&#8230; EA is full of Oxford philosophers &#8211; they must have figured this out already; there must be some obvious answer for my confusion. So I just went along with the obligation and normative language and lifestyle it entailed.&#8221; Social forces can be powerful, and in some cases, an explanation for human behavior can be as simple as: the other people around me who I respect or want the approval of do this thing or seem convinced this thing is true, so I do this thing and am convinced it is true.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This essay was first written on August 14, 2022, first appeared on this site on August 19, 2022, and was edited (to incorporate a summary of people&#8217;s responses) on September 1, 2022, with help from Clare Harris.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/feed/</wfw:commentRss>
			<slash:comments>5</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2863</post-id>	</item>
	</channel>
</rss>
