<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>effective altruists &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/effective-altruists/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Thu, 25 Jan 2024 01:24:38 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>effective altruists &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>Tensions between moral anti-realism and effective altruism</title>
		<link>https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/</link>
					<comments>https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/#comments</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 15 Aug 2022 01:16:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[altruism]]></category>
		<category><![CDATA[analytical mind]]></category>
		<category><![CDATA[arbitrariness]]></category>
		<category><![CDATA[constructivism]]></category>
		<category><![CDATA[contradiction]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[effective altruists]]></category>
		<category><![CDATA[emotivism]]></category>
		<category><![CDATA[endorsing values]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[expressivism]]></category>
		<category><![CDATA[meta-moral uncertainty]]></category>
		<category><![CDATA[moral anti-realism]]></category>
		<category><![CDATA[moral realism]]></category>
		<category><![CDATA[moral uncertainty]]></category>
		<category><![CDATA[objective moral truth]]></category>
		<category><![CDATA[preference utilitarianism]]></category>
		<category><![CDATA[preferences]]></category>
		<category><![CDATA[utilitarianism]]></category>
		<category><![CDATA[values]]></category>
		<category><![CDATA[valuism]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2863</guid>

					<description><![CDATA[I believe I&#8217;ve identified a philosophical confusion associated with people who state that they are&#160;both&#160;moral anti-realists&#160;and&#160;Effective Altruists&#160;(EAs). I&#8217;d be really interested in getting your thoughts on it. Fortunately, I think this flaw can be improved upon (I&#8217;m working on an essay about how I think that can be done), but I&#8217;d like to be sure [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>I believe I&#8217;ve identified a philosophical confusion associated with people who state that they are&nbsp;<em>both</em>&nbsp;<a rel="noreferrer noopener" href="https://en.wikipedia.org/wiki/Anti-realism" target="_blank">moral anti-realists</a>&nbsp;and&nbsp;<a rel="noreferrer noopener" href="https://www.effectivealtruism.org/" target="_blank">Effective Altruists</a>&nbsp;(EAs). I&#8217;d be really interested in getting your thoughts on it. Fortunately, I think this flaw can be remedied (I&#8217;m working on an essay about how I think that can be done), but I&#8217;d like to be sure that the flaw is really there first (which is why I&#8217;m asking for your feedback now)!</p>



<p><strong>People that this essay is&nbsp;<em>not</em>&nbsp;about</strong></p>



<p>Some Effective Altruists believe that objective moral truth exists (i.e., they are &#8220;moral realists&#8221;). They think that statements like &#8220;it&#8217;s wrong to hurt innocent people for no reason&#8221; are the sort of statements that can be true or false, much like the statement &#8220;there is a table in my room&#8221; can be true or false.</p>



<p>I disagree that there is such a thing as objective moral truth, but I at least understand what these folks are doing &#8211; they believe there is an objective answer to the question of &#8220;what is good?&#8221; and then they are trying to figure out that answer and live by it.&nbsp;</p>



<p>This usually ends up being some flavor of utilitarianism plus maybe some moral uncertainty giving some weight to other theories such as protecting rights. In the 2019 EA survey,&nbsp;<a rel="noreferrer noopener" href="https://forum.effectivealtruism.org/posts/wtQ3XCL35uxjXpwjE/ea-survey-2019-series-community-demographics-and#Morality" target="_blank">70% of EAs</a>&nbsp;identified with utilitarianism (though this survey did not distinguish between those who do believe in objective moral truth and those who don&#8217;t believe in objective moral truth but have utilitarian ethics anyway). I think this group of EAs that believe in objective moral truth is mistaken but that they are being coherent. They are the first group listed in the poll I took below, and they are NOT the group I am focusing on in this post.&nbsp;</p>



<figure class="wp-block-image size-large is-resized"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="750" height="567" data-attachment-id="2864" data-permalink="https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/image-8/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?fit=1080%2C816&amp;ssl=1" data-orig-size="1080,816" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?fit=750%2C567&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=750%2C567&#038;ssl=1" alt="" class="wp-image-2864" style="width:768px;height:581px" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=1024%2C774&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=300%2C227&amp;ssl=1 300w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?resize=768%2C580&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2022/08/image.png?w=1080&amp;ssl=1 1080w" sizes="(max-width: 750px) 100vw, 750px" /></figure>



<p><strong>The flaw I see:</strong></p>



<p>The group I am focusing on is represented by the second bar in the poll above. Many (most?) Effective Altruists deny that there is objective moral truth or think that objective moral truth is unlikely. And yet I still hear quite a number of such EAs say things like:</p>



<p>• &#8220;We should maximize utility.&#8221;</p>



<p>• &#8220;The only thing I care about is increasing utility for conscious beings.&#8221;</p>



<p>• &#8220;The only thing that matters is the utility of conscious beings.&#8221;</p>



<p>• &#8220;The only value I endorse is maximizing utility.&#8221;</p>



<p>(Note that by &#8220;utility&#8221; here, they mean something like happiness minus suffering, not &#8220;utility&#8221; in the economics sense of preference satisfaction [unless they are preference utilitarians] or the von Neumann&#8211;Morgenstern expected-utility sense.)</p>



<p>I find these statements by Effective Altruists very strange. If I try to figure out what they are claiming, I see a few possible disambiguations:</p>



<p></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 1 &#8211; Contradictory beliefs:</strong>&nbsp;they could believe that maximizing utility is objectively good even though they don&#8217;t believe in objective moral truth &#8211; which seems to me to be a blatant contradiction in their beliefs. Similarly, they could be claiming that while they have other intrinsic values, they think they SHOULD only value utility (and should value all units of utility equally). But then, what does the word &#8220;should&#8221; mean here? On what grounds &#8220;should&#8221; you if there is no objective moral truth?</p>



<p></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 2 &#8211; Misperception of the self:</strong>&nbsp;they could be claiming that while there is no objective answer to what&#8217;s good, the only intrinsic value they have (i.e., the only thing they value as an end in and of itself, not as a means to an end, that matters to them even if it gets them nothing else) is the utility of conscious beings (and that all units of utility are equal). In other words, they are making an empirical claim about their mind (and what it assigns value to).</p>



<p>Here I think they are (in almost every case, and perhaps in every single case) empirically wrong about their own mind. This is just not how human minds work.</p>



<p>If we think of the neural network composing the human mind as having different operations it can do (e.g., prediction, imagination, etc.), one of those operations is assigning value to states of the world. When people do this and pay close attention, they will realize that they don&#8217;t value the utility of all conscious beings equally and that they value things other than utility. While I can&#8217;t prove there is literally no such person on earth that only has the intrinsic value of utility, even for the most utilitarian people I&#8217;ve ever met, when I question them, I discover they have values other than utility.</p>



<p>And it stands to reason that human minds (being created by evolution) are not the sort of things that are likely to only value the utility of all beings equally. For instance, just about everyone I&#8217;ve ever met would be willing to sacrifice at least 1.1 strangers to save one person they love (even if they think that person wouldn&#8217;t have a higher than average impact or a happier-than-average life). I certainly would, and I don&#8217;t feel bad about that!</p>



<p>One very strong intrinsic value I see in the effective altruism community is that of truth &#8211; many EAs think you should try never to lie and are suspicious even of marketing. They sometimes try to justify this on utilitarian grounds (indeed, it can often be beneficial from a utilitarian perspective, not to lie). But this sometimes seems like rationalization &#8211; a utilitarian agent would lie whenever it produces a higher expected value of utility (but potentially only if it was using naive&nbsp;<a rel="noreferrer noopener" href="https://en.wikipedia.org/wiki/Causal_decision_theory#:~:text=Causal%20decision%20theory%20(CDT)%20is,the%20best%20outcome%20in%20expectation." target="_blank">Causal Decision Theory (CDT)</a>&nbsp;&#8211; H/T to Linchuan Zang for pointing this out), whereas many EAs make a hard and fast rule against lying (saying you should try to NEVER lie). This is easily explained as EAs having an intrinsic value of truth that they don&#8217;t want to accept as an intrinsic value (and so try to explain in terms of the &#8220;socially acceptable&#8221; value of utility).</p>



<p>As a side note, I find it upsetting when EAs try to justify one of their (non-utility) intrinsic values in terms of global utility because they think they are only supposed to value utility. For instance, an EA once told me that the reason they have friends is that it helps them have a great impact on the world. I did not believe them (though I did not think they were intentionally lying). I interpreted their statement as a harmful form of self-delusion (trying to reframe their attempts to produce their intrinsic values so that they conform to what they feel their values are &#8220;supposed&#8221; to be).</p>



<p></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 3 &#8211; Tyranny of the analytical mind:&nbsp;</strong>they could be saying that while they may have a bunch of intrinsic values, their analytical mind only &#8220;endorses&#8221; their utility value. But what does &#8220;endorse&#8221; mean here? Maybe they mean that, while they feel the pull of various intrinsic values, the logical part of their mind only feels the utility pull. But then why should their analytical mind have a veto over the other intrinsic values? Maybe they believe their other intrinsic values are &#8220;illogical,&#8221; whereas the utility value is logical. But on what grounds is that claim made? If they could prove logically that only utility mattered, wouldn&#8217;t we just be back to claim (1) that there is objective moral truth, and they don&#8217;t believe that?&nbsp;</p>



<p>Intrinsic values are just not the sort of thing that can have logical proof, and if they are not that sort of thing, then why give preference to just that one part of your mind? I&#8217;m genuinely confused.</p>



<p></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 4 &#8211; Maybe they mean something else</strong>&nbsp;that I just don&#8217;t see. What else could they mean? I&#8217;d love to know what you think (or if you&#8217;re one of these people)!</p>



<p>It&#8217;s certainly possible that there are very sensible interpretations for their claims that I&#8217;m just not seeing.</p>



<p></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>In conclusion: Effective Altruists who think there is objective moral truth are, I believe, mistaken, but I understand what they are doing (this post is not about them). But those who don&#8217;t believe in objective moral truth (the majority, I think?) seem to me to be making some kind of mistake when their sole focus is utility. Of course, I could be wrong.</p>



<p>My personal philosophy &#8211; which I call Valuism (and which I am working on an essay about) &#8211; attempts to deal with this specific philosophical issue (in a limited context).</p>



<p>But in the meantime, I&#8217;d love to hear your thoughts on this topic! What do you think? If you are an EA who doesn&#8217;t believe in objective moral truth, but you&#8217;re convinced that only utility matters, what do YOU mean by that? And even if you don&#8217;t identify with that view, what do you think might be happening here that I might have missed or misunderstood?</p>



<p>Thanks for reading this and for any thoughts you are up for sharing with me!</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p class="has-medium-font-size"><strong>Summarizing responses to this post</strong></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Edit (1 September 2022): </strong>after I posted an earlier draft of this post on social media, it received hundreds of comments, some of which tried to explain why the commenter is a utilitarian despite being an anti-realist, or presented alternative possibilities not delineated in the original post.</p>



<p>One thing that&#8217;s abundantly clear is that there is absolutely no consensus on how to handle the critique in the above post. People offer a really wide variety of explanations for why they identify with utilitarianism despite not believing in objective moral truth.</p>



<p>Here are some of the most common types of responses given:</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>1. Responses related to Possibility 1 (i.e., addressing &#8220;contradictory beliefs&#8221;)</strong></p>



<p>     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.1 <strong>Accepting contradiction</strong>: many people have contradictory beliefs (and contradictory beliefs may be no more common in moral anti-realist EAs than in other people), and some people are willing to lean into them. As one commenter put it: &#8220;many sets of intuitions are *wrong* if you take coherence as axiomatic.&#8221; Some people are just okay with self-contradiction.</p>



<p>     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2 <strong>Beliefs that aren&#8217;t actually contradictory:</strong> my explanation of Possibility 1 might interpret &#8220;we should maximize utility&#8221; differently than some people who say that phrase mean it. Here are some potential interpretations by which that statement might actually be consistent with anti-realist views:</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2.1 <strong>Personal preference:</strong> some people do not intend for statements like &#8220;we should maximize utility&#8221; to be representative of moral truth but instead mean it as an expression of a personal preference that they have for maximizing utility, or an expression of the fact that they will avoid feeling reflexively guilty if they aim to maximize utility, or a statement that they will have a positive emotional response if they focus on maximizing utility. However, these responses still seem to fall victim to another critique from the post, which is the arbitrariness of giving preference to certain feelings/preferences over other ones.&nbsp;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2.2 <a href="https://plato.stanford.edu/entries/constructivism-metaethics/"><strong>Metaethical constructivism</strong></a><strong>: </strong>this is defined as &#8220;the view that insofar as there are normative truths, they are not fixed by normative facts that are independent of what rational agents would agree to under some specified conditions of choice&#8221; (<a href="https://plato.stanford.edu/entries/constructivism-metaethics/">source</a>). Some <a href="https://plato.stanford.edu/entries/constructivism-metaethics/">say</a> this is &#8220;best understood as a form of <a href="https://en.wikipedia.org/wiki/Expressivism#:~:text=Expressivism%20is%20a%20form%20of,to%20which%20moral%20terms%20refer.">‘expressivism&#8217;</a>&#8220;. Constructivism seems compatible with both moral anti-realism and utilitarianism, but it&#8217;s unclear to me how many effective altruists would hold this view (I think very few).&nbsp;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2.3 <strong>Valuing a different kind of utility</strong>: some people may mean &#8220;we should maximize utility&#8221; in reference to a different kind of &#8220;utility&#8221; than the classic hedonistic utilitarian interpretation of the word. For example, &#8220;utility&#8221; is sometimes used to mean a &#8220;mathematical function serving as a representation of whatever one cares about.&#8221; By such an interpretation, if someone says they are trying to maximize utility, they are presumably referring to maximizing their own utility function (rather than some objective one) &#8211; and so they are not the focus of this post.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>2. Responses related to Possibility 2 (i.e., &#8220;misperception of the self&#8221;)</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.1 <strong>Second-order desires</strong>: people might not be misperceiving themselves at all but might instead be talking about second-order desires, or desires about desires. As one commenter put it: &#8220;It might be that, though someone empirically does NOT possess desires consistent with maximising the utility of conscious beings, they possess the desire to possess these desires. They want to be the sort of person who does have a genuine utilitarian psychology, even if they don&#8217;t possess one now. This may explain the motivation to act as a utilitarian (most of the time) [despite being a moral anti-realist].&#8221; Though in this case, it&#8217;s unclear why they would want to or think they should give those second-order desires preference over their first-order desires.</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.2 <strong>Unshakable realist intuitions</strong>: people might be acting and/or feeling <em>as if </em>utilitarianism is true while also believing (upon reflection) that moral realism isn&#8217;t true. One person commented that &#8220;many of our intuition[s] are based on a realist world even when rationally we do not believe in one, so it is easy to accidentally make arguments that work only in a realist world, and then try to rationalize the argument afterwards to somehow work anyway.&#8221;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.3<strong> Mislabeling one&#8217;s metaethics</strong>: instead of misperceiving <em>what they value</em>, some people might be mislabeling themselves as moral anti-realists even though they aren&#8217;t. In other words, some people who call themselves anti-realists might actually be moral realists without realizing it (e.g., because they haven&#8217;t reflected on it). One commenter thought that this would be a common phenomenon: &#8220;They are expressing a real, but subjective, truth &#8216;It is true to me that everyone should maximize utility&#8217;&#8230;I think that &#8216;deep down&#8217; you will find that in fact most effective altruists and indeed most people are moral realists but under-theorized ones. Even the anti-realists tend to act as if they were moral realists.&#8221;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.4<strong> Choosing one&#8217;s own values</strong>: some argue that you can choose your values for yourself (though it&#8217;s unclear by what process one would make such a choice, or whether such a choice really can be made &#8211; it may hinge on what is meant by &#8220;values&#8221;). As one of the commenters put it: &#8220;It seems like you are assuming in [Possibility 2] that there is an objective answer to what a mind values, e.g. based on how it behaves. For one thing, it&#8217;s not clear that that is right in general. But a particular alternative that interests me here: one could have a model where one can decide what to value, and to the extent that one&#8217;s behavior doesn&#8217;t match that, one&#8217;s behavior is in error.&#8221; In other words, according to this view, maybe an individual themselves is the only person who can define their intrinsic values, and there is no objectively correct opinion for them to hold about this. But then, by what criteria (or based on what values) is a person deciding on what their values should be?</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>3. Reasons why Possibility 3 (i.e., &#8220;tyranny of the analytical mind&#8221;) may not be a confused approach</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.1 <strong>Identifying with the analytic part of the mind</strong>: some people feel that choosing to endorse a particular framework (and choosing to endorse some values over other ones) is part of who they are &#8211; part of (or even the most important part of) their self-concept. In other words, the reflective part of them making that choice feels to them like it is &#8220;who they are&#8221; more so than other parts of them that have other preferences. Here&#8217;s how one person explained it: &#8220;For my part, the part of my mind that examines my moral intuitions and decides whether I want to act on them feels about as &#8216;me&#8217; as anything gets.&#8221; Another person thought that ​endorsing some values over others makes sense because many people think that their <em>&#8220;best&#8221;</em> self would live &#8220;in accordance with the judgments they make based on arguments and thought experiments.&#8221; Another proposed explanation for people being guided by the analytic mind is that being guided in this way might be a normal feature of human psychology (which at least one person saw as needing no further explanation). Yet another explanation put forward was that some people can have a completely arbitrary &#8220;personal taste&#8221; for giving their analytical mind a veto over other parts of their mind (and, according to this argument, those people don&#8217;t need a further justification beyond their arbitrary taste).</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.2 <strong>Simplicity and coherence meta-values:</strong> having fewer intrinsic values or having fewer intrinsic values that one allows to dictate their behavior can (some argue) be justified by having an intrinsic value of coherence, simplicity, or consistency. As one commenter put it: &#8220;I genuinely think I just have utilitarian intrinsic values. [It seems] relevant here that I also value coherence (in a non-moral sense, probably as an epistemic virtue or something), so if I find myself thinking something that is incoherent with another value of mine, I can debate &amp; discard the less important one.&#8221;&nbsp;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 4: Moral uncertainty</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4.1 <strong>Meta-moral uncertainty &#8211; believing that realism <em>might</em> be true: </strong>people who don&#8217;t identify as moral realists might still feel there is some possibility that moral realism is correct and might act as if it was correct (at least to some degree &#8211; say, in proportion to how much weight they give this possibility compared to other action-guiding beliefs). As one commenter put it: &#8220;Why do I keep donating (and doing other EA things), albeit to a lesser extent [since switching from moral realism to moral anti-realism]? The main reason is (meta) moral uncertainty: I still feel that it is possible that moral realism is correct, and so I think it should have some say over my behavior.&#8221;</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4.2 <strong>Misinterpreting moral uncertainty as anti-realism: </strong>People who think that their own beliefs are not necessarily objectively true (due to moral uncertainty) might conclude that they must be moral anti-realists, but they might be mistaken in calling themselves that. As one commenter explained it: &#8220;believing in moral objectivity is different from believing we are actually able to parse the true moral weights in practice.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Possibility 5: Precommitment and cooperation arguments</strong></p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5.1 <strong>Benefiting from pre-committing to impartiality: </strong>some argue that acting as if classical utilitarianism is true might be justified on grounds related to resolving collective action problems (without having to believe that moral realism is true). For instance, one commenter wrote: &#8220;Being impartial between oneself (and one&#8217;s friends / family) vs. random people isn&#8217;t something that any human naturally feels, but it&#8217;s a &#8216;cooperate&#8217; move in a global coordination game. If we&#8217;d all be better off if we acted this way, then we want a situation where everyone makes a binding commitment to act impartially. It&#8217;s hard to do that, but we can approximate it through norms. So EAs might want to endorse this without feeling it.&#8221; Though presumably, if this were the justification for utilitarianism, they would then switch to a different moral theory if they thought it better solved collective action problems (e.g., if they came to believe virtue ethics better solved collective action problems).</p>



<p>          &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5.2 <strong>Benefiting from pre-committing to preference utilitarianism: </strong>some commenters pointed out that preference utilitarianism could also be justified on self-interested grounds (this post was not intended to be about other forms of utilitarianism such as preference utilitarianism, but it was edited to clarify that only after some people had started commenting). As one commenter put it: &#8220;If we&#8217;re viewing morality as playing a counterfactual game with others, we should take actions to benefit them in a way essentially identically to preference utilitarianism. That doesn&#8217;t require any objective morality, it only requires self-interest and buying into the idea that you should pre-commit to a theory of morality that, if many people embraced it, would increase your personal preferences.&#8221; Though in such cases (if they were actually optimizing for self-interest), it seems strange they would choose a moral theory where their interests count equally to people they will never encounter and never be in collective action problems with. (Some might argue that this would make more sense if the person endorsed a form of <a href="https://forum.effectivealtruism.org/posts/7MdLurJGhGmqRv25c/multiverse-wide-cooperation-in-a-nutshell">multiverse-wide cooperation via superrationality</a>, though it&#8217;s unclear how this resolves more concrete/real-life collective action problems).</p>



<hr class="wp-block-separator has-alpha-channel-opacity is-style-wide"/>



<p><strong>Possibility 6: Social forces</strong>  &#8211; as <a href="https://twitter.com/TylerAlterman">Tyler Alterman</a> put it (when I was discussing this post with him &#8211; he&#8217;s named here with permission): &#8220;[I felt] that [for some EAs] their actual beliefs were at odds with the cultural norms of other smart people (EAs) that they felt alignment with, so they stopped paying attention to their actual beliefs. I think this is what happened to me for a while. There was an element of wanting to fit in. But then there is an element of &#8211; there are so many smart people here [in EA]&#8230; EA is full of Oxford philosophers &#8211; they must have figured this out already; there must be some obvious answer for my confusion. So I just went along with the obligation and normative language and lifestyle it entailed.&#8221; Social forces can be powerful, and in some cases, an explanation for human behavior can be as simple as: the other people around me who I respect or want the approval of do this thing or seem convinced this thing is true, so I do this thing and am convinced it is true.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This essay was first written on August 14, 2022, first appeared on this site on August 19, 2022, and was edited (to incorporate a summary of people&#8217;s responses) on September 1, 2022, with help from Clare Harris.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/feed/</wfw:commentRss>
			<slash:comments>5</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2863</post-id>	</item>
	</channel>
</rss>
