<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>freedom &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/freedom/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Mon, 22 Dec 2025 18:05:45 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>freedom &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>If AI Replaces Human Labor Does That Have To Strip Human Lives Of Meaning?</title>
		<link>https://www.spencergreenberg.com/2025/11/if-ai-replaces-human-labor-does-that-have-to-strip-human-lives-of-meaning/</link>
					<comments>https://www.spencergreenberg.com/2025/11/if-ai-replaces-human-labor-does-that-have-to-strip-human-lives-of-meaning/#respond</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Sun, 16 Nov 2025 17:59:55 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[abundance]]></category>
		<category><![CDATA[achievement]]></category>
		<category><![CDATA[agency]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[beauty]]></category>
		<category><![CDATA[control]]></category>
		<category><![CDATA[diversity]]></category>
		<category><![CDATA[fairness]]></category>
		<category><![CDATA[freedom]]></category>
		<category><![CDATA[happiness]]></category>
		<category><![CDATA[human flourishing]]></category>
		<category><![CDATA[intrinsic values]]></category>
		<category><![CDATA[justice]]></category>
		<category><![CDATA[learning]]></category>
		<category><![CDATA[meaning]]></category>
		<category><![CDATA[nature]]></category>
		<category><![CDATA[pleasure]]></category>
		<category><![CDATA[power]]></category>
		<category><![CDATA[protection]]></category>
		<category><![CDATA[relationships]]></category>
		<category><![CDATA[spirituality]]></category>
		<category><![CDATA[suffering]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[truth]]></category>
		<category><![CDATA[virtue]]></category>
		<category><![CDATA[WORK]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=4669</guid>

					<description><![CDATA[A common worry is that technological development, and increasingly advanced AI in particular, will necessarily remove meaning from our lives. For instance, if humanity ends up in a situation of extreme material abundance, but at some point there is a lack of ability for most (or all) people to do work that&#8217;s value-additive, will that [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>A common worry is that technological development, and increasingly advanced AI in particular, will necessarily remove meaning from our lives. For instance, if humanity ends up in a situation of extreme material abundance, but at some point most (or all) people lose the ability to do work that&#8217;s value-additive, will that lead to widespread depression and lack of meaning?</p>



<p>While I think advancing technologies, and AI in particular, raise very serious concerns (such as loss of control over these systems, which could be a tremendous threat; reduction of agency; and the potential for extreme concentration of power), if we can keep these technologies well under control and pointed at the betterment of humanity (a big if), I don&#8217;t think they have to destroy meaning. Here&#8217;s why:</p>



<p>While some people do derive a lot of their sense of self-worth from their work (myself included), and such people could be especially hard hit if they are replaced by technology, there are, thankfully, many things that humans intrinsically value, and therefore lots of potential sources of meaning. By seeking and then (at least to a reasonable degree) creating what we intrinsically value, we create meaning.</p>



<p>So let&#8217;s have a quick look at different human intrinsic values (i.e., things people value for their own sake, not as a means to an end) and how advancing technology, such as AI, may impact each of them:</p>



<p>—</p>



<p>Spirituality and purity: I see no reason that technology would have to interfere with spirituality, religion, or attempts to act purely. So these values could continue being a source of meaning.</p>



<p>Truth and learning: if anything, really effective technology can accelerate the search for truth and our ability to learn. At the same time, technology gone wrong could make the truth harder to discern (e.g., if technology facilitates misinformation outcompeting accurate information).</p>



<p>Achievement: this one could be hard hit by technology insofar as it&#8217;s related to doing things that eventually AI may do better than all of us. At the same time, humans find a lot of value in achievements regardless of non-human performance. For instance, people compete in sprints (even though cheetahs could easily outrun us) and find value in achievement in chess (despite AI being able to easily beat the best human). A lot of people also value personal achievement &#8211; merely doing the best you can, or improving to do better than your own previous results.</p>



<p>Freedom: while technology could impair freedom (e.g., if it concentrates power into the hands of certain actors, they might choose to limit freedom), there is also potential for technology to expand freedom a lot by allowing us to do many things that weren&#8217;t possible before, either because we didn&#8217;t know how to do them or because they were previously too costly.</p>



<p>Pleasure, non-suffering, longevity: there is no fundamental tension between technology and these values, and technology may be able to improve these by reducing sources of suffering (such as disease), increasing lifespans, and making pleasure more easily accessible.</p>



<p>Happiness (as distinct from pleasure and non-suffering): This is a tricky one, because technology can cut both ways here. For instance, while it&#8217;s likely social media has increased some kinds of pleasure, it may well have reduced overall happiness for some people by making them more disconnected or impacting the way they see the world.</p>



<p>Caring, reputation, respect, loyalty, and virtue: these don&#8217;t have to be impacted by technology; we could continue valuing these in our relationships with others, even in a world where AI has replaced most work. The main threats I see here from technology are the ways that social media can cause people to spend less face-to-face time together, and the way that AI friends or &#8220;relationship&#8221; partners could take the place of human relationships.</p>



<p>Justice and fairness: this could go either way. Technology could concentrate power in a way that makes these worse or systematize bias. On the other hand, if the benefits of technology are distributed widely, they could create increased abundance. Technology also has the potential, if harnessed correctly, to reduce (currently commonplace) human bias.</p>



<p>Diversity: globalization tends to reduce diversity, and so technology could accelerate that trend. On the other hand, giving people more freedom through technology could end up increasing forms of diversity (such as how people choose to live their lives).</p>



<p>Protection: technology can make us safer, so we may experience more protection (for ourselves and our loved ones); but it could also shrink our own role in protecting others, which could reduce the meaning derived from providing protection. On the other hand, if technology is not developed thoughtfully, the world could feel increasingly chaotic and even become more unsafe, so protection could become even more important.</p>



<p>Nature: technology has a track record of destroying nature, so that trend may continue. However, it&#8217;s possible that sufficiently advanced technology will reverse it (e.g., cheap green energy making it easier to protect nature). Technology often destroys nature either as a means of acceleration or as a side effect of acceleration, but sufficiently advanced technology may reduce that effect.</p>



<p>Beauty: technology has the possibility of increasing beauty in the world (making it easier to create and experience beauty), but also runs the risk of filling the world with generic slop.</p>



<p>Overall, while advancing technology may negatively impact some things that humans intrinsically value, as long as we don&#8217;t destroy the world with these technologies and avoid extreme concentration of power from them, other intrinsic values may be unaffected, or may even be benefited, by technology. As long as we can seek and (to a reasonable degree) create what we intrinsically value, there are sources of meaning available.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on November 16, 2025, and first appeared on my website on December 22, 2025.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2025/11/if-ai-replaces-human-labor-does-that-have-to-strip-human-lives-of-meaning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4669</post-id>	</item>
		<item>
		<title>Should Effective Altruists be Valuists instead of utilitarians? &#8211; part 3 in the Valuism sequence</title>
		<link>https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/</link>
					<comments>https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/#comments</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Fri, 10 Mar 2023 07:42:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[autonomy]]></category>
		<category><![CDATA[burnout]]></category>
		<category><![CDATA[choice]]></category>
		<category><![CDATA[contradictions]]></category>
		<category><![CDATA[denial]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[equity]]></category>
		<category><![CDATA[freedom]]></category>
		<category><![CDATA[group membership]]></category>
		<category><![CDATA[humility]]></category>
		<category><![CDATA[intrinsic values]]></category>
		<category><![CDATA[justice]]></category>
		<category><![CDATA[long-term success]]></category>
		<category><![CDATA[moral antirealism]]></category>
		<category><![CDATA[moral realism]]></category>
		<category><![CDATA[non-altruistic values]]></category>
		<category><![CDATA[self-care]]></category>
		<category><![CDATA[self-control]]></category>
		<category><![CDATA[shared values]]></category>
		<category><![CDATA[social groups]]></category>
		<category><![CDATA[social values]]></category>
		<category><![CDATA[sustainability]]></category>
		<category><![CDATA[truth-seeking]]></category>
		<category><![CDATA[utilitarianism]]></category>
		<category><![CDATA[utility]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3077</guid>

					<description><![CDATA[By Spencer Greenberg and Amber Dawn Ace&#160; This is the third of five posts in my sequence of essays about my life philosophy, Valuism &#8211; here are the first, second, fourth, and fifth parts (though the links won’t work until those other essays are released). Sometimes, people take an important value &#8211; maybe their most [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>By Spencer Greenberg and Amber Dawn Ace&nbsp;</em></p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="750" height="375" data-attachment-id="3168" data-permalink="https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/dall%c2%b7e-2023-02-05-16-07-14-a-treasure-chest-full-of-rainbows/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?fit=2048%2C1024&amp;ssl=1" data-orig-size="2048,1024" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="DALL·E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?fit=750%2C375&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=750%2C375&#038;ssl=1" alt="" class="wp-image-3168" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=1024%2C512&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=300%2C150&amp;ssl=1 300w, 
https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=768%2C384&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?resize=1536%2C768&amp;ssl=1 1536w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/02/DALL%C2%B7E-2023-02-05-16.07.14-A-treasure-chest-full-of-rainbows.png?w=2048&amp;ssl=1 2048w" sizes="(max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption"><em>Image created using the A.I. DALL•E 2</em></figcaption></figure>
</div>


<p style="font-size:14px"><em>This is the third of five posts in my sequence of essays about my life philosophy, Valuism &#8211; here are the <a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">first</a>, <a href="https://www.spencergreenberg.com/2023/02/what-to-do-when-your-values-conflict-part-2-in-the-valuism-sequence/">second</a>, <a href="https://www.spencergreenberg.com/2023/02/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/">fourth</a>, and <a href="https://www.spencergreenberg.com/2023/07/valuism-and-x-how-valuism-sheds-light-on-other-domains-part-5-of-the-sequence-on-valuism/">fifth</a> parts (though the links won’t work until those other essays are released).</em></p>



<p>Sometimes, people take an important value &#8211; maybe their most important value &#8211; and decide to prioritize it above all other things. They neglect or ignore their other values in the process. In my experience, this often leaves people feeling unhappy. It also leads them to produce less total value (according to their own intrinsic values). I think people in the effective altruist community (i.e., EAs) are particularly prone to this mistake.</p>



<p><a href="https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/">In the first post in this sequence</a>, I introduce Valuism &#8211; my life philosophy &#8211; and offer some general arguments for its advantages. In this post, I talk about the interaction between Valuism and effective altruism. I argue that the way some EAs think about morality and value is (in my view) empirically false, potentially psychologically harmful, and (in some cases) incoherent.&nbsp;</p>



<p>EAs want to improve others’ lives in the most effective way possible. Many EAs identify as hedonic utilitarians (even the ones who reject objective moral truth). They say that impartially maximizing utility among all conscious beings &#8211; by which they usually mean the sum of all happiness minus the sum of all suffering &#8211; is the <em>only thing of value</em>, or the only thing that they feel they <em>should</em> value. I think this is not ideal for a few reasons.</p>



<p></p>



<h3 class="wp-block-heading">1. I think (in one sense) it&#8217;s empirically false</h3>



<p>Consider a person who claims that &#8220;only utility is valuable.&#8221;</p>



<p>If&nbsp;we interpret this as an empirical claim about the person’s own values &#8211; i.e., that the sum of happiness minus suffering for all conscious beings is the only thing that their brain assigns value to &#8211; I think that it&#8217;s very likely empirically false.&nbsp;</p>



<p>That is, I don&#8217;t think anyone <em>only</em> values (in the sense of what their brain assigns value to) maximizing utility, even if it&#8217;s a very important value of theirs. I can&#8217;t prove that literally nobody only values maximizing utility, but I argue that human brains aren&#8217;t built to value just one thing, nor would we expect evolution to converge on purely utilitarian psychology, since evolution optimizes for survival (a purely utilitarian brain 50,000 years ago would have been rapidly outcompeted by other brain types).&nbsp;</p>



<p>I think that even the most hard-core hedonic utilitarians <em>do</em> psychologically value some non-altruistic things deep down &#8211; for example, their own pleasure (more than the pleasure of everyone else), their family and friends, and truth. However, in my opinion, they sometimes deny this to themselves or feel guilty about it. If you are convinced that your only intrinsic value is utility (in a hedonistic, non-negative-leaning utilitarian sense), you may find it instructive to take a look <a href="https://twitter.com/SpencrGreenberg/status/1568595511522852871">at these philosophical scenarios</a> I assembled or check out <a href="https://www.youtube.com/watch?v=d_6i9uzsBuc&amp;ab_channel=CentreforEffectiveAltruism">the scenarios I give in this talk</a> about values.</p>



<p>For instance, does your brain actually tell you it&#8217;s a good trade (in terms of your intrinsic values) to let a loved one of yours suffer terribly in order to create a mere 1% chance of preventing 101 strangers from the same suffering? Does your brain actually tell you that equality doesn&#8217;t matter one iota (i.e., it&#8217;s equally good for one person to have all the utility compared to spreading it more equally)? Does your brain actually value a world of microscopic, dumb orgasming micro-robots more than a world (of slightly less total happiness) where complex, intelligent, happy beings pursue their goals? Because taken at face value, hedonic utilitarianism doesn&#8217;t care about whether a person is your loved one or a stranger, doesn&#8217;t care about equality <em>at all</em>, and prefers microscopic orgasming robots to complex beings as long as the former are slightly happier. But, if you consider yourself a hedonic utilitarian, is that actually what your brain values?</p>



<p></p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh4.googleusercontent.com/u7FnrSutFnOMuG57YHHw9RGv-QCfrH2LMvMWsATOHkYrOpNy8mr9I46XublWGnhnnVc_vSjXkOIWXfG9-rRYQYrujHM5D6d8GylwPPRuv0ePebNF-Kha_P9_b9k3Vd63BVHaP5eMOb0QHj4MJLWZ4Yw" alt=""/><figcaption class="wp-element-caption"><em>Caption: it turns out very few people are willing to risk hell on earth for a somewhat higher expected utility!</em></figcaption></figure>
</div>


<p></p>



<h3 class="wp-block-heading">2. It can be psychologically harmful</h3>



<p>Additionally, I think the attitude that there is only one thing of value can lead to severe psychological burnout as people try to push away, minimize or deny their other intrinsic values and “selfish,” non-altruistic desires. I’ve seen this happen quite a few times. <a href="https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends">Here&#8217;s Tyler Alterman&#8217;s personal account</a> of this if you’d like to see an example. <a href="https://www.lesswrong.com/posts/pDzdb4smpzT3Lwbym/my-model-of-ea-burnout">And here&#8217;s a theory</a> of how this burnout happens.</p>



<p></p>



<h3 class="wp-block-heading">3. I think (in one sense) it&#8217;s incoherent</h3>



<p>When coupled with a view that there is no objective moral truth, I think it is, in most cases, <strong>philosophically incoherent</strong> to claim that total hedonic utility is all that matters<strong>.</strong></p>



<p>If you believe in objective moral truth, it may make sense to say, “I value many things, but I have a moral obligation to prioritize only some of them” (for example, you might be convinced by arguments that you are objectively morally obliged to promote utility impartially even though that’s not the only value you have).</p>



<p>However, many EAs, like me, don’t believe in objective moral truth. If you don’t think that things <em>can</em> be objectively right or wrong, it doesn’t make sense (I claim) to say that you “should” prioritize maximizing utility for all of humanity over other values – what does this “should” even mean? Well, there are some answers for what this “should” could mean that philosophers and lay people have proposed, but I find them pretty weak.</p>



<p>For a much more in-depth discussion of this point (including an analysis of different ways that EAs have responded to my critique of pairing utilitarianism with denial of objective moral truth), see <a href="https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/">this essay</a>. It collects many different objections (from EAs and from some philosophers) and discusses them. So if you are interested in whether it is coherent to only value utility while denying objective moral truth, and moreover, whether EAs and philosophers have good arguments for doing so, please see that essay.</p>



<p>I find that while many (perhaps the majority of) EAs deny objective moral truth, many still talk and think as though there is objective moral truth.</p>



<p>I found it striking that, in my conversations with EAs about their moral beliefs, few had a clear explanation for how to combine a belief in utilitarianism with a lack of belief in objective moral truth, and the approaches they did put forward were usually quite different from each other (suggesting, at the very least, a lack of consensus on how to support such a perspective). Some philosophers I spoke to pointed to other ways one might defend such a position (mainly drawn from the philosophical literature), but I don&#8217;t recall ever seeing these approaches used or referenced by non-philosopher EAs (so they don&#8217;t seem to be doing much work in the beliefs of EAs who hold this view).&nbsp;</p>



<p></p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh4.googleusercontent.com/9GJvXrOJAl0p6FFj2eUiqu6MQPftJRlFDeIG2D_mBMMmi1_ryaOh5N19YsBdG4BlkyJNHhSvogaR1CAdEE4EsUNH5xmQ8rdzZmT90qlbkL4oCQO4sehUFLUp7y5EdLBizKLKZNxD0UFj4J2aFj0QBgo" alt=""/><figcaption class="wp-element-caption"><em>A poll I ran on Twitter. More than half of EA respondents report not being moral realists</em>.</figcaption></figure>
</div>


<p>I suspect it would help many EAs if they took a more Valuist approach: rather than claiming to or aspiring to only value hedonic utility, they could accept that while they <em>do </em>intrinsically value this – very likely far more than the average person – they also have other intrinsic values, for example, truth (which I think is another very important psychological value for many EAs), their own happiness, and the happiness of their loved ones.</p>



<p>Valuism also avoids some of the most awkward bullets that EAs sometimes are tempted to bite. For instance, hedonic utilitarianism seems to imply that your own happiness and the happiness of your loved ones “shouldn’t” matter to you even a tiny bit more than the happiness of a stranger who is certain to be born 1,000,000 years from now. Valuism may explain why people who identify as hedonic utilitarians may feel a great deal of internal conflict about this – even if you value the happiness of all sentient beings a tremendous amount, you almost certainly have other intrinsic values too. That means that Valuism may help you avoid some of the awkward conundrums that arise from ethical monism (where you assume that there is only one thing of value).</p>



<p></p>



<h2 class="wp-block-heading">Valuism and the EA Community</h2>



<p>From a Valuist perspective,<strong> I see the EA community as a group of people who share a primary intrinsic value of hedonic utility</strong> (i.e., reducing suffering and increasing happiness impartially) <strong>with a secondary strong intrinsic value of truth-seeking.</strong> Oddly (from my point of view) EAs are very aware of their intrinsic value of impartial hedonic utility, but seem much less aware of their truth-seeking intrinsic value. On a number of occasions, I&#8217;ve seen mental gymnastics used to justify truth-seeking in terms of increasing hedonic utility when (I claim) a much more natural explanation is that truth-seeking is an intrinsic value (not <em>just</em> an instrumental value that leads to greater hedonic utility). This helps explain why many EAs are so averse to <em>ever</em> lying and so averse even to persuasive marketing.</p>



<p>Each individual EA has other intrinsic values beyond impartial utility and truth-seeking, but in my view, those two values help define EA and make it unique. This is also a big part of why this community resonates with me: those are my top two universal intrinsic values as well.</p>



<p></p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh5.googleusercontent.com/caB2WlN1Mleqk9ZsApi7rokTC-KpCErd-t3GDKOIk5didxPnvdoHJp1bVOiCFNgBmzq9QLMFPgrya91zUY4vqUEEDAJ8juRiCo_07ikYFZZRwmqZBC7B5NOLeHr6KqFLciFtoqWok8rDjHtfqd2-r-k" alt=""/><figcaption class="wp-element-caption"><em>While these groups sometimes overlap (e.g., some effective altruists are libertarians, and </em><a href="https://clearerthinkingpodcast.com/episode/085"><em>some are social justice advocates</em></a><em>, etc.), we created this graphic to illustrate what we believe are the </em><strong><em>most common</em></strong><em> universal (i.e., not self-focused, not community-focused) intrinsic values shared among most members of each group.</em></figcaption></figure>
</div>


<p></p>



<p>If more EAs adopted Valuism, I think that they would almost all continue to devote a large fraction of their time and energy toward improving the world effectively. Maximizing global hedonic utility (i.e., the sum of happiness minus suffering for conscious beings) <em>is</em> the strongest universal intrinsic value of most community members, so it would still play the largest role in determining their goals and actions, even after much reflection.&nbsp;</p>



<p>However, they would also feel more comfortable investing in their own happiness and the happiness of their loved ones at the same time, which I predict would make them happier and reduce burnout. Additionally (I claim), they’d accept that, like many effective altruists,<strong> they also have a strong intrinsic value of truth</strong>. They’d strike a balance between their various intrinsic values, and not endorse annihilating all their intrinsic values except for one.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>I published this piece on this site on March 10, 2023.</em><br><br><a rel="noreferrer noopener" href="https://www.guidedtrack.com/programs/4zle8q9/run?essaySpecifier=%3A+Should+Effective+Altruists+be+Valuists+instead+of+utilitarians%3F%C2%A0+-+part+3+in+the+Valuism+sequence&amp;source=email" target="_blank">If you read this line, please do us a favor and click here to answer one quick question.</a></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>You&#8217;ve just finished the third post in my sequence of essays on my life philosophy, Valuism –</em>&nbsp;<em><a href="https://www.spencergreenberg.com/2023/05/what-would-a-robot-value-an-analogy-for-human-values-part-4-of-the-valuism-sequence/">click here to go to the fourth post.</a></em></p>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3077</post-id>	</item>
		<item>
		<title>Thoughts on Common Political Perspectives</title>
		<link>https://www.spencergreenberg.com/2017/12/thoughts-on-common-political-perspectives/</link>
					<comments>https://www.spencergreenberg.com/2017/12/thoughts-on-common-political-perspectives/#respond</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Wed, 20 Dec 2017 19:36:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[censorship]]></category>
		<category><![CDATA[economic]]></category>
		<category><![CDATA[economy]]></category>
		<category><![CDATA[freedom]]></category>
		<category><![CDATA[money]]></category>
		<category><![CDATA[politics]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2107</guid>

					<description><![CDATA[Personal freedom (often part of liberal and libertarian perspectives): (1) People seem better at understanding what is good for them, as individuals, than regulators are at figuring out what those same people need, so I think personal freedom to do what you like should be the default position unless there is a good reason to [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><strong>Personal freedom</strong> (often part of liberal and libertarian perspectives):</p>



<p>(1) People seem better at understanding what is good for them, as individuals, than regulators are at figuring out what those same people need, so I think personal freedom to do what you like should be the default position unless there is a good reason to deviate from it.</p>



<p>(2) I think it makes our society better, on average, when any person who is critical of government or authority can publicly state their criticism without fear of punishment by the state, since this type of personal freedom exerts a useful pressure on authorities not to go against what people want and need.<br><br><strong>Limits on personal freedom </strong>(often part of conservative and authoritarian perspectives):</p>



<p>(1) Some activities that people choose freely end up making both themselves and society more broadly worse off (e.g., some highly addictive drugs or reckless behaviors that put others in danger), and so restricting these types of personal freedoms can improve society.</p>



<p>(2) Some types of communication can trigger widespread harm and fear, such as publicly inciting others to commit terrorism, and so I think restricting very specific types of communication can be societally beneficial.<br><br><strong>Economic freedom</strong> (often part of conservative and libertarian perspectives):</p>



<p>(1) Most of the time, when people freely choose to engage in a trade, I think the total benefit to the two parties is greater than the total costs of the transaction, so economic freedom tends to be beneficial.</p>



<p>(2) On average, technological development has made humans enormously better off, and I think economic freedom accelerates technological progress.<br><br><strong>Limits on economic freedom</strong> (often part of liberal and authoritarian perspectives):</p>



<p>(1) Human welfare is not what capitalism optimizes for, and sometimes the two are even directly at odds (e.g., when companies lie to us about harms of their products, pollute the environment, convince us we are deficient unless we have what they offer, etc.), and so some restrictions on company activities are important to preserve human welfare.</p>



<p>(2) Some technological developments in the next 15-100 years may end up causing catastrophic problems for humanity if they aren’t approached with extreme caution (e.g., technologies for designing biological threats or advanced Artificial Intelligence), but unfettered capitalism tends to produce rapid deployment and arms races, the opposite of what is needed to reduce the chance of calamity.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2017/12/thoughts-on-common-political-perspectives/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2107</post-id>	</item>
	</channel>
</rss>
