<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>predictions &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/predictions/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Thu, 29 May 2025 23:50:12 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>predictions &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>Does The Music You Listen To Predict Your Personality?</title>
		<link>https://www.spencergreenberg.com/2025/05/does-the-music-you-listen-to-predict-your-personality/</link>
					<comments>https://www.spencergreenberg.com/2025/05/does-the-music-you-listen-to-predict-your-personality/#respond</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Fri, 23 May 2025 23:44:14 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[classical]]></category>
		<category><![CDATA[correlations]]></category>
		<category><![CDATA[country]]></category>
		<category><![CDATA[experience]]></category>
		<category><![CDATA[groups]]></category>
		<category><![CDATA[hip-hop]]></category>
		<category><![CDATA[jazz]]></category>
		<category><![CDATA[music]]></category>
		<category><![CDATA[personality]]></category>
		<category><![CDATA[pop]]></category>
		<category><![CDATA[predict]]></category>
		<category><![CDATA[predictions]]></category>
		<category><![CDATA[rock]]></category>
		<category><![CDATA[samples]]></category>
		<category><![CDATA[traits]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=4379</guid>

					<description><![CDATA[Does whether you like rock music rather than pop or country say something about your personality? I would have thought not, but we ran a study, and it turns out yes &#8211; in the U.S., your music tastes predict aspects of your personality! Much to my surprise, liking rock and classical music predicts the same [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Does whether you like rock music rather than pop or country say something about your personality? I would have thought not, but we ran a study, and it turns out yes &#8211; in the U.S., your music tastes predict aspects of your personality!</p>



<p>Much to my surprise, liking rock and classical music predicts the same things about your personality: having greater &#8220;openness to experience&#8221; (a personality trait from the Big Five framework) and being more intellectual.</p>



<p>Makes sense for classical, but who would have guessed that&#8217;s true of rock?</p>



<p>Another surprise to me was that enjoying dance/electronic music, country music, and jazz predicted similar traits: being more group-oriented (e.g., gravitating toward group rather than one-on-one interactions), being more extroverted, and being more spontaneous.</p>



<p>But each of these three groups also stood out uniquely. Enjoying country was associated with being more emotional, enjoying dance/electronic was associated with higher openness to experience, and enjoying jazz was associated with being less attention-seeking than the other two groups.</p>



<p>Enjoyment of both pop music and hip-hop was associated with being more emotional, but pop music enjoyers were more group-oriented, whereas hip-hop music enjoyers were more spontaneous.</p>



<p>All the correlations discussed here are between r=0.3 and r=0.45 in size, so they are moderately large. It would be neat to see whether this generalizes to non-U.S. samples.</p>



<p>You can explore all of these music genre correlations, plus over a million more correlations about humans, for free using PersonalityMap: <a target="_blank" href="https://personalitymap.io/" rel="noreferrer noopener">https://personalitymap.io</a></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on May 23, 2025, and first appeared on my website on May 29, 2025.</em></p>



]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2025/05/does-the-music-you-listen-to-predict-your-personality/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4379</post-id>	</item>
		<item>
		<title>Predictions of extinction are not like other predictions</title>
		<link>https://www.spencergreenberg.com/2025/02/predictions-of-extinction-are-not-like-other-predictions/</link>
					<comments>https://www.spencergreenberg.com/2025/02/predictions-of-extinction-are-not-like-other-predictions/#comments</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Fri, 14 Feb 2025 00:39:32 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[climate change]]></category>
		<category><![CDATA[death]]></category>
		<category><![CDATA[extinction]]></category>
		<category><![CDATA[extinction risk]]></category>
		<category><![CDATA[extinction risks]]></category>
		<category><![CDATA[high stakes]]></category>
		<category><![CDATA[human]]></category>
		<category><![CDATA[nanotech]]></category>
		<category><![CDATA[nuclear war]]></category>
		<category><![CDATA[prediction]]></category>
		<category><![CDATA[predictions]]></category>
		<category><![CDATA[stakes]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=4331</guid>

					<description><![CDATA[Predictions of extinction are not like other predictions for at least two reasons: Why? Regarding point one, reasoning based on track record: Normally, a type of prediction being wrong again and again will lead you to dismiss that type of prediction. For instance, if every year (for some reason), experts predict that your country will [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Predictions of extinction are not like other predictions for at least two reasons:</p>



<ol class="wp-block-list">
<li>You can’t reason based on track record in the same way you can with normal predictions.</li>



<li>The stakes are extremely high. Being wrong on normal predictions rarely matters as much.</li>
</ol>



<p>Why?</p>



<p>Regarding point one, reasoning based on track record:</p>



<p>Normally, a type of prediction being wrong again and again will lead you to dismiss that type of prediction. For instance, if every year (for some reason), experts predict that your country will soon have the highest math scores in the world, and yet each year it is ranked 50th in such scores, eventually, you (rightly) ignore the experts.</p>



<p>However, with extinction risks, this kind of reasoning doesn’t quite work. In all possible universes, those who predict the extinction of their species will be wrong right up until extinction happens. The predictions can, at most, be right only once.</p>



<p>Consider two worlds: one where humans go extinct in 2030 and one where they don’t ever go extinct (or go extinct only much later). What would you observe in 2029 regarding past predictions of extinction in these two worlds?</p>



<p>Well, in <strong>both </strong>worlds you’d observe that all past extinction predictions had failed up until that point. (If anything, I’d anticipate having MORE past extinction predictions fail in the world where extinction happens in 2030 since there would be more evidence of potential extinction in that world, all else equal.)</p>



<p>Therefore, the reasoning that “we’ve had a lot of past extinction predictions and they’ve always failed, therefore extinction is unlikely” is not a good argument &#8211; you’d witness these failed predictions in both such worlds (and perhaps even more of them in the world where extinction happens soon).</p>
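<p><em>This argument can be phrased as a Bayes update: the observation &#8220;all past extinction predictions failed&#8221; is roughly equally likely in both worlds, so it barely moves the posterior. A minimal sketch, with purely hypothetical numbers (nothing here comes from any actual forecast):</em></p>

```python
# Toy Bayes update illustrating the survivorship argument.
# H = "extinction happens in 2030"; the observation, made in 2029, is
# "every past extinction prediction has failed." All numbers are hypothetical.

def posterior(prior, lik_given_h, lik_given_not_h):
    """Bayes' rule for a binary hypothesis H vs. not-H."""
    num = lik_given_h * prior
    return num / (num + lik_given_not_h * (1 - prior))

prior = 0.05  # hypothetical prior probability of H

# Any observer alive in 2029 sees only failed predictions in BOTH worlds,
# so the likelihood of the observation is ~1 under H and under not-H,
# and the posterior stays at the prior (~0.05): no update.
print(posterior(prior, lik_given_h=1.0, lik_given_not_h=1.0))

# A normal prediction track record is different: repeated failure is far
# likelier when the predicted event is not coming, so it lowers the posterior.
print(posterior(prior, lik_given_h=0.2, lik_given_not_h=0.9))
```

<p><em>The work is all in the likelihood ratio: when it equals 1, the data are uninformative, which is exactly the situation with survivorship-filtered extinction predictions.</em></p>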



<p>This makes predictions of extinction a special class of prediction.</p>



<p>To dismiss arguments about extinction risk, it’s necessary to engage with the actual arguments themselves, as they can’t be dismissed as a group due to past failed predictions. While near misses can tell you about the probability of some extinction risks (e.g., times when nuclear war nearly broke out or asteroids nearly struck), failed predictions are not very informative.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Regarding point two, the enormous stakes:</p>



<p>Extinction, most people will agree, would be incredibly bad. For that reason, extinction risks don’t have to be very likely to be worth taking very, very seriously.</p>



<p>In a world where there were millions of distinct, plausible extinction risks, the large number of them would suggest that each one is (a priori) not that likely to end the species, and in such a world, it might be silly to invest much in these kinds of concerns (unless a smaller number of much more likely ones could be identified).</p>



<p>But that’s not the world we live in. There are only around 13 candidate human extinction risks &#8211; and even within this short list, some aren’t really plausible (when considered as a potential cause of literally ALL humans dying out). Here’s the list, in no particular order (if I missed any, let me know):</p>



<ol class="wp-block-list">
<li>Advanced AI technology</li>



<li>Nuclear war or the invention of new destructive weapons</li>



<li>Pathogens (e.g., human-engineered viruses)</li>



<li>Asteroids, extreme solar flares, supernovae, gamma-ray bursts, or other cosmological events</li>



<li>A mega volcano, mega earthquake, dramatic change in the earth’s magnetic field, or another major geological event</li>



<li>Advanced nanotech (e.g., grey goo) or synthetic biology</li>



<li>The second coming of various figures according to different religions, or god(s) or demons ending the world or terminating our species</li>



<li>Simulators (if we’re living in a simulation) ending the world or our species</li>



<li>Aliens from other planets</li>



<li>Runaway climate change/extreme climate shifts, or sudden ecosystem collapse</li>



<li>Physics experiments (e.g., related to vacuum stability) that go wrong or are purposely carried out to end humanity</li>



<li>Universal environmental contaminants turning out to be deadly or to cause infertility</li>



<li>Extreme population decline until no reproduction takes place (e.g., following an event that greatly reduces the world population)</li>
</ol>



<p>Obviously, some of these are much less probable than others. And maybe you think some of these are ridiculous. Okay, cross those out. What about the others?</p>



<p>Given the incredible stakes, the shortness of the list, and humanity’s (in my view, bizarre and irrational) unwillingness to protect its own future, all of these are worth investing much more in than humanity currently does. Obviously, it could be stupid for humanity to invest so much in preventing extinction that it’s seriously impaired. But we invest so little that it’s almost absurd.</p>



<p>I don’t think that predictions of extinction can be easily dismissed, despite all prior such predictions being wrong &#8211; they don’t work like other predictions, and the stakes are much higher.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on February 13, 2025, and first appeared on my website on April 9, 2025.</em></p>



]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2025/02/predictions-of-extinction-are-not-like-other-predictions/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4331</post-id>	</item>
		<item>
		<title>Understanding the Landscape of Viewpoints on the Risks and Benefits of AI</title>
		<link>https://www.spencergreenberg.com/2024/07/understanding-the-landscape-of-viewpoints-on-the-risks-and-benefits-of-ai/</link>
					<comments>https://www.spencergreenberg.com/2024/07/understanding-the-landscape-of-viewpoints-on-the-risks-and-benefits-of-ai/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sun, 28 Jul 2024 00:16:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[existential risk]]></category>
		<category><![CDATA[good governance]]></category>
		<category><![CDATA[hype]]></category>
		<category><![CDATA[long-term risk]]></category>
		<category><![CDATA[longtermism]]></category>
		<category><![CDATA[near-term risk]]></category>
		<category><![CDATA[neartermism]]></category>
		<category><![CDATA[new technologies]]></category>
		<category><![CDATA[predictions]]></category>
		<category><![CDATA[societal risks]]></category>
		<category><![CDATA[stewardship]]></category>
		<category><![CDATA[superintelligence]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=4057</guid>

					<description><![CDATA[I&#8217;ve seen seven main viewpoints on AI and the future from those who spend a lot of time thinking about it: (1) Superintelligence Doomers &#8211; they believe we are likely to build AI that&#8217;s superintelligent (i.e., that surpasses human intelligence in all respects) and that once we do, it will kill or enslave humanity. See: [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p>I&#8217;ve seen seven main viewpoints on AI and the future from those who spend a lot of time thinking about it:</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(1) Superintelligence Doomers &#8211; they believe we are likely to build AI that&#8217;s superintelligent (i.e., that surpasses human intelligence in all respects) and that once we do, it will kill or enslave humanity.</p>



<p>See: Eliezer Yudkowsky</p>



<p>&#8220;The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(2) AI Corrosionists &#8211; they believe AI, while it could be beneficial, has a very substantial likelihood of making things far worse. Unlike doomers, they see most of the risk coming from gradual processes rather than a sudden AI takeover or annihilation: for instance, humans could lose control over the future slowly (by ceding more and more control to AIs that are optimizing for things different from human values), or AI could destabilize aspects of society in ways that make other large risks (like nuclear war) more likely.</p>



<p>See: Paul Christiano</p>



<p>&#8220;If you imagine a society in which almost all of the work is being done by these inhuman systems who want something that&#8217;s significantly at cross purposes, it&#8217;s possible to have social arrangements in which their desires are thwarted, but you&#8217;ve kind of set up a really bad position. And I think the best guess would be that what happens will not be what the humans want to happen, but what the systems who greatly outnumber us want to happen.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(3) Near Risk Doomers &#8211; they believe AI will have very bad effects (e.g., increasing unfairness, authoritarianism, climate change, unemployment, or concentration of power) but that we&#8217;re not going to build superintelligence anytime soon.</p>



<p>See: Cathy O&#8217;Neil</p>



<p>&#8220;The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(4) Current Paradigm Doubters &#8211; they believe the current AI paradigm is overhyped and isn&#8217;t going to change things all that much (some of them see the current paradigm as net negative; others see it as positive but only as one useful, overhyped technology among many). Some, like Marcus, hope that future paradigms might be more trustworthy and beneficial.</p>



<p>See: Gary Marcus</p>



<p>&#8220;What a strange world… All the major AI companies spending billions producing almost exactly the same results using almost exactly the same data using almost exactly the same technology, all flawed in almost exactly the same ways. Historians gonna be scratching their heads&#8230;The only way we will move significantly forward is to develop new architectures—likely neurosymbolic—that are less tied to the idiosyncrasies of specific training set.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(5) AI Stewardists &#8211; they believe AI advancement is important to humanity&#8217;s future, but it&#8217;s not necessarily going to be good or bad: it will be what we make of it, so it should be developed thoughtfully based on what we want to achieve.</p>



<p>See: Kevin Kelly</p>



<p>&#8220;AI could just as well stand for &#8216;alien intelligence.&#8217; We have no certainty we&#8217;ll contact extraterrestrial beings in the next 200 years, but we have almost 100 percent certainty that we&#8217;ll manufacture an alien intelligence by then. When we face these synthetic aliens, we&#8217;ll encounter the same benefits and challenges that we expect from contact with ET. They will force us to reevaluate our roles, our beliefs, our goals, our identity. What are humans for?&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(6) Near Benefit Boosters &#8211; they believe AI will be very useful, impactful and important as a technology, and they also believe superintelligence is a ridiculous thing to worry about.</p>



<p>See: Yann LeCun</p>



<p>&#8220;AI is intrinsically good, because the effect of AI is to make people smarter&#8230;.AI is an amplifier of human intelligence and when people are smarter, better things happen: people are more productive, happier and the economy thrives.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>(7) Superintelligence Boosters &#8211; they believe we are likely to build superintelligent AI and that it will usher in an incredible and positive new era.</p>



<p>See: Ray Kurzweil</p>



<p>&#8220;By the time of the Singularity, there won&#8217;t be a distinction between humans and technology. This is not because humans will have become what we think of as machines today, but rather machines will have progressed to be like humans and beyond. Technology will be the metaphorical opposable thumb that enables our next step in evolution.&#8221;</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>How do we best organize the positions on risks from AI?</p>






<p>I think the two most informative spectrums to think about are</p>



<p>(1) How *substantial* the near-term impacts of AI are expected to be </p>



<p>and</p>



<p>(2) Whether the effects of AI are likely to be *good* or *bad*</p>



<p>The image shown is my attempt to place these different thinkers who talk about AI on this two-axis system. Note, though, that this reflects only my best guesses: some of these thinkers have very nuanced views, and I&#8217;m not certain that I&#8217;m placing each of them in the right place.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on July 27, 2024, and first appeared on my website on August 4, 2024.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2024/07/understanding-the-landscape-of-viewpoints-on-the-risks-and-benefits-of-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4057</post-id>	</item>
		<item>
		<title>The many forms of belief</title>
		<link>https://www.spencergreenberg.com/2020/10/the-many-forms-of-belief/</link>
					<comments>https://www.spencergreenberg.com/2020/10/the-many-forms-of-belief/#comments</comments>
		
		<dc:creator><![CDATA[Admin]]></dc:creator>
		<pubDate>Sun, 11 Oct 2020 20:25:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[actions]]></category>
		<category><![CDATA[anticipation]]></category>
		<category><![CDATA[association]]></category>
		<category><![CDATA[belief]]></category>
		<category><![CDATA[belief elicitation]]></category>
		<category><![CDATA[desirability bias]]></category>
		<category><![CDATA[emotions]]></category>
		<category><![CDATA[endorsed beliefs]]></category>
		<category><![CDATA[generative models]]></category>
		<category><![CDATA[implication]]></category>
		<category><![CDATA[inferences]]></category>
		<category><![CDATA[intuition]]></category>
		<category><![CDATA[memorization]]></category>
		<category><![CDATA[predictions]]></category>
		<category><![CDATA[simulation]]></category>
		<category><![CDATA[transient beliefs]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2579</guid>

					<description><![CDATA[What does it mean to believe? We often say things like &#8220;I believe&#8230;&#8221; and &#8220;they think that&#8230;&#8221; But what do we really mean by a &#8220;belief&#8221;? It&#8217;s notoriously tricky to define. For starters, we sometimes think of beliefs in binaries (true vs. false) and other times in probabilities (a 90% chance of coming true). We [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>What does it mean to believe?</p>



<p>We often say things like &#8220;I believe&#8230;&#8221; and &#8220;they think that&#8230;&#8221;</p>



<p>But what do we really mean by a &#8220;belief&#8221;? It&#8217;s notoriously tricky to define.</p>



<p>For starters, we sometimes think of beliefs in binaries (true vs. false) and other times in probabilities (a 90% chance of coming true). We sometimes would be willing to bet on our beliefs (&#8220;I&#8217;ll bet you $100 that New York City is not the capital of New York State&#8221;), and other times we wouldn&#8217;t be willing to bet (e.g., that your favorite team will win the Super Bowl, even though you may feel confident about it). It seems, sometimes, like we don&#8217;t fully believe our beliefs (e.g., we say, &#8220;I know it&#8217;s not dangerous,&#8221; and mean it, but then act as though it&#8217;s dangerous).</p>



<p>So what&#8217;s going on here? My theory: what makes discussing beliefs so confusing is that there are actually many different mental states we can have that are &#8220;belief-like.&#8221; In other words, beliefs are not one type of thing. They come in many forms.</p>



<p>We usually lump these divergent forms together, which creates a lot of confusion. At best, we divide them into dichotomies like explicit, cognitive &#8220;beliefs&#8221; vs. automatic &#8220;aliefs,&#8221; which still combine disparate forms.</p>



<p>Below is my ambitious attempt to distinguish all the different belief-like states our minds can have.</p>



<p>I&#8217;m sure I missed some. So, I&#8217;d be very interested to know: what types of beliefs am I missing?</p>



<hr class="wp-block-separator"/>



<p><strong>FORMS OF BELIEF</strong></p>



<hr class="wp-block-separator"/>



<p><strong>Beliefs On Reflection</strong></p>



<hr class="wp-block-separator"/>



<p><strong>Endorsed Belief: </strong>there are some things we&#8217;re willing to sincerely say we believe (or willing to say we&#8217;re X% confident about), and this is a form of belief &#8211; the things we &#8220;believe&#8221; that we believe.</p>



<p>Ex: you write as a social media post, &#8220;I believe healthcare should be free for everyone,&#8221; and there is no doubt in your mind that you really do endorse this.</p>



<hr class="wp-block-separator"/>



<p><strong>Simulation Belief: </strong>we can ask ourselves hypotheticals like: &#8220;if I suddenly took my clothes off in the street, how would people react?&#8221; Our brains will then simulate the scenario and provide the best guess of what will happen. This is a form of belief about what would happen.</p>



<p>Ex: you consider how your friend would react if you told them you don&#8217;t like their haircut, and your belief is that they would be angry at you.</p>



<hr class="wp-block-separator"/>



<p><strong>Automatic Beliefs</strong></p>



<hr class="wp-block-separator"/>



<p><strong>Anticipation Belief:</strong> our brains constantly predict what is about to happen (based on current sensory input, knowledge, and what happened recently). When these anticipations are far off from what actually happens, we feel surprised. These are a kind of &#8220;belief&#8221; about what is about to happen in the next moment.</p>



<p>Ex: you knock over a full cup of coffee and anticipate it will spill onto the ground. If it doesn&#8217;t spill, you will be surprised.</p>



<hr class="wp-block-separator"/>



<p><strong>Sensed Belief: </strong>when someone makes a claim, such as &#8220;most pirates are ninjas,&#8221; we have a felt sense of whether we believe that statement or not (and how strongly/confidently we believe it). This is just a feeling, but it is a feeling about our level of &#8220;belief.&#8221;</p>



<p>Ex: you say to yourself, &#8220;I believe I am a good person,&#8221; and then you consider that statement carefully to see if you really FEEL like you believe it.</p>



<p>This idea relates to techniques like Focusing, where you learn to pay attention to the &#8220;felt sense&#8221; of whether a statement resonates with you.</p>



<hr class="wp-block-separator"/>



<p><strong>Emotional Belief: </strong>our emotions activate in specific sorts of situations (e.g., risk -&gt; anxiety, contamination -&gt; disgust). So if our emotions are activated, they can be interpreted as a form of &#8220;Emotional Belief.&#8221; If I&#8217;m anxious about X, on some level, I believe X could go badly.</p>



<p>Ex: you know it&#8217;s totally safe to walk across the balance beam suspended above the pit of foam cubes, but your heart is pounding in your chest, and your sympathetic nervous system seems convinced you&#8217;re about to plummet to your death.</p>



<hr class="wp-block-separator"/>



<p><strong>Intuitive Moral Belief: </strong>we have feelings about whether most things are good, neutral, or bad (and to what degree). For instance, you might like tea and hate coffee. You might like Biden and dislike Trump. These are beliefs of a sort &#8211; about what&#8217;s good.</p>



<p>Ex: are guns good or bad? What about nuclear power? Youth? Avocados? Spiders? If you pay close attention, you&#8217;ll probably realize you have an automatic sense of how good or bad these things are.</p>



<hr class="wp-block-separator"/>



<p><strong>Self-serving Belief: </strong>sometimes, we want something to be true badly, and we won&#8217;t even allow the possibility that it&#8217;s false (e.g., because the thought that it could be false gives us pain, and so we immediately flinch away from that thought).</p>



<p>Ex: some people might say, &#8220;my partner has never cheated on me,&#8221; without even entertaining the possibility that it might be false.</p>



<hr class="wp-block-separator"/>



<p><strong>Association Belief: </strong>we associate ideas with each other. For instance, we might associate milk with health (because of those &#8220;got milk&#8221; commercials) and Segways with nerds. These are a sort of implicit belief about the nature of these things (health effects / who uses them). (These beliefs can also be thought of as automatic or implicit memory-based beliefs.)</p>



<p>Ex: do you associate cities with smog? Do you associate French people with an enjoyment of food?</p>



<hr class="wp-block-separator"/>



<p><strong>Implied Belief: </strong>you have probably never considered whether 384883828382553 is a number, but you already believed it is, in the sense that you have beliefs about what makes something a number, and those beliefs imply that it is one. Beliefs can imply other beliefs.</p>



<p>Ex: if you believe all men are mortal, and you believe Elon Musk is a man, you also, in a sense, believe that Elon Musk is mortal, even if you&#8217;ve never thought about the mortality of Elon Musk.</p>



<hr class="wp-block-separator"/>



<p><strong>Memory-based Beliefs</strong></p>



<hr class="wp-block-separator"/>



<p><strong>Autobiographical Belief: </strong>suppose someone asks you if you have ever eaten Flaming Hot Cheetos. If you can recall an instance of doing so, you&#8217;ll say you have eaten them; otherwise, you won&#8217;t. One type of belief is what we can recall being true. Another way this can manifest: if we&#8217;re considering a question (like &#8220;Is Sally a flaky friend?&#8221;), we may try to recall an instance of her being flaky; if we can, we conclude she is flaky, whereas if we can&#8217;t, we are more likely to conclude she is not.</p>



<p>Ex: have you called a friend on their cell phone in the past seven days? If you can recall a case of doing so, you&#8217;ll believe it&#8217;s true. If not, you&#8217;ll very likely believe it&#8217;s not.</p>



<hr class="wp-block-separator"/>



<p><strong>Memorized Belief: </strong>you can believe a statement in the sense of having memorized that the statement is supposed to be &#8220;true.&#8221; For instance, if you&#8217;re taught from age three that &#8220;Morloc is the sky god,&#8221; you will say &#8220;Morloc is the sky god&#8221; and believe that you believe it, even if you don&#8217;t know what it means.</p>



<p>Ex: TV rots your brain. Opposites attract. Do you believe these statements are &#8220;true&#8221; simply because you heard they were, before you ever had a chance to reflect on whether they really are?</p>



<hr class="wp-block-separator"/>



<p><strong>Elicited Beliefs</strong></p>



<hr class="wp-block-separator"/>



<p><strong>Generated Belief: </strong>when we are asked a question (by ourselves or others), we usually will generate an answer (e.g., &#8220;Why did you do that?&#8221; -&gt; &#8220;Because I was angry&#8221;). This answer is a kind of belief related to the query. On reflection, though, we may decide we don&#8217;t believe it.</p>



<p>Ex: your friend asks you whether you want to go camping. You immediately blurt out &#8220;yes&#8221; &#8211; but then you immediately start considering whether or not you would actually enjoy camping.</p>



<hr class="wp-block-separator"/>



<p><strong>Queried Belief:</strong> suppose you are shown a picture of a person and asked to &#8220;predict where this person is from.&#8221; A guess will likely appear in your head. In a sense, you have a belief that this person is from this place (arguably even if you have not yet run that mental query).</p>



<p>Ex: What country in the world do you think has the smallest landmass? Your brain may well generate a guess upon hearing that query.</p>



<hr class="wp-block-separator"/>



<p><strong>Reactive Belief:</strong> we may find that a thought pops into our head (e.g., &#8220;Joe is a jerk&#8221;), and some people may view merely having this thought as a form of belief. Of course, upon reflection, we may or may not agree with the thought.</p>



<p>Ex: upon seeing Marty throw the bowling ball into the next lane by accident, you may have the thought, &#8220;Marty is awful at bowling.&#8221;</p>



<hr class="wp-block-separator"/>



<p><strong>Behavioral Beliefs</strong></p>



<hr class="wp-block-separator"/>



<p><strong>Enacted Belief: </strong>sometimes, we believe something in order to create a state of affairs that becomes true by virtue of our holding the belief. For instance, we may think that &#8220;good people believe X,&#8221; so we try to get ourselves to believe X. Or we may think &#8220;my company will succeed if I believe it strongly enough,&#8221; and so we push ourselves to believe we will succeed in hopes of making it true (e.g., dismissing any thoughts that we will fail, and trying to focus just on evidence in support of our future success).</p>



<p>Ex: someone who believes deeply in the placebo effect might do everything they can to believe the medicine will make them feel better, with the theory that, if they believe hard enough, it will work &#8211; which means then that they will be justified in that belief. (H/T&nbsp;Alli Smith&nbsp;and&nbsp;Annie Kotowicz.)</p>



<hr class="wp-block-separator"/>



<p><strong>As-if Belief:</strong> sometimes we believe something in the sense of simply acting as though it is true, whether or not we would say we are highly confident in it. For instance, we might have heard a rumor that our neighbor once went to prison for molesting a child. We are considering how to act towards this neighbor. We may conclude that, even though there is uncertainty (since it&#8217;s just a rumor), we will act &#8220;as if&#8221; it is true &#8211; even though, if pressed on the topic, we will say we are uncertain whether it is true. Or we could decide the opposite (to act as though it is NOT true, on the principle that we shouldn&#8217;t act as if someone committed a criminal act unless we have strong evidence &#8211; like the principle of being &#8220;innocent until proven guilty&#8221;).&nbsp;</p>



<p>Sometimes these &#8220;as-if&#8221; beliefs are much deeper; for instance, we might act &#8220;as-if&#8221; induction works, even if we know we can&#8217;t provide strong arguments in favor of it, or we might act &#8220;as-if&#8221; god doesn&#8217;t exist, even if we aren&#8217;t really sure about it. Or we might act &#8220;as-if&#8221; we have a death wish (e.g., by engaging in extremely risky behavior), even though we don&#8217;t think we have any desire for self-harm. H/T to Pepe Le Pew for inspiring this belief type.</p>



<hr class="wp-block-separator"/>



<p>There are many forms of belief. In plenty of cases, it&#8217;s unnecessary to differentiate between them, but on complex questions of human psychology, we may need to get granular with the idea of &#8220;belief&#8221; to really understand what&#8217;s happening.</p>



<p>Consider advanced cases like these, which are hard to make sense of without a nuanced perspective on the different forms of belief:</p>



<p>• Why do people who are absolutely convinced they&#8217;re going to heaven get scared of dying?</p>



<p>• Why do people claim that Trump or Biden is definitely going to win but then refuse to make a bet on that claim (even though normally they enjoy betting)?</p>



<p>• Why do we say things like &#8220;I believe in love,&#8221; even though, when asked what it means, we may struggle to explain it, and we may not have thought in detail about its meaning before?</p>



<p><em>This essay was first written on October 11th, 2020, and first appeared on this site on January 14th, 2022.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2020/10/the-many-forms-of-belief/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2579</post-id>	</item>
		<item>
		<title>Obvious Defaults Perform The Best</title>
		<link>https://www.spencergreenberg.com/2017/06/obvious-defaults-perform-the-best/</link>
					<comments>https://www.spencergreenberg.com/2017/06/obvious-defaults-perform-the-best/#respond</comments>
		
		<dc:creator><![CDATA[Spencer]]></dc:creator>
		<pubDate>Sun, 11 Jun 2017 01:09:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[behavior]]></category>
		<category><![CDATA[charity]]></category>
		<category><![CDATA[domain]]></category>
		<category><![CDATA[domains]]></category>
		<category><![CDATA[GiveWell]]></category>
		<category><![CDATA[Investing]]></category>
		<category><![CDATA[methods]]></category>
		<category><![CDATA[motivators]]></category>
		<category><![CDATA[predictions]]></category>
		<category><![CDATA[simple]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=4303</guid>

					<description><![CDATA[It&#8217;s surprising how often, in highly complex domains where we are trying to figure out what to do, an obvious or simple default can perform extremely well. This is sometimes even true when the defaults are clearly not optimal. Here are simple defaults in four complex domains that can be surprisingly hard to outperform: 1. [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>It&#8217;s surprising how often, in highly complex domains where we are trying to figure out what to do, an obvious or simple default can perform extremely well. This is sometimes even true when the defaults are clearly not optimal.</p>



<p>Here are simple defaults in four complex domains that can be surprisingly hard to outperform:</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>1. Charity: give money to the poorest non-drug-addicted people you can find, and let them do whatever they want with the money.</p>






<p>GiveWell, which has spent years looking for the most effective, evidence-based giving opportunities, has concluded that GiveDirectly is among the best options it can find. But all GiveDirectly does is use smart ways to locate extremely poor people internationally and give them money.</p>



<p>Meanwhile, a tremendous amount of effort goes into developing clever interventions that provide goods or services that people struggle to provide for themselves. Yet most of these don&#8217;t seem to perform that well, in terms of outcomes, compared to simply giving money to very poor people.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>2. Investing: put some money in long-term government bonds and the rest in the stock market (the ratio determined by risk tolerance and need for liquidity).</p>






<p>The significant majority of people who attempt to do better than this have historically underperformed this simple strategy (due largely to trading commissions, taxes, management fees, the existence of a small number of highly skilled players, and the fact that the average performance of market participants is the same as the average return of the market).</p>



<p>Yet, the market is not perfectly &#8220;efficient,&#8221; and there are many other asset classes beyond stocks and long-term government bonds that one could mix in.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>3. Predictions: train a simple linear regression model based on historical data to predict the variable of interest.</p>






<p>Evidence suggests that linear regression models (ordinary &#8220;least squares&#8221; regression) beat human experts at many types of forecasting (e.g., see Paul Meehl&#8217;s work on statistical prediction). Additionally, a simple linear regression quite often does as well as (or nearly as well as) attempts to use complex models on data-driven prediction problems.</p>



<p>And yet, linear regression is not anywhere close to the cutting-edge methods for making predictions.</p>
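<p>To make this default concrete, here is a minimal sketch of an ordinary least-squares fit, using hypothetical, randomly generated data (the predictors, outcome, and coefficients below are invented for illustration):</p>

```python
import numpy as np

# Hypothetical data: 200 historical observations of two predictors and
# an outcome that depends on them linearly, plus a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# The "simple default": fit ordinary least squares. Prepending a column
# of ones gives the model an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predicting new cases is just a dot product with the fitted coefficients.
predictions = A @ coefs
```

<p>Despite having no tuning knobs at all, a fit like this recovers the underlying linear relationship closely and is often the baseline that fancier models are measured against.</p>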



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>4. Behavior: align people&#8217;s monetary incentives with the behavior that is desirable.</p>






<p>It&#8217;s remarkable how often the way people in a field behave seems to end up aligning with their monetary incentives (e.g., think about cases where employees at banks create tons of fake accounts on behalf of unsuspecting clients because of a monetary incentive to do so). As famed investor Charlie Munger put it, &#8220;I think I&#8217;ve been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I&#8217;ve underestimated it. And never a year passes, but I get some surprise that pushes my limit a little farther.&#8221; Part of the power of monetary incentives comes from people directly responding to the money itself, but another part is that those who aren&#8217;t responsive to the existing monetary incentives tend to be squeezed out of a field, advance less quickly, or don&#8217;t like the environment and so quit of their own accord. And since monetary incentives are usually more tangible than other forms of rewards, there is often a more reliable feedback loop for those trying to optimize for money than for other goods.</p>



<p>And yet, clearly, there are many motivators that people have beyond just money and work achievement, and money is sometimes a problematic or counterproductive motivator.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Sometimes, simple methods work almost unbelievably well.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on June 10, 2017, and first appeared on my website on March 12, 2025.</em></p>



]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2017/06/obvious-defaults-perform-the-best/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4303</post-id>	</item>
	</channel>
</rss>
