Does money buy happiness, according to science?

By Spencer Greenberg and Amber Dawn Ace

This piece first appeared on ClearerThinking.org on February 28, 2024, was edited on February 29, 2024, and appeared here with minor edits on March 27, 2024.

Does money buy happiness? Intuitively, the answer is yes: common sense tells us that poverty and hardship make people unhappy. We can use money to buy a lot of things that might make us happier – things like a nicer home, fancier vacations, education for our children, or just the oppor...

I’m an extreme non-credentialist – what about you?

Photo by Good Free Photos on Unsplash
I'm an extreme (>99th percentile) non-credentialist. Does that mean if I find out someone has a nutrition Ph.D., then I don't think they know more about nutrition than most random people? Of course not. Credentials are evidence of what someone knows (e.g., having a nutrition Ph.D. is evidence that you have nutrition knowledge). But part of what makes me an extreme non-credentialist is that if I spend an hour watching someone with a nutrition Ph.D. debate a completely self-taught person, a...

How great is the U.S., really?

This piece was coauthored with Travis Manuel. This is a cross-post from the Clearer Thinking blog. According to YouGov polling, 41% of people in the United States think that it is the greatest country in the world. Others see the U.S. as a place full of arrogance, violence, and inequality. So, what's the truth?  The truth is that there isn't a single notion of what makes something the "best." To explore how great (or not) America is, we'll start by looking at the question from mu...

Five rules for good science (and how they can help you spot bad science)

Image by S. Widua on Unsplash
I have a few rules that I aim to follow when I run studies. By considering what it looks like when these rules are inverted, they can also help you spot which studies are not reliable.

(1) Don't use a net with big holes to catch a small fish. That means you should use a large enough sample size (e.g., number of study participants) to reliably detect whatever effects you're looking for!

(2) Don't use calculus to help you assemble IKEA furniture. That means...
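Rule (1) can be made concrete with a quick back-of-the-envelope power calculation. Here's a minimal sketch, assuming a two-group comparison at the conventional α = 0.05 and 80% power; the effect sizes are illustrative and the function name is mine:

```python
import math
from statistics import NormalDist  # stdlib, Python 3.8+

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants per group for a two-sample comparison,
    via the normal approximation: n ≈ 2 * ((z_{1-α/2} + z_{power}) / d)²."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return math.ceil(n)

# A "small fish" (effect size d = 0.2) needs a far bigger net
# than a "big fish" (d = 0.8):
print(n_per_group(0.2))  # 393 per group
print(n_per_group(0.8))  # 25 per group
```

Note that halving the effect size roughly quadruples the required sample size, which is why studies hunting small effects need nets with very fine mesh.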

Three reasons to be cautious when reading data-driven “explanations”

Photo by Sunder Muthukumaran on Unsplash
Did you know that, fairly often, there are multiple extremely different stories you can tell about identical data, none of which are false? In other words, the mapping from statistical results to true stories about those results is not unique. This leads to a lot of confusion, and it also implies that claims about "the reason" behind a complex social phenomenon should be interpreted with caution. Here are three common situations in which this happens, each illustrated with realistic political ...
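One classic mechanism by which identical data supports opposite true stories is aggregation (Simpson's paradox). A minimal sketch, with made-up counts chosen for illustration (they mirror a well-known kidney-stone example):

```python
# Illustrative counts (successes, total) for a hypothetical treatment study.
groups = {
    "mild":   {"treated": (81, 87),   "control": (234, 270)},
    "severe": {"treated": (192, 263), "control": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Story 1 (true): within EVERY subgroup, "treated" has the higher success rate.
for name, g in groups.items():
    assert rate(*g["treated"]) > rate(*g["control"])

# Story 2 (also true): pooled over both subgroups, "control" does better.
def pooled(arm):
    return tuple(sum(g[arm][i] for g in groups.values()) for i in (0, 1))

ts, tt = pooled("treated")   # (273, 350)
cs, ct = pooled("control")   # (289, 350)
assert rate(ts, tt) < rate(cs, ct)  # 78.0% vs ~82.6%
```

Both stories are accurate summaries of the same table; which one is "the reason" depends on causal assumptions the numbers alone can't settle.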

How to avoid feeding anti-science sentiments

Photo by Nila Racigan on Pexels
A major mistake scientists sometimes make in public communication: they state things science isn't sure about as confidently as things it is sure about. This confuses the public and undermines trust in science and scientists.

Some interesting examples:

1) As COVID-19 spread early in the pandemic, epidemiologists confidently stated many true things about it that were scientifically measured (e.g., rate of spread). Some of them were also equally confidently stating things that were just spec...

Importance Hacking: a major (yet rarely-discussed) problem in science

Image created using the A.I. DALL·E
I first published this post on the Clearer Thinking blog on December 19, 2022, and first cross-posted it to this site on January 21, 2023. You have probably heard the phrase "replication crisis." It refers to the grim fact that, in a number of fields of science, when researchers attempt to replicate previously published studies, they fairly often don't get the same results. The magnitude of the problem depends on the field, but in psychology, it seems that something like 40% of studies i...

How can we look at the same dataset and come to wildly different conclusions?

Image by Ludomił Sawicki on Unsplash
Recently, a study came out in which 73 research teams independently analyzed the same data, all trying to test the same hypothesis. Seventy-one of the teams produced numerical results across a total of 1,253 models. Across these 1,253 different ways of looking at the data, about 58% showed no effect, 17% showed a positive effect, and 25% showed a negative effect. But that's not even the oddest part. The oddest part is that, despite a heroic attempt to do so, the study authors failed to...

Soldier Altruists vs. Scout Altruists

Photo by Aramudi on Unsplash
There is an important division between people who want to improve the world that few seem to be aware of. Inspired by Julia Galef's new book (The Scout Mindset), I'll call this division: Soldier Altruists vs. Scout Altruists.

1. Soldier Altruists think it's obvious how to improve the world and that we just need to execute those obvious steps. They see the barriers to a better world as: (i) not enough people taking action (e.g., due to ignorance, selfishness, or propaganda), and ...

It can be shockingly hard just to understand three variables

Image by Ayşenur Şahin on Unsplash
In science (and when developing hypotheses more generally), it is very common to come across situations where a variable of interest (let’s call this the dependent variable, “Y”) is strongly correlated with at least two other variables (let’s call them “A” and “B”). Here are some examples:

If you’re a psychology researcher investigating possible causes of depression (Y), you may have trouble disentangling the effects of poor sleep quality (A) and anxiety (B), both of which tend to be corre...