
People May Value Universal Happiness And Reduction Of Suffering More Than They Realize

I have a number of intrinsic values, but two of my most important are happiness and the lack of suffering for conscious beings. While these are fairly common intrinsic values, I suspect many people actually value them more than they realize. In other words, upon careful reflection, many people would discover that happiness and lack of suffering are stronger intrinsic values for them than they were previously aware of.

With that in mind, here are seven thought experiments related to happiness and suffering that might make you see your intrinsic values a bit differently:

— we don’t necessarily know our values —

Unfortunately, our deepest values are not something we automatically know about ourselves. The conscious side of our mind doesn’t have direct access to the rest of our mind. And much of what we care about lies in the subconscious, meaning that our explicit beliefs about our values may not be comprehensive or even accurate. So this at least opens the possibility that we might subconsciously value increasing strangers’ well-being more than we realize.

— our values are affected by our beliefs —

Some of what we value hinges on our beliefs about what’s true. And so if some of our relevant beliefs are false, or we haven’t fully explored all the implications of those beliefs (e.g., two things we believe imply a third thing but we haven’t realized that), then what we think we value may be, in a certain sense, “wrong”. So this at least opens the possibility that we might hold beliefs that are false or that contradict each other, such that, once they are corrected or the contradictions are resolved, we may end up caring more about increasing the well-being of strangers than we think we do now.

— our understanding of our values evolves —

We figure out our own values over time as we carefully introspect, discuss our values with others, compare options, notice and resolve contradictions, refine our understanding of the truth, flesh out the implications of what we already think is true, and infer things about ourselves from our own reactions. Hence, it is not that strange to think that our understanding of our values may change as we engage in reflection.

— a growing ember of classical utilitarianism —

So we may not fully understand what we value.

I am proposing that, through thought experiments about values, carefully considered and reflected upon, quite a lot of people may realize that they care more about increasing happiness and reducing suffering than they had originally thought. That is, many people are *partly* classical utilitarians in their values, even if they haven’t realized it, and thought experiments can expose this.

— the thought experiments —

Warning: references to intense suffering and very difficult tradeoffs

(1) Suffering is bad, and not just for me

Remember that time when you felt really intense physical suffering (e.g., maybe you had a really nasty stomach flu)? Don’t dwell on that time, because I don’t want you to suffer now, but remember it just for a moment. Remember how much that suffering sucked?

Now take a few seconds to imagine a stranger. Someone you’ve never met and never will meet, but perhaps you passed them on the street at some point in your life. Take a moment to picture their face.

Now, suppose that right now this stranger is suffering in that same exact way that you recalled yourself suffering a moment ago. Assume this person is not someone who has done something terrible to deserve that suffering.

How do you feel about a state of the world where this stranger is suffering? Contrast it to a state of the world where that person is happy. I bet you think the latter world is better than the former.

I ran a survey asking people about their intrinsic values, that is, those things they value that they would continue to value even if no other consequences occurred as a result of them. In it, 49% of respondents (from the general U.S. Mechanical Turk population who seemed to understand the question) reported that “people I don’t know suffer less than they do normally” is an intrinsic value, and 50% reported that “people I don’t know feel happy” is an intrinsic value.

It’s tough to measure people’s intrinsic values, and this is not a population that is fully representative of the U.S. population, so the exact numbers should be taken with a grain of salt. But these results suggest to me that many people do care about the suffering of strangers.

But now, the next question is, what properties should your caring about strangers have?

(2) Your friends care about the suffering of their friends

You presumably want the world to contain more of what your friends value (and less of what they disvalue) insofar as these values don’t conflict with your own.

Well, there’s a very good chance that one of the things your friends value is that their friends don’t suffer. Another thing your friends probably value is that their own friends get the things they value too, which presumably includes not wanting the friends of their friends (who are the friends of your friends’ friends) to suffer.

In other words, just by caring about the values of your friends, you may also care about the suffering of a whole host of other strangers. Not necessarily all strangers, but a lot of people you will never meet.

(3) More suffering is worse (a.k.a. scope sensitivity)

Suppose that 1 innocent person experiences a painful electric shock for one hour. How bad do you feel that is? Now suppose that, instead of that, 100 innocent people each experience the same electric shock for one hour. How much worse does that seem to you? Take a moment to consider it.

Now 10,000 people. How bad is that? Now 1,000,000 people. How bad is that?

At first, you may feel, on a raw gut level, that 1,000,000 people suffering is not that much worse than 1 person suffering. But are you really taking into account how many people 1,000,000 is? That’s about the entire population of San Francisco.

Notice how, when you really think about it, and you really try to grasp the sheer scale of these large numbers, 1,000,000 innocent people each experiencing a painful electric shock for one hour is way, way, way worse than 1 person experiencing it. Not just, say, twice as bad. But MUCH worse.

That implies that, for instance, eradicating a common and horribly debilitating disease that ten million people would otherwise get is not just a little bit more valuable than helping, say, 1000 people live slightly easier lives. It’s way, way, way more valuable!

I’m not saying you necessarily value a reduction in 1 million units of suffering as being 1 million times more valuable than a reduction in one unit of suffering, just that you probably do think it’s MUCH more valuable.

(4) Selfishness does not dominate

What’s the thing you value most in the world? Your life, maybe? Or your happiness? Or maybe something involving another person? My guess is that no matter how much you value this, there is an amount of suffering you’d be willing to give this up to alleviate.

For instance, if you had to give up your life to prevent all future suffering on earth, I bet you would do it, as terrible and unfair a choice as it would be to make.

(5) We should help suffering strangers when it is easy (a version of the famous drowning child thought experiment that Peter Singer has popularized)

Suppose a stranger you’re walking behind suddenly teeters and then collapses in front of you. The person is now lying on the ground, clearly in tremendous pain. You are the only person nearby.

I think most of us feel that even though we didn’t cause this person to be ill, we still have a moral obligation to try to help them. That is, (a) not being the cause of suffering doesn’t make us totally off the hook with regard to trying to relieve that suffering.

Furthermore, suppose that it would be a small inconvenience for us to help this person (e.g., we might have to show up 15 minutes late to a fairly important work meeting). I think most of us would still help this person (and would feel that it is the right thing to do). If true, that suggests that (b), if the size of the potential reduction of suffering to another person is much greater than our own loss by our helping them, we probably should help.

Finally, suppose that instead of this being a stranger right in front of us, we imagine that this is a stranger whom we have just accidentally Skype called (by entering our friend’s user ID incorrectly). Assuming we don’t believe the person on the other end is faking, shouldn’t we still try to figure out some way to help this person (assuming it is feasible), even though they are far away? Of course, if we have no way to help them, we obviously have no obligation to help. But if we can think of an easy way to help, shouldn’t we do it? This suggests that (c) our obligation to help doesn’t depend on how far away someone is, only on our ability to help that person.

We must then remember, of course, that there are people we could help around the world at little inconvenience to ourselves.

Even if you agree with (a), (b), and (c), that doesn’t mean that you think you should devote all your time and money to helping people who are suffering. But if you do agree with those points, then I suspect your value system tells you that you should expend at least some of your resources helping reduce suffering in others, if you have the means to do so without too much sacrifice.

(6) Other values may seem to diminish when happiness is even slightly reduced as a consequence of them

Suppose that you happen to have found out that (through no action on your part) certain people have a false belief about a certain topic. Furthermore, you know they would believe you if you corrected this belief.

The problem is that these people would all be slightly less happy if they knew the truth about this thing, and in fact, nobody would benefit in any way from this truth being known.

Would you tell these people? Well, you may think truth is important (I do too), but you may feel that it substantially takes the wind out of the sails of truth if all people involved are less happy because of it, and nobody benefits. I think in this case, some people will say, “What is the point of the truth if everyone suffers slightly more because of it?” In other words, they might feel the value of truth is reduced to almost nothing.

This isn’t just about truth. For instance, you can do a version of this thought experiment about equality (what if, in a particular group of people, you could make the group more equal in some dimension, but every single member of the group would be slightly less happy as a result?). Or you can do it for almost any other value.

My guess is that these other values seem quite a bit less valuable (and perhaps to some not even valuable at all) when everyone is slightly less happy as a consequence, highlighting the potential importance of happiness in your value system.

Note that you may not necessarily feel this property is symmetrical with other values. For instance, suppose that someone reduces suffering a significant amount, but in doing so causes the people involved in the situation to have slightly less accurate beliefs. You may not feel that the slight reduction in accurate beliefs makes the reduction in suffering itself any less valuable.

(7) We can at least agree on suffering

Some people like apples and others like oranges. Some want to spread atheism, and others want to spread theism. Some people think you should obey authorities, while others value freedom of thought. But one of the few dimensions we are just about all similar on is that we don’t want to suffer ourselves, and we don’t want the people we love to suffer.

There are perhaps exceptions (e.g., Christopher Hitchens claimed Mother Teresa believed suffering to be at least sometimes good, quoting her as saying “I think it is very beautiful for the poor to accept their lot, to share it with the passion of Christ. I think the world is being much helped by the suffering of the poor people.”). I’m not sure what she meant by that, or whether she would apply it to her own suffering or that of her loved ones, but she is a possible exception.

That being said, though, disagreement on the badness of suffering seems really rare. Nearly everyone seems to find suffering bad, at least when it happens to themselves or their loved ones.

So if we all had to work as a species to reduce one thing, suffering seems like a pretty good contender. It’s hard to think of another thing we all dislike more.

— final thoughts —

Taken together, these thought experiments suggest (insofar as you buy into them) that you may believe:

(1) Suffering is bad when it happens to strangers

(2) You at least somewhat care about the suffering of many strangers by virtue of caring about the values of those people you care about

(3) More suffering of strangers is worse than less, and way, way more suffering is much worse still

(4) Your own self-interest is not more valuable than the potential for reducing all the suffering in the world

(5) We should put at least a little effort into reducing the suffering of strangers if it’s not too costly for us to do so, and we should not care whether those strangers are far away or near

(6) Most other values don’t seem as great if the result of producing them is to cause everyone involved to suffer slightly more, with no one benefiting, and these other values may even seem to lose their value in these cases

(7) We can all at least agree that suffering is bad and work together to reduce it

These points are not the same as classical utilitarianism, but they point in roughly the same direction, I think. And anecdotally, some people’s ethical views seem to be significantly affected by thought experiments like these (though of course we can’t be sure whether the experiments are revealing their deeper values or actually reshaping those values).

I don’t think that increasing the happiness of and/or reducing the suffering of conscious beings is the ONLY thing you care about. Nor do I think you SHOULD only care about those things.

But perhaps these thought experiments will make you realize that you care more about them than you thought you did, or that you’re more of a classic utilitarian than you realized.


This piece was first written on June 2, 2018, and first appeared on my website on December 2, 2025.


