Agencies Are Not Agents

People will often debate questions like “why did the U.S. invade Iraq?” One group claims it was due to a fear of nuclear weapons. Another group claims that the action was oil related. Yet another claims that George Bush had a vendetta against Saddam. But proposed answers like these often seem to assume something that isn’t likely to be true: that there was a single reason why this action occurred. It is in fact not even clear that there was one predominant or overarching reason.

The actions of governments generally come about through the behavior of many people, each with their own motivations, and the invasion of Iraq is probably not an exception. The House of Representatives and the Senate both authorized the use of military force against Iraq by very wide margins, with 58% of Democrats in the Senate voting in favor, and 98% of Republicans. (Incidentally, the Iraq War Resolution cites more than ten different justifications for using force against Iraq, though it is unclear how many of these were taken seriously by senators.) What’s more, it is likely that George Bush’s decisions regarding Iraq were shaped partly by the opinions of and information from his military and civilian advisors, each of whom had their own motivations for giving the advice that they did. Even in cases when a governmental decision seems to have been made by a single person, it can still be problematic to assume that the action was taken for a single reason, because individual people often act on multiple motivations at once. So, given that governments typically cannot reasonably be said to have a single reason for acting, why do we often ask questions that seem to assume that such a reason exists?

As philosopher Daniel Dennett has noted, the human brain can model entities at different levels of abstraction. Consider, for example, the game Pac-Man. We can view the ghosts that chase our hero from at least three different perspectives. Taking a physical stance, the images of these ghosts are produced as a result of snippets of computer code being executed by a computer’s processor. Viewing the ghosts from a design stance, we can think of them as having been created to challenge the gamer by providing something that must be avoided to beat each level. Finally, we can consider these ghosts from an intentional stance, imagining that they desire to catch our hero, and that they act in accordance with those desires by chasing him.

One might argue that the intentional stance is inaccurate for Pac-Man, because the ghosts don’t truly have desires. But that does not stop the stance from being a useful model of the situation. If you want to explain how the game works to someone, saying “the ghosts want to catch you, and you don’t want to be caught” is an efficient way of transmitting a lot of information about the game. It is probably an even more intuitive way for us to think about this situation than “the ghosts are programmed to follow you, and you lose the game if they come into contact with you.” So the intentional stance (where we model an entity as if it had agency) can be quite useful even when we are not dealing with a conscious agent.
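The contrast between the two descriptions can be made concrete. Below is a minimal, hypothetical sketch of a chase rule on a grid (real Pac-Man ghost AI is considerably more elaborate; the function name and greedy strategy here are illustrative assumptions, not the actual game’s code). From the physical stance, this is just arithmetic on coordinates; from the intentional stance, we would simply say “the ghost wants to catch you.”

```python
def ghost_step(ghost, pacman):
    """Move the ghost one grid cell toward Pac-Man (greedy chase).

    This is a hypothetical sketch, not the actual Pac-Man algorithm.
    Physical stance: integer comparisons and additions on coordinates.
    Intentional stance: "the ghost is trying to catch you."
    """
    gx, gy = ghost
    px, py = pacman
    dx = (px > gx) - (px < gx)  # -1, 0, or +1 toward Pac-Man on x
    dy = (py > gy) - (py < gy)  # -1, 0, or +1 toward Pac-Man on y
    # Step along the axis with the larger remaining gap.
    if abs(px - gx) >= abs(py - gy):
        return (gx + dx, gy)
    return (gx, gy + dy)
```

Both descriptions predict the same behavior; the intentional one is just far more compact to state and to reason with.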

People often assume this intentional stance without even thinking about it, especially when they discuss governments, corporations and movements. And in many cases it makes sense from a practical point of view to adopt this stance. For instance, it is useful to model the company General Electric as an entity with certain goals (profit maximization, for example) and then consider what this entity would do in various situations. But this method of modeling becomes less informative when we ask why General Electric took a particular action. The answer “to maximize profits” merely gives us back the assumption built into our model of General Electric (i.e. that it is a profit maximizer). General Electric may behave like a profit maximizer, but knowing that does not mean we know WHY General Electric performs a particular action (nor does it explain how we are supposed to interpret the idea of “General Electric performing an action”). “Profit maximizer” is a description of the company, not an explanation of why the company (or more realistically, the collective of its employees) behaves in a particular manner. If we dig deeper, we see that the effects that the employees of General Electric have on the world come about through the actions of individual agents with a variety of motivations. It is by analyzing the motivations of those individual agents that we can begin to arrive at an explanation.

When considering the motivations of an agency, it is useful to remind ourselves that agencies are not in fact agents. Though modeling them as agents can be very useful for prediction purposes, if we want to answer questions about why they acted we have to dig deeper and consider the motivations of their constituent agents.
