Its Own Reward

So in Bioshock, a game I very much enjoyed, you are offered the opportunity early in the game and periodically throughout to choose between two different approaches to a problem: one which is apparently moral (although your adviser calls it naive) and one apparently immoral (although your adviser deems it utilitarian). The moral approach, Option X, provides you with about half the gain of Option Y, the immoral approach — and so there appears to be a choice between doing the right thing and paying for it, or doing the wrong thing and profiting. It’s not just a game, it’s a simulation!

In reality, though, it turns out that Option X just delays part of the reward and replaces the rest with specific tonics, plasmids and other prechosen fringe benefits, such that your overall reward for consistently choosing Option X is probably, all told, at least as high as Option Y's, if not higher. Being moral isn’t a sacrifice in Bioshock; it’s a long-term investment, and the question isn’t how willing you are to do the right thing but how well you read the FAQ or how good you are at putting off short-term gain for future returns. It’s a useful lesson for anybody thinking about the stock market, but as a narrative statement, it’s a failure.

This is the corollary of my previous post. Many games seem to embrace the idea that you should reward people for making good moral choices. I call this “Disney morality,” because it shows up so often in those movies: being virtuous leads to material success, being evil leads to material failure. If you take the contrapositive (not having material success implies that you are not virtuous), it becomes clear that it’s also the prosperity gospel.

There are two big problems with this approach. The first one, straightforwardly, is that it doesn’t match the experience we actually tend to have making moral choices. Nobody gives us a cupcake because we let somebody in on the highway. People aren’t dumb. When you give somebody a game to play, they form a mental model of the system it offers them, and if a piece of that system is obviously flimsy or ridiculous, they’re going to notice, and they’re going to disengage.

The second is that you’re replacing an intrinsic reward with an extrinsic reward. Here is a picture of my dog, Welly:

[picture of an objectively adorable dog]

I hope you will agree that my dog is adorable, because it is objectively the case. What are the odds that you would kick my dog? If you are much like me, I suspect that they are low.

Now let’s suppose I say to you, “I will give you ten dollars if you make it an hour without kicking my dog.”

If you are much like me, your first reaction might be: “Why would you offer me ten dollars not to kick your dog? I wasn’t going to kick your dog in the first place.

“…what, exactly, is the problem with your dog?”

Originally you were avoiding kicking my dog because doing so would be the act of a heartless monster. Now you’re avoiding doing it because you won’t get $10. This is not a trade up from Welly’s perspective. Just like with Bioshock, we’ve taken a moral imperative and turned it into an economic transaction.

Let’s suppose that ten minutes into this hour you are distracted, or walking with pie, or something, and you accidentally kick Welly. Now you won’t get the $10 in any case, no matter how many times you kick him in the next 50 minutes. How motivated are you now to avoid kicking him?

This is called the overjustification effect — offering people an incentive to do something they already want to do makes them doubt their original reasons for doing it. If my dog were so unkickably adorable, why would I have to pay you not to kick him? There must be some reason, right? Maybe kicking him isn’t so terrible after all. When the money is removed from the equation, the incentive is gone, but the doubt remains.

The conclusion here is simple — if you want people to take moral choices seriously, you can’t do it by offering them bennies for being good people — they know that’s not how morality works. If anything, you need to incentivize the opposite. Offer them $10 to kick my dog. That way they can find out exactly how much they were willing to give up to do the right thing — or exactly how much it took to get them not to.

3 thoughts on “Its Own Reward”

  1. Liam,

    What if the creators of the game believe “being moral pays off in the long run, but not the short, uh, run” to be true and are using this game to encourage others to believe it as well? Something which I believe, for privileged people, is more or less true.

    Being moral won’t give you the direct and clear benefits of being immoral, but if you’re privileged, in the long run, you’ll probably make out pretty well by being moral (although the maximum potential gains are probably much greater for the immoral than the moral). I say this as someone who passes for privileged and it’s been my experience.

    In that case, doesn’t it make sense to hardcode it into the game?

I ask because, when it comes to boardgames, I’m pretty sure I tend to play a lot of hardcoded games as far as morality is concerned. For example, Pandemic offers us a very quaint notion of how diseases are fought in terms of co-operation, whereas a game like Terra offers a much more realistic approach (Terra, of course, is highly problematic on a representation level).

    I’m not sure I’m being very clear.

I think what I’m trying to say is that games make statements about the world, so what makes some statements OK and others not OK? As far as statements go, this one seems rather benign, and it seems to me that all games are going to be making absolute statements in their structure (and, hopefully, relative statements in their play).

    I’m not sure that’s any clearer.

    CS

    • Chris,

I think that people can hardcode whatever they want to hardcode into their games — I just question its effectiveness in actually making people consider their moral choices. You can say something is a tough moral choice all you want, but if it’s really a choice between long-term and short-term investment, people will give it exactly that much attention. Look at Mass Effect, a trilogy which has spent three games developing a system of morality without ever making any interesting moral statements, because in Mass Effect morality is only as important as what color shirt you’re wearing.

      I’m not trying to say that “morality pays off in the long run” is the wrong statement to make (although I do, actually, disagree with it as a statement about the world, rather than as a statement about relationships*), but I think that if your goal is to encourage people to believe that morality is a good idea for whatever reason, it’s the wrong way to go about it. Like I said, we already have plenty of Disney movies.

      * I do think that the biggest benefits of moral choices come from your relationships with others. In a multiplayer game, though, you don’t need mechanics to get those benefits — your actual relationships with others will do just fine. In a single-player game I think you’ll have enormous difficulty convincing people to take the agents in that world seriously as people, rather than as the finite state machines they clearly are.

      • It’s interesting you used Mass Effect as an example, because I think Dragon Age does a really good job of making you take the NPCs seriously and forcing you to make heavy moral choices.

        Though I guess it depends on how you play it. DA:O is two games: the game of fighting monsters and the game of steering an epic plot full of love and hate and loss and the fates of several civilizations. I focused on the second part; I’m sure a lot of people just wanted to kill stuff.
