So in Bioshock, a game I very much enjoyed, you are offered the opportunity early in the game and periodically throughout to choose between two different approaches to a problem: one which is apparently moral (although your adviser calls it naive) and one apparently immoral (although your adviser deems it utilitarian). The moral approach, Option X, provides you with about half the gain of Option Y, the immoral approach — and so there appears to be a choice between doing the right thing and paying for it, or doing the wrong thing and profiting. It’s not just a game, it’s a simulation!

In reality, though, it turns out that Option X just delays part of the reward and replaces the rest with specific tonics, plasmids and other prechosen fringe benefits, such that your overall reward for consistently choosing Option X is probably, all told, at least as high as, if not higher than, what Option Y yields. Being moral isn't a sacrifice in Bioshock; it's a long-term investment, and the question isn't how willing you are to do the right thing but how well you read the FAQ, or how good you are at putting off short-term gain for future returns. It's a useful lesson for anybody thinking about the stock market, but as a narrative statement, it's a failure.

This is the corollary of my previous post. Many games seem to embrace the idea that you should reward people for making good moral choices. I call this “Disney morality,” because it shows up so often in those movies: being virtuous leads to material success, being evil leads to material failure. If you take the contrapositive (not having material success implies that you are not virtuous), it becomes clear that it’s also the prosperity gospel.

There are two big problems with this approach. The first, straightforwardly, is that it doesn't match the experience we actually have when making moral choices. Nobody gives us a cupcake for letting somebody merge on the highway. People aren't dumb. When you give somebody a game to play, they form a mental model of the system it offers them, and if a piece of that system is obviously flimsy or ridiculous, they're going to notice, and they're going to disengage.

The second is that you’re replacing an intrinsic reward with an extrinsic reward. Here is a picture of my dog, Welly:

I hope you will agree that my dog is adorable, because it is objectively the case. What are the odds that you would kick my dog? If you are much like me, I suspect that they are low.

Now let’s suppose I say to you, “I will give you ten dollars if you make it an hour without kicking my dog.”

If you are much like me, your first reaction might be: “Why would you offer me ten dollars not to kick your dog? I wasn’t going to kick your dog in the first place.

“…what, exactly, is the problem with your dog?”

Originally you were avoiding kicking my dog because doing so would be the act of a heartless monster. Now you're avoiding it because kicking him would cost you ten dollars. This is not a trade up from Welly's perspective. Just as in Bioshock, we've taken a moral imperative and turned it into an economic transaction.

Let's suppose that ten minutes into this hour you are distracted, or walking with pie, or something, and you accidentally kick Welly. Now you won't get the $10, no matter how many times you kick him in the next 50 minutes. How motivated are you now to avoid kicking him?

This is called the overjustification effect: attempting to incentivize people to do something they already want to do makes them doubt their original reasons for doing it. If my dog were so unkickably adorable, why would I have to pay you not to kick him? There must be some reason, right? Maybe kicking him isn't so terrible after all. When the money is removed from the equation, the incentive is gone, but the doubt remains.

The conclusion here is simple: if you want people to take moral choices seriously, you can't do it by offering them bennies for being good people, because they know that's not how morality works. If anything, you need to incentivize the opposite. Offer them $10 to kick my dog. That way they can find out exactly how much they were willing to give up to do the right thing, or exactly how much it took to get them not to.