GDC Takeaways IV: Shaping Online Behavior

In the fourth installment of the GDC 2015 takeaways series, Eric combines lessons learned from two related talks: Anti-Social Behavior in Games: How Can Game Design Help? and More Science Behind Shaping Player Behavior in Online Games. He’ll also jump into rating systems for user-generated content, based on a finding from one of the talks.


I have a love-hate relationship with online multiplayer games, specifically player vs. player games. Having humans instead of AI (usually) makes games more interesting, but it also creates hassle. Even when it comes to real-world sports, I’ve never been competitive, so the kinds of anti-social behavior that seem to dominate online games are a big turn-off for me.

But that leaves me with a question: If I want to include multiplayer in my own games, are there ways to make it fun for players like me? I was hoping to get a few insights from the two multiplayer behavior talks at GDC. The first, Anti-Social Behavior in Games: How Can Game Design Help?, was led by Dr. Ben Lewis-Evans (Player Research), who examines the topic in an academic setting using the incredibly generous data put out by studios like Riot Games (League of Legends).

Ben started with a story about an unfortunately designed piece of hospital equipment, where the power output and the heart monitor used the same plug shape. I’m sure you can imagine the resulting accident, but even when it occurred, the blame was put on the nurse who made the mistake rather than on the equipment. Ben gave this example to show that sometimes it’s the system at fault, not the person.

It’s easy to imagine that all of the anti-social players out there are horrible human beings, but when you pop the lid on the situation, in many cases the games themselves encourage anti-social behavior. Players who betray their team, steal kills, or focus on sniping are often the higher scorers, and most of the time rude behavior in forums and other online communities isn’t punished, which provides examples of that behavior for others to see and emulate.

Although addressing a decades-long buildup of anti-social behavior on the internet isn’t something one game can take on, Ben did give a few steps designers could take to mitigate the problem. He focused on the three pillars: Education, Enforcement, and Engineering.

Education and enforcement might seem simple, but he stressed the factor that can make or break efforts to educate or enforce against bad behavior: quick and clear feedback. When players are punished weeks after the fact, long after reports have been submitted, and all they get is a message to the effect that “you’ve been banned for being bad,” they haven’t been given enough knowledge to correct the behavior. If anything, it could serve to make them more aggressive in the future, because they might see their punishment as unfair (handed down to them by people who are just trying to ruin their day).

And, of course, a system of constant, unclear punishment gives the multiplayer community a police-state feel. But Ben gave a good example from Riot Games, where anti-social behavior is quickly followed by a log of the offending action and information about the problem. Although the system is far from perfect, it gets much closer to solving the problem than some systems used in the past. It also takes the human out of the equation, which, when it comes to punishment, can make the pill a little easier to swallow. (Imagine running a red light: there’s a big difference between a camera snapping a picture of your car crossing the line on red vs. a police officer pulling you over and claiming you ran it. The automatic solution carries a lot more juice, which is why cities that can afford it are moving to it.)
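Neither talk went into implementation details, so here is purely my own minimal sketch of what “quick and clear feedback” could look like in code: when a report against a player is validated, the notification carries the specific chat lines that triggered it, so the punishment explains itself the way a red-light camera photo does. All of the names, fields, and messages below are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChatLine:
    timestamp: datetime
    speaker: str
    text: str

def build_penalty_notice(player: str, offense: str,
                         evidence: list[ChatLine],
                         duration_hours: int) -> str:
    """Build a penalty message that shows the player exactly what they did.

    The aim is the 'camera' effect described above: the evidence speaks for
    itself, so the punishment feels automatic rather than personal.
    """
    lines = [
        f"{player}, your account has been restricted for {duration_hours} hours.",
        f"Reason: {offense}",
        "The following chat lines from your last match triggered the penalty:",
    ]
    for entry in evidence:
        lines.append(f"  [{entry.timestamp:%H:%M:%S}] {entry.speaker}: {entry.text}")
    lines.append("Repeated offenses lead to longer restrictions.")
    return "\n".join(lines)

# Hypothetical usage: the notice is generated and delivered as soon as the
# report is validated, not weeks later.
notice = build_penalty_notice(
    player="PlayerA",
    offense="verbal abuse of teammates",
    evidence=[ChatLine(datetime.now(), "PlayerA", "you are all worthless")],
    duration_hours=24,
)
print(notice)
```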

The Engineering example Ben brought up was Journey, the PlayStation game where players can’t even talk to each other. The only contact is through the game’s limited in-game communication options, and there is no way for other players to really hurt your game, only to help. Of course, not many games can get away with that minimal level of multiplayer interaction, but there are ways to engineer good behavior. Something as simple as disabling friendly fire by default, or separating the loot systems so that players don’t fight over rewards in battle while still allowing trading, can engineer cooperation. Another example, one I’ve experienced myself while hunting zombies with friends, is what Valve did in Left 4 Dead. Early on, players rarely helped each other survive, but when Valve introduced the automatic “I’m getting attacked” outline that clearly showed comrades struggling to stay alive, it gave other players the nudge they needed to help. Of course, even with that system, there’s no way to enforce good behavior (and there is still plenty of anti-social behavior in L4D), but it’s a start.
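To make the “separate loot systems” idea concrete, here is a minimal sketch, entirely my own and not from Ben’s talk, contrasting a single shared loot pool that players race each other for with per-player rolls that remove the competition while leaving trading available afterward. The function names, party members, and items are made up for illustration.

```python
import random

def shared_loot(players: list[str], drops: list[str]) -> dict[str, list[str]]:
    """One pool, first-come-first-served: whoever grabs an item keeps it.
    This is the design that invites kill-stealing and loot fights."""
    result: dict[str, list[str]] = {p: [] for p in players}
    for item in drops:
        grabber = random.choice(players)  # stands in for whoever clicks first
        result[grabber].append(item)
    return result

def per_player_loot(players: list[str], drops: list[str]) -> dict[str, list[str]]:
    """Every player gets their own copy of every drop; nothing to fight over.
    Trading can still happen afterward, outside of combat."""
    return {p: list(drops) for p in players}

party = ["Ann", "Bo", "Cass"]
boss_drops = ["rare sword", "healing herb"]
print(shared_loot(party, boss_drops))      # uneven and competitive
print(per_player_loot(party, boss_drops))  # everyone gets the same rewards
```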

Another fantastic social engineering example from Valve is providing a benefit to other players when one player buys an item. (This not only makes all of the players happy, but makes Valve’s accountants happy.)

One final example of engineering multiplayer experiences to cut down on anti-social behavior that struck a chord with me was the idea of eliminating “n00b” identifiers like low levels or costumes. In Japan, where I live, new drivers, elderly drivers, and hard-of-hearing drivers are all required to put plates on their cars signaling their status. I’m sure the idea was to make other drivers more considerate toward less able drivers, but all it really does is warn other drivers to be on the lookout for their mistakes (in other words, it primes people to be less considerate). Mistakes that would normally be overlooked in a “normal” driver stand out when a less able driver makes them. In multiplayer as well, a team that gets “stuck” with a level-one player is, without hesitation, going to consider that player a burden, and that’s a springboard for anti-social behavior.


The second talk, led by Jeffrey Lin of Riot Games, went deeper into the specific steps League of Legends took to cut down on anti-social behavior in their games. As Jeffrey mentioned, game companies should be interested in cutting down on anti-social behavior, because the number one predictor of whether a player will stick with a multiplayer game is whether or not they experience anti-social behavior. In other words, people playing nice is better for the bottom line.

One thing Jeffrey focused on was how a generation of silence has helped breed anti-social behavior on the internet as a whole. Because bad behavior goes unpunished and the good people stay silent, the bad behavior is what stands out. And, because people are naturally more attuned to noticing bad behavior anyway, the end result is the feeling that everyone on the internet is anti-social. (With the obvious next step: why shouldn’t I be?) Consider my comment at the very beginning of this post about how anti-social behavior seems to dominate online multiplayer. If I took a step back and actually tallied up the people I play with, I would probably discover that very few of them are actively anti-social, but it only takes a few to give me the impression that online multiplayer is inherently anti-social.

Jeffrey did bring up some interesting data from Riot Games showing how anti-social players (“toxic players”) spread their behavior to other players, like a virus sweeping through the online community. Riot Games’ goal was, like the CDC’s, to jump in and cut off anti-social behavior before it had a chance to spread, and they did that with their automatic behavior monitoring system. For the same reasons Ben mentioned in his talk, they knew they needed a system that works fast and works accurately, giving offenders clear proof of their bad behavior in the hope of mending it.

If you’re interested in why Jeffrey’s talk this year was More Science Behind Shaping Player Behavior in Online Games, the original talk he gave on the subject in 2013 is up on GDC’s Vault and appears to be free to watch.


Both Ben’s and Jeffrey’s talks about shaping player behavior online did an excellent job of getting me to rethink the anti-social player problem as a systemic one that a designer can take steps to reduce, and they gave me hope that I can explore ways to incorporate multiplayer into future games without the kind of trouble that so often steers me away from online games.

Before I jump into an aside on rating systems in games, I want to throw a question out to you: Have you run across any games outside of the examples listed above that do a good job of discouraging anti-social behavior (or encouraging good social behavior)? Please leave a comment; I’d love to know.


An Aside about Ratings for Content Generators

Jeffrey also brought up an interesting finding regarding content generation that touches on something I dislike about YouTube: downvoting. He found that downvoted content creators are more likely to see the quality of their content drop and are more likely to downvote other creators in the future.

While I can see why YouTube thought a thumbs-down system would be a good way to weed out misleading, click-bait videos, it also puts a powerful tool into the hands of internet trolls, and, if Jeffrey’s findings are correct, that trolling behavior spreads when the trolls are enabled. This warning goes beyond YouTube, however: if you are considering user-generated content in your game, it’s a finding worth keeping in mind when designing your rating system. Any good rating system has to accomplish two goals:

1) Make great content easy to find
2) Weed out junk content (intentional junk)

But relying on upvotes for the first goal and downvotes for the second is too blunt, and a star system is too simple as well. Stars make some sense on Amazon or Goodreads as an average indicator of quality, but they don’t make sense when your goal is to encourage content creation. (Amazon doesn’t care about encouraging content creation, so it doesn’t have to worry about an author’s feelings being hurt by bad ratings.)

Some content creators are going to be better than others; many are just starting out and need time to get better. However, if someone puts their first level up and it gets hammered with a lot of thumbs down or one-star ratings, they’re less likely to keep trying, and more likely to downvote other content, discouraging other creators as well.

There are systems for bringing good content to the top that do work: Twitter’s retweets and Pinterest’s repins. They don’t discourage bad content, but they set a very strict line that “good content” has to cross: someone has to find the content so valuable that they’re willing to own a piece of it themselves.

When you retweet or repin something, you’re saying to your own friends or followers, “I personally think this is worth checking out.” That’s a much stronger vote of confidence than a thumbs-up or a “Like”, which require no investment and can therefore go to content on a whim, by popularity, or from people who are just trying to get others to like their own stuff.

Perhaps a great user-generated content ratings system could take some hints from Twitter/Pinterest. And, of course, a hybrid system could work too; the “like” system is good for plain, old-fashioned encouragement, while a sharing system is good for floating truly great content to the top.
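As a thought experiment (my own sketch, not anything proposed in the talks), here is one way such a hybrid might be scored: likes are shown to the creator purely as encouragement, shares drive the discovery ranking the way retweets and repins do, and intentional junk is handled by a private report threshold instead of a public downvote. The class, field names, weights, and numbers are all placeholder assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class UserLevel:
    creator: str
    likes: int = 0    # cheap encouragement; shown to the creator, not used for ranking
    shares: int = 0   # strong signal: someone put the level in front of their own followers
    reports: int = 0  # replaces a public downvote; only moderators see this count

    def discovery_score(self) -> float:
        """Rank content by shares, log-scaled so a few runaway hits don't bury newcomers."""
        return math.log1p(self.shares)

    def is_hidden(self, report_threshold: int = 10) -> bool:
        """Goal 2 (weeding out intentional junk) is handled privately, so a
        trolled creator never stares at a wall of thumbs-downs."""
        return self.reports >= report_threshold

levels = [
    UserLevel("newcomer", likes=3),
    UserLevel("veteran", likes=120, shares=40, reports=1),
    UserLevel("spammer", reports=25),
]

# Junk drops out, and the rest is ordered by how often players shared it.
visible = [lv for lv in levels if not lv.is_hidden()]
for lv in sorted(visible, key=UserLevel.discovery_score, reverse=True):
    print(lv.creator, round(lv.discovery_score(), 2))
```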
