Thursday, April 26, 2007

How Aggregate Displays Change User Behavior

A fascinating study demonstrates how simply displaying aggregate data like Top 10 lists heavily influences the way people make decisions on social web sites.

Aggregate displays are everywhere, from the book ratings at Amazon.com to the most-emailed articles at the New York Times to the number of diggs at Digg.com. They’re a primary element of social design. They not only let people know how their actions relate to others, but they also alter the behavior of those who view them.

Columbia sociology professor Duncan Watts has written a fascinating piece, Is Justin Timberlake a Product of Cumulative Advantage?, describing a sociology experiment that has huge implications for the display of aggregate data on social web sites. (Thankfully, the article isn’t about Justin Timberlake at all. The author doesn’t even mention his name…it was probably titled by an editor at the Times.)

Description of the Experiment

Watts’s study demonstrates how socially influenced we are as we use software and make decisions online. He describes an experiment in which the researchers built two versions of a web site. In both versions, users listened to a list of songs, rated them, and could download the ones they liked. The only difference was in the display: one group of users could see how many times each song had been downloaded (called the social influence group), while the other saw no download numbers at all (called the independent group). A simple display difference. A design decision.

In addition, they split the social influence group into 8 separate “worlds,” each with its own running download counts, so they could see whether the same songs rose to the top in each one.

Watts describes what this setup allowed them to test:

“This setup let us test the possibility of prediction in two very direct ways. First, if people know what they like regardless of what they think other people like, the most successful songs should draw about the same amount of the total market share in both the independent and social-influence conditions — that is, hits shouldn’t be any bigger just because the people downloading them know what other people downloaded. And second, the very same songs — the “best” ones — should become hits in all social-influence worlds.”

In other words, what’s being tested is the mere fact that users can see the download counts.
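To make the setup concrete, here’s a minimal sketch of the two conditions in Python. This is not the researchers’ code: the number of users, the “appeal” weights standing in for song quality, and the choice rule are all my own assumptions (though the real study did use 48 songs).

```python
import random

NUM_SONGS = 48    # as in the real study; everything else here is invented
NUM_USERS = 1000  # assumed for illustration

# Hypothetical intrinsic "appeal" of each song, standing in for quality.
appeal = [random.uniform(0.1, 1.0) for _ in range(NUM_SONGS)]

def run_market(show_downloads):
    """Simulate one market of users choosing songs one at a time.

    If show_downloads is True (the social influence condition), a song's
    perceived attractiveness is its appeal plus its current download
    count. If False (the independent condition), it is appeal alone.
    """
    downloads = [0] * NUM_SONGS
    for _ in range(NUM_USERS):
        if show_downloads:
            weights = [a + d for a, d in zip(appeal, downloads)]
        else:
            weights = appeal
        choice = random.choices(range(NUM_SONGS), weights=weights)[0]
        downloads[choice] += 1
    return downloads

independent = run_market(show_downloads=False)
social = run_market(show_downloads=True)
print("biggest hit's market share, independent: %.2f" % (max(independent) / NUM_USERS))
print("biggest hit's market share, social:      %.2f" % (max(social) / NUM_USERS))
```

Run it a few times: the social market consistently produces a far more lopsided distribution than the independent one, a crude picture of the rich-get-richer dynamic in the findings below.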

Does Seeing Aggregate Data Change Behavior?

Does displaying aggregate download data change the behavior of users? The answer is Yes. Their findings:

  1. The most popular songs in the social influence group were way more popular than those in the independent group. In other words, the rich got richer when people could see the aggregate data.
  2. The popular songs in the 8 social influence groups were not the same! That is, the download numbers affected the popularity of the songs. Early download leaders continued to lead not just because they were good songs, but because they were already leading.
  3. The independent group served as the test for quality: since no download numbers were displayed, everybody voted independently, and the most-downloaded songs there can be read as the highest-quality ones. These songs correlated only slightly with the songs that did well in the social influence worlds.
  4. The social influence group was swayed far more by the number of downloads than by the quality of the songs. Social influence had a stronger effect on downloads than independent, unbiased decision making did.

This result could be seen as a confirmation of the bandwagon effect, a known bias resulting from our tendency to follow the crowd. This bias is probably the result of ignorance…if we don’t know something we tend to rely on the opinion of others. In this case users probably paid attention to the download numbers because they didn’t have any prior experience with the music.

Outcome is Unpredictable

One outcome is that predicting what will happen in socially influenced situations is effectively impossible. Watts explains:

“In our artificial market, therefore, social influence played as large a role in determining the market share of successful songs as differences in quality. It’s a simple result to state, but it has a surprisingly deep consequence. Because the long-run success of a song depends so sensitively on the decisions of a few early-arriving individuals, whose choices are subsequently amplified and eventually locked in by the cumulative-advantage process, and because the particular individuals who play this important role are chosen randomly and may make different decisions from one moment to the next, the resulting unpredictability is inherent to the nature of the market.”

In other words, early leaders tend to stay in the lead simply because people see that they are leading and are influenced by it. If a song gets an early lead, people assume it’s because of quality, when the song may simply have been one of the first ones listened to.
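One way to see this lock-in is to take the sketch from earlier and run the social influence market in several separate “worlds” (same songs, same appeal values), then compare the winners. Again, this is my own toy setup, not the study’s methodology.

```python
# Re-run the social influence market from the earlier sketch in 8
# separate "worlds". Early, essentially random choices get amplified
# and locked in, so the worlds typically crown different winners.
winners = []
for world in range(8):
    downloads = run_market(show_downloads=True)
    winners.append(max(range(NUM_SONGS), key=lambda s: downloads[s]))
print("winning song in each world:", winners)
```

The independent condition, by contrast, tends to settle on the highest-appeal songs every time, because nothing amplifies early luck.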

Huge Implications for Social Site Design

This result has huge implications for all social web sites, especially those that show aggregate data. Digg, for example, shows aggregate data everywhere on the site. This experiment, in addition to several other issues that I wrote about in Digg’s Design Dilemma, suggests that the results there are socially influenced to such an extent that it would be hard indeed to know where the quality lies…

It also leads to interesting questions for those building social sites. What data do we aggregate? What should we display, and where? What influence will it have on the future behavior of those who see it? Does it influence the quality of content? How so? And so on.

Another level of complexity can be added on top of this. What if users knew more about the people behind the download numbers? For example, what if a user knew their best friend hated one of these songs? That’s going to affect their decision as well, and maybe more so, because people trust their friends more than a simple download number.
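As a toy illustration (purely my own construction; the scoring rule and weights are invented, not anything from the study), a site could weight one trusted friend’s opinion far more heavily than the anonymous crowd:

```python
def perceived_appeal(base_appeal, crowd_downloads, friend_opinion,
                     crowd_weight=0.005, friend_weight=2.0):
    """Toy scoring rule. friend_opinion is +1 (loved it), -1 (hated it),
    or 0 (no signal), and one trusted opinion counts far more than any
    single anonymous download. All weights are invented."""
    return (base_appeal
            + crowd_weight * crowd_downloads
            + friend_weight * friend_opinion)

# A song with 200 downloads that your best friend hates: the single
# trusted opinion outweighs the entire anonymous crowd.
print(perceived_appeal(0.5, 200, -1))  # -0.5, versus 1.5 with no friend signal
```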

A Note of Caution

Finally, a note of caution. Over at Publishing 2.0 Scott Karp extrapolates this finding further, suggesting (somewhat tongue-in-cheek) that it explains other phenomena like Web 2.0 and the blogging A-List. He wonders if the A-List is just riding a wave of initial popularity.

We should be careful about pushing it that far, however, because some of the things Karp mentions play out over a much longer period of time and involve many other factors. The A-Listers, for example, drop like a rock if they don’t continue to post and get linked to. They quickly lose their advantage in the face of low post output and other changes over time. In the study, Watts tested ratings of song preferences, and songs, of course, don’t change over time. My guess is that the result will apply relatively well to things that don’t change…songs, movies, texts…and less well to things that do.

If Watts’s experiment is solid, however, it should help inform anybody building a social web site. If people are unduly influenced by aggregate data, then we cannot use that data to make assumptions about quality. In other words, much of the success in social systems is based on that most horrible of reasons…luck.
