CHI 2006: Using Intelligent Task Routing and Contribution Review to Help Communities Build Artifacts of Lasting Value
Dan Cosley (I think):
Here are the takeaways I hope you will get from my talk:
- Artifacts of lasting value matter.
- Designing for contribution is interesting.
- Modeling value over time is useful.
- Intelligent task routing works.
Communities like IMDb, RateYourMusic (35,000 people), and Wikipedia are building collective artifacts of lasting value. But not all communities succeed: Nupedia faltered, and so have many smaller groups. GroupLens's own effort is kinda failing because everyone is using a different format.
Often, one enthusiastic person does all the work of setting up the system, inviting users, maintaining it, and so on. You know, like Chad, who adds all the movies in MovieLens.
But ideally you want many people doing the work. The virtues of the many: scale (Slashdot), speed (Wikipedia vandalism repaired in under 3 minutes), robustness.
But... users say, "I don't want to add movies, I want them to be there for me."
CommunityLab is a collaboration of researchers from CMU, UMich, and UMinn. We contribute theory insights and design ideas.
I want to talk about creating value in a specific community, the MovieLens community. When we started this project we had 8,800 movies × 8 fields ≈ 70,000 fields to fill, 23,000 of which were empty.
How to do it? Part I of my talk is about contribution review. We know that editing improves quality. Very often in an editing process you don't see the internals, you don't see how the sausage is made: that's pre-publication review. Wiki-like processes instead let people publish right away. We wondered which of these models (let people publish immediately, or add a reviewing step first) would help the fields fill up faster, and we tested it empirically.
As it turns out, wiki-like beats pre-publication review in the short term, and in both cases contributions taper off over time. In the wiki model, quality hits an equilibrium, a steady state where the good elements and bad elements balance each other. Our model says that in the long run it actually doesn't matter whether review is pre- or post-publication: you reach the same state.
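One way to see how such an equilibrium could arise (this is my own toy sketch, not necessarily the model from the talk): suppose each period some fraction of bad fields get fixed and some fraction of good fields get degraded. Then quality converges to the same steady state from any starting point, which is exactly the "pre- vs. post-publication review doesn't matter in the long run" claim.

```python
def quality_over_time(q0, fix_rate, degrade_rate, steps):
    """Toy model: fraction of fields that are 'good' over time.

    Each step, a fix_rate fraction of bad fields become good and a
    degrade_rate fraction of good fields become bad. Quality converges
    to fix_rate / (fix_rate + degrade_rate) from any starting point.
    """
    q = q0
    history = [q]
    for _ in range(steps):
        q = q + fix_rate * (1 - q) - degrade_rate * q
        history.append(q)
    return history

# Starting from 0% good fields (publish-first, wiki-style) or from
# 100% good fields (everything pre-reviewed), both runs converge to
# the same steady state, here 0.3 / (0.3 + 0.1) = 0.75:
low = quality_over_time(0.0, fix_rate=0.3, degrade_rate=0.1, steps=100)
high = quality_over_time(1.0, fix_rate=0.3, degrade_rate=0.1, steps=100)
assert abs(low[-1] - 0.75) < 1e-6 and abs(high[-1] - 0.75) < 1e-6
```

The rate parameters here are invented for illustration; only the shape of the result (a shared equilibrium) matches what the talk claims.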
Part II of my talk is about intelligent task routing. How do people find tasks to perform in the system? Randomly (e.g. Slashdot metamoderation), chronologically, alphabetically? These kinda suck.
You want to help people find work they want to do. People often work on their interests, so match people with tasks they'll like. Karau & Williams' collective effort model deals with social loafing. They posit that people decide whether to contribute according to how much it benefits them and the group, and whether their contribution matters to the group.
We assigned people to four groups; in each group, people were given tasks according to a different algorithm. We found that four times as many contributions were made to MovieLens when we asked people to fill in information about movies they were among the few to have seen. "This needs work" did not work so well as a motivator in terms of the number of movies edited, but if you count the fields filled in, it's pretty good.
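The winning strategy, as I understand it, is a kind of rarity-based routing: suggest movies this member has seen that few others have, since they're among the few who can supply the data. A hypothetical sketch (the data structures and function name are mine, not from the talk):

```python
def rare_movies_for(user, seen_by, k=5):
    """Illustrative rarity-based task routing.

    seen_by maps movie title -> set of users who have seen/rated it.
    Returns up to k movies this user has seen, ordered so that the
    movies seen by the fewest members come first.
    """
    candidates = [m for m, viewers in seen_by.items() if user in viewers]
    return sorted(candidates, key=lambda m: len(seen_by[m]))[:k]

# Hypothetical data: alice is the only one who has seen "Primer",
# so she is the best person to ask about it.
seen_by = {
    "Primer": {"alice"},
    "Brazil": {"alice", "bob", "carol"},
    "Alien": {"alice", "bob"},
    "Heat": {"bob"},
}
print(rare_movies_for("alice", seen_by, k=2))  # -> ['Primer', 'Alien']
```

The real experiment presumably used MovieLens rating counts rather than a toy dictionary, but the core idea of ranking tasks by how few members can do them is the same.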
Based on this work I made SuggestBot, which suggests Wikipedia articles you might like to edit, and is intended to optimize participant contributions.
Q. (Alain Désilets, NRC) You asked people to edit movies they had not seen?
A. Yes, 3 of the 4 groups didn't depend on whether they had seen the movie.
Q. (me) So your model predicted that eventually the good slows down and the bad balances it out. Does that mean that eventually there are effectively no contributions being made to the system? Do you assume that the amount of uncharted territory is constant?
A. (The answers confused me. I think Dan basically said "yes, sorta" to both questions.)