Machine Learning as a Service for Free Knowledge

We are investigating the design of automated quality control in Wikimedia projects, exploring ways to enhance the positive impact of machine classifiers while minimizing their detrimental effects.

Project overview

Wikipedia reflects a complex interaction between humans and technology: the technology used for Wikipedia has shaped its social environments, and those social environments have shaped the technology in turn. The two have evolved together, and changes to one often affect the other.

Between 2004 and 2007, Wikipedia grew quickly, and its early contributors created novel technology to ensure quality in Wikimedia projects. Machine classifiers analyzed every edit to Wikipedia and predicted whether it was "good" or "bad"; human patrollers then reviewed the flagged edits for evidence of vandalism.

The machine classifiers made it easier to identify "bad" edits, but the technology did not account for new contributors who made good-faith edits with poor results. Their edits were treated like vandalism, which contributed to a dramatic decline in the number of Wikipedia contributors. Wikimedia Foundation researchers identified this problem and shared their findings.

While the Wikimedia Foundation has invested in improving the newcomer experience, quality control tools have remained largely unchanged. The Scoring Platform project was formed to examine this problem and to explore ways to enhance the positive impact of technology on Wikipedia contributors while minimizing its negative effects on participation.
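
ORES, the machine learning service featured in the updates below, exposes these classifiers through a public web API. The following is a minimal sketch of how a patrolling tool might score a single edit, assuming the public v3 scores endpoint at ores.wikimedia.org and English Wikipedia's "damaging" and "goodfaith" edit models; the revision ID is a hypothetical placeholder.

    import requests

    # Sketch: ask the ORES v3 API for two predictions about one edit.
    # "damaging" estimates whether the edit is harmful; "goodfaith"
    # estimates whether it was made in good faith.
    ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki"
    rev_id = "123456789"  # hypothetical revision ID

    response = requests.get(
        ORES_URL,
        params={"models": "damaging|goodfaith", "revids": rev_id},
        headers={"User-Agent": "scoring-platform-example"},
        timeout=10,
    )
    response.raise_for_status()
    scores = response.json()["enwiki"]["scores"][rev_id]

    damaging = scores["damaging"]["score"]
    goodfaith = scores["goodfaith"]["score"]

    # An edit that is probably damaging but also probably good-faith
    # points to a struggling newcomer rather than a vandal.
    print("P(damaging)  =", damaging["probability"]["true"])
    print("P(goodfaith) =", goodfaith["probability"]["true"])

Because the two scores are independent, a tool can route probable vandalism to reverters while steering probable good-faith mistakes toward mentoring, rather than treating every flagged edit as an attack.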

Recent updates

  1. Announcing the Scoring Platform team

    The new Scoring Platform team will work on democratizing access to AI, developing new types of predictions, and pushing the state of the art in the ethical practice of AI development.
  2. Moving the needle on Wikipedia’s coverage of women scientists

    Using an article quality classifier, we quantified the "Keilana effect": the impact of outreach initiatives started by Emily Temple-Wood and other women to bridge the gender gap in Wikipedia.
  3. New dataset shows fifteen years of Wikipedia’s quality trends

    We’ve generated a dataset that tracks the quality of articles at monthly intervals over the entire 15-year history of Wikipedia, across multiple languages: 670 million assessments in all! (See the article quality sketch after this list.)
  4. Wikipedia Deploys AI to Expand Its Ranks of Human Editors

    "It turns out that the vast majority of vandalism is not very clever.": the launch of ORES featured in Wired.
  5. Artificial Intelligence Aims to Make Wikipedia Friendlier and Better

    The nonprofit behind Wikipedia is turning to machine learning to combat a long-standing decline in the number of editors: ORES featured in the MIT Technology Review.
  6. ORES service is officially launched

    A new AI service gives Wikipedians X-ray specs to see through bad edits and handle some of the highest-volume crowdsourcing issues on the internet.
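
The article quality work described in updates 2 and 3 uses the same service. The sketch below scores a single article revision against the WP 1.0 assessment scale (Stub, Start, C, B, GA, FA), assuming English Wikipedia's article quality model (named "wp10" at launch and later renamed "articlequality"); the revision ID is again a hypothetical placeholder.

    import requests

    # Sketch: score an article revision with ORES's article quality model.
    # The model predicts a WP 1.0 assessment class for the revision.
    ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki"
    rev_id = "123456789"  # hypothetical revision ID

    response = requests.get(
        ORES_URL,
        params={"models": "wp10", "revids": rev_id},
        headers={"User-Agent": "scoring-platform-example"},
        timeout=10,
    )
    response.raise_for_status()
    score = response.json()["enwiki"]["scores"][rev_id]["wp10"]["score"]

    print("Predicted class:", score["prediction"])  # e.g. "Start"
    print("Probabilities:", score["probability"])   # one entry per class

Applying a score like this to a snapshot of every article each month, across multiple languages and fifteen years of history, is what yields the 670 million assessments in the quality-trends dataset above.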

Project team

Aaron Halfaker, Morten Warncke-Wang

Collaborators

Sumit Asthana, Andrew Hall (University of Minnesota), Amir Sarabadani (Wikimedia Deutschland), Adam Wight (Wikimedia Foundation)
