Programs

Community Health

We are designing qualitative and quantitative methods to identify and address harassment in Wikimedia projects.


Project overview

Harassment is a pervasive issue in many online communities. A 2014 Pew survey found that 73% of internet users have witnessed online harassment and 40% have experienced it. In 2015, the Wikimedia Foundation conducted its own survey and found that about 38% of responding contributors had experienced some form of harassment; of those, over half reported reduced motivation to contribute to Wikimedia projects in the future.

The Wikimedia Board of Trustees has identified harassment in Wikimedia projects as a threat to "our ability to collect, share, and disseminate free knowledge." In a resolution on "healthy community culture, inclusivity, and safe spaces," the Board named responses to harassment a priority for the movement.

Our team is working with other departments at the Wikimedia Foundation and with outside research collaborators to better understand and combat harassment in Wikimedia projects and discussion spaces. We have designed algorithms to help detect toxic behavior, and we are learning more about how this behavior affects contributors to Wikimedia projects. We have released data sets and open source tools to support open and reproducible research on online harassment.
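
As a concrete illustration of working with the released data, the sketch below trains a simple personal-attack classifier. It assumes local copies of attack_annotated_comments.tsv and attack_annotations.tsv from the "Wikipedia Talk Labels: Personal Attacks" release, and uses a character n-gram logistic regression as a minimal baseline in the spirit of this line of work, not the production models.

    # Minimal sketch: train a baseline personal-attack classifier on the
    # released "Wikipedia Talk Labels: Personal Attacks" data. File names
    # are assumed to match the public release.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline

    comments = pd.read_csv('attack_annotated_comments.tsv', sep='\t', index_col=0)
    annotations = pd.read_csv('attack_annotations.tsv', sep='\t')

    # Label a comment as an attack if a majority of annotators flagged it.
    comments['attack'] = annotations.groupby('rev_id')['attack'].mean() > 0.5

    # The released comments encode newlines and tabs as special tokens.
    comments['comment'] = (comments['comment']
                           .str.replace('NEWLINE_TOKEN', ' ', regex=False)
                           .str.replace('TAB_TOKEN', ' ', regex=False))

    train, test = train_test_split(comments, test_size=0.2, random_state=0)
    clf = Pipeline([
        ('tfidf', TfidfVectorizer(analyzer='char', ngram_range=(1, 5),
                                  max_features=10000)),
        ('lr', LogisticRegression(max_iter=1000)),
    ])
    clf.fit(train['comment'], train['attack'])
    print('held-out accuracy:', clf.score(test['comment'], test['attack']))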

Recent updates

  1. Release of WikiConv dataset

    We’re thrilled to announce the release of WikiConv, a multilingual corpus reconstructing the complete conversational history of multiple Wikipedia language editions. A minimal sketch of reading the corpus follows this list.
  2. Presentation video available for Conversations Gone Awry

    A video is now available of Justine Zhang (Cornell University) presenting our Conversations Gone Awry paper at ACL 2018.
  3. Research showcase for Conversations Gone Awry

    We're hosting our collaborators at Cornell University and Jigsaw at the Wikimedia Research Showcase in June to present this study, along with a new corpus of English Wikipedia Talk page conversations.
  4. Machine learning is helping computers spot arguments online before they happen

    ‘Hey there. It looks like you’re trying to rile someone up for no good reason?’ The Verge covers the Conversations Gone Awry study.
  5. Scientists are building a detector for conversations likely to go bad

    "Researchers say their tool can often spot when Wikipedia discussions will degenerate into personal attacks by watching for a few familiar linguistic cues." The Conversations Gone Awry study is featured in Fast.Company.
  6. Paper accepted at ACL '18: Conversations Gone Awry

    A new paper on detecting early signs of conversational failure in Wikipedia talk pages, written with our collaborators at Cornell University and Jigsaw, has been accepted at ACL '18 and will be presented at the conference in Melbourne in July.
  7. Characterizing Wikihounding on Wikipedia

    The Wikimedia Foundation's Caroline Sinders talks about challenges in designing quantitative and qualitative methods to identify Wikihounding – a form of digital stalking on Wikipedia.
  8. Toxic Comment Classification Challenge

    A new modeling challenge, hosted on Kaggle by our collaborators at Jigsaw, aims to improve models that identify and classify toxic comments on English Wikipedia's talk pages.
  9. Conversation corpora, emotional robots, and battles with bias

    Jigsaw's Lucas Dixon talks about experimental setups for doing large-scale analysis of conversations in Wikipedia, part of ongoing research on the nature and impact of harassment in Wikipedia discussion spaces.
  10. Sockpuppet detection in Wikimedia projects

    We have started a formal collaboration with researchers at Stanford University to design and evaluate algorithmic strategies for identifying potential sockpuppet accounts on Wikipedia. The goal is to develop high-precision detection models trained on previously identified malicious sockpuppets.
  11. Collection of 13,500 Nastygrams Could Advance War on Trolls

    Our collaboration with Jigsaw, and the release of 100,000 comments from English Wikipedia talk pages labeled for toxicity, are covered in MIT Technology Review.
  12. Scaling up our understanding of harassment on Wikipedia

    The first results from our collaboration with the technology incubator Jigsaw are helping us better understand and explore technical solutions to harassment on Wikipedia.
  13. Detecting Personal Attacks on Wikipedia

    Ellery Wulczyn (WMF) and Nithum Thain (Jigsaw) present preliminary results from a research collaboration aiming to develop tools to detect and understand online personal attacks on Wikipedia.
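
Following up on the WikiConv release (update 1 above), here is a minimal sketch of reading the corpus and grouping actions back into conversations. It assumes a local newline-delimited JSON file (wikiconv_sample.json is a placeholder name) and field and action-type names as described in the WikiConv paper (conversation_id, type, timestamp, content); check the released files for the exact schema.

    # Minimal sketch: group WikiConv actions into conversations.
    # One JSON record per line is assumed; field names may differ
    # slightly across releases.
    import json
    from collections import defaultdict

    conversations = defaultdict(list)
    with open('wikiconv_sample.json', encoding='utf-8') as f:
        for line in f:
            action = json.loads(line)
            conversations[action['conversation_id']].append(action)

    for conv_id, actions in conversations.items():
        # Assuming ISO-formatted timestamp strings, string order
        # matches chronological order.
        actions.sort(key=lambda a: a['timestamp'])
        # CREATION/ADDITION actions carry new comments; MODIFICATION,
        # DELETION, and RESTORATION record later edits to them.
        comments = [a for a in actions if a['type'] in ('CREATION', 'ADDITION')]
        print(conv_id, len(actions), 'actions,', len(comments), 'comments')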

Project team

Dario Taraborelli, Jonathan Morgan, Diego Sáez-Trumper

Collaborators

Jonathan Chang (Cornell University), Cristian Danescu-Niculescu-Mizil (Cornell University), Lucas Dixon (Jigsaw), Yiqing Hua (Cornell University), Srijan Kumar (Stanford University), Jure Leskovec (Stanford University), Tilen Marc (Stanford University), Caroline Sinders (Wikimedia Foundation), Nithum Thain (Jigsaw), Justine Zhang (Cornell University), Ellery Wulczyn (Wikimedia Foundation)

Publications

Resources and links