Community Health
We are designing qualitative and quantitative methods to identify and target harassment in Wikimedia projects.
Project overview
Harassment is a pervasive issue in many online communities. A 2014 Pew survey found that 73% of internet users have witnessed online harassment and 40% have experienced it personally. In 2015, the Wikimedia Foundation conducted its own survey and found that about 38% of responding contributors had experienced some form of harassment; over half of those reported decreased motivation to contribute to Wikimedia projects in the future.
The Wikimedia Board of Trustees has identified harassment in Wikimedia projects as a threat to "our ability to collect, share, and disseminate free knowledge." In a resolution on "healthy community culture, inclusivity, and safe spaces," the Board identified responses to harassment as a priority for the movement.
Our team is working with other departments at the Wikimedia Foundation and with outside research collaborators to better understand and combat harassment in Wikimedia projects and discussion spaces. We have designed algorithms to help detect toxic behavior, and we are learning more about how this behavior affects contributors to Wikimedia projects. We have also released data sets and open source tools to support open and reproducible research on online harassment.
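To make the released resources concrete, here is a minimal sketch of training a baseline personal-attack classifier on the labeled talk page comments. The file names (attack_annotated_comments.tsv, attack_annotations.tsv), column names, and majority-vote labeling rule are assumptions based on our understanding of the published "Wikipedia Talk Labels" release and should be verified against the actual files; the model is an illustrative baseline, not the method used in our research.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical local copies of the released files; verify names and columns
# against the actual "Wikipedia Talk Labels: Personal Attacks" release.
comments = pd.read_csv("attack_annotated_comments.tsv", sep="\t", index_col=0)
annotations = pd.read_csv("attack_annotations.tsv", sep="\t")

# Each comment was rated by multiple crowdworkers; treat a comment as an
# attack if a majority of its annotators flagged it.
comments["attack"] = annotations.groupby("rev_id")["attack"].mean() > 0.5

# The release escapes newlines and tabs as literal tokens; restore spaces.
comments["comment"] = (
    comments["comment"]
    .str.replace("NEWLINE_TOKEN", " ", regex=False)
    .str.replace("TAB_TOKEN", " ", regex=False)
)

X_train, X_test, y_train, y_test = train_test_split(
    comments["comment"], comments["attack"], test_size=0.2, random_state=0
)

# Character n-grams are robust to the creative misspellings common in abuse.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 5), max_features=50000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

probs = clf.predict_proba(vectorizer.transform(X_test))[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```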
Recent updates
- Release of WikiConv dataset: We're thrilled to announce the release of WikiConv, a multilingual corpus reconstructing the complete conversational history of multiple Wikipedia language editions.
- Presentation video available for Conversations Gone Awry: Video is now available of Justine Zhang of Cornell University presenting our Conversations Gone Awry paper at ACL 2018.
- Research showcase for Conversations Gone Awry: We're hosting our collaborators from Cornell University and Jigsaw at the Wikimedia Research Showcase in June to present this study, along with a new corpus of English Wikipedia talk page conversations.
- Machine learning is helping computers spot arguments online before they happen: 'Hey there. It looks like you're trying to rile someone up for no good reason?' The Verge covers the Conversations Gone Awry study.
- Scientists are building a detector for conversations likely to go bad: "Researchers say their tool can often spot when Wikipedia discussions will degenerate into personal attacks by watching for a few familiar linguistic cues." The Conversations Gone Awry study is featured in Fast Company.
- Paper accepted at ACL '18: Conversations Gone Awry: A new paper on detecting early signs of conversational failure in Wikipedia talk pages, with our collaborators at Cornell University and Jigsaw, has been accepted at ACL '18 and will be presented at the conference in Melbourne in July.
- Characterizing Wikihounding on Wikipedia: The Wikimedia Foundation's Caroline Sinders talks about challenges in designing quantitative and qualitative methods to identify Wikihounding, a form of digital stalking on Wikipedia.
- Toxic Comment Classification Challenge: A new modeling challenge hosted on Kaggle by our collaborators at Jigsaw aims to improve the performance of models that identify and classify toxic comments on English Wikipedia's talk pages (see the baseline sketch after this list).
- Conversation corpora, emotional robots, and battles with bias: Jigsaw's Lucas Dixon talks about experimental setups for large-scale analysis of conversations in Wikipedia, part of ongoing research on the nature and impact of harassment in Wikipedia discussion spaces.
- Sockpuppet detection in Wikimedia projects: We have started a formal collaboration with researchers at Stanford University to design and evaluate algorithmic strategies for identifying potential sockpuppet accounts on Wikipedia. The aim is to develop high-precision detection models trained on previously identified malicious sockpuppets.
- Collection of 13,500 Nastygrams Could Advance War on Trolls: Our work with Jigsaw and the release of 100,000 toxicity-labeled comments from English Wikipedia talk pages are covered in MIT Technology Review.
- Scaling up our understanding of harassment on Wikipedia: The first results from our collaboration with the technology incubator Jigsaw are helping us better understand and explore technical solutions to harassment on Wikipedia.
- Detecting Personal Attacks on Wikipedia: Ellery Wulczyn (WMF) and Nithum Thain (Jigsaw) present preliminary results from a research collaboration aiming to develop tools to detect and understand online personal attacks on Wikipedia.
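Regarding the Toxic Comment Classification Challenge above: the task is multi-label, since a single comment can carry several kinds of toxicity at once (for example, both obscene and insulting). The sketch below shows a simple one-vs-rest baseline. The six label names and the train.csv/comment_text layout are assumptions based on the public challenge description and should be verified against the actual download; this is an illustration of the task's shape, not a competitive solution.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# The six labels published with the challenge; a comment may carry several.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Hypothetical local copy of the challenge's training file.
train = pd.read_csv("train.csv")

vectorizer = TfidfVectorizer(max_features=50000, sublinear_tf=True)
X = vectorizer.fit_transform(train["comment_text"])

# One-vs-rest: fit an independent binary classifier per label, so a single
# comment can receive high scores for several labels at once.
models = {label: LogisticRegression(max_iter=1000).fit(X, train[label])
          for label in LABELS}

def score_comment(text: str) -> dict:
    """Return per-label toxicity probabilities for a single comment."""
    x = vectorizer.transform([text])
    return {label: float(clf.predict_proba(x)[0, 1])
            for label, clf in models.items()}

print(score_comment("Thanks for your thoughtful reply on the talk page!"))
```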