[Corpora-List] Call for Papers: The Fourth Workshop on Online Abuse and Harms

Zeerak Waseem z.w.butt at sheffield.ac.uk
Fri Apr 17 19:45:43 CEST 2020

[Apologies for cross-posting]

4th WOAH: The 4th Workshop on Online Abuse and Harms (previously the Workshop on Abusive Language Online)
=====================================

Virtually co-located with EMNLP 2020
Submission deadline: July 23, 2020
Author Notification: August 18, 2020
Camera Ready: August 31, 2020
Workshop Date: November 20, 2020

Website: www.workshopononlineabuse.com
Submission link: www.softconf.com/emnlp2020/WOAH4/

Overview
=========

Digital technologies have brought myriad benefits for society, transforming how people connect, communicate and interact with each other. However, they have also enabled harmful and abusive behaviours, from interpersonal aggression to bullying and hate speech, to reach large audiences, amplifying their negative effects. These effects are further compounded as marginalised and vulnerable communities are disproportionately at risk of receiving abuse. As policymakers, civil society and tech companies devote more resources and time to tackling online abuse, there is a pressing need for scientific research that rigorously investigates how harms are defined, detected, moderated and countered.

Technical disciplines such as machine learning (ML), natural language processing (NLP) and statistics have made substantial advances in detecting and modelling online abuse, primarily by leveraging state-of-the-art ML and NLP techniques such as contextual word embeddings, transfer learning and graph embeddings. However, concerns have been raised about the potential societal biases that many of these ML-based detection systems reflect, propagate and sometimes amplify. These concerns are magnified by the lack of explainability and transparency of these models. For example, many detection systems have different error rates for content produced by different people, or perform better at detecting certain types of abuse. Such issues are not purely engineering challenges but raise fundamental questions of fairness and social harm: any intervention that employs biased models to detect and moderate online abuse could end up exacerbating the social injustices it aims to counter. For instance, women are 27 times more likely to be the target of online harassment, and black people report more incidents of racially motivated online harassment; if tools further exacerbate harms through poor classification performance, such marginalised communities can face additional barriers to digital spaces. Developing reliable and robust tools in collaboration with key stakeholders, such as policy-makers and in particular civil society, is crucial as the field matures and automated detection systems become ubiquitous online.

For the fourth edition of the Workshop on Online Abuse and Harms (4th WOAH), we address these issues through our theme: Social Bias and Unfairness in Online Abuse Detection. We continue to emphasise the need for inter-, cross- and anti-disciplinary work on online abuse and harms, and invite paper submissions from a range of fields. These include, but are not limited to: NLP, machine learning, computational social sciences, law, politics, psychology, network analysis, sociology and cultural studies. Additionally, in this iteration we invite civil society, in particular individuals and organisations working with women and marginalised communities who are often disproportionately affected by online abuse, to submit reports, case studies, findings and data, and to record their lived experiences. We hope that through these engagements we can develop computational tools which address the issues faced by those on the front lines of tackling online abuse.

Types of Contributions
=================

Academic/Research Papers
----------------------------

We invite long (8 pages) and short (4 pages) academic/research papers on any of the following general topics.

Related to developing computational models and systems:

- NLP models and methods for detecting abusive language online, including but not limited to hate speech, gender-based violence, and cyberbullying
- Application of NLP tools to analyze social media content and other large data sets
- NLP models for cross-lingual abusive language detection
- Computational models for multi-modal abuse detection
- Development of corpora and annotation guidelines
- Critical algorithm studies with a focus on content moderation technology
- Human-computer interaction for abusive language detection systems
- Best practices for using NLP techniques in watchdog settings
- Submissions addressing interpretability and social biases in content moderation technologies

Related to legal, social, and policy considerations of abusive language online:

- The social and personal consequences of being the target of abusive language and of targeting others with abusive language
- Assessment of current (computational and non-computational) methods of addressing abusive language
- Legal ramifications of measures taken against abusive language use
- Social implications of monitoring and moderating unacceptable content
- Considerations of implemented and proposed policies for dealing with abusive language online, and the technological means of dealing with it

Contributions from Civil Society
--------------------------------

In addition to academic submissions, we also invite organisations in civil society to submit reports, case studies and findings on any of the following general topics:

- Case studies and examples of harassment and abuse experienced online
- Successes and failures of content moderation systems and policies
- Outlines of national/global legal and/or technical challenges faced
- Best practices for working in partnership with other actors
- Interventions that have helped victims and survivors of online abuse gather evidence
- Policy, practice and content moderation system recommendations for tech platforms and researchers
- Documentation of policy gaps that require data and academic support

Please see the WOAH Call for contributions from Civil society webpage for more details: www.workshopononlineabuse.com/cfp/civil-society

Shared Exploration
-------------------

A special Shared Exploration is being launched this year for the 4th Workshop on Online Abuse and Harms (WOAH). Using the dataset provided by Wulczyn et al. (2017), we encourage innovative analyses which align with this year's workshop theme: Social Bias and Unfairness in Online Abuse Detection.

Compared with traditional Shared Tasks, we have taken a more unorthodox approach: we will review performance on the dataset according to three criteria rather than a single evaluation metric. This allows us to take a more holistic view and reward innovative and rigorous analyses, rather than basing our assessment on one metric, which can encourage submissions with sophisticated engineering that pay less attention to the work's wider impact and significance.

Please see the WOAH shared exploration webpage (www.workshopononlineabuse.com/shared-exploration) for more detail.

Workshop Program
==============

Our plan for this one-day workshop includes:

- Two keynote presentations from leading experts on the topic of social bias and abuse detection
- A multidisciplinary panel discussion
- A forum for plenary discussion of the issues that researchers and practitioners face in their work on abusive language detection
- Presentations of original academic work as well as contributions from civil society

EMNLP has moved to being an entirely virtual conference and WOAH 2020 will now be held remotely via videoconference. As a result, we have widened the programme to include a greater range of talks and activities. We will provide updates as the program is confirmed.

Organizing Committee
================

Seyi Akiwowo, Glitch UK
Vinodkumar Prabhakaran, Google Research
Bertie Vidgen, The Alan Turing Institute
Zeerak Waseem, University of Sheffield

Zeerak Waseem
