GeBNLP 2021: 3rd Workshop on Gender Bias in Natural Language Processing
5-6 August 2021, Bangkok, Thailand (co-located with ACL-IJCNLP 2021)
Gender and other demographic biases (e.g. race, nationality, religion) in machine-learned models are of increasing interest to the scientific community and industry. Models of natural language are highly affected by such biases, which are present in widely used products and can lead to poor user experiences. There is a growing body of research into improved representations of gender in NLP models. Popular approaches include building and using balanced training and evaluation datasets (e.g. Reddy & Knight, 2016; Webster et al., 2018; Madaan et al., 2018) and changing the learning algorithms themselves (e.g. Bolukbasi et al., 2016; Chiappa et al., 2018). While these approaches show promising results, much remains to be done to address both the bias issues identified so far and those yet to emerge. To make progress as a field, we need to create widespread awareness of bias and build consensus on how to work against it, for instance by developing standard tasks and metrics. Our workshop provides a forum to achieve this goal.

The workshop follows two successful previous editions, co-located with ACL 2019 and COLING 2020, respectively. Following the successful introduction of bias statements at GeBNLP 2020, we continue to require bias statements in this year's workshop and will again ask the program committee to engage with the bias statements in the papers they review. A bias statement makes clear (a) what system behaviors are considered bias in the work, and (b) why those behaviors are harmful, in what ways, and to whom. We encourage authors to engage with definitions of bias and other relevant concepts, such as prejudice, harm, and discrimination, from outside NLP, especially from the social sciences and normative ethics, in this statement and in their work in general. We will also keep pushing for the integration of other communities, such as the social sciences, and for a wider representation of approaches to dealing with bias.
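As a concrete illustration of the algorithmic line of work cited above, the hard-debiasing method of Bolukbasi et al. (2016) estimates a gender direction in a word-embedding space and projects it out of vectors that should be gender-neutral. The Python sketch below is a simplified variant of that idea, not the authors' reference implementation: the function names are ours, and we average normalized pair differences where the original paper uses PCA.

    import numpy as np

    def gender_direction(pairs):
        """Estimate a gender direction from definitional pairs of
        embedding vectors, e.g. (he, she) or (man, woman).
        Simplification: Bolukbasi et al. (2016) take the first
        principal component; here we average normalized differences."""
        diffs = [(a - b) / np.linalg.norm(a - b) for a, b in pairs]
        g = np.mean(diffs, axis=0)
        return g / np.linalg.norm(g)

    def neutralize(v, g):
        """Project the gender component out of a vector: v' = v - (v.g)g,
        leaving v orthogonal to the estimated gender direction."""
        return v - np.dot(v, g) * g

    # Illustrative usage, assuming a hypothetical embedding lookup `emb`:
    # g = gender_direction([(emb["he"], emb["she"]), (emb["man"], emb["woman"])])
    # emb["doctor"] = neutralize(emb["doctor"], g)

Balanced-data approaches, by contrast, intervene on the training and evaluation corpora rather than on the learned geometry.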
Topics of interest
We invite submissions of technical work exploring the detection, measurement, and mitigation of gender bias in NLP models and applications. Other important topics include the creation of datasets exploring demographics, metrics to identify and assess relevant biases, and fairness in NLP systems. Finally, the workshop is also open to non-technical work addressing sociological perspectives, and we strongly encourage critical reflections on the sources and implications of bias throughout all types of work.
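As one concrete example of a bias metric in scope, a WEAT-style association score (Caliskan et al., 2017) compares how strongly a target word's embedding associates with two attribute word sets. The Python sketch below assumes the word lists and embedding vectors are supplied by the user; it illustrates the kind of metric meant above, not a prescribed evaluation.

    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def association(w, attrs_a, attrs_b):
        """WEAT-style score s(w, A, B): mean cosine similarity of the
        target vector w to attribute set A minus its mean similarity to
        attribute set B. Scores far from zero indicate a stronger pull
        toward one attribute set."""
        return (np.mean([cosine(w, a) for a in attrs_a])
                - np.mean([cosine(w, b) for b in attrs_b]))

For example, w could be the embedding of an occupation term and A and B embeddings of female- and male-associated words; a model with balanced gender associations would yield scores near zero across occupations.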
Paper Submission Information
Submissions will be accepted as short papers (4-6 pages) and as long papers (8-10 pages), plus additional pages for references, following the ACL-IJCNLP 2021 guidelines. Supplementary material can be added, but should not be central to the argument of the paper. Blind submission is required.
Each paper should include a statement that explicitly defines (a) what system behaviors are considered bias in the work and (b) why those behaviors are harmful, in what ways, and to whom (cf. Blodgett et al. (2020) <https://arxiv.org/abs/2005.14050>). More information on this requirement, which was successfully introduced at GeBNLP 2020, can be found on the workshop website <https://genderbiasnlp.talp.cat/gebnlp2020/how-to-write-a-bias-statement/>. We also encourage authors to engage with definitions of bias and other relevant concepts, such as prejudice, harm, and discrimination, from outside NLP, especially from the social sciences and normative ethics, in this statement and in their work in general.
Important Dates
April 28, 2021 (extended from April 26): Workshop Paper Due Date
May 28, 2021: Notification of Acceptance
June 7, 2021: Camera-ready papers due
August 5-6, 2021: Workshop Dates
Program Committee
Sasha Luccioni, MILA, Canada
Svetlana Kiritchenko, National Research Council Canada, Canada
Sharid Loáiciga, University of Gothenburg, Sweden
Kaiji Lu, Carnegie Mellon University, US
Marta Recasens, Google, US
Bonnie Webber, University of Edinburgh, UK
Ben Hachey, Harrison.ai, Australia
Mercedes García Martínez, Pangeanic, Spain
Sonja Schmer-Galunder, Smart Information Flow Technologies, US
Matthias Gallé, NAVER LABS Europe, France
Sverker Sikström, Lund University, Sweden
Dirk Hovy, Bocconi University, Italy
Carla Perez Almendros, Cardiff University, UK
Jenny Björklund, Uppsala University
Su Lin Blodgett, UMass Amherst
Will Radford, Canva, Australia
Organizers
Marta R. Costa-jussà, Universitat Politècnica de Catalunya, Barcelona
Hila Gonen, Amazon
Christian Hardmeier, IT University of Copenhagen/Uppsala University
Kellie Webster, Google AI Language, New York
Contact
Marta R. Costa-jussà: marta (dot) ruiz (at) upc (dot) edu