[Corpora-List] Final CFP: COLING 2nd Workshop on Gender Bias in Natural Language Processing

Marta Ruiz Costa-Jussa marta.ruiz at upc.edu
Sun Aug 23 12:59:56 CEST 2020


COLING 2nd Workshop on Gender Bias in Natural Language Processing

http://genderbiasnlp.talp.cat

13th December 2020, Barcelona

Gender and other demographic biases in machine-learned models are of increasing interest to the scientific community and industry. Models of natural language are highly affected by such biases, which are present in widely used products and can lead to poor user experiences. There is a growing body of research into improved representations of gender in NLP models. Key approaches are to build and use balanced training and evaluation datasets (e.g. Reddy & Knight, 2016; Webster et al., 2018; Madaan et al., 2018) and to change the learning algorithms themselves (e.g. Bolukbasi et al., 2016; Chiappa et al., 2018). While these approaches show promising results, much work remains to address both the bias issues identified so far and those yet to be uncovered. To make progress as a field, we need to create widespread awareness of bias and reach a consensus on how to work against it, for instance by developing standard tasks and metrics. Our workshop provides a forum to achieve this goal.

Topics of interest

We invite submissions of technical work exploring the detection, measurement, and mediation of gender bias in NLP models and applications. Other important topics include the creation of datasets exploring demographics, metrics to identify and assess relevant biases, and fairness in NLP systems. Finally, the workshop is also open to non-technical work addressing sociological perspectives, and we strongly encourage critical reflections on the sources and implications of bias in all types of work.

Paper Submission Information

Submissions will be accepted as short papers (4-6 pages) and as long papers (8-10 pages), plus additional pages for references, following the ACL 2020 guidelines. Supplementary material may be included. Blind submission is required.

This year, we introduce the requirement that papers include a bias statement, which explicitly defines (a) what system behaviours are considered as bias in the work and (b) why those behaviours are harmful, in what ways, and to whom (cf. Blodgett et al. (2020) <https://arxiv.org/abs/2005.14050>). We encourage authors to engage with definitions of bias and other relevant concepts such as prejudice, harm, and discrimination from outside NLP, especially from the social sciences and normative ethics, both in this statement and in their work in general.

Please refer to this link <https://genderbiasnlp.talp.cat/gebnlp2020/how-to-write-a-bias-statement/> for guidance on how to write a bias statement.

Paper submission link: https://www.softconf.com/coling2020/GeBNLP/

Important dates

Aug 4. Anonymity period begins

Sep 4. Deadline for submission

Oct 9. Notification of acceptance

Nov 1. Camera-ready submission

Keynote speakers

Natalie Schluter, IT University of Copenhagen, Denmark

Dirk Hovy, Bocconi University, Italy

Programme Committee

Svetlana Kiritchenko, National Research Council of Canada, Canada

Kai-Wei Chang, University of California, Los Angeles, US

Sharid Loáiciga, University of Gothenburg, Sweden

Zhengxian Gong, Soochow University, China

Marta Recasens, Google, US

Bonnie Webber, University of Edinburgh, UK

Ben Hachey, Harrison.ai, Australia

Mercedes García Martínez, Pangeanic, Spain

Sonja Schmer-Galunder, Smart Information Flow Technologies, US

Matthias Gallé, NAVER LABS Europe, France

Sverker Sikström, Lund University, Sweden

Dorna Behdadi, University of Gothenburg, Sweden

Steve Wilson, University of Edinburgh, UK

Kathleen Siminyu, Artificial Intelligence for Development – Africa Network

Dirk Hovy, Bocconi University, Italy

Carla Pérez Almendros, Cardiff University, UK

Jenny Björklund, Uppsala University, Sweden

Organizers

Marta R. Costa-jussà, Universitat Politècnica de Catalunya, Barcelona

Christian Hardmeier, Uppsala University

Kellie Webster, Google AI Language, New York

Will Radford, Canva, Sydney

Contact persons

Marta R. Costa-jussà: marta (dot) ruiz (at) upc (dot) edu


