[Corpora-List] Second CFP: Workshop on Language Resources for Responsible AI at LREC-2020

Svetlana Kiritchenko svkir06 at gmail.com
Mon Jan 27 17:16:50 CET 2020


*Workshop on Language Resources for Responsible AI* held in conjunction with the 12th Conference on Language Resources and Evaluation (LREC-2020)

*May 12, 2020, Marseille, France*

*Website:* https://sites.google.com/view/LR4Responsible-AI-2020

This one-day workshop will provide a forum to present and discuss research on the creation and use of language resources and tools specifically designed to ensure the ethical behavior of Artificial Intelligence (AI) systems.

Traditionally, AI systems have been developed to maximize accuracy on benchmark tasks and datasets. However, when these systems are deployed in the real world, ethical considerations need to be taken into account in order to build users’ trust and to make sure that the systems do not cause any harm to individuals or society. With the emergence of societal awareness about the need for responsible AI, new regulations and standards are being released, such as the GDPR enforced by the European Union (2018), China’s Cybersecurity Law and the General Principles of the Civil Law (2017), and Canadian national standards for the ethical design and use of Artificial Intelligence (2019). However, the technology in its current state lacks the tools that AI developers need to comply with these regulations. There is an urgent need for tools that can help:

· Researchers - to investigate how ethical considerations should be taken into account while designing AI systems;
· Companies - to ensure their products meet ethical requirements, to apply ethics-by-design frameworks, and to earn the trust of their clients;
· End users - to be able to understand and to challenge automatic decisions when necessary;
· Policy makers and governments - to be able to audit and scrutinize AI systems for compliance with policies and regulations.

*Topics of Interest*

We invite papers describing original research on design, creation, and use of language resources (annotated and unlabeled corpora, lexicons, dictionaries, templates, language representations, evaluation metrics, etc.) and tools to address any of the issues in responsible AI, including (but not limited to):

- Fairness and unintended biases
- Confidentiality and privacy
- Interpretability and explainability
- Safety and security
- Transparency
- Accountability
- Integrity

The language resources and tools can be designed for any one or several NLP (or non-NLP) applications, including (but not limited to):

- Syntax parsing and tagging
- Lexical semantics
- Language representation
- Discourse analysis
- Information retrieval
- Information extraction
- Natural language generation
- Textual inference
- Speech processing
- Dialogue systems
- Argument mining
- Sentiment and emotion analysis
- Machine translation
- Question answering
- Summarization
- Social media analysis
- Computational social science
- Health and wellness applications
- Auditing in highly regulated fields, such as medical, financial, and legal

*Paper Submission*

We solicit original papers that describe language resources, evaluation metrics, and tools designed to assist in developing and assessing ethical AI systems. We also welcome papers highlighting ethics-related problems in existing, widely used language resources (e.g., labeled datasets, word embeddings). We invite regular papers describing completed projects, emerging-research papers presenting ongoing work, and position papers arguing an opinion on one of the topics of interest.

Papers can be up to 8 pages long (plus unlimited pages for references) and should be formatted according to the LREC style guidelines <https://lrec2020.lrec-conf.org/en/submission2020/submission-guidelines/>. The review process will be double-blind, so please do not include the authors’ names and affiliations in the submission. Submissions will be reviewed by at least two members of the Program Committee. Accepted papers will be invited for an oral (or poster) presentation at the workshop and will be published in the workshop proceedings on the LREC website. At least one author of each accepted paper must attend the workshop to present it.

Submissions to multiple venues are allowed, but papers must be withdrawn from other venues if accepted by the workshop.

Papers must be submitted electronically through the START system <https://www.softconf.com/lrec2020/LR4Responsible-AI/user/>.

*Identify, Describe and Share your LRs*

Describing your LRs in the LRE Map is now a normal practice in the LREC submission procedure (introduced in 2010 and adopted by other conferences). To continue the efforts on “Sharing LRs” (data, tools, web services, etc.) initiated at LREC 2014, authors will have the possibility, when submitting a paper, to upload LRs to a special LREC repository. This effort of sharing LRs, linked to the LRE Map for their description, may become a new “regular” feature of conferences in our field, thus contributing to a common repository where everyone can deposit and share data.

Scientific work requires accurate citations of referenced work so that the community can understand the whole context and replicate the experiments conducted by other researchers. LREC 2020 therefore endorses the need to uniquely identify LRs through the International Standard Language Resource Number (ISLRN, www.islrn.org), a persistent unique identifier assigned to each Language Resource. The assignment of ISLRNs to LRs cited in LREC papers will be offered at submission time.

*Important Dates:*

Paper submission deadline: *Feb. 19, 2020*
Notification of acceptance: *Mar. 11, 2020*
Camera-ready paper deadline: *Apr. 2, 2020*
Workshop: *May 12, 2020*

All deadlines are 11:59 pm UTC-12h (“anywhere on Earth”).

*Confirmed Invited Speaker*

Emily M. Bender, University of Washington

*Program Committee*

Alberto Bugarín Diz, Universidade de Santiago de Compostela
Marta Ruiz Costa-jussà, Universitat Politècnica de Catalunya
Sepideh Ghanavati, University of Maine
Randy Goebel, University of Alberta
Christian Hardmeier, Uppsala University
Graeme Hirst, University of Toronto
Stan Matwin, Dalhousie University
Saif Mohammad, National Research Council Canada
Noman Mohammed, University of Manitoba
Cataldo Musto, University of Bari
Malvina Nissim, University of Groningen
Vicente Ordóñez Román, University of Virginia
Viviana Patti, Università di Torino
Ehud Reiter, University of Aberdeen
Maarten Sap, University of Washington
Sameer Singh, University of California, Irvine
Patricia Thaine, University of Toronto
Lyle Ungar, University of Pennsylvania
Rob Voigt, Stanford University
Steve Wilson, University of Edinburgh
Jürgen Ziegler, University of Duisburg-Essen

*Organizing Committee*

Svetlana Kiritchenko, National Research Council Canada
Isar Nejadgholi, National Research Council Canada

*Contact Information*

Website: https://sites.google.com/view/LR4Responsible-AI-2020
Email: LR4Responsible-AI at googlegroups.com


