Neural networks have rapidly become a central component in NLP systems over the last few years. The improvements in accuracy and performance brought by the introduction of neural networks have typically come at the cost of our understanding of the system: how do we assess which representations and computations the network has learned? The goal of this workshop is to bring together people who are attempting to peek inside the neural network black box, taking inspiration from machine learning, psychology, linguistics, and neuroscience. The topics of the workshop include, but are not limited to:
- Applying analysis techniques from neuroscience (such as Haxby et al., 2001; Kriegeskorte, 2008) to high-dimensional vector representations in artificial neural networks;
- Analyzing the network's response to strategically chosen inputs in order to infer the linguistic generalizations that the network has acquired (e.g., Linzen et al., 2016; Loula et al., 2018);
- Examining the performance of the network on simplified or formal languages (e.g., Hupkes et al., 2018; Lake et al., 2018);
- Proposing modifications to neural network architectures that make them more interpretable (e.g., Palangi et al., 2017);
- Scaling up neural network analysis techniques developed in the connectionist literature in the 1990s (Elman, 1991);
- Testing whether interpretable information can be decoded from intermediate representations (e.g., Adi et al., 2017; Chrupała et al., 2017; Hupkes et al., 2017);
- Translating insights on neural network interpretation from the vision domain (e.g., Zeiler & Fergus, 2014) to language;
- Explaining model predictions (e.g., Lei et al., 2016; Alvarez-Melis & Jaakkola, 2017): what are ways to explain specific decisions made by neural networks?
- Adversarial examples in NLP (e.g., Ebrahimi et al., 2018; Belinkov & Bisk, 2018): how can they be generated, and how can their quality be evaluated?
- Open-source tools for analyzing neural networks in NLP (e.g., Strobelt et al., 2018; Rikters, 2018);
- Evaluation of analysis results: how do we know that an analysis is valid?
BlackboxNLP 2019 is the second edition of the BlackboxNLP workshop. The programme and proceedings of the previous edition, held at EMNLP 2018, can be found here: https://blackboxnlp.github.io/2018/ .
We call for two types of papers:
- Archival papers. These are papers reporting on completed, original, and unpublished research, with a maximum length of 8 pages + references. Papers shorter than this maximum are also welcome. They should report on obtained results rather than intended work. Accepted papers are expected to be presented at the workshop and will be published in the workshop proceedings. These papers will undergo double-blind peer review and should therefore be anonymized.
- Extended abstracts. These may report on work in progress, or may be cross-submissions of work that has already appeared in a non-NLP venue. Extended abstracts have a maximum length of 2 pages + references. These submissions are non-archival, in order to allow later submission to another venue. The selection will not be based on double-blind review, so submissions of this type need not be anonymized.
Submission information:
- Submissions should follow the official ACL 2019 style guidelines.
- Submissions should be made through the Softconf START system: https://www.softconf.com/acl2019/blackboxnlp .
Important dates:
- April 19 - Submission deadline
- May 17 - Notification of acceptance
- June 3 - Camera-ready deadline
- August 1 - Workshop
Note: All deadlines are at 11:59PM UTC-12:00 ("anywhere on Earth").