Date: December 13, 2020, Barcelona, Spain, in conjunction with COLING 2020
After a successful inaugural workshop in Florence, Italy, in 2019, the Second International Workshop on Designing Meaning Representations (DMR 2020) will be held on December 13, 2020, in Barcelona, Spain, in conjunction with COLING 2020.
Background: While deep learning methods have led to many breakthroughs in practical natural language applications, most notably in Machine Translation, Machine Reading, Question Answering, Recognizing Textual Entailment, and so on, there is still a sense among many NLP researchers that we have a long way to go before we can develop systems that actually “understand” human language and explain the decisions they make. Indeed, “understanding” natural language entails many different human-like capabilities, including, but not limited to, the ability to track entities in a text, understand the relations between these entities, track events and their participants, understand how events unfold in time, and distinguish events that have actually happened from events that are planned or intended, are uncertain, or did not happen at all. “Understanding” also entails the human-like ability to perform qualitative and quantitative reasoning based on knowledge acquired about the real world. We believe a critical step in achieving natural language understanding is to design meaning representations for text that have the necessary meaning “ingredients” to help us achieve these capabilities.
There has been a growing body of research in recent years devoted to the design, annotation, and parsing of meaning representations. The meaning representations used in semantic parsing research are developed with different linguistic perspectives and practical goals in mind and have different formal properties. Formal meaning representation frameworks such as Minimal Recursion Semantics (MRS) and Discourse Representation Theory (as exemplified in the Groningen Meaning Bank and the Parallel Meaning Bank) are developed with the goal of supporting logical inference in reasoning-based AI systems and are therefore easily translatable into first-order logic, requiring proper representation of semantic components such as quantification, negation, tense, and modality. Other frameworks, such as Abstract Meaning Representation (AMR), the Tectogrammatical Representation (TR) of the Prague Dependency Treebanks, and Universal Conceptual Cognitive Annotation (UCCA), put more emphasis on the representation of core predicate-argument structure, lexical semantic information such as semantic roles and word senses, or named entities and relations. The automatic parsing of natural language text into these meaning representations, and to a lesser degree the generation of natural language text from them, are also very active areas of research, and a wide range of technical approaches and learning methods have been applied to these problems. There have also been early attempts to use these meaning representations in natural language applications.
This workshop intends to bring together researchers who are producers and consumers of meaning representations so that, through their interaction, they can gain a deeper understanding of the key elements of meaning representations that are most valuable to the NLP community. The workshop will also provide an opportunity for meaning representation researchers to critically examine existing frameworks with the goal of using their findings to inform the design of next-generation meaning representations. A third goal of the workshop is to explore opportunities and identify challenges in the design and use of meaning representations in multilingual settings. A final goal is to understand the relationship between distributed meaning representations, trained on large data sets using neural network models, and the symbolic meaning representations that are carefully designed and annotated by CL researchers, and to gain a deeper understanding of the areas where each type of meaning representation is most effective and how the two can be linked.
Solicitation: We solicit papers that address one or more of the following topics:

* Design and annotation of meaning representations;
* Cross-framework comparison of meaning representations;
* Challenges in the automatic parsing of meaning representations;
* Challenges in automatically generating text from meaning representations;
* Using meaning representations in real-world natural language applications such as Information Extraction, Question Answering, Text Summarization, etc.;
* Issues in applying meaning representations to a diverse set of languages, accommodating typological generalizations;
* Issues in developing meaning representations for low-resource and under-resourced languages;
* The relationship between symbolic meaning representations and distributed semantic representations;
* Formal properties of meaning representations;
* Critical criteria for the evaluation of meaning representations, such as cross-lingual applicability, annotation consistency, or parsing accuracy;
* Any other topics that address the design, processing, and use of meaning representations.
Confirmed Invited Speakers: Daniel Gildea, Lori Levin, Mark Steedman
Important Dates:

* Workshop papers due: August 24, 2020
* Notification of acceptance: October 7, 2020
* Camera-ready papers due: November 1, 2020
* Workshop date: December 13, 2020
Submissions should report original and unpublished research on topics of interest to the workshop. Accepted papers are expected to be presented at the workshop and will be published in the workshop proceedings. They should emphasize obtained results rather than intended work, and should indicate clearly the state of completion of the reported results.
A paper accepted for presentation at the workshop must not have been presented, and must not be scheduled for presentation, at any other meeting with publicly available proceedings. Submission is electronic, using the Softconf START conference management system. The submission site can be found on the workshop website (http://www.cs.brandeis.edu/~clp/dmr2020).
Long/short paper submissions must use the official templates (which can be found here: https://coling2020.org/pages/submission). Long papers must not exceed nine (9) pages of content. Short papers and demonstration papers must not exceed five (5) pages of content. References do not count against these limits.
Note: The supplementary material does not count towards the page limit and should not be included in the paper, but should be submitted separately using the appropriate field on the submission website. All submissions must be in PDF format and must conform to the official style guidelines, which are contained in the template files.
Reviewing of papers will be double-blind. The paper must therefore not include the authors' names and affiliations, or self-references that reveal the authors' identity; e.g., "We previously showed (Smith, 1991) ..." should be replaced with citations such as "Smith (1991) previously showed ...". Papers that do not conform to these requirements will be rejected without review.
Authors of papers that have been or will be submitted to other meetings or publications must provide this information to the workshop organizers (dmr2020-chairs at googlegroups.com). Authors of accepted papers must notify the program chairs within 10 days of acceptance if the paper is withdrawn for any reason.
Co-Organizers:

* Nianwen Xue, Brandeis University
* Johan Bos, University of Groningen
* William Croft, University of New Mexico
* Jan Hajič, Charles University
* Chu-Ren Huang, The Hong Kong Polytechnic University
* Stephan Oepen, University of Oslo
* Martha Palmer, University of Colorado
* James Pustejovsky, Brandeis University