Final Call for Papers
Wordnets in the Deep Learning Era 2022 Workshop
Date: Friday June 24, 2022
Venue: Palais du Pharo, Marseille, France
*Deadline extension: 18 April 2022*
Submission page: https://www.softconf.com/lrec2022/Wordnets/
============== Call for Papers
In recent years, the NLP community has developed powerful new deep learning techniques and large multilingual pre-trained language models that are revolutionizing the approach to most NLP tasks. Just a short time ago, nobody could have predicted the recent breakthroughs that have produced systems able to deal with unseen tasks (Wei et al. 2021 <https://arxiv.org/abs/2109.01652>; Sanh et al. 2021 <https://arxiv.org/abs/2110.08207>; Min et al. 2021 <https://arxiv.org/abs/2110.15943>).
An NLP task that can greatly benefit from this approach is the construction of large-scale lexical knowledge bases such as wordnets, which is very time-consuming and requires large research groups and long periods of development (Miller 1995; Fellbaum 1998; Gonzalez-Agirre et al. 2012; Bond and Paik 2012).
Lately, several new approaches have been devised toward automating this development. For instance, Watset (Ustalov et al. 2017 <http://www.aclweb.org/anthology/P17-1145>) has been used for the automatic induction of English and Russian synsets. Noraset et al. (2017 <https://dl.acm.org/doi/10.5555/3298023.3298042>) and Gadetsky et al. (2018 <https://arxiv.org/abs/1806.10090>) propose different systems for automatically providing definitions of words in their context. Sainz and Rigau (2020 <https://adimen.si.ehu.es/~rigau/publications/gwc21-sr.pdf>) infer, without training, the domain label of a particular definition. Qi et al. (2020 <https://www.aclweb.org/anthology/2020.emnlp-demos.23/>) propose a reverse dictionary system that returns words semantically matching the input definitions. Feng et al. (2021 <https://aclanthology.org/2021.inlg-1.21/>) address the concept-to-text generation task. Barba et al. (2021 <https://www.ijcai.org/proceedings/2021/0520.pdf>) generate usage examples for a given set of words with their definitions. Chen et al. (2021 <https://arxiv.org/abs/2010.12813>) automatically construct taxonomies from pretrained language models.
On the other hand, since constructing benchmarks that test the abilities of modern natural language understanding models is difficult, large-scale knowledge bases are used to generate lexical semantic, world knowledge and common sense probes (Ma et al. 2021 <https://arxiv.org/abs/2011.03863>). For instance, Richardson and Sabharwal (2020 <https://arxiv.org/abs/1912.13337>) use links in WordNet to generate question-answer pairs to evaluate language models. Aspillaga et al. (2021 <https://openreview.net/forum?id=ghKbryXRRAB>) define a probing classifier based on concept relatedness according to WordNet.
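As a toy illustration of this probe-generation idea (hand-picked data, not the cited authors' code or the WordNet API), hypernym ("is-a") links in a small wordnet-style network can be turned into cloze-style statements for testing a masked language model:

```python
# A minimal, self-contained sketch: converting is-a links from a toy
# WordNet-like fragment into cloze probes. The network below is invented
# for illustration only.
TOY_HYPERNYMS = {  # child concept -> its hypernyms
    "dog": ["canine", "domestic animal"],
    "canine": ["carnivore"],
    "sparrow": ["bird"],
}

def hypernym_probes(network):
    """Yield (cloze statement, expected answer) pairs from is-a links."""
    for child, hypernyms in network.items():
        for hyper in hypernyms:
            yield (f"A {child} is a kind of [MASK].", hyper)

probes = list(hypernym_probes(TOY_HYPERNYMS))
for statement, answer in probes:
    print(statement, "->", answer)
```

Each probe can then be scored by asking the model to fill the [MASK] slot and checking whether the expected hypernym ranks highly.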
Additionally, it is worth investigating possible opportunities to leverage both structured and unstructured information sources (Lauscher et al. 2020 <https://aclanthology.org/2020.coling-main.118/>; Colon-Hernandez et al. 2021 <https://arxiv.org/abs/2101.12294>; Lu et al. 2021 <https://arxiv.org/abs/2109.04223>). For instance, Peters et al. (2019 <https://arxiv.org/abs/1909.04164>) enhance contextual representations with structured, human-curated knowledge.
In this workshop we wish to look at how large language models can productively interact with existing semantic networks. We also welcome approaches that use language models for existing tasks, such as word sense disambiguation, or that use semantic networks to augment language models.

Topics of Interest
We invite submissions with original contributions addressing all topics related to the productive interaction between large pre-trained language models and large semantic networks. Areas of interest include, but are not limited to, the following:
- Building and enriching monolingual, multilingual and cross-lingual
lexical knowledge bases, semantic networks and wordnets using deep learning
techniques and large pre-trained language models.
- Exploiting lexical knowledge bases, semantic networks and wordnets for
creating world knowledge and common sense probes for testing large
pre-trained language models.
- Using lexical knowledge bases, semantic networks and wordnets for
creating prompts for zero-shot or few-shot or transfer learning NLP tasks.
- Leveraging lexical knowledge bases, semantic networks and wordnets and
large pre-trained language models towards natural language understanding.
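As a sketch of the prompt-creation topic above (hypothetical sense glosses, no real language-model call), wordnet-style glosses can be formatted into a zero-shot word sense disambiguation prompt:

```python
# Toy wordnet-style sense inventory for "bank"; the sense keys and glosses
# are invented for illustration.
GLOSSES = {
    "bank%1": "a financial institution that accepts deposits",
    "bank%2": "sloping land beside a body of water",
}

def build_prompt(sentence, target, glosses):
    """Build a zero-shot WSD prompt listing each candidate sense with its gloss."""
    lines = [f'In "{sentence}", the word "{target}" means:']
    for i, (sense, gloss) in enumerate(sorted(glosses.items()), 1):
        lines.append(f"  ({i}) {gloss} [{sense}]")
    lines.append("Answer with the number of the correct sense.")
    return "\n".join(lines)

print(build_prompt("She sat on the river bank.", "bank", GLOSSES))
```

The resulting string would be passed to a pre-trained language model, whose answer selects a synset without any task-specific training.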
Submission & Publication
We accept research papers addressing wordnets and deep learning techniques. Authors must declare whether any part of the paper contains material previously published elsewhere.
We accept the following types of papers:
- Research papers.
- Research posters (work-in-progress, projects in early stage of
development or description of new resources or methods).
Papers should be written in English, and both types are allowed a maximum of 8 pages, excluding references. The programme committee reserves the right to decide whether a paper submitted as a research paper is better suited to a poster presentation.
Accepted papers will be published in online proceedings.
Papers must strictly comply with the LREC stylesheet (https://lrec2022.lrec-conf.org/en/submission2022/authors-kit/) and be submitted in unprotected PDF format.
Submission page: https://www.softconf.com/lrec2022/Wordnets/
Each submission will be reviewed by three programme committee members. In compliance with the LREC rules, papers must *not* be anonymized.

Important dates
- Paper submission deadline (extended): 18 April 2022
- Notification of acceptance: 3 May 2022
- Camera-ready paper: 23 May 2022
- Workshop date: 24 June 2022
*TBA*

Organizing Committee
Javier Alvez (UPV/EHU)
Begoña Altuna (HiTZ, UPV/EHU)
Francis Bond (NTU)
Bolette Pedersen (U Copenhagen)
Alexandre Rademaker (IBM Research and FGV/EMAP)
German Rigau (HiTZ, UPV/EHU)
Piek Vossen (VU)
To contact the organizers, please email Javier Alvez (name.surname[at]ehu.eus) or Begoña Altuna (name.surname[at]ehu.eus) using Subject: [WDLE 2022].

Programme Committee
Rodrigo Agerri (HiTZ, UPV/EHU)
Eneko Agirre (HiTZ, UPV/EHU)
Montse Cuadros (Vicomtech)
Filip Ilievski (ISI, USC)
Itziar Gonzalez-Dios (HiTZ, UPV/EHU)
Michael Goodman (LivePerson)
Egoitz Laparra (University of Arizona)
Luis Morgado da Costa (Palacky University Olomouc)
Maciej Piasecki (WUT)
Roberto Navigli (Sapienza University)
Didier Schwab (Grenoble)
Kiril Simov (BulTreeBank)
Aitor Soroa (HiTZ, UPV/EHU)
Pia Sommerauer (VU)

Identify, Describe and Share your LRs!
- Describing your LRs in the LRE Map is now a normal practice in the
submission procedure of LREC (introduced in 2010 and adopted by other
conferences). To continue the efforts initiated at LREC 2014 about “Sharing
LRs” (data, tools, web-services, etc.), authors will have the possibility,
when submitting a paper, to upload LRs in a special LREC repository. This
effort of sharing LRs, linked to the LRE Map for their description, may
become a new “regular” feature for conferences in our field, thus
   contributing to creating a common repository where everyone can deposit and
   share data.
- As scientific work requires accurate citations of referenced work so
as to allow the community to understand the whole context and also
replicate the experiments conducted by other researchers, LREC 2022
endorses the need to uniquely Identify LRs through the use of the
International Standard Language Resource Number (ISLRN, www.islrn.org),
a Persistent Unique Identifier to be assigned to each Language Resource.
The assignment of ISLRNs to LRs cited in LREC papers will be offered at
submission time.