[Corpora-List] SEMEVAL 2022 Task 3: Last call for participation

Roberto Zamparelli roberto.zamparelli at unitn.it
Mon Jan 10 22:06:26 CET 2022


Abbreviated Title: PreTENS
Location: Online
Contact: Roberto Zamparelli
Contact Email: semeval2022-task3-organizers at googlegroups.com
Website: https://sites.google.com/view/semeval2022-pretens/
Submission Deadline: Monday, 20 January 2022

Apologies for cross-posting.

This shared task focuses on the ability of neural networks to detect the semantic deviance triggered by the failure of presuppositions on the taxonomic status (subset-superset) of two linguistic arguments.

============================================
CALL FOR PARTICIPATION

SemEval 2022 Task 3 Presupposed Taxonomies: Evaluating Neural Network Semantics (PreTENS)

Task Page: https://sites.google.com/view/semeval2022-pretens/

This SemEval 2022 task aims to encourage the development of general methods for detecting semantically infelicitous sentences.

All participating teams will be invited to submit a task description paper to the proceedings, published by ACL.

Motivation
================================================

A growing body of literature in computational linguistics has tried to probe the metalinguistic abilities of modern language models, including the ability to recognize linguistic structures that are deviant at the syntactic and/or semantic level (e.g. ??"Who does speaking to bothers Anna", ??"I like cats, and in particular hamsters"). This can be used to probe the cognitive plausibility of modern NLP models, but it can also find application in detecting writing/reasoning errors subtler than those current grammar checkers can catch. We focus on a case of purely semantic deviance that requires the ability to recognize lexical relationships between words and a capacity for generalization.

Task Overview
================================================

SemEval 2022 Task 3 will comprise datasets in three languages: English, Italian and French. The French and Italian datasets are slightly adapted, randomly ordered translations of the English one. Each dataset will contain 20,394 artificially generated sentences exemplifying constructions that enforce presuppositions on the taxonomic status of their arguments A and B (i.e. whether A denotes a subset of B or vice versa). Some constructions require their arguments not to stand in a taxonomic relation (e.g. comparatives: "I like A more than B", cf. ??"I like trees more than oaks"); others require a taxonomic relation in a specific order, e.g. exemplifications ("I like A, and in particular B") or generalizations ("I like A, and B in general"); yet others may be ambiguous with respect to their taxonomic requirements.

The argument nouns A and B are taken from 30 semantic categories (e.g. dogs, birds, mammals, cars, motorcycles, cutlery, clothes, trees, plastics...).

Participants are free to choose the subset of sub-tasks or settings they wish to participate in (see the sections detailing each sub-task). The evaluation score will be averaged over the three languages, with a score of 0 assigned to any language not submitted.
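The averaging rule above can be sketched in a few lines of Python (a minimal illustration; the `final_score` helper and the language keys are hypothetical, not part of the official scorer):

```python
def final_score(scores):
    """Average a team's per-language scores over English, Italian and French.
    A language with no submission counts as 0, per the task rules."""
    langs = ("en", "it", "fr")
    return sum(scores.get(lang, 0.0) for lang in langs) / len(langs)

# A hypothetical team submitting only English and Italian:
# final_score({"en": 0.90, "it": 0.85}) averages 0.90, 0.85 and 0.0
```

Submitting even a weak system for a third language therefore beats not submitting one at all, since a missing language contributes 0 to the average.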

This task consists of two sub-tasks.

Subtask A
------------------------------------------------

A binary classification task aimed at determining whether a sentence contains the correct taxonomic configuration. All sentences for this sub-task will be provided with an acceptability label, as in the following examples:

I like trees, and in particular birches 1

I like oaks, and in particular trees 0

The labels (1 = acceptable, 0 = unacceptable) are derived from the theoretical semantic analysis of the various constructions.

For this binary classification sub-task, evaluation will report Precision, Recall and F-score; the final ranking will be based on F-score.
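For illustration, the Subtask A metrics can be computed as follows. This is a self-contained sketch, not the official scorer, which may differ (e.g. in how it averages over classes); the gold/prediction vectors are invented:

```python
def precision_recall_f1(gold, pred, positive=1):
    """Precision, Recall and F-score for the positive class,
    computed from parallel lists of gold and predicted labels."""
    tp = sum(1 for g, p in zip(gold, pred) if p == positive and g == positive)
    fp = sum(1 for g, p in zip(gold, pred) if p == positive and g != positive)
    fn = sum(1 for g, p in zip(gold, pred) if p != positive and g == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 4 gold labels vs. 4 system predictions.
gold = [1, 0, 1, 1]
pred = [1, 1, 1, 0]
p, r, f = precision_recall_f1(gold, pred)  # all three equal 2/3 here
```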

Subtask B
------------------------------------------------

A set of 1,533 sentences (mostly a subset of the whole dataset), corresponding to about 5% of the total and representative of the patterns considered, was judged by human annotators in a crowdsourcing campaign on a seven-point Likert scale, from 1 (not at all acceptable) to 7 (completely acceptable). For this sub-task, sentences will be provided with the average judgment they received, which may be affected by plausibility considerations, argument order and other factors. Examples of data for this condition are:

I like politicians, an interesting type of farmer 1.42

I like governors, an interesting type of politician 6.16

For this sub-task, the evaluation metric will be Spearman's rank correlation coefficient between the task participants' scores and the test set scores.
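Spearman's coefficient is the Pearson correlation of the two rank vectors. The pure-Python sketch below illustrates the metric (the `spearman` and `average_ranks` helpers are hypothetical; in practice one would use a library implementation such as `scipy.stats.spearmanr`). Ties receive the average of the ranks they span:

```python
def average_ranks(values):
    """1-based ranks; tied values get the average of their shared ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors.
    Assumes neither input is constant (which would give zero variance)."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it compares ranks rather than raw values, the metric rewards systems that order sentences by acceptability correctly, even if their absolute scores are on a different scale than the 1-7 human judgments.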

The two sub-tasks are independent. Participants can decide to take part in just one of them, though we encourage participation in both.

Important Dates
================================================

Training data available: September 3, 2021

Evaluation starts: January 15, 2022

Evaluation ends: January 20, 2022

Paper submissions due: (TBC) February 23, 2022

Notification to authors: March 31, 2022

Organization:

Dominique Brunato - Institute for Computational Linguistics "A. Zampolli" (CNR), Pisa, Italy
Cristiano Chesi - University School for Advanced Studies (IUSS), Pavia, Italy
Shammur Absar Chowdhury - Qatar Computing Research Institute, HBKU, Qatar
Felice Dell'Orletta - Institute for Computational Linguistics "A. Zampolli" (CNR), Pisa, Italy
Simonetta Montemagni - Institute for Computational Linguistics "A. Zampolli" (CNR), Pisa, Italy
Giulia Venturi - Institute for Computational Linguistics "A. Zampolli" (CNR), Pisa, Italy
Roberto Zamparelli - CIMEC - Mind/Brain Center - University of Trento, Italy

================================================

For more information, see: https://sites.google.com/view/semeval2022-pretens/home-page/task-description?authuser=0


