[Corpora-List] Call for participation: SemEval-2022 shared task on Structured Sentiment Analysis

Andrey Kutuzov akutuzov72 at gmail.com
Tue Sep 21 15:26:28 CEST 2021


We are inviting you to participate in SemEval-2022 Task 10: Structured Sentiment Analysis. The training and development data are now available.

Submission interface: https://competitions.codalab.org/competitions/33556
Github repo: https://github.com/jerbarnes/semeval22_structured_sentiment
The repository contains the datasets, baselines, and other useful information.
Mailing list (Google group) for the task: structured-sent-participants at googlegroups.com
You can also follow us on Twitter: https://twitter.com/structured_sent

Affective computing is a fundamental step towards enabling human-computer interaction, as human communication is filled with affective content which conveys a speaker's private state, i.e., their current mood, emotional state, or sentiment towards a certain object of conversation. Along with emotion detection, sentiment analysis is an important stepping stone towards this goal. On a more practical level, being able to automatically determine what people think about an idea, product, or policy is of interest to companies, governments, and private citizens.

We argue that the division of fine-grained sentiment analysis into various sub-tasks (aspect-based sentiment, targeted sentiment, end2end sentiment, sentiment target extraction) has become counter-productive, as it is often unclear whether improvements in a sub-task (e.g., extracting targets) lead to improvements in the overall task. Additionally, explicitly predicting all sentiment elements increases the transparency and interpretability of sentiment models. This is beneficial for error analysis, as well as for explaining biases that models learn, since we can more easily inspect whether certain targets are correlated with a certain polarity.

We propose the task of Structured Sentiment Analysis, in which one attempts to predict full sentiment graphs. Formally, the task is to extract all of the opinion tuples O = {O_1, ..., O_n} in a text. Each opinion O_i is a tuple (h, t, e, p), where h is a holder who expresses a polarity p towards a target t through a sentiment expression e, implicitly defining the relationships between the elements of a sentiment graph.
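To make the tuple structure concrete, here is a minimal Python sketch of one way such an opinion could be represented in code. The class, field names, and example are purely illustrative assumptions; the actual annotation format is documented in the Github repository.

from dataclasses import dataclass
from typing import List, Optional, Tuple

# A span is given by character offsets into the sentence text.
Span = Tuple[int, int]

@dataclass
class Opinion:
    holder: Optional[List[Span]]   # h: who expresses the sentiment (may be implicit)
    target: Optional[List[Span]]   # t: what the sentiment is about (may be implicit)
    expression: List[Span]         # e: the sentiment expression itself
    polarity: str                  # p: e.g. "Positive", "Negative", "Neutral"

# "The staff were wonderful" -- the holder is the (implicit) author of the text.
example = Opinion(
    holder=None,
    target=[(4, 9)],        # "staff"
    expression=[(15, 24)],  # "wonderful"
    polarity="Positive",
)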

Shared task schedule:

- Training data ready: September 3, 2021
- Evaluation data ready: December 3, 2021
- Evaluation start: January 10, 2022
- Evaluation end: by January 31, 2022 (latest date; task organizers may choose an earlier date)
- Paper submissions due: roughly February 23, 2022
- Notification to authors: March 31, 2022

Subtasks:

The shared task has monolingual and crosslingual subtasks. Teams are free to participate in one or both; the two subtasks will be evaluated separately. In both cases, the evaluation is based on Sentiment Graph F1. This metric counts a true positive as an exact match at graph level, weighting the overlap between predicted and gold spans for each element and averaging across all three spans. For precision, the number of correctly predicted tokens is divided by the total number of predicted tokens (for recall, by the number of gold tokens), allowing for the empty holders and targets that exist in the gold standard. The leaderboard will show results for each dataset, as well as the average over all datasets. The winning submission will be the one with the highest average Sentiment Graph F1.
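As a rough illustration of this weighted-overlap idea, the following Python sketch shows how such a precision could be computed over token sets. This is a simplified, illustrative version only: the graph representation (dicts with token-index sets) and the best-match strategy are assumptions, and the official evaluation script in the Github repository is authoritative.

def span_weight(pred_tokens: set, gold_tokens: set) -> float:
    """Fraction of predicted tokens that also occur in the gold span.
    Empty holders/targets (which exist in the gold standard) count as a
    full match when both sides are empty."""
    if not pred_tokens:
        return 1.0 if not gold_tokens else 0.0
    return len(pred_tokens & gold_tokens) / len(pred_tokens)

def weighted_precision(pred_graphs: list, gold_graphs: list) -> float:
    """Each predicted graph contributes the overlap (averaged over holder,
    target, and expression) with its best-matching gold graph of the same
    polarity."""
    if not pred_graphs:
        return 0.0
    total = 0.0
    for pred in pred_graphs:
        best = 0.0
        for gold in gold_graphs:
            if pred["polarity"] != gold["polarity"]:
                continue
            overlap = (span_weight(pred["holder"], gold["holder"])
                       + span_weight(pred["target"], gold["target"])
                       + span_weight(pred["expression"], gold["expression"])) / 3
            best = max(best, overlap)
        total += best
    return total / len(pred_graphs)

# Recall is computed symmetrically, dividing by gold tokens and iterating over
# the gold graphs; Sentiment Graph F1 is the harmonic mean of the two.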

Task organizers

- Jeremy Barnes, University of Oslo
- Andrey Kutuzov, University of Oslo
- Jan Buchman, TU Darmstadt
- Laura Ana Maria Oberländer, University of Stuttgart
- Enrica Troiano, University of Stuttgart
- Rodrigo Agerri, University of the Basque Country UPV/EHU
- Lilja Øvrelid, University of Oslo
- Erik Velldal, University of Oslo
- Stephan Oepen, University of Oslo

In case of any questions concerning the datasets or code for the shared task, we encourage you to raise an issue in our Github repository: https://github.com/jerbarnes/semeval22_structured_sentiment/issues

--
Solve et coagula!
Andrey
