[Corpora-List] Call for Participation SemEval 2016 Task 1: Semantic Textual Similarity (STS)

Daniel Cer cer at google.com
Fri Sep 11 20:04:15 CEST 2015

Call for Participation

SemEval 2016 Task 1

Semantic Textual Similarity (STS)

Semantic Textual Similarity (STS) measures the degree of equivalence in the underlying semantics of paired snippets of text. While making such an assessment is trivial for humans, constructing algorithms and computational models that mimic human-level performance is a difficult and deep natural language understanding (NLU) problem.

To stimulate research in this area and encourage the development of creative new approaches to modeling sentence level semantics, the STS shared task has been held annually since 2012, as part of the SemEval/*SEM family of workshops. Each year the competition brings together numerous participating teams, diverse approaches, and ongoing improvements to state-of-the-art methods.

Task Definition


Given two sentences, participating systems are asked to return a continuous-valued similarity score on a scale from 0 to 5, where 0 indicates that the semantics of the sentences are completely independent and 5 signifies semantic equivalence. Performance is assessed by computing the Pearson correlation between machine-assigned semantic similarity scores and human judgements.
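The official metric is straightforward to reproduce. Below is a minimal sketch of the Pearson evaluation in plain Python; the function name and the gold/system score values are hypothetical illustrations on the 0-5 scale, not part of any task release:

```python
import math

def pearson(gold, pred):
    """Pearson correlation between gold human judgements and system scores."""
    n = len(gold)
    mean_g = sum(gold) / n
    mean_p = sum(pred) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((g - mean_g) * (p - mean_p) for g, p in zip(gold, pred))
    sd_g = math.sqrt(sum((g - mean_g) ** 2 for g in gold))
    sd_p = math.sqrt(sum((p - mean_p) ** 2 for p in pred))
    return cov / (sd_g * sd_p)

# Hypothetical gold judgements and system outputs for four sentence pairs.
gold = [5.0, 3.2, 0.0, 4.1]
pred = [4.8, 2.9, 0.5, 3.7]
print(f"{pearson(gold, pred):.4f}")
```

A correlation near 1.0, as here, means the system ranks and spaces pairs much as the human annotators did; the absolute scores need not match exactly, since Pearson correlation is invariant to linear rescaling.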

STS 2016 offers two tracks: STS Core and Cross-lingual STS. The former is a traditional STS task with paired monolingual sentences drawn from English data sources. The latter involves assessing paired English and Spanish sentences.

2016 Data


[STS Core]

This year's shared task includes five distinct evaluation sets:

* News Article Headlines

* Plagiarism Detection (new)

* Post-Edited Machine Translations (new)

* Q&A Answer-Answer

* Q&A Question-Question (new)

New this year: We are explicitly allowing and encouraging participants to train unsupervised and/or semi-supervised models on the evaluation data sources. See the task website for details.

[Cross-lingual STS]

We are excited to announce a new pilot subtask on computing the semantic similarity of cross-lingual English-Spanish sentence pairs. The evaluation data will be drawn from two sources: (i) parallel and comparable literary works in English and Spanish; (ii) data from the same English sources as the monolingual STS Core track, but with one sentence in each pair manually translated into Spanish.




[Registration]

To register, please complete the following form: https://docs.google.com/forms/d/1HTRtP7B94gqdW5YuRfRh5pEBhukuRIh5hXR1nOEib90/viewform?usp=send_form

[Website and trial data]

For more details, including trial data, see the STS SemEval 2016 Task 1 webpage at: http://alt.qcri.org/semeval2016/task1/

[Mailing List]

Join the mailing list for task updates and discussion at http://groups.google.com/group/STS-semeval.

Important dates


Evaluation start: January 10, 2016

Evaluation end: January 31, 2016

Paper submission due: February 28, 2016

Paper reviews due: March 31, 2016

Camera ready due: April 30, 2016

SemEval workshop: Summer 2016



[Organizers]

[STS Core]

Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab,

Aitor Gonzalez-Agirre, Weiwei Guo

[Cross-lingual STS]

Carmen Banea, Daniel Cer, Rada Mihalcea, Janyce Wiebe



[Prior STS task papers]
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria and Janyce Wiebe. SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and Pilot on Interpretability. Proceedings of SemEval 2015.

Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau and Janyce Wiebe. SemEval-2014 Task 10: Multilingual Semantic Textual Similarity. Proceedings of SemEval 2014.

Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre and Weiwei Guo. *SEM 2013 shared task: Semantic Textual Similarity. Proceedings of *SEM 2013.

Eneko Agirre, Daniel Cer, Mona Diab and Aitor Gonzalez-Agirre. SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity. Proceedings of SemEval 2012.
