[Corpora-List] 2nd Call for Participation and new data: SEMEVAL 2015 Task 2 -- Semantic Textual Similarity

Eneko Agirre e.agirre at ehu.es
Thu Oct 16 22:40:03 CEST 2014


[Apologies for duplication]

2nd Call for Participation SEMEVAL 2015 Task 2 Semantic Textual Similarity

*NEW*: training data for the pilot on interpretable STS, including chunk-level alignments. The task definition has been updated.

Semantic textual similarity (STS) has received an increasing amount of attention in recent years, culminating in the SemEval/*SEM tasks organized in 2012, 2013 and 2014, which brought together more than 60 participating teams. Please check http://ixa2.si.ehu.es/stswiki/ for details on previous tasks.

Given two sentences of text, s1 and s2, the systems participating in this task should compute how similar s1 and s2 are, returning a similarity score and an optional confidence score. Annotations and system outputs use a scale from 0 (no relation) to 5 (semantic equivalence), indicating the similarity between the two sentences. Participating systems will be evaluated with the metric traditionally employed in STS evaluations, also used in previous SemEval/*SEM STS tasks: the mean Pearson correlation between the system output and the gold-standard annotations.
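The evaluation metric above can be sketched in a few lines. The gold and system scores below are made-up illustrative values, not task data; the function is a plain implementation of Pearson's r as computed within one dataset (the official score then averages r across datasets):

```python
import math

def pearson(xs, ys):
    # Pearson correlation: r = cov(x, y) / (std(x) * std(y))
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical gold annotations and system scores on the 0-5 scale:
gold = [5.0, 3.2, 0.0, 4.1, 1.5]
system = [4.8, 2.9, 0.4, 3.7, 1.9]
print(pearson(gold, system))  # close to 1.0: the system tracks the gold ranking well
```

A system is rewarded for reproducing the relative ordering and spread of the gold scores, not their absolute values, which is why Pearson correlation (rather than, say, mean squared error) has been the standard STS metric.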

In 2015 we will continue to evaluate STS systems on the following subtasks:

- *NEW* for 2015: a *pilot subtask on interpretable STS*. With this pilot we want to explore whether STS systems can explain WHY they think two sentences are related or unrelated, adding an explanatory layer to the similarity score. As a first step in this direction, participating systems will need to *align the segments* in one sentence of the pair to the segments in the other sentence, describing what kind of *relation* exists between each pair of segments. This pilot subtask provides specific training data.

- *English STS*, with sentence pairs extracted from encyclopedic content and newswire.

- *Spanish STS*, with sentence pairs extracted from encyclopedic content and newswire, and text snippet pairs obtained from news headlines.
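To make the pilot subtask concrete, here is a rough sketch of what a chunk-level alignment with relation labels might look like. The chunking, the relation label, and the data layout are illustrative assumptions, not the official annotation format (see the task webpage and training data for that); the sentence pair is the classic score-5 example from earlier STS guidelines:

```python
# Illustrative (NOT the official format) chunk-level alignment between
# two sentences, with a relation label attached to each aligned pair.
sentence1 = ["The bird", "is bathing", "in the sink"]
sentence2 = ["Birdie", "is washing itself", "in the water basin"]

# Each triple links a chunk index in sentence1 to a chunk index in
# sentence2, plus a hypothetical label naming the relation between them.
alignments = [
    (0, 0, "EQUIVALENT"),  # "The bird"    ~ "Birdie"
    (1, 1, "EQUIVALENT"),  # "is bathing"  ~ "is washing itself"
    (2, 2, "EQUIVALENT"),  # "in the sink" ~ "in the water basin"
]

for i, j, rel in alignments:
    print(f"{sentence1[i]!r} <-{rel}-> {sentence2[j]!r}")
```

The point of the pilot is that such alignments justify the sentence-level score: a pair scored 5 should decompose into equivalent chunks, while lower scores should surface unaligned or only loosely related segments.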

Please join the mailing list for updates at http://groups.google.com/group/STS-semeval. Check out the task's webpage at http://alt.qcri.org/semeval2015/task2/ for more details.

Important dates:

Evaluation start: December 5, 2014 [updated due to clash with NAACL-2015 deadline]

Evaluation end: December 20, 2014 [updated due to clash with NAACL-2015 deadline]

Paper submission due: January 30, 2015

Paper reviews due: February 28, 2015

Camera ready due: March 30, 2015

SemEval workshop: Summer 2015

Organizers:

* Coordination: Eneko Agirre, Carmen Banea, Mona Diab, Montse Maritxalar
* STS English: Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, and German Rigau
* STS Spanish: Carmen Banea, Claire Cardie, Rada Mihalcea, and Janyce Wiebe
* STS pilot on interpretability and segment alignment: Eneko Agirre, Aitor Gonzalez-Agirre, Iñigo Lopez-Gazpio, Montse Maritxalar and German Rigau

References:

Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, Janyce Wiebe. SemEval-2014 Task 10: Multilingual Semantic Textual Similarity. Proceedings of SemEval 2014. [pdf <http://anthology.aclweb.org/S/S14/S14-2010.pdf>]

Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo. *SEM 2013 shared task: Semantic Textual Similarity. Proceedings of *SEM 2013. [pdf <http://aclweb.org/anthology//S/S13/S13-1004.pdf>]

Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre. SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity. Proceedings of SemEval 2012. [pdf <http://aclweb.org/anthology-new/S/S12/S12-1051.pdf>]

