[Corpora-List] Shared task on automatic identification of verbal multiword expressions – edition 1.1 - call for participation

Agata Savary agata.savary at univ-tours.fr
Wed Mar 7 13:37:47 CET 2018


**CALL FOR PARTICIPATION**

Shared task on automatic identification of verbal multiword expressions – edition 1.1

http://multiword.sourceforge.net/sharedtask2018

=======================================================================

*Apologies for cross-posting*


The second edition of the PARSEME (http://parseme.eu/) shared task on automatic identification of verbal multiword expressions (VMWEs) aims at identifying verbal MWEs in running texts. Verbal MWEs include, among others, idioms (*to let the cat out of the bag*), light verb constructions (*to make a decision*), verb-particle constructions (*to give up*), multi-verb constructions (*to make do*) and inherently reflexive verbs (*se suicider* 'to commit suicide' in French). Their identification is a well-known challenge for NLP applications due to their complex characteristics, including discontinuity, non-compositionality, heterogeneity and syntactic variability.

The shared task is highly multilingual: PARSEME members have elaborated annotation guidelines based on annotation experiments in about 20 languages from several language families.  These guidelines take both universal and language-specific phenomena into account. We hope that this will boost the development of language-independent and cross-lingual VMWE identification systems.

Participation

-------------

Participation is open and free worldwide.

We ask potential participant teams to register using the expression of interest form:

https://docs.google.com/forms/d/e/1FAIpQLSd6L8IntkNKXbMp8QVLLvCYzzhoH-_8ovSW0DL3BtYGNnsFhA/viewform?c=0&w=1

Task updates and questions will be posted to our public mailing list:

http://groups.google.com/group/verbalmwe

More details on the annotated corpora can be found here:

https://typo.uni-konstanz.de/parseme/index.php/2-general/202-parseme-shared-task-on-automatic-identification-of-verbal-mwes-edition-1-1

The annotation guidelines used in manual annotation of the training and test sets are available here:

http://parsemefr.lif.univ-mrs.fr/parseme-st-guidelines/1.1

Publication and workshop

------------------------

Shared task participants will be invited to submit a system description paper to a special track of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018) at COLING 2018, to be held on August 25-26, 2018, in Santa Fe, New Mexico, USA: http://multiword.sourceforge.net/lawmwecxg2018

Submitted system description papers must follow the workshop submission instructions and will go through double-blind peer reviewing by other participants and selected LAW-MWE-CxG-2018 program committee members. Acceptance depends on the quality of the paper rather than on the results obtained in the shared task. Authors of accepted papers will present their work as posters/demos in a dedicated session of the workshop, co-located with COLING 2018. Submitting a system description paper is not mandatory.

Provided data

-------------

For each language, we will provide participants with corpora in which VMWEs are annotated according to the universal guidelines:

* Manually annotated **training corpora** made available to the participants in advance, in order to allow them to train their systems.

* Manually annotated **development corpora** also made available in advance so as to tune/optimize the systems' parameters.

* Raw (unannotated) **test corpora** to be used as input to the systems during the evaluation phase. The VMWE annotations in these corpora will be kept secret.

The training and test sets of edition 1.0 of the shared task exemplify the type of data and annotations (with a slightly different set of VMWE categories) that we will provide, and are available at: http://hdl.handle.net/11372/LRT-2282

When available, morphosyntactic data (parts of speech, lemmas, morphological features and/or syntactic dependencies) will also be provided. Depending on the language, this information will come from treebanks (e.g., Universal Dependencies) or from automatic parsers trained on treebanks (e.g., UDPipe).
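For readers curious about what working with such data might look like: the corpora released in edition 1.0 are column-based text files in which each token carries a tag linking it to a (possibly discontinuous) VMWE. The sketch below reads a simplified, hypothetical three-column format (token id, form, MWE tag); the exact column layout and tag inventory are assumptions here, not the official specification, which is documented with the released data.

```python
# Hypothetical sketch: collecting VMWE token spans from a simplified
# tab-separated annotation format, loosely modelled on the shared
# task's column-based corpus files. The column layout (id, form, tag)
# and the tag convention are illustrative assumptions.

def read_vmwes(lines):
    """Group token indices by VMWE id within one sentence."""
    mwes = {}
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue  # skip blank lines and comments
        tok_id, form, mwe = line.rstrip("\n").split("\t")
        if mwe != "_":
            # A tag like "1:LVC" opens VMWE 1 with its category;
            # a bare "1" continues it, allowing discontinuity.
            code = mwe.split(":")[0]
            mwes.setdefault(code, []).append(int(tok_id))
    return mwes

sentence = [
    "1\tHe\t_",
    "2\tmade\t1:LVC",
    "3\ta\t_",
    "4\tquick\t_",
    "5\tdecision\t1",
]
print(read_vmwes(sentence))  # {'1': [2, 5]}
```

Note how the light verb construction *made ... decision* is discontinuous: the grouping by VMWE id is what lets a system recover the full expression across intervening tokens.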

We are currently preparing corpora for the following languages: Arabic, Basque, Bulgarian, Croatian, English, Farsi, French, German, Greek, Hebrew, Hindi, Hungarian, Italian, Lithuanian, Polish, Brazilian Portuguese, Romanian, Slovene, Spanish, Turkish.

The amount of annotated data will depend on the language, and the list of covered languages may vary until the release of the training corpora.

Tracks

------

System results can be submitted in two tracks:

* **Closed track**: Systems using only the provided training data - VMWE annotations + morpho-syntactic data (if any) - to learn VMWE identification models and/or rules.

* **Open track**: Systems that may use the provided training data, plus any additional resources deemed useful (MWE lexicons, symbolic grammars, wordnets, raw corpora, word embeddings, parsers, language models trained on external data, etc.). This track notably includes purely symbolic and rule-based systems.

Teams submitting systems in the open track will be requested to describe and provide references to all resources used at submission time. Teams are encouraged to favor freely available resources for better reproducibility of their results.

Evaluation metrics

------------------

Participants will provide the output produced by their systems on the test corpus. This output will be compared with the gold standard (ground truth).

Evaluation metrics are precision, recall and F1, both strict (MWE-based) and fuzzy (token-based, that is, taking partial matches into account).

Specific metrics will be used for VMWEs not occurring in the training dataset, variants of VMWEs and discontinuous VMWEs.
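To make the two metric variants concrete, the sketch below computes them over VMWEs represented as sets of token indices. The function names and details are our own illustration; the official evaluation script released with the trial data is authoritative and may differ.

```python
# Illustrative sketch of the two metric variants: strict (per-MWE
# exact match) and fuzzy (per-token partial credit). Names and
# representation are assumptions for this example only.

def prf(tp, n_pred, n_gold):
    """Precision, recall and F1 from counts."""
    p = tp / n_pred if n_pred else 0.0
    r = tp / n_gold if n_gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def strict_scores(gold, pred):
    """MWE-based: a predicted VMWE counts only on an exact token-set match."""
    g = {frozenset(m) for m in gold}
    p = {frozenset(m) for m in pred}
    return prf(len(g & p), len(p), len(g))

def fuzzy_scores(gold, pred):
    """Token-based: every correctly identified VMWE token counts."""
    g = {t for m in gold for t in m}
    p = {t for m in pred for t in m}
    return prf(len(g & p), len(p), len(g))

gold = [[2, 5], [7, 8]]   # gold VMWEs as token-index lists
pred = [[2, 5], [7]]      # one exact match, one partial match

print(strict_scores(gold, pred))  # (0.5, 0.5, 0.5)
print(fuzzy_scores(gold, pred))   # (1.0, 0.75, ~0.857)
```

The example shows why both variants matter: a system that finds only part of a discontinuous VMWE gets no credit under the strict metric but partial credit under the fuzzy one.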

Important dates

---------------

March 21, 2018: shared task trial data and evaluation script released

April 4, 2018: shared task training data released

April 30, 2018: shared task blind test data released

May 4, 2018: submission of system results

May 11, 2018: announcement of results

May 25, 2018: submission of system description papers

June 20, 2018: notification

June 30, 2018: camera-ready papers

August 25-26, 2018: shared task workshop co-located with LAW-MWE-CxG-2018

Organizing team

---------------

Silvio Ricardo Cordeiro, Carlos Ramisch, Agata Savary, Veronika Vincze

Contact: parseme-st-core at nlp.ipipan.waw.pl <mailto:parseme-st-core at nlp.ipipan.waw.pl>



