[Corpora-List] Deadline Extended: Call for Task Proposals: SemEval-2017: International Workshop on Semantic Evaluations

David Jurgens jurgens at stanford.edu
Thu Mar 31 20:18:47 CEST 2016


The deadline for task proposals has been extended to April 8, 2016.

SemEval-2017: International Workshop on Semantic Evaluations

Call for Task Proposals

We invite proposals for tasks to be run as part of SemEval-2017. SemEval (Semantic Evaluation) is an ongoing series of evaluations of computational semantic analysis systems, organized under the umbrella of SIGLEX, the Special Interest Group on the Lexicon of the Association for Computational Linguistics.

The SemEval evaluations explore the nature of meaning in natural languages in practical terms, by providing an emergent mechanism to identify the problems (e.g., how to characterize meaning and what is necessary to compute it) and to explore the strengths of possible solutions by means of standardized evaluation on shared datasets. SemEval evaluations initially focused on identifying word senses computationally, but have since grown to investigate the interrelationships among the elements in a sentence (e.g., semantic relations, semantic parsing, semantic role labeling), relations between sentences (e.g., coreference), and author attitudes (e.g., sentiment analysis), among other research directions.

For SemEval-2017, we welcome any task that can test an automatic system for semantic analysis of text, be it application-dependent or application-independent. We especially welcome tasks for different languages, cross-lingual tasks, tasks requiring semantic interpretation, and tasks with both intrinsic and application-based evaluation. See the websites of previous editions of SemEval to get an idea about the range of tasks explored, e.g., for SemEval-2016: http://alt.qcri.org/semeval2016/

We strongly encourage proposals based on pilot studies that have already generated initial data, which can provide concrete examples and highlight the challenges of preparing the full task. In the event of receiving many proposals, preference will be given to tasks that have already run a pilot study for the proposed task.

We encourage the following aspects in task design:

Application-oriented tasks We welcome tasks devoted to developing novel applications of computational semantics. As an analogy, the TREC Question-Answering (QA) track was devoted solely to building QA systems to compete with current IR systems. Similarly, we encourage tasks with a clearly defined end-user application that showcase and enhance our understanding of computational semantics, as well as extend the current state of the art.

Umbrella tasks In order to reduce fragmentation of similar tasks and increase community effort towards solving the underlying research problems, we encourage task organizers to propose larger tasks that include several related subtasks. For example, a Semantic Similarity umbrella task might include subtasks for different kinds of similarity and different languages. Similarly, a Sentiment Analysis umbrella task might include subtasks for Twitter, Product Reviews, and Service Reviews. We also welcome task proposals for umbrella tasks focusing on different aspects of the same phenomena. For example, an Attitude Inference task might have subtasks for detecting an author’s emotional state, the sentiment of their writing, and the writing’s objectivity. In addition, the program committee will actively encourage task organizers proposing similar tasks to combine their efforts into larger umbrella tasks.

Task Selection

Task proposals will be reviewed by experts, and the reviews will serve as the basis for acceptance decisions. Where choices must be made, innovative new tasks will be given preference over task reruns. In case of very similar task proposals, the selection committee will propose task mergers. If no consensus can be reached, the task with the better reviews will be given preference. If a proposal leaves important questions open, the task may be accepted conditionally and later dropped if satisfactory answers are not provided in time.

Task Organization

Task organizers are expected to provide format checkers and standard scorers to task participants. Moreover, in order to lower the barriers to participation, we encourage task organizers to provide baseline systems that participants can use as a starting point. A baseline system typically contains code that reads the data, creates a baseline response (e.g., random guessing, majority-class prediction, etc.), and outputs the evaluation results; a minimal sketch follows below. Whenever possible, baseline systems should be written in widely used programming languages and/or should be implemented as components for standard NLP pipelines such as UIMA or GATE.
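As an illustration only (not part of the call), the following is a minimal sketch of such a baseline in Python. The tab-separated file format (id, text, label), the file names, and the accuracy metric are hypothetical placeholders; an actual task would substitute its own data format, metric, and official scorer.

# Hypothetical majority-class baseline: reads TSV data (id<TAB>text<TAB>label),
# predicts the most frequent training label for every test instance, and
# reports accuracy against the gold test labels.
import sys
from collections import Counter

def read_tsv(path):
    """Read instances as (id, text, label) triples from a tab-separated file."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.rstrip("\n").split("\t")) for line in f if line.strip()]

def main(train_path, test_path):
    train = read_tsv(train_path)
    test = read_tsv(test_path)

    # Baseline response: always predict the majority label from the training data.
    majority_label = Counter(label for _, _, label in train).most_common(1)[0][0]
    predictions = {inst_id: majority_label for inst_id, _, _ in test}

    # Evaluation: simple accuracy against the gold test labels.
    gold = {inst_id: label for inst_id, _, label in test}
    correct = sum(predictions[i] == gold[i] for i in gold)
    print(f"Majority label: {majority_label}")
    print(f"Accuracy: {correct / len(gold):.3f}")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])

Invoked as, e.g., "python majority_baseline.py train.tsv test.tsv"; in practice the baseline would write its predictions in the task's submission format and the official scorer would compute the task metric.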

New Tasks vs. Task Reruns

We welcome both new tasks and task reruns. For a new task, a major concern to be addressed in the proposal is whether it will be able to attract participants. For task reruns, the organizers should justify in their proposal the need for another iteration of the task, e.g., because a new form of evaluation is needed (e.g., a new metric to test new phenomena, a new application-oriented scenario, etc.), because new types of data should be tested (e.g., social media, domain-specific corpora), or because the task has expanded significantly in scale over a previous trial run, etc.

In the case of a rerun, we further discourage carrying over the same subtasks year after year while simply adding new ones, as this can lead to the accumulation of too many subtasks. Evaluating on a different dataset with the same task formulation should typically not be considered a separate subtask.

IMPORTANT DATES

SemEval-2017

* 1st Call for task proposals March 3, 2016

* Task proposals due April 8, 2016 (extended from March 31, 2016)

* Reviewers assigned April 15, 2016 (was April 5, 2016)

* Task reviews due April 29, 2016 (was April 20, 2016)

* Task selection notification May 6, 2016 (was May 5, 2016)

* Tasks merged May 31, 2016

* Trial data ready July 1, 2016

* Training data ready September 1, 2016

* Test data ready December 1, 2016

* Evaluation start January 10, 2017

* Evaluation end January 31, 2017

* Paper submission due February 28, 2017 [TBC]

* Paper reviews due March 31, 2017 [TBC]

* Camera ready due April 30, 2017 [TBC]

* SemEval workshop Summer 2017

The SemEval-2017 Workshop will be co-located with a major NLP conference in 2017.

SUBMISSION DETAILS

Task proposals should be self-contained documents of roughly 4-8 pages. Each proposal should contain the following:

Overview

* A summary of the task in general

* Motivation for why this task is needed and which communities would be interested in participating.

* What the expected impact of the task will be

Data & Resources

* How the training/testing data will be built and/or procured

* What source texts/corpora will be used? Please discuss whether existing corpora will be reused.

* How much data is going to be produced

* How the quality of the data will be ensured and evaluated

* An example of what a data instance would look like

* The anticipated availability of the necessary resources to the participants (copyright, etc.)

* The resources required to prepare the task (computation and annotation time, costs of annotations, etc.) and their availability

Pilot Task

* Details of the pilot task, if any

* What lessons were learned and how these will impact the future task design

Evaluation

* The evaluation methodology to be used, including clear evaluation criteria

For Task Reruns

* Justification for why a new iteration of the task is needed, using the criteria discussed above

* What will differ from the previous instance

* The expected impact of the re-run compared with the previous instance

Task Organizers

* Names, affiliations, brief description of research interests and relevant experience, contact information (email).

Proposals will be reviewed by an independent group of area experts who may not be familiar with recent SemEval tasks; therefore, all proposals should be written in a self-explanatory manner and contain sufficient examples.

Please submit proposals by email in PDF format to the SemEval address: semeval-organizers at googlegroups.com

In case you are not sure whether a task is suitable for SemEval, please feel free to get in touch to discuss your idea.

CHAIRS

Steven Bethard, University of Alabama at Birmingham
Daniel Cer, Google
Marine Carpuat, University of Maryland
David Jurgens, Stanford University

The SemEval DISCUSSION GROUP

Please join our discussion group at semeval3 at googlegroups.com in order to receive announcements and participate in discussions.

The SemEval-2017 Website: http://alt.qcri.org/semeval2017/


