[Corpora-List] SemEval 2019: Call for Task Proposals

Ekaterina Shutova katia at icsi.berkeley.edu
Tue Feb 6 14:20:44 CET 2018

*SemEval-2019: International Workshop on Semantic Evaluations*

*Call for Task Proposals*

We invite proposals for tasks to be run as part of SemEval-2019. SemEval (Semantic Evaluation) is an ongoing series of evaluations of computational semantic analysis systems, organized under the umbrella of SIGLEX, the Special Interest Group on the Lexicon of the Association for Computational Linguistics.

The SemEval evaluations explore the nature of meaning in natural languages in practical terms, by providing an emergent mechanism to identify problems (e.g., how to characterize meaning and what is necessary to compute it) and to explore the strengths of possible solutions by means of standardized evaluation on shared datasets. SemEval evaluations initially focused on identifying word senses computationally, but have since grown to investigate the interrelationships among the elements of a sentence (e.g., semantic relations, semantic parsing, semantic role labeling), relations between sentences (e.g., coreference), and author attitudes (e.g., sentiment analysis), among other research directions.

For SemEval-2019, we welcome any task that can test an automatic system for semantic analysis of text, be it application-dependent or application-independent. We especially welcome tasks for different languages, cross-lingual tasks, tasks requiring semantic interpretation, and tasks with both intrinsic and application-based evaluation. See the websites of previous editions of SemEval for an idea of the range of tasks explored, e.g., SemEval-2018: http://alt.qcri.org/semeval2018/

We strongly encourage proposals based on pilot studies that have already generated initial data, as these can provide concrete examples and discuss the challenges of preparing the full task.
Should we receive many proposals, preference will be given to tasks that have already run a pilot study. We especially welcome tasks devoted to developing novel applications of computational semantics, and will encourage tasks that have a clearly defined end-user application, showcasing and enhancing our understanding of computational semantics as well as extending the current state of the art.

*Task Selection*

Task proposals will be reviewed by experts, and the reviews will serve as the basis for acceptance decisions. All else being equal, innovative new tasks will be given preference over task reruns. Task proposals will be evaluated on:

- Novelty: Is the task a compelling new problem that has not been explored much by the community? If the task is a rerun, does it cover substantially new ground (new subtasks, new types of data, new languages, etc.)?
- Interest: Is the proposed task likely to attract a sufficient number of participants?
- Data: Are the plans for collecting data convincing? Will the resulting data be of high quality? Will the data annotation be ready on time?
- Evaluation: Is the evaluation methodology sound? Is the necessary infrastructure available, or can it be built in time for the shared task?
- Impact: What is the expected impact of the task's data on future research beyond the SemEval workshop?

*New Tasks vs. Task Reruns*

We welcome both new tasks and task reruns. For a new task, the proposal should address whether it will be able to attract participants. Preference will be given to novel tasks that have not yet received much attention. For task reruns, the organizers should defend in their proposal the need for another iteration of the task.
Valid reasons for a rerun include: the need for a new form of evaluation (e.g., a new metric to test new phenomena, or a new application-oriented scenario), the need to test on new types of data (e.g., social media or domain-specific corpora), or a significant expansion in scale over a previous trial run of the task. For reruns, we further discourage carrying over the same tasks and simply adding new subtasks, as this can lead to the accumulation of too many subtasks; evaluating on a different dataset with the same task formulation should typically not be considered a separate subtask.

Tasks that have already run for three years will not be accepted for SemEval-2019. If, however, the organizers believe there is a need for another iteration of their task, they are welcome to submit a rerun proposal for SemEval-2020 (the submission calendar will be announced in February 2020). Solid justification for the rerun will be needed, highlighting its novel aspects compared to previous editions with respect to the criteria discussed above.

*Task Organization*

We welcome people who have never organized a SemEval task before, as well as those who have. Apart from providing trial, training, and test data, task organizers are expected to:

- provide format checkers and standard scorers to task participants;
- provide baseline systems that participants can use as a starting point (in order to lower the obstacles to participation). A baseline system typically contains code that reads the data, produces a baseline response (e.g., random guessing or majority-class prediction), and outputs the evaluation results.
  Whenever possible, baseline systems should be written in widely used programming languages and/or implemented as components of standard NLP pipelines such as UIMA or GATE;
- create a website and mailing group for the task and post all relevant information there;
- create a CodaLab competition for the task and upload the evaluation script;
- manage submissions on CodaLab;
- write a task description paper to be included in the SemEval proceedings;
- manage participants' system description submissions to the task, and possibly shepherd papers that need additional help with the writing;
- review one or two other task description papers.

*Important dates*

Task proposals due: March 21, 2018
Task selection notification: May 4, 2018

*Preliminary dates for SemEval-2019*

Trial data ready: July 31, 2018
Training data ready: September 4, 2018
Test data ready: December 3, 2018
Evaluation start: January 10, 2019
Evaluation end: January 31, 2019
Paper submission due: February 28, 2019
Notification to authors: April 6, 2019
Camera-ready due: April 30, 2019
SemEval workshop: Summer 2019

Tasks that fail to keep up with crucial deadlines, such as the dates for having the task and CodaLab websites up and the dates for uploading trial, training, and test data, may be cancelled at the discretion of the SemEval organizers. While consideration will be given to extenuating circumstances, our goal is to provide sufficient time for participants to develop strong and well-thought-out systems. Cancelled tasks will be encouraged to submit proposals for the subsequent year's SemEval.

The SemEval-2019 workshop will be co-located with a major NLP conference in 2019.

*Submission Details*

Task proposals should be self-contained documents of roughly 4-8 pages. References do not count against the page limit.
Each proposal should contain the following:

- Overview
  -- A summary of the task in general
  -- Motivation: why this task is needed and which communities would be interested in participating
  -- The expected impact of the task
- Data & Resources
  -- How the training/testing data will be built and/or procured
  -- What source texts/corpora will be used, and whether existing corpora are being re-used
  -- How much data will be produced
  -- How the quality of the data will be ensured and evaluated
  -- An example of what a data instance would look like
  -- The anticipated availability of the necessary resources to participants (copyright, etc.)
  -- The resources required to prepare the task (computation and annotation time, cost of annotations, etc.) and their availability
- Pilot Task
  -- Details of the pilot task, if any
  -- What lessons were learned and how they will affect the final task design
- Evaluation
  -- The evaluation methodology to be used, including clear evaluation criteria
- For Task Reruns
  -- Justification for why a new iteration of the task is needed, based on the criteria discussed above
  -- What will differ from the previous instance
  -- The expected impact of the rerun compared with the previous instance
- Task Organizers
  -- Names, affiliations, a brief description of research interests and relevant experience, and contact information (email)
  -- The names of any SemEval tasks you have run in the past, along with the years they were run

Proposals will be reviewed by an independent group of area experts who may not be familiar with recent SemEval tasks; all proposals should therefore be written in a self-explanatory manner and contain sufficient examples.

Submission will be electronic, in PDF format, through the START conference management system at: https://www.softconf.com/naacl2018/SemEval-2019-TaskProposals
Please use the SemEval 2019 Task Proposal Submission page. In case you are not
sure whether a task is suitable for SemEval, please feel free to get in touch with the SemEval organizers at semeval-organizers at googlegroups.com to discuss your idea.

*Chairs*

Jonathan May, ISI, University of Southern California
Ekaterina Shutova, University of Cambridge
Marianna Apidianaki, LIMSI-CNRS & University of Pennsylvania
Saif M. Mohammad, National Research Council Canada

*The SemEval discussion group*

Please join our discussion group at semeval3 at googlegroups.com in order to receive announcements and participate in discussions.

The SemEval-2019 website: http://alt.qcri.org/semeval2019/

Contact: semeval-organizers at googlegroups.com
