[Corpora-List] Multimodal Corpora 2018, LREC 2018 Workshop, 1st CfP

Patrizia Paggio paggio at hum.ku.dk
Wed Nov 15 16:55:39 CET 2017


********************************************************

WITH APOLOGIES FOR MULTIPLE POSTINGS

********************************************************

First Call for Papers

MULTIMODAL CORPORA 2018: Multimodal Data in the Online World

LREC 2018 Workshop 12 May 2018, Phoenix Seagaia Conference Center, Miyazaki, Japan

Introduction
============

The creation of a multimodal corpus involves the recording, annotation and analysis of several communication modalities such as speech, hand gesture, facial expression, body posture, gaze, etc. An increasing number of research areas have moved, or are in the process of moving, from focused single-modality research to full-fledged multimodality research, and multimodal corpora are becoming a core research asset and an opportunity for interdisciplinary exchange of ideas, concepts and data.

We are pleased to announce that in 2018, the 12th Workshop on Multimodal Corpora will once again be co-located with LREC.

This workshop follows similar events held at LREC 00, 02, 04, 06, 08, 10, ICMI 11, LREC 2012, IVA 2013, LREC 2014, and LREC 2016. The workshop series has established itself as one of the main events for researchers working with multimodal corpora, i.e. corpora involving the recording, annotation and analysis of several communication modalities such as speech, hand gesture, facial expression, body posture, gaze, etc.

Special theme and topics
========================

As always, we aim for a wide cross-section of the field of multimodal corpora, with contributions ranging from collection efforts, coding, validation, and analysis methods to tools and applications of multimodal corpora. Success stories of corpora that have provided insights into both applied and basic research are welcome, as are presentations of design discussions, methods and tools. This year, in line with one of the hot topics of the main conference, we would also like to pay special attention to multimodal corpora collected and adapted from data occurring online, rather than created specifically for particular research purposes.

In addition to this year’s special theme, other topics to be addressed include, but are not limited to:

· Multimodal corpus collection activities (e.g. direction-giving dialogues, emotional behaviour, human-avatar and human-robot interaction, etc.) and descriptions of existing multimodal resources
· Relations between modalities in human-human interaction and in human-computer or human-robot interaction
· Multimodal interaction in specific scenarios, e.g. group interaction in meetings or games
· Coding schemes for the annotation of multimodal corpora
· Evaluation and validation of multimodal annotations
· Methods, tools, and best practices for the acquisition, creation, management, access, distribution, and use of multimedia and multimodal corpora
· Interoperability between multimodal annotation tools (exchange formats, conversion tools, standardization)
· Collaborative coding
· Metadata descriptions of multimodal corpora
· Automatic annotation, based e.g. on motion capture or image processing, and its integration with manual annotations
· Corpus-based design of multimodal and multimedia systems, in particular systems that involve human-like modalities either in input (virtual reality, motion capture, etc.) or in output (virtual characters)
· Automated multimodal fusion and/or generation (e.g. coordinated speech, gaze, gesture, facial expressions)
· Machine learning applied to multimodal data
· Multimodal dialogue modelling

Programme
=========

The workshop will consist primarily of paper and poster presentations. In addition, we want to start discussing a shared task involving multimodal corpus development and/or use for predicting communication behaviour. Therefore, prior to the workshop, participants will be asked to submit ideas for such a shared task. The goal is for the task to be launched the next time the workshop is held.

There will also be one or two keynote speakers.

Important dates
===============

Deadline for paper submission: 12 January

Notification of acceptance: 9 February

Final version of accepted paper: 23 February

Final program and proceedings: 9 March

Workshop: 12 May

Submissions
===========

Submissions should be 4 pages long, must be in English, and should follow LREC’s submission guidelines.

Demonstrations of multimodal corpora and related tools are encouraged as well (a demonstration outline of 2 pages can be submitted).

Submissions should be made at the following address:

https://www.softconf.com/lrec2018/MMC2018/

Time schedule and registration fee
==================================

The workshop will consist of a morning session and an afternoon session.

Registration and fees are managed by LREC – see the LREC 2018 website (http://lrec2018.lrec-conf.org/).

Identify, Describe and Share your Language Resources (LRs)!
===========================================================

Describing your LRs in the LRE Map is now a normal practice in the submission procedure of LREC (introduced in 2010 and adopted by other conferences). To continue the efforts initiated at LREC 2014 about “Sharing LRs” (data, tools, web-services, etc.), authors will have the possibility, when submitting a paper, to upload LRs in a special LREC repository. This effort of sharing LRs, linked to the LRE Map for their description, may become a new “regular” feature for conferences in our field, thus contributing to creating a common repository where everyone can deposit and share data.

As scientific work requires accurate citations of referenced work so as to allow the community to understand the whole context and also replicate the experiments conducted by other researchers, LREC 2018 endorses the need to uniquely identify LRs through the use of the International Standard Language Resource Number (ISLRN, www.islrn.org), a Persistent Unique Identifier to be assigned to each Language Resource. The assignment of ISLRNs to LRs cited in LREC papers will be offered at submission time.

Organizing Committee
====================

Patrizia Paggio
Centre for Language Technology, Univ. of Copenhagen, Denmark
Institute of Linguistics and Language Technology, Univ. of Malta, Msida, Malta

Kirsten Bergmann
Cluster of Excellence in Cognitive Interaction Technology, Univ. Bielefeld, Germany
Institute of Cognitive Science, Univ. Osnabrück, Germany

Jens Edlund
KTH Speech, Music and Hearing, Stockholm, Sweden

Dirk Heylen
Univ. Twente, Human Media Interaction, Enschede, The Netherlands

Patrizia Paggio

Senior Researcher
Centre for Language Technology, University of Copenhagen
paggio at hum.ku.dk

Associate Professor
Institute of Linguistics and Language Technology, University of Malta
patrizia.paggio at um.edu.mt



