HCI 2022 Session: "Semantic, artificial and computational interaction studies: Towards a behavioromics of multimodal communication"
Manual gestures, facial expressions, head movements, shrugs, laughter, body orientation, speech, pauses: they all contribute to constituting what is called "multimodal interaction". Aiming at interfaces that are natural for humans, the field of HCI paid attention to this social fact early on. Multimodal interaction is also a vital topic in Conversation Analysis and the Cognitive Sciences, and it is beginning to percolate into theoretical linguistics and (formal) semantics. Simultaneously, due to the digital turn, work on multimodal communication is being expanded by data analytics, that is, statistical means of describing the form of communication. However, while these fields jointly investigate a common empirical domain, there is little exchange between them. This session aims to bring these branches together. Potential goals are to delineate experimental studies, computational methods, resource building, and exploration in order to integrate symbolic, statistical, laboratory, field, and corpus-based approaches - a joint methodological endeavor that might be called "behavioromics". A focus of the 2022 edition of the session on behavioromics is the issue of representation:
- How can means of multimodal interaction be represented symbolically?
- How can types and tokens of multimodal interaction be distinguished (e.g., multimodal gestures that are realized in more than one channel, such as the coupling of facial expressions and body postures)?
- How can such entities be recognized in a model-based manner, on the basis of symbolic, heuristic discovery procedures?
- How, alternatively or in combination with machine learning methods, can they be recognized in a data-based manner?
- How do we get from low-level tracking data to symbolic representations?
- What are the limitations of data analytics or of symbolic approaches to multimodal communication?
Besides this focal area, the session is open to topics such as the following:
- phenomena under discussion: past, present, future
- dialogue semantics and dialogue systems
- big gesture data
- networked multimodal interaction data
- multimodal ensembles and networking of ensembles
- creation and exploitation of multimodal corpora
- avatars as experimental setting
- cross-modal tracking
- data-based multimodal analysis
- detecting multimodal gestalts
- automatic annotation
- representation schemes for multimodal communication
We want to emphasize that conceptual contributions are highly welcome!
The conference session aims to provide a platform that brings together semanticists, computer scientists, and researchers from related fields who deal with multimodal interaction. We all work on virtually the same topic, albeit from different angles, yet there are way too few opportunities to get in touch. Exchange, and seeing what others are doing, is crucial for approaching the methodological, empirical, and theoretical challenges outlined above.
*NEXT STEP: December 15, 2021: submit a 1-2 page abstract through the CMS at https://cms.hci.international/2021.* Make sure to select the correct session!
Important dates:
- December 15, 2021: abstract upload
- January 2, 2022: notification of review outcome
- February 14, 2022: full paper
- June 26 – July 1, 2022: conference (virtual?)
Cornelia Ebert (https://www.linguistik-in-frankfurt.de/personal/cornelia-ebert/)
Andy Lücking (https://www.texttechnologylab.org/team/andy-luecking/; http://www.llf.cnrs.fr/en/Gens/L%C3%BCcking)
Alexander Mehler (https://www.texttechnologylab.org/team/alexander-mehler/)