[Corpora-List] Any multimodal annotation standards of conversations (multiauthored text)? ...

Tech Monk tekmonk2005 at yahoo.com
Fri Jan 18 23:19:18 CET 2019


 I have been looking for annotation standards relating to multiauthored texts.

 As part of the communicative back and forth that happens when people talk (Alice says "A", Bob says "B", ...), there should be a way to identify each of the participants, when they said what (a timestamp), and which previous comments or participants they are responding to.
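 In the absence of a standard, the structure described above could be sketched, purely for illustration, as a minimal data model (all field names here are my own assumptions, not taken from any existing annotation scheme):

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical, minimal model of one turn in a multiauthored
# conversation: who spoke, when, and which earlier turn (if any)
# the utterance responds to.
@dataclass
class Turn:
    turn_id: str                      # unique identifier for this utterance
    speaker: str                      # participant who produced the turn
    timestamp: float                  # seconds from the start of the exchange
    text: str                         # the utterance itself
    replies_to: Optional[str] = None  # turn_id of the turn being addressed

# A toy two-turn exchange: Alice says "A", Bob answers "B".
conversation: List[Turn] = [
    Turn("t1", "Alice", 0.0, "A"),
    Turn("t2", "Bob", 1.5, "B", replies_to="t1"),
]

# Resolve who Bob is responding to by following the replies_to link.
by_id = {t.turn_id: t for t in conversation}
addressee = by_id[conversation[1].replies_to].speaker
print(addressee)  # Alice
```

 Any real standard would of course need more than this (overlapping speech, prosody tiers, gesture tiers), but the point is that authorship and addressing form their own annotation layer.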

 By its very nature, communication is multiauthored one way or another. It may happen:

 * as roles and personas (in literature)
 * in citations (in research papers)
 * relatively spontaneously between participants who have communicated before (socially)
 * relatively asymmetrically (between trainers and trainees)
 * asymmetrically (during job interviews)
 * mostly topically (relating to a specific domain)
 ...

 Some research emphasizes prosody and even gestural articulation, but apparently the authorship side of it is not covered.

 There is talk of an RFC for conversational and pedagogical corpora, but I couldn't find annotation schemes for those kinds of corpora.

// __ Multimodal Annotation of Conversational Data. P. Blache, R. Bertrand, B. Bigi, E. Bruno, E. Cela, R. Espesser, G. Ferré, M. Guardiola, D. Hirst, E.-P. Magro, et al.

 https://hal.archives-ouvertes.fr/hal-01720424/document

// __ OTIM - Tools for Multimodal Information Processing

 http://www.lpl-aix.fr/~otim/

// __ Tools for Multimodal Information Processing

 http://www.lpl-aix.fr/~otim/documents/OTIM-anr.pdf

 lbrtchx


