1. "Joint Attention in Human-Agent-Interaction" in Psycholinguistics/Human-Agent-Interaction
People interact using speech as well as other non-verbal cues. Gaze, for instance, ubiquitously accompanies utterances in face-to-face interaction: it may provide additional referential information from the speaker, or it may reveal whether the listener has understood. Following a partner's gaze and interpreting their posture and gestures can not only enrich communication but may even be essential for its success and efficiency. Understanding and modeling the dynamic interplay of speech and non-verbal cues such as gaze, and how they combine to encode a particular message, is a complex enterprise. Virtual agents as interaction partners provide one way to approach this problem: the artificial partner introduces a precise, controllable, and yet dynamic component into the interaction with humans. Using such agents as test beds enables us to observe, model, and test complex multi-modal behaviors.
The proposed PhD project will develop interactive behaviors of a virtual character using state-of-the-art virtual agent software and modern eye- and motion-tracking systems. Besides this development component, the research is also empirical and will involve the design of user studies and data analysis, in order to tackle questions such as how joint attention, gestures, and facial expressions are employed and how they affect spoken content.
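To give a flavor of the development component, the following Python sketch shows one possible joint-attention behavior: the agent follows the user's gaze to an object once the user has dwelt on it long enough. All names, data structures, and the dwell threshold here are hypothetical; a real system would receive fixations from an eye-tracker and drive the agent's animation engine.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    target: str         # id of the object the fixation lands on, or "none"
    duration_ms: float  # dwell time of this fixation in milliseconds

# Assumed threshold: how long the user must dwell before the agent reacts.
DWELL_THRESHOLD_MS = 300.0

def agent_gaze_target(fixations, current_target=None):
    """Return the object the agent should look at to share the user's attention.

    Scans recent fixations and follows the first sufficiently long fixation
    on an object; otherwise the agent keeps its current gaze target.
    """
    for fix in fixations:
        if fix.target != "none" and fix.duration_ms >= DWELL_THRESHOLD_MS:
            return fix.target  # gaze-following establishes joint attention
    return current_target

if __name__ == "__main__":
    recent = [Fixation("none", 120.0), Fixation("red_mug", 150.0),
              Fixation("red_mug", 420.0)]
    print(agent_gaze_target(recent))  # -> red_mug
```

In a user study, such a rule could be parameterized, e.g. by varying the dwell threshold or adding a reaction delay, to test how different agent gaze strategies affect the interaction.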
Applicants should hold a Master's degree in computational linguistics, computer science, cognitive science, psychology, or psycholinguistics (or equivalent) and should have an interest in modeling and understanding the dynamics of human interaction. Basic programming skills are necessary. Experience with experiment design and statistics is an advantage but not required. Most importantly, the successful applicant should be enthusiastic about the general research questions and be prepared to learn new methods.
2. "Interpreting Listener Behavior to Inform Navigational Guidance" in NLP/Psycholinguistics
People look at what is being talked about. By following a referring expression (RE) to its referent, a person can ground that expression in the environment and fully understand and validate the utterance that contains it. In turn, fixating the intended referent signals to the speaker that the listener has understood. Thus, listener eye movements fulfill several roles: while (privately) seeking visual information linked to the utterance content, they also (publicly) reveal to the speaker (un)successful reference resolution or, more generally, the listener's belief states. In a dynamic and complex environment, such as the GIVE challenge, in which other tasks are involved as well, it becomes increasingly difficult to infer *what* precisely listeners understand and intend to do next, and *how*, e.g., their eye movements can help to infer this. Understanding listener behavior, and how an artificial speaker (e.g. a dialog system) can exploit this information most efficiently, will be a major concern of this research project.
One possible PhD project would thus consist of developing strategies and algorithms for a system that is informed about the user's visual attention at any given time by modern remote eye-tracking technology; a minimal sketch of such a strategy is given below. Besides this development component, this line of research is also empirical and will involve the design of user studies and data analysis to develop and test these strategies. Alternative settings, such as in-car eye-tracking combined with navigational instructions, are also conceivable, in order to explore effective ways of using listener gaze for giving efficient and safe instructions.
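As a rough illustration of such a strategy, this Python sketch checks whether the listener fixated the intended referent within a short window after a referring expression was uttered, and selects a confirmation or a repair move accordingly. All identifiers, the time window, and the repair wording are hypothetical assumptions, not part of any existing system.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    target: str      # id of the object the fixation lands on
    onset_ms: float  # fixation onset, relative to session start

def reference_resolved(fixations, referent, re_offset_ms, window_ms=2000.0):
    """True if the listener fixated `referent` within `window_ms` after the RE."""
    return any(
        fix.target == referent
        and re_offset_ms <= fix.onset_ms <= re_offset_ms + window_ms
        for fix in fixations
    )

def next_system_move(fixations, referent, re_offset_ms):
    """Confirm and proceed on successful resolution; otherwise repair."""
    if reference_resolved(fixations, referent, re_offset_ms):
        return "Yes, that one. Press it."
    return "No, the blue button on the *left*."

if __name__ == "__main__":
    gaze = [Fixation("lamp", 500.0), Fixation("blue_button", 1800.0)]
    # Suppose the RE "the blue button" ended at 1000 ms:
    print(next_system_move(gaze, "blue_button", re_offset_ms=1000.0))
```

The same window-based test could also be inverted for proactive feedback, e.g. interrupting as soon as the listener fixates a wrong object for too long.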
Applicants should hold a Master's degree in computational linguistics, computer science, cognitive science, or equivalent, and should have an interest in modeling and understanding the dynamics of spoken interaction. Good programming skills are necessary. Experience with experiment design and statistics is an advantage but not required. Most importantly, the successful applicant should be enthusiastic about the general research questions and be prepared to learn new methods.
The Embodied Spoken Interaction (ESI) group is part of the "Multi-Modal Computing and Interaction" Cluster of Excellence <http://www.mmci.uni-saarland.de/> at Saarland University, which provides a fruitful and constructive research environment with excellent opportunities for exchange and cooperation. The group has access to numerous state-of-the-art eye-tracking laboratories, a 64-channel EEG/ERP lab, and modern computing infrastructure, and conducts research of international excellence.
The candidate will be expected to contribute to the high standards of the group and to be actively involved in the preparation and publication of new results. Further information about the group can be found at: http://www.mmci.uni-saarland.de/en/independent_research_groups/esi
Applicants should submit a research statement, a CV, copies of their school and university degrees, a representative reprint (thesis or paper, if applicable), and the names and contact information of two references. The position remains open until filled, but preference will be given to applications received by *1 August*. All documents should be e-mailed as a single PDF to: masta AT coli DOT uni-saarland DOT de
Thanks and Best Regards, Maria Staudte