Overview
--------
Interaction with digital systems has traditionally exhibited an explicitly noticeable character: the system informs the user about the current state upfront, for example via a dialogue box on a visual display, and the user responds with a decision manifested in an evident mechanical action, such as a mouse movement and click.
There are strong reasons for system designers to revisit and rethink the classic interactive structure between system and users, as indicated by the significant interest in alternative modes of interaction in recent years within the HCI community, including Peripheral Interaction (Bakker, 2013), Lived Informatics (Rooksby et al., 2014), Mindless Computing (Adams et al., 2015), Ambient Displays (Hausen, 2014), Subtle Gaze Direction (Bailey et al., 2009), and Implicit HCI (Schmidt, 2000), as well as within the AI community, e.g. Mixed-Initiative Interaction (Horvitz, 1999; Yannakakis et al., 2014), Human-Centered ML (Fiebrink and Gillies, 2018), and humans-in-the-loop interactive AI approaches. Furthermore, there is a joint understanding within the HCI community (e.g. “Intelligibility”, Bellotti and Edwards, 2001) and the AI community that deep learning systems pose transparency challenges.
Acknowledging this complexity, researchers within HCI advocate a shift towards designing for a lived experience rather than focusing mainly on user behaviour and instrumental goals. This workshop observes:
• Interaction with intelligent digital systems has become so pervasive in everyday life that much of it happens without a second thought, thanks to the human mind’s tendency to automate routine tasks (habituation).
• New ways of communicating information between system and user, in both directions, emerging from recent developments in AI, data science, and sensor and actuation technology, invisibly move decision-making tasks from humans towards machines.
• Evidence shows positive results from taking a collaborative approach to interaction between humans and AI in, for instance, complex creative tasks, compared to non-hybrid approaches.
Human decision-making will increasingly be invisibly influenced by pervasive AI through various forms of existing and new kinds of HCI, often without users being aware of the AI system ‘under the hood’. The growing field of explainable AI addresses the transparency, interpretability, explainability, and control of AI algorithms, although more focus is needed on how to place AI transparently under the control of the end user while balancing the engagement level of the user experience (Zhu et al., 2018). The human-centered perspective fostered within HCI could contribute here, and HCI also provides methodologies that can strengthen the evaluation of AI systems in terms of these constructs. The workshop will additionally explore how system designers design human-AI collaboration today, and what our ideas are for tomorrow, in order to make this collaboration more transparent, interpretable, and explainable for the end user.
Key topics of this workshop include but are not limited to
----------------------------------------------------------
• Transparent, interpretable and explainable AI systems – establishing user awareness when desirable
• Ethics and privacy issues with invisible pervasive AI systems
• Designing for lived experiences with invisible and pervasive AI systems
• Designer-centered and/or mixed-initiative co-creativity systems
• Co-evolution of the user and AI system interaction; intelligent and adaptive UIs
• Humans-in-the-loop systems with invisible and pervasive AI
• Machine learning algorithms for hybrid decision-making with a focus on end users
• Attention-aware systems based on eye tracking, human sensing technologies, etc.
• Models for unconscious and conscious HCI beyond implicit/explicit input and output
• Perceptual and cognitive methods for subtly cueing and priming users, such as subtle gaze direction, language-based priming and interaction, and persuasive technologies
• Spatiotemporal properties of emerging AI-HCI systems (wearable and context-aware systems theoretically open up split-second hybrid decision-making everywhere)
Important Dates
---------------
• Deadline for paper submissions (three categories, see web site): August 21, 2020
• Notification of accepted papers: September 4, 2020
• Camera ready: September 25, 2020
• Workshop date: October 25 or 26, in conjunction with NordiCHI 2020
Organizers
----------
• Cecilia Ovesdotter Alm, CLaSP Lab, Rochester Institute of Technology, New York, USA
• Alberto Alvarez, Egocentric Interaction research group, Malmö University, Sweden
• José Font, Egocentric Interaction research group, Malmö University, Sweden
• Antonios Liapis, Institute of Digital Games, University of Malta
• Thomas Pederson, Egocentric Interaction research group, Malmö University, Sweden
• Johan Salo, Data Society research program, Malmö University, Sweden
Email to organizers: invisibleai2020 at googlegroups.com