Human-Robot Interaction
Our lab is dedicated to exploring the fascinating field of HRI and pushing the boundaries of human-robot collaboration. Through interdisciplinary research and cutting-edge technology, we aim to create innovative solutions that enhance the interaction and cooperation between humans and social robots. With the support of our dedicated team and our collaborators, we strive to shape the future of human-robot interaction, bringing robots closer to becoming valuable and trusted companions in various domains.

How we work with Human-Robot Interaction
At the HRI Lab, our research focuses on understanding the dynamics of human-robot interaction and developing novel approaches to improve the user experience and effectiveness of collaborative robots. We investigate a wide range of topics, including:
- Social and emotional interactions between humans and robots
- Adaptive and personalized robot behavior
- Trust, acceptance, and transparency in HRI
- Robot-assisted therapy and healthcare applications
- Ethical considerations and societal impact of HRI
In our lab, we adopt a multidisciplinary approach, combining expertise from computer science and cognitive science. By leveraging state-of-the-art technologies, methodologies, and tools, we conduct empirical research: we develop interactive HRI frameworks, design user studies, and analyze human-robot interaction data to gain insights into the intricacies of HRI.
Current Projects

I-AIMS 1 & 2 (Impairment-Aware Intelligent Mobility System): 2023-2024 and 2025-2027
See HVI Research for details

Socially Affective Robots for Digitized Cognitive Interventions (SARP-DCI)
This project investigates how a social robot partner (Furhat), which adapts its behaviours and interactions to a human partner's affective and emotional states, influences the long-term viability (acceptance, adherence, motivation, efficacy) of digitised cognitive training therapy. By combining interactions with a social robot and a cognitive training task, the project seeks to establish: (1) the optimal combination of multimodal signals for capturing a user's affective and emotional states, particularly while interacting with these tools; (2) the effects of real-time adaptation of these technologies (robot interactions, characteristics of training tasks) to those affective states on behavioural and performance measures; and (3) the viability of this approach in pre-clinical populations with Mild Cognitive Impairment.
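The real-time adaptation described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name, the valence/arousal representation, and the thresholds are not the project's actual model or API, just one simple way an affect estimate could drive task difficulty.

```python
def adapt_difficulty(difficulty: float, valence: float, arousal: float) -> float:
    """Adjust cognitive-task difficulty from an estimated affective state.

    Hypothetical rule of thumb: negative valence with high arousal suggests
    frustration, so ease off; very low arousal suggests disengagement, so
    increase the challenge. Difficulty is kept in [0, 1]. Thresholds and
    step size are illustrative assumptions, not values from the project.
    """
    if valence < 0 and arousal > 0.5:   # likely frustrated: reduce load
        difficulty -= 0.1
    elif arousal < 0.3:                 # likely disengaged: add challenge
        difficulty += 0.1
    return max(0.0, min(1.0, difficulty))
```

In a full system this rule would sit inside a loop that also adapts the robot's behaviour (e.g., encouraging utterances) from the same affect estimate, rather than only the task parameters.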

Socially Affective Robots for Mutual Learning
In this project we test how affective-linguistic communication, in combination with differential outcomes training, affects mutual learning in a human-robot context. Taking inspiration from child-caregiver dynamics, our human-robot interaction setup consists of a (simulated) robot (Reachy) attempting to learn how best to communicate internal, homeostatically controlled needs, while a human "caregiver" attempts to learn the correct object to satisfy the robot's currently communicated need. We study the effects of (i) human training type and (ii) robot reinforcement learning type on the terminal accuracy and rate of mutual learning, as measured by the average reward achieved by the robot.
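The mutual-learning loop above can be sketched as a toy simulation. This is an illustrative assumption, not the lab's implementation: both the robot (need → signal) and a simulated caregiver (signal → object) are modelled as simple epsilon-greedy learners over value tables, with reward 1 when the offered object satisfies the robot's current need.

```python
import random

def simulate_mutual_learning(trials=3000, n_needs=3, lr=0.2, eps=0.1, seed=0):
    """Toy mutual-learning simulation (illustrative sketch, not the lab's code).

    The robot learns, per internal need, which signal to emit; the simulated
    caregiver learns, per signal, which object to offer. Both use epsilon-greedy
    selection over value tables updated by a simple running-average rule.
    Returns the terminal average reward over the last 500 trials.
    """
    rng = random.Random(seed)
    q_robot = [[0.0] * n_needs for _ in range(n_needs)]  # need -> signal values
    q_human = [[0.0] * n_needs for _ in range(n_needs)]  # signal -> object values
    rewards = []

    def pick(values):
        # epsilon-greedy choice over a list of action values
        if rng.random() < eps:
            return rng.randrange(len(values))
        return max(range(len(values)), key=values.__getitem__)

    for _ in range(trials):
        need = rng.randrange(n_needs)      # a homeostatic need arises
        sig = pick(q_robot[need])          # robot communicates the need
        obj = pick(q_human[sig])           # caregiver responds with an object
        r = 1.0 if obj == need else 0.0    # was the need satisfied?
        q_robot[need][sig] += lr * (r - q_robot[need][sig])
        q_human[sig][obj] += lr * (r - q_human[sig][obj])
        rewards.append(r)

    return sum(rewards[-500:]) / 500       # terminal average reward
```

Because neither agent knows the other's mapping, a shared need-signal-object convention has to emerge from joint exploration, which is what makes the terminal accuracy and learning rate interesting measures; the study's actual manipulations (differential outcomes training, robot RL type) would replace the simplistic update rules used here.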
Featured Publications
Ravandi, B.S., Khan, I., Markelius, A., Bergström, M., Gander, P., Erzin, E., Lowe, R. Advanced Robotics, 2025.
Heikkinen, E., Silvennoinen, E., Khan, I., Lemhaouri, Z., Cohen, L., Cañamero, L., Lowe, R. In Proceedings of Affective Computing and Intelligent Interaction (ACII), 2024.
Collaborations


