Fundamentals of HRI and Bridging the Gap Between People and Machines
I recently completed 16-701: Fundamentals of Human-Robot Interaction, a course on creating successful interactions between people and robots. Its core goal is to help students understand how a robot's behavior and appearance influence interactions, and how psychological principles like communication and theory of mind can be applied. The learning journey was thoughtfully structured across four foundational sections: Robot Fundamentals, Human Fundamentals, Interactions, and Tools.
The Four Pillars of Human-Robot Interaction
The course structure effectively broke down the complex field of HRI into manageable and interconnected areas:
1. Robot Fundamentals: This section introduced the basics of robot operation, focusing on aspects like appearance and anthropomorphism. We learned that anthropomorphism, the act of attributing human characteristics to a non-human entity, can be perceived through physical appearance (like facial or body features), motion, and behavior. While anthropomorphism can facilitate social interaction and improve acceptance of robots, it also carries the risk of creating unrealistically high expectations about robot capabilities, or of leading people to perceive robots as animate when that is inappropriate. We also covered how robot autonomy exists on a spectrum, ranging from teleoperation (human control only) to full autonomy (robot policy only), where autonomy is defined as the extent to which a robot can sense, plan, and act to reach a goal without external control.
(Figure: the levels of robot autonomy spectrum, adapted from Beer et al.)
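To make the spectrum idea concrete, here is a minimal sketch of how a robot controller might treat autonomy as a tunable level rather than a binary switch. The level names and the linear blending rule are my own illustration, not Beer et al.'s exact taxonomy:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative points on the autonomy spectrum (names are hypothetical,
    not Beer et al.'s exact levels)."""
    TELEOPERATION = 0        # human control only
    ASSISTED_TELEOP = 1      # robot lightly corrects human commands
    SHARED_CONTROL = 2       # human and robot commands blended equally
    SUPERVISED_AUTONOMY = 3  # robot leads, human can still influence
    FULL_AUTONOMY = 4        # robot policy only

def select_command(level, human_cmd, robot_cmd):
    """Pick (or blend) a velocity command based on the autonomy level.

    human_cmd and robot_cmd are tuples like (linear_vel, angular_vel).
    """
    if level == AutonomyLevel.TELEOPERATION:
        return human_cmd
    if level == AutonomyLevel.FULL_AUTONOMY:
        return robot_cmd
    # Intermediate levels: weight the robot's policy more as autonomy rises.
    w = level / AutonomyLevel.FULL_AUTONOMY
    return tuple((1 - w) * h + w * r for h, r in zip(human_cmd, robot_cmd))
```

For example, at SHARED_CONTROL the two commands are averaged, while the endpoints reproduce pure teleoperation and pure autonomy. The point is structural: the same system can sit at different stops on the continuum without being rebuilt.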
2. Human Fundamentals: This pillar explored the psychological underpinnings of human interaction that are vital for designing robots. Key concepts included verbal and non-verbal communication, such as eye gaze (mutual gaze, deictic gaze) and gestures (iconic, metaphoric, deictic, and beat). We also delved into intentions and Theory of Mind (ToM), the capacity to attribute mental states, such as beliefs, desires, and intentions, to oneself and others. Understanding ToM is crucial, as intentions are seen as the driving force behind rational action.
3. Interactions: This section examined the practical application of the fundamentals in collaborative settings. Topics included collaboration, trust, and social navigation. Collaboration requires acting toward a common goal, often involving establishing joint intentions and multi-agent planning. We studied trust as an attitude that an agent will help achieve an individual's goals in situations marked by uncertainty and vulnerability. This attitude is dynamic, involving formation, dissolution, and restoration, and is influenced by system features (e.g., capability, predictability) and operator factors. Social navigation focuses on respecting human social conventions, such as lane formation and proxemics, when robots move around people.
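Proxemics lends itself to a small sketch. The zone boundaries below are the commonly cited values from Hall's proxemics work (intimate under 0.45 m, personal to 1.2 m, social to 3.6 m); the speed-scaling policy on top of them is a hypothetical example of how a navigation stack might respect those zones, not something prescribed by the course:

```python
import math

# Commonly cited proxemic zone boundaries (Hall, 1966), in meters.
INTIMATE, PERSONAL, SOCIAL = 0.45, 1.2, 3.6

def proxemic_zone(robot_xy, person_xy):
    """Classify which proxemic zone the robot occupies relative to a person."""
    d = math.dist(robot_xy, person_xy)
    if d < INTIMATE:
        return "intimate"
    if d < PERSONAL:
        return "personal"
    if d < SOCIAL:
        return "social"
    return "public"

def speed_scale(robot_xy, person_xy, max_speed=1.0):
    """Hypothetical policy: slow the robot as it enters closer zones."""
    zone = proxemic_zone(robot_xy, person_xy)
    factor = {"intimate": 0.0, "personal": 0.25, "social": 0.6, "public": 1.0}
    return max_speed * factor[zone]
```

A robot two meters from a person would be in the "social" zone and move at reduced speed, while one inside the intimate zone would stop entirely; real social navigation systems are of course far richer, accounting for approach angle, gaze, and group formations.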
4. Tools: Finally, the course provided crucial methodologies for Designing for Interaction and conducting User Studies. This includes the steps necessary to conduct HRI experiments: defining the research question and hypothesis, designing and executing the study, analyzing the data, and drawing conclusions. We learned about defining independent variables (IVs) and dependent variables (DVs), selecting appropriate metrics (objective vs. subjective, quantitative vs. qualitative), and handling confounds like training or fatigue effects using techniques like counterbalancing.

While every section of this course offered valuable insights, the concept that most fundamentally changed my thinking was Beer's Levels of Robot Autonomy. Before taking this class, I implicitly viewed autonomy as binary: either a robot is autonomous or it isn't. The course showed me that autonomy is actually a continuum, with teleoperation at one end, full autonomy at the other, and countless meaningful stops in between.
This reframing has practical implications for how I'll approach system development going forward. Rather than trying to leap straight to full autonomy, the smarter path is iterative: start with more human control, validate that the system works reliably at that level, then gradually shift responsibility to the robot as trust and capability grow. Each step up the autonomy ladder should be earned, not assumed.
This iterative mindset also connects back to the course's emphasis on trust—trust that forms, dissolves, and must be carefully restored. By taking incremental steps, we give humans the chance to build appropriate mental models of what the robot can and cannot do, avoiding the pitfalls of anthropomorphism and inflated expectations.
Ultimately, 16-701 taught me that successful human-robot interaction isn't about building the most autonomous robot possible; it's about finding the right level of autonomy for the task, the context, and the people involved.