How I study social behavior
I study social and moral behavior in naturalistic interactions, focusing on automatic mimicry, physiological synchrony, and human–robot interaction to understand how alignment emerges across systems.
Moral Behavior
Humans constantly make moral decisions, from everyday choices to complex dilemmas. We do not base those decisions only on social norms, values, or consequences: many of our moral decisions happen intuitively, shaped by rapid, unconscious signals we exchange with others. Just like other animals, humans rely on clever evolved strategies to detect threats and opportunities quickly. We can assess (un)trustworthiness within milliseconds from subtle cues such as facial expressions or pupil size. Aligning with someone, by mimicking their smile or synchronizing with their physiological state, is another shortcut for deciding whether we will cooperate, compete, or trust them. My research has shown that these signals, like emotions, convey rich information and can either strengthen or undermine trust in ambiguous situations.
Humans also extend moral consideration beyond other people. We care for animals, respect ecosystems (even though not enough!), and sometimes treat invisible entities as worthy of moral attention. But what about artificial agents? Social robots and virtual assistants are a very interesting case: they are neither alive nor passive objects, yet they are designed to interact with us, to produce emotional cues, and sometimes even to resemble humans. This raises questions about the boundaries of our moral behavior: is it acceptable to mistreat a chatbot? Do we lie more frequently when faced with a robot? Cross-cultural research, including my study on how people respond morally to artificial agents in diverse contexts, shows that these patterns vary across societies and depend on the specific emotional cues the agent is displaying, reflecting both universal and culturally specific moral processes.
Interpersonal Alignment
Across evolution, social species have always found unique strategies to navigate their social environments, and interpersonal alignment stands out among the most enduring. Interpersonal alignment is the process by which individuals within a social group coordinate their behaviors and emotions with one another, enabling smoother cooperation, reducing conflict, and maintaining social cohesion. This alignment operates at multiple levels. At the most visible, we mirror each other’s behaviors and expressions: you smile, I smile; you yawn, and suddenly I want to yawn too. This is no coincidence. It is called automatic mimicry: the tendency to unconsciously copy the behaviors and expressions of others. But this alignment runs deeper than what the eye can see. We also synchronize on a physiological level: our pupil size, heart rate, and skin conductance can all come to mirror those of another person, completely without our awareness.
Why do we do this? What is this alignment actually for? These are the questions I try to answer in my research. The prevailing view assumes that mimicry acts as a kind of social glue: we mimic to foster cooperation and liking. Yet the picture is more complex: mimicking negative expressions and synchronizing with negative emotional states does not always bring people together; sometimes it pulls them apart. In my research, I see automatic mimicry and physiological synchrony as mechanisms by which we recreate another person’s affective state within our own body, not only to connect, but also to predict: to anticipate what they will do next, and to track changes in the environment around us.
But what happens when the agent on the other side is not human? If the effects of interpersonal alignment depend on what is actually being mimicked and synchronized, then we must be deliberate about which emotional cues we build into artificial agents. People will mimic them, automatically and unconsciously, potentially leading to unwanted effects. This is not a distant concern: mimicry is already being embedded in artificial agents. Understanding how interpersonal alignment works may therefore be as important for the design of machines as it is for the study of human nature.
Human–Robot Interaction
Artificial agents offer something that human interaction rarely permits: experimental control over the very features that define a social partner. This makes them an exceptionally powerful tool for probing the boundaries of social and moral cognition. For instance, we can manipulate, in ways impossible with human participants, whether there is a mind behind a partner’s actions (e.g., by having a robot teleoperated by a human, or acting fully autonomously) and whether that partner has a physical presence in the world, comparing an embodied robot to a virtual agent on a screen. By placing people in interactions that sit just outside the social world their brain evolved to navigate, we can reveal where intuitions about trust, cooperation, and moral consideration come from, and where they break down.
But this is not only a question of fundamental science. As social robots and virtual assistants become woven into everyday life, in healthcare, education, and beyond, it becomes urgent to understand how people respond to them morally and socially. Do people deceive robots more readily than humans? Does interacting with an artificial agent that displays fear or distress change how people treat it? These questions have direct implications for how these technologies should be designed and deployed, so that they can adapt to human behavior without inadvertently encouraging harmful patterns in return.
Finally, I see artificial agents as a new window onto the evolution of social behavior itself. Because they can be designed to selectively possess or lack the cues our social brain evolved to respond to (a face, a heartbeat, an emotional expression), they allow us to ask, in a controlled way, which features of a social partner actually drive cooperation, trust, and moral concern. In this sense, robots are not just products of our social world. They are instruments for understanding how that world came to be.


