How I study social behaviour
I study social and moral behavior in naturalistic interactions, focusing on automatic mimicry, physiological synchrony, and human–robot interaction to understand how alignment emerges across systems.
Moral Behavior
Humans constantly make moral decisions, from everyday choices to complex dilemmas. We don’t base those decisions only on social norms, values, or consequences: many of our moral decisions happen intuitively, shaped by rapid, unconscious signals we exchange with others. Like other animals, humans rely on clever evolved strategies to detect threats and opportunities quickly. We can assess (un)trustworthiness within milliseconds from subtle cues such as facial expressions or pupil size. Aligning with someone, by mimicking their smile or synchronizing with their physiological state, is another shortcut for deciding whether to cooperate, compete, or trust. My research has shown that these signals, like emotions, convey rich information and can either strengthen or undermine trust in ambiguous situations.
Humans also extend moral consideration beyond other people. We care for animals, respect ecosystems (though not enough!), and sometimes treat invisible entities as worthy of moral attention. But what about artificial agents? Social robots and virtual assistants are an especially interesting case: they are neither alive nor passive objects, yet they are designed to interact with us, to produce emotional cues, and sometimes even to resemble humans. This raises questions about the boundaries of our moral behavior: is it acceptable to mistreat a chatbot? Do we lie more frequently when faced with a robot? Cross-cultural research, including my study on how people respond morally to artificial agents in diverse contexts, shows that these patterns vary across societies and depend on the specific emotional cues the agent displays, reflecting both universal and culturally specific moral processes.
Interpersonal Alignment
Human-Robot Interaction


