Multidisciplinary Perspectives on Human-AI Team Trust

Date/deadline: Friday, 9 May 2025

This workshop will be a full-day event organised in conjunction with the HHAI 2025 conference, held in Pisa, Italy.

This workshop focuses on the different perspectives and layers of trust dynamics in teams consisting of both humans and AI agents. With human-agent interaction becoming increasingly prominent in hybrid teams across diverse industries, human-agent teamwork is no longer a topic of the future but of the present. Several challenges, however, still need to be addressed carefully. One of them is understanding how trust is defined and how it functions in human-agent teams. The psychological literature suggests that members of human teams rely on trust when making decisions and when deciding whether to rely on their team. The multi-agent systems (MAS) community, in turn, has adopted trust mechanisms to support agents' decision-making about their peers and the delegation of tasks. Finally, in recent years, researchers have focused on how humans trust AI agents and how such systems can be made trustworthy.

Bringing this knowledge on teams and trust together in a hybrid intelligence (HI) setting raises unique questions of its own. When we think of a team composed of both humans and agents, with interactions that may or may not recur, how do these strands come together? We currently lack approaches that integrate the prior literature on trust in teams across these disciplines. In particular, when looking at dyadic or team-level trust relationships in such a team, we also need to consider how an AI should trust a human teammate. For this, trust, or rather the factors that influence it, must be formally defined so that the AI can evaluate them, instead of being measured with questionnaires at the end of a task, as is common practice in psychology. Furthermore, a human's trust in an artificial team member, and vice versa, will change over time, affecting the trust dynamics. With this workshop, we want to motivate this conversation across the different fields and domains.
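As one concrete illustration of what such a formalisation could look like (purely a hypothetical sketch, not a model proposed by the workshop), an agent might maintain a running, dyadic trust estimate that it updates after each interaction with a teammate, here modelled as an exponentially weighted average of observed interaction outcomes; the class name, prior, and learning rate below are illustrative assumptions.

    # Minimal illustrative sketch (Python), assuming trust is approximated by an
    # exponentially weighted average of observed teammate reliability.
    # This is one of many possible formalisations, not the workshop's own model.

    class TrustEstimate:
        """Tracks an agent's trust in one teammate as interactions accumulate."""

        def __init__(self, prior: float = 0.5, learning_rate: float = 0.2):
            self.value = prior                  # trust before any interaction (assumed prior)
            self.learning_rate = learning_rate  # how strongly each new outcome shifts trust

        def update(self, outcome_success: bool) -> float:
            """Shift trust toward 1.0 after a success, toward 0.0 after a failure."""
            target = 1.0 if outcome_success else 0.0
            self.value += self.learning_rate * (target - self.value)
            return self.value

    # Trust rises with reliable behaviour and drops after a failure,
    # giving the time-varying trust dynamics discussed above.
    trust = TrustEstimate()
    for outcome in [True, True, True, False, True]:
        print(f"{'success' if outcome else 'failure'}: trust = {trust.update(outcome):.2f}")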

Together, we intend to shape the road to addressing these questions and to ensure successful and trustworthy human-AI teamwork.

Submission & List of Topics

This workshop calls for contributions and/or participation from several disciplines, including Psychology, Sociology, Cognitive Science, Computer Science, Artificial Intelligence, Multi-Agent Systems, Robotics, Human-Computer Interaction, Design, and Philosophy.

We invite two types of submissions. First, 3-page extended abstracts introducing the authors and their interest in the topic. These can be overviews of past work, preliminary work in progress, or plans for future research. The goal of these papers is to introduce participants to each other's work and expertise.

Second, we call for progress papers of at most 7 pages. This submission type is aimed at describing a specific piece of new work, which can also be work in progress or recently published work (published in 2023 or later). The goal of these submissions is to allow attendees to share new results and insights, as well as to discuss work in progress.

Topics of interest include, but are not limited to:

  • Measures of team trust in human-AI teams.
  • Human trust and trustworthiness in human-AI teams.
  • Dynamics of trust between human and AI in teamwork.
  • Hybrid techniques (knowledge-driven and data-driven) to assess trust and trustworthiness in human-AI teams.
  • Machine learning techniques to detect trust and trustworthiness in human-AI teams and teammates.
  • Evaluation methods for trust and trustworthiness models in human-AI teams.
  • Experimental settings for trust dynamics in human-AI teams.
  • Design of systems that take into account trust dynamics in human-AI teams.
  • Trust dynamics among team members.
  • Understanding of collective agency and action in human-AI teams.
  • Agent-based Social Simulation (ABSS).
  • Social norms, reputation, etc.

Submission Guidelines

Authors should submit their papers formatted according to the IOS formatting guidelines, which are also used for contributions to the main conference. Papers should be written in English; detailed submission instructions can be found on the workshop website (see below). Use the LaTeX or MS-Word templates to create the paper and generate or export a PDF file.

Authors need to submit their PDF via EasyChair. Each paper will receive at least two reviews. All papers are reviewed using a single-blind review process: authors declare their names and affiliations in the manuscript for the reviewers to see, but reviewers do not know each other's identities, nor do the authors receive information about who has reviewed their manuscript.

Committees

  • Dr. Myrthe Tielman, Delft University of Technology (Netherlands)
  • Dr. André Meyer-Vitali, DFKI (Germany)
  • Dr. Susanne Uusitalo, University of Oulu (Finland)
  • Dr. Alessandra Rossi, Department of Electrical Engineering and Information Technology, University of Naples Federico II (Italy)
  • Raffaella Esposito, ICAROS, University of Naples Federico II (Italy)

Contact

All questions about submissions should be emailed to Myrthe Tielman <M.L.Tielman@tudelft.nl> and Susanne Uusitalo <Susanne.Uusitalo@oulu.fi>.

Workshop website: https://multittrust.github.io/4ed/

Submission link: https://easychair.org/conferences?conf=multittrust40

Submission deadline: May 9th, 2025