Papers that investigate philosophical issues related to AI value alignment are invited for the 2025 Bovay Engineering and Applied Ethics Workshop. The workshop will take place on April 10th, 2025 at Texas A&M University. Travel funding (up to $1,000) will be available for participants whose abstracts are accepted. Possible topics include:
- How moral uncertainty and disagreement impact value alignment;
- The role of social choice and public input in value alignment;
- Discrimination, bias, or unfairness in existing AI systems; methods for mitigating these issues;
- Methods for value alignment; methods for measuring value alignment (i.e., measuring the extent to which an AI system aligns with a set of ethical principles).
ABOUT THE WORKSHOP
The Bovay Engineering and Applied Ethics Workshop Series aims to develop a community of scholars and practitioners interested in applied ethics, especially issues that affect engineering ethics. The theme of the 2025 workshop is “AI Value Alignment.” The workshop will feature paper presentations from the invited participants, each of which will be followed by extended discussion with other workshop participants. Schedule permitting, we expect to set aside at least two sessions for papers received through this Call for Papers.
ABSTRACT SUBMISSION
The (extended) deadline for submission of abstracts is Friday, February 24th. Notifications of acceptance will be made around March 7th. Please send your abstract of no more than 500 words in the body of an email message to emriesen@tamu.edu with the subject line “Bovay Workshop.” In writing your abstract, please bear in mind that full papers should be suitable for a 35-minute presentation, which will be followed by 20 minutes of Q&A.
ORGANIZERS
- Martin Peterson (Bovay Chair of History and Ethics of Professional Engineering, Texas A&M)
- Erich Riesen (Postdoctoral Fellow in Technology Ethics, Texas A&M)
INVITED SPEAKERS
- Arianna Manzini (Google DeepMind)
- Pamela Robinson (University of British Columbia)
- Rebecca Raper (Cranfield University)