
From design intentions to critical practice

13/12/2023

Value Sensitive Design (VSD) calls for designers to be proactive in embedding values into technologies (Davis & Nathan, 2015). To create designs that are more ethical and more purposive about how they support social justice, VSD emphasises that intended values should be specified at the earliest point of project planning. Correspondingly, VSD advocates a design process that flows “top-down” from a set of intended ethical impacts to an elaboration of the technical specifications that might reasonably be expected to achieve those impacts (van de Poel, 2021, p. 305). We suggest that while such a top-down process is an ideal, it encounters considerable challenges when applied to real-world design scenarios. Not least among the reasons for this is that technologies often have no single, or altogether clear, starting point; they emerge instead from iterations and adaptations of already existing technologies. In such imperfect situations, when technologies are already in use, proponents of VSD principles must adopt, on the fly, a proactive repertoire for a reactive intervention.

We suggest that such a reactive stance is, to an extent, prefigured in the tools and materials of VSD itself. Drawing from the literature on Critical Design and on VSD, we recommend a change of emphasis that further incorporates real-world design scenarios. Beyond a mere increase in operational scope, what is at stake is a VSD inclusive of marginalised populations and positions. We argue for a VSD that speaks better to those who wield less direct influence in design processes, are less likely to be invited to contribute during the early stages of design, and are more at risk of injustice. Drawing on studies of medical AI implementations and of platform workers on Amazon Mechanical Turk, we call for a VSD that incorporates the politics of ethical design interventions.

Case: machine learning in medicine

The case we would like to consider is machine learning used to predict patients’ likelihood of developing an opioid addiction or misusing opioids. Such algorithmic prediction systems assign patients a risk score based on their previous medical history and on aspects of their lives that are not necessarily related to their actual drug consumption (e.g., having experienced sexual assault, having a criminal record). It is worth noting that these algorithmic systems are technically advanced versions of more rudimentary record-keeping systems previously used to track patients’ data (Oliva, 2022). This technology, then, was not designed from scratch at a precise point in time, which forecloses the possibility of intentionally building values into it at the beginning of the design process, as VSD envisages.
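To make the object of our analysis concrete, the sketch below shows, in a few lines of Python, the general shape of such a risk scorer: a statistical model trained on past records that maps a patient’s features to a probability of “misuse”. Everything here is an illustrative assumption on our part; the feature names, data, and model are invented, and the actual systems Oliva (2022) discusses are proprietary and far more complex.

```python
# Hypothetical sketch of a PDMP-style risk scorer. This is NOT the
# algorithm discussed by Oliva (2022), whose features and weights are
# proprietary; the feature names and data below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative patient features: [n_prescribers, n_pharmacies,
# distance_travelled_km, has_criminal_record, assault_history]
X_train = np.array([
    [1, 1, 5.0, 0, 0],
    [2, 1, 12.0, 0, 1],
    [6, 4, 80.0, 1, 0],
    [5, 3, 60.0, 1, 1],
])
y_train = np.array([0, 0, 1, 1])  # synthetic past "misuse" labels

model = LogisticRegression().fit(X_train, y_train)

# A new patient whose life circumstances, not her actual drug
# consumption, drive the score upward.
patient = np.array([[2, 1, 10.0, 1, 1]])
risk_score = model.predict_proba(patient)[0, 1]
print(f"Predicted misuse risk: {risk_score:.2f}")
```

Even this toy model makes the ethical point visible: features unrelated to consumption (a criminal record, a history of assault) can raise the score that a physician then treats as authoritative.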

In its current deployment, this technology raises numerous ethical issues. In particular, as soon as these systems enter the space of the patient-physician relationship, they play an important role in how medical professionals perceive patients’ credibility. Consider a person whose risk score does not mirror her actual drug consumption (for a real-world account, see Szalavitz, 2021). If her testimony is dismissed as irrelevant because the ML system’s score is considered, by default, the more authoritative source of information about her situation, she can become the victim of an epistemic injustice (Fricker, 2007): she is prevented from exercising her epistemic agency, crucially because of the role played by the ML system. In the face of ethical issues that emerge once the technology is already in use, it seems paramount to look for possible solutions in medias res.

Toward reflexivity in VSD

Such an exploration might first pose an operational question to VSD: if the technology-to-be-designed already exists, how does VSD proceed? The literature describes a tripartite methodology in VSD, spanning “conceptual, empirical and technical investigations” (Davis & Nathan, 2015, p. 30). Davis and Nathan (2015) follow Friedman et al. (2006) in recommending that practitioners begin with a round of stakeholder analysis, and specifically with the conceptual work of defining stakeholder groups. This is by no means uncontested, however, and different researchers offer amendments and follow their own idiosyncratic paths through these steps. A relevant amendment comes from De Reuver et al. (2020), who, in their study of VSD and digital platform design, argue for an extended VSD methodology that can address technologies across their entire life cycles. To achieve this, they propose adding a fourth step to the tripartite methodology, reflexivity: “[The] reflective inquiry into whether or not current applications still help realize important values and/or raise new value issues that need to be addressed” (De Reuver et al., 2020, p. 260). As a form of second-order learning, reflexivity can change and course-correct the intended values according to which the other three steps of the VSD method proceed. To the extent that such a change of intended values restarts the VSD process, the resultant four-part TERC model is cyclic, iterative, and theoretically open-ended (a toy rendering of this loop is sketched below).
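To see why the addition of reflexivity makes the process open-ended, it can help to render the TERC cycle as a simple control loop. The following Python sketch is our own reading of De Reuver et al. (2020), not code from the paper; the phase stubs and the simulated course-correction are purely illustrative assumptions.

```python
# Our illustrative rendering of the TERC cycle (De Reuver et al., 2020);
# the phase names follow the paper, the control flow and stubs are ours.

intended_values = {"privacy", "fairness"}

def conceptual(values):
    return f"stakeholder analysis for {sorted(values)}"

def empirical(concepts):
    return f"field observations given: {concepts}"

def technical(evidence):
    return f"design specification from: {evidence}"

def reflexivity(values, deployment_round):
    # Second-order learning: once the technology is in use, new value
    # issues may surface. We simulate one course-correction, then rest.
    if deployment_round == 0:
        return values | {"epistemic agency"}  # newly surfaced value
    return values

for deployment_round in range(3):
    spec = technical(empirical(conceptual(intended_values)))
    revised = reflexivity(intended_values, deployment_round)
    if revised == intended_values:
        print(f"round {deployment_round}: values stable; ship '{spec}'")
    else:
        print(f"round {deployment_round}: values revised to "
              f"{sorted(revised)}; the cycle restarts")
        intended_values = revised
```

The point of the loop is structural: whenever reflexivity revises the intended values, the conceptual, empirical, and technical steps must run again, so the process has no guaranteed terminus.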

De Reuver et al. place the question of who is responsible for the ongoing process of design at the heart of their interpretation of VSD: “Our goal in developing the TERC model is to suggest that it should be the task of the platform provider to consistently and systematically ensure that the platform they have provided is able to account for changing behaviours and ethical norms” (De Reuver et al., 2020, p. 264). Such a moral imperative is well and good, but it offers little practical assistance in cases such as medical AI where – for whatever reason – this onus remains unfulfilled. In these cases, there seems to be a need for a value-led design intervention by groups who hold little formal influence over the technology at hand; in the case of medical AI, this might be patient activists. And, given that such interventions stand to gain from the VSD toolkit, we seek to take a step towards methods that can support such activist practice.

Turkopticon as an empowering tool

An example of how designers can play a positive role in mitigating the negative effects of existing technologies is Turkopticon, a tool designed to bring together workers on Amazon Mechanical Turk (AMT) (Irani & Silberman, 2016). AMT is a website on which workers can take up microtasks for remuneration. Under the promise of flexibility and autonomy in choosing one’s tasks, AMT de facto legitimises unfair working conditions: workers, for instance, are not entitled to a minimum wage or to the other benefits that follow from regular employment contracts. The purpose of Turkopticon is to include workers’ reviews in AMT’s interface in real time, so that workers have access to more diverse information and can make informed decisions about whether or not to take up a particular task. Moreover, it facilitates connections among workers, encouraging the exchange of opinions and critical scrutiny. The broader set-up of this solution gives voice to the workers: not only were they involved in the design process, they were also actively asked for their input on how they perceived the narrative around Turkopticon once it drew public attention.
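Mechanically, Turkopticon annotates AMT’s task listings with community-contributed reviews of requesters. The sketch below is a minimal, hypothetical rendering of that idea in Python; the data structures, field names, and flagging threshold are our assumptions, not Turkopticon’s actual implementation.

```python
# A toy, hypothetical sketch of the Turkopticon idea: attach aggregated
# worker reviews of a requester to a task listing before a worker
# decides whether to accept it. All data and thresholds are invented.
from statistics import mean

# Community-contributed reviews keyed by requester ID (synthetic data).
reviews = {
    "REQ-001": [{"fairness": 4.5, "comment": "pays promptly"},
                {"fairness": 4.0, "comment": "clear instructions"}],
    "REQ-002": [{"fairness": 1.0, "comment": "mass-rejected my work"}],
}

def annotate_task(task):
    """Attach the requester's average fairness score to a task listing."""
    scores = [r["fairness"] for r in reviews.get(task["requester_id"], [])]
    task["avg_fairness"] = mean(scores) if scores else None
    task["flagged"] = (task["avg_fairness"] is not None
                       and task["avg_fairness"] < 2.0)
    return task

task = annotate_task({"title": "Tag 500 images", "requester_id": "REQ-002"})
print(task)  # the worker now sees the warning before accepting
```

The design choice worth noting is that the information flows from workers to workers: the platform’s one-way presentation of tasks is supplemented by a channel the affected group controls.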

Overall, this is an instance of how relevant stakeholders’ epistemic agency can be supported through technological design even when a technology is already in use, and not only at the initial design stage. Moreover, it can be seen as a first step towards ameliorating the instances of epistemic injustice that emerge when users are only receivers, and not conveyors, of information in technology-mediated activities.


References

Davis, J., & Nathan, L. P. (2015). Value sensitive design: Applications, adaptations, and critiques. In J. van den Hoven, P. E. Vermaas, & I. van de Poel (Eds.), Handbook of ethics, values, and technological design (pp. 11-40). Springer.

De Reuver, M., van Wynsberghe, A., Janssen, M., & van de Poel, I. (2020). Digital platforms and responsible innovation: Expanding value sensitive design to overcome ontological uncertainty. Ethics and Information Technology, 22, 257-267. https://doi.org/10.1007/s10676-020-09537-z

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.

Friedman, B., Smith, I. E., Kahn, P. H., Jr., Consolvo, S., & Selawski, J. (2006). Development of a privacy addendum for open source licenses: Value sensitive design in industry. In Proceedings of UbiComp 2006 (pp. 194-211). Springer.

Irani, L. C., & Silberman, M. S. (2016). Stories we tell about labor: Turkopticon and the trouble with “design”. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 4573-4586). ACM.

Oliva, J. (2022). Dosing discrimination: Regulating PDMP risk scores. California Law Review, 110, 47. https://doi.org/10.2139/ssrn.3768774

Szalavitz, M. (2021). The pain was unbearable. So why did doctors turn her away? Wired. Retrieved September 2023, from

Van de Poel, I. (2021). Values and design. In D. P. Michelfelder & N. Doorn (Eds.), The Routledge handbook of the philosophy of engineering (pp. 300-314). Routledge.


About the author

Giorgia is a PhD candidate at TU Delft working at the intersection of the ethics and epistemology of AI. More specifically, her research focuses on forms of epistemic injustice that emerge in AI-mediated medical contexts.

Alie is reading for a graduate degree in Comparative Literary Studies at Utrecht University. His research examines the relationship between design practice and the ethics of representation, especially in the context of video games, game design, and related software.