Responsible Innovation as Organisation of the Social

29/05/2023

Stakeholder involvement is a crucial aspect of responsible innovation. Ideally, by including different perspectives, a more comprehensive picture of the issue at hand emerges. The question arises, however, as to what counts as "the" issue, which in turn determines who is included in the conversation.

In the following, I briefly argue that current notions of innovation and responsible innovation focus too much on the visible and tangible product, i.e., the technological achievement.

Ethical AI

For example, the current ethical debate about various machine learning models revolves mainly around the question of how to "unbias" them in order to make the models more "fair" and "trustworthy". This is undoubtedly a normative task, for which there is no universal solution, but which depends heavily on the context.
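To make concrete what "unbiasing" typically targets, one common operationalisation of fairness is a group metric such as demographic parity. The sketch below is illustrative only – the data and function names are my own, not drawn from any cited system – and simply measures the gap in positive-decision rates between two groups:

```python
# Minimal sketch: measuring one common notion of "bias" as the
# demographic parity gap between two groups of people.
# All data and names are hypothetical, for illustration only.

def positive_rate(decisions: list[int]) -> float:
    """Share of positive (e.g., 'stop' or 'approve') decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a: list[int], decisions_b: list[int]) -> float:
    """Absolute difference in positive-decision rates between groups A and B.
    A gap of 0 means the model decides 'positively' at equal rates."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Hypothetical model decisions (1 = positive decision) for two groups.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]   # positive rate: 0.625
group_b = [0, 0, 1, 0, 0, 1, 0, 0]   # positive rate: 0.25

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # 0.375 -> "biased" by this metric
```

A gap of zero, however, only means equal treatment according to this one metric; as the predictive-policing example below shows, a system can pass such a check and still be undesirable.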

However, by merely applying ethics to "unbias" a system in order to make the technology "better", the underlying social inequalities do not disappear. Instead, we might overlook whether the model is appropriate in the first place. A "fair" predictive policing system that uses facial recognition and shows no bias towards Black people, but instead stops everyone at equal rates, is probably not desirable.

Machine learning models ultimately entrench and reinforce current values and dynamics. "Ethical AI" is therefore not a question of bias, i.e., of right or wrong, but a question of power (Hao 2021). For this reason, we should not only focus on how to "align" certain products with presumably definite or fixed values, but also consider the conditions under which these technologies are produced, and what new conditions they in turn produce.

Hidden labour

In the case of the much-debated ChatGPT, the large language model can generate texts that contain fewer prejudices and are less harmful than those of its predecessor GPT-3. In that sense, ChatGPT could be considered a "responsible innovation" – or rather an incremental improvement.

Nevertheless, for ChatGPT to filter out harmful content, people in Kenya had to flag or label certain text outputs so that the model could be adjusted accordingly (Perrigo 2023). Not only were the workers paid less than $2 per hour, but they also suffered psychological distress from being confronted with texts describing child abuse, murder or torture. And yet, despite all these efforts and their disastrous effects, ChatGPT continues to generate offensive, toxic and dangerous content.
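Schematically, this labelling work is what turns model outputs into training data for a safety filter. The sketch below is a deliberately simplified, hypothetical illustration – the categories and data structures are my assumptions, not OpenAI's actual pipeline:

```python
# Simplified, hypothetical sketch of how human content labels become
# training data for a harmfulness classifier.
# This is NOT OpenAI's actual pipeline; all categories are illustrative.

from dataclasses import dataclass

@dataclass
class LabelledExample:
    text: str          # model output shown to a human annotator
    is_harmful: bool   # the annotator's judgement
    category: str      # e.g. "violence", "abuse", "none"

def annotate(text: str, is_harmful: bool, category: str = "none") -> LabelledExample:
    """Record one human judgement. Each call stands for real,
    often distressing, human labour."""
    return LabelledExample(text, is_harmful, category)

# The resulting dataset is then used to train a filter that screens
# or penalises harmful model outputs.
dataset = [
    annotate("some benign model output", is_harmful=False),
    annotate("some violent description", is_harmful=True, category="violence"),
]
training_pairs = [(ex.text, int(ex.is_harmful)) for ex in dataset]
print(training_pairs)
```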

Given the requirement of human knowledge in the form of data labelling, "artificial intelligence" is not intelligent. But it is not artificial either.

Materiality of Software

Although software is intangible and operates seemingly invisibly in the background of our environment, the Internet and the devices we interact with (to use ChatGPT, for instance) depend heavily on a material infrastructure.

Various undersea cables connect large server farms consisting of countless hardware components, for which miners have to extract metals under inhumane conditions. Furthermore, the energy consumption required to train current models has drastic ecological consequences. And the more data these models process in order to improve their "accuracy" and hence "fairness", the more energy they require; while the additionally generated data must be stored, which in turn requires larger data farms, and so on. Moreover, the training data often stem from people who have not given their consent.
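The compounding effect of this loop can be sketched with a back-of-the-envelope calculation. All constants below are hypothetical placeholders chosen for illustration, not real measurements:

```python
# Back-of-the-envelope sketch of the scaling loop described above:
# more training data -> more compute energy -> more stored data -> larger farms.
# ALL constants are hypothetical placeholders, not real measurements.

KWH_PER_GB_PROCESSED = 0.5   # hypothetical energy cost per GB of training data
STORAGE_GROWTH_FACTOR = 1.2  # hypothetical: each round adds 20% more data to store

data_gb = 1_000.0            # hypothetical starting training-data volume
total_energy_kwh = 0.0

for round_number in range(1, 6):  # five hypothetical training/refinement rounds
    total_energy_kwh += data_gb * KWH_PER_GB_PROCESSED
    data_gb *= STORAGE_GROWTH_FACTOR  # newly generated data must also be stored
    print(f"round {round_number}: data={data_gb:,.0f} GB, "
          f"cumulative energy={total_energy_kwh:,.0f} kWh")
```

Even with modest growth per round, both the stored data and the cumulative energy use keep climbing – which is the point of the argument above, independent of the exact numbers.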

Considering these physical aspects and impacts of software, Kate Crawford and Vladan Joler (2018) argue that the entire AI infrastructure is based on extractive and exploitative practices. Consequently, the notion of "ethical AI" or "fairness" must expand well beyond the model itself.

Creating desirable conditions

It is not that these two problems – the hidden labour and the materiality of software – are unknown, or that no actions are taken in that regard. Still, I would argue that we tend to overlook these issues in responsible innovation. Instead, when faced with a problem, technology or engineering seems to be "the" solution. Many, if not most, issues, however, require social and political solutions.

A first step in shifting the focus away from the product would be for people working on responsible innovation to become aware of the limits of their knowledge of the issue at hand. We should reflect on our assumptions and, in doing so, try to broaden our perspectives when it comes to understanding the larger dynamics at play.

As mentioned, stakeholder engagement is strongly emphasised in the responsible innovation literature. It is crucial, however, which stakeholders are included in the conversation: for example, people working on human rights, labour rights and climate justice, as well as the people affected.

So instead of asking "how should technology X function so that it benefits most stakeholders?" – a question that is often framed in economic terms – the question should shift to "how can we improve the conditions that produce a desirable technology and, with it, a particular way of life?"

Responsible innovation should thus consider the many steps that are necessary to achieve a desired goal. These include rules, regulations, organisational structures, procedures, etc. – none of which are technological.

Changing the entire machine-learning ecosystem is without doubt an ongoing and tedious task that cannot be accomplished overnight. Nevertheless, innovation should not be understood as a technological fix, but as a process of interaction and organisation¹ of the social. For example, it would also be conceivable to build a more ethical language model such as ChatGPT by changing the working conditions of content moderators.

The starting point of innovation is therefore not technology, but the social and political conditions surrounding it.

Footnotes

  1. Organisation is understood as both verb and noun.


References

Crawford, Kate and Joler, Vladan. (2018). Anatomy of an AI System. https://anatomyof.ai/

Hao, Karen. (2021, April 23). Stop talking about AI ethics. It's time to talk about power. MIT Technology Review. https://www.technologyreview.com/2021/04/23/1023549/kate-crawford-atlas-of-ai-review/

Perrigo, Billy. (2023, January 18). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. TIME. https://time.com/6247678/openai-chatgpt-kenya-workers/

About the author

Simon W.S. Fischer is a PhD candidate at the Donders Institute for Brain, Cognition and Behaviour focusing on the societal implications of AI, in particular on AI-based Decision Support Systems used in healthcare. Website: https://www.ru.nl/en/people/fischer-s