Header image: © Brian Woodlief (https://it.pinterest.com/pin/8725793024613269/)

User ethics: from individual responsibility to collective action

18/04/2025

Guest authors: Daniël E. Brouwer and Suzanne Galletly

Introduction

In a world where software decides mortgage applications, algorithms determine what news we see, and apps shape our daily habits, one question is central: who bears responsibility for ethical technology use?

User ethics is about how we deal with technology in a fair way. Everyone who uses technology bears responsibility - not only for their own use, but also for the impact on others. It is not just about following rules, but about making conscious choices in a digital society.

Until now, the focus has often been on designers and developers. They are considered responsible for the development of technology that aligns with ethical standards and for making decisions about what constitutes ‘ethical’ or ‘responsible’ AI. 

But this perspective is too limited. This blog explores why the end user plays a crucial role in ethical technology use and how responsibility should be viewed in a digital world where technology creates both constraints and opportunities.

The user as an autonomously acting being?

Technology is often seen as a neutral tool: the user determines how it is deployed. This idea assumes that people act rationally and consciously. In reality, this is rarely the case.

Work popularized by psychologist Daniel Kahneman suggests that the overwhelming majority of our behavior - by some estimates 95 per cent or more - is unconscious. This means that many interactions with technology are not the result of conscious deliberation but of reflex: think of automatically accepting cookies, routinely clicking 'OK' on system notifications, or blindly trusting AI-driven decisions.

In addition, users are subtly steered by design choices. Robert Cialdini identifies seven fundamental principles of influence - such as authority, social proof and scarcity - that unconsciously shape our behavior. In technology, we see these principles reflected in ecosystems such as the Apple iPhone, where exclusive features and social pressure make users less inclined to consider alternatives.

The idea of a completely autonomous user is thus an illusion. Choices are guided, constrained or even preempted by technology, often without users being aware of it.

The unpredictable user

Although technology use is largely influenced by systems and design choices, humans prove to be an unpredictable factor. Users often deviate from the intended use of technology and discover ways of applying it that designers had not anticipated. Twitter became a political mobilization tool, TikTok influences music charts, and the microwave oven was created by accident.

A striking example of unanticipated use is the evolution of 3D printing. Originally developed for rapid prototyping in industrial settings, the technology became accessible to consumers and hobbyists thanks to initiatives such as RepRap. This led to a thriving community sharing 3D-printed designs for educational materials, art and everyday items. But in 2013, the technology was also used to design and share a functional 3D-printed firearm: the 'Liberator'.

This development shows how technological appropriation leads to ethical dilemmas:

  • The designers of 3D printers probably never envisaged them being used for this purpose. 
  • The developer of the weapon design saw himself as an activist for free information, not primarily as a weapon producer.
  • Regulators were unprepared for this convergence of digital information and physical weapons.
  • The technology itself was ethically neutral, but the way it was applied had far-reaching ethical implications.

This shows how technology develops through a complex interplay of intentional design and creative appropriation, with ethical issues often only becoming apparent in hindsight. This raises the question: who bears responsibility for unintended uses? Should designers anticipate all possible scenarios, or does the responsibility lie with the user?

An ethical digital environment requires a balance between innovation and protection. Technology must allow for creative use, but at the same time be robust enough to limit harmful effects. This requires a combination of ethical design, adaptive regulation and shared learning between users, designers and policymakers.

The impact of technology on ethical conduct

Many modern technologies make it easier to shift responsibility to 'the system'. When decisions are automated, users and organizations become accustomed to no longer thinking critically about the consequences of their actions. This is what Hannah Arendt meant by the banality of evil: the greatest ethical dangers lie not in conscious evil, but in unthinking obedience to systems.

The Dutch ‘toeslagenaffaire’ (childcare benefits scandal) shows how technology creates ethical distance. Because victims were seen not as individuals but as data points within an automated decision-making system, empathy and nuance disappeared. This raises the question: would the same scale of injustice have been possible if victims had interacted directly with people instead of a system?

In addition, AI systems are changing the way we attribute responsibility. John Rawls' theory of justice argues that ethical systems should be designed to protect the least advantaged. In many AI decision-making processes, however, we see systems reinforcing existing inequalities by relying on historical data and patterns. An example is the recruitment algorithm that Amazon scrapped in 2018 after it proved to be biased against female candidates. The system was trained on resumes submitted to the company over a 10-year period, most of which came from men, leading it effectively to teach itself that male candidates were preferable.
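To make this mechanism concrete, the following is a minimal, hypothetical sketch (synthetic data and a plain NumPy logistic regression, not Amazon's actual system or data): a model fitted to historically skewed hiring decisions ends up assigning a negative weight to a feature that merely proxies for gender, reproducing the bias encoded in its training labels rather than correcting it.

```python
# Hypothetical sketch (synthetic data, plain NumPy): a logistic-regression
# "screening model" fitted to historically biased hiring decisions.
# This is NOT Amazon's actual system; it only illustrates the mechanism.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# What should matter: a candidate's skill.
skill = rng.normal(0.0, 1.0, n)
# A proxy feature correlated with gender (e.g. "women's chess club captain"
# appearing on a resume) that carries no information about skill.
womens_club = rng.integers(0, 2, n).astype(float)

# Historical decisions were biased: equally skilled candidates with the
# proxy feature were hired less often.
true_logit = 1.5 * skill - 1.0 * womens_club
hired = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit logistic regression to the biased labels with gradient descent.
X = np.column_stack([np.ones(n), skill, womens_club])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - hired) / n

# The learned weight on the proxy feature comes out clearly negative:
# the model has "taught itself" the historical prejudice in its training
# data, even though the feature says nothing about a candidate's ability.
print("weights (intercept, skill, womens_club):", np.round(w, 2))
```

Nothing in the fitting procedure is malicious; the bias enters solely through the labels. This is why 'the system decided' is never a complete answer to the question of responsibility.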

This calls for a review of who is liable: the user, the designer, or the system itself?

From reactive to proactive user ethics

User ethics is often reactive: people intervene only when technology has harmful effects. In a world where technological systems are becoming increasingly complex, this is insufficient. Ethics is concerned with what is morally right and wrong, as well as the moral principles governing behavior. As such, we need to encourage ethical action before problems arise.

This calls for a shift from rules to character building. Users should not simply go along with whatever technology presents, but actively reflect on their choices. Ethics is then not just a matter of compliance with laws, but of developing a moral compass. Even if you can do something technically and are allowed to do it legally, you should still ask yourself whether you should do it morally.

Consider recent tools such as MyHeritage's Deep Nostalgia, where a user uploads an old photo and it is brought to life as a short animation. This raises concerns about accuracy and authenticity, as the animation process may add details or movements that are not present in the original scene. In addition, animating old photos without proper consent, particularly when the people concerned are no longer alive, can raise ethical concerns about misrepresentation. It is up to the individual user to make a conscious and informed decision about how they view this through an ethical lens.

A proactive user ethic requires users not only to be aware of their own choices, but also to recognize how their digital behavior affects others. Digital citizenship is crucial here. This is not just about legal compliance, but about core values such as responsibility, respect and transparency.

Towards a future of shared user ethics

User ethics is not an individual responsibility, but a shared process. Hannah Arendt shows that ethics always has a political dimension. People can only act ethically if the structures in which they operate allow them to do so. John Rawls adds that technology must be fair to all users, not just the majority.

Making technology more ethical requires structural changes:

  • Transparent and ethical design - Technology should be designed to allow users to make conscious and informed choices.
  • Ethical guidelines as an integral part of policy - Not limited to separate documents but woven into daily practice and embedded into culture.
  • User control - Privacy settings should be accessible and understandable so that users retain control.
  • Ethical leadership - Organizations must take responsibility for the impact of their technology.

The issue of digital ethics is not new, but the risks have increased significantly with the growth of AI as a general-purpose technology. There are many useful resources that can be called upon to assist with the responsible use of AI, such as ethical guidelines, ISO standards and European harmonized standards. Whilst some of these resources are targeted more towards organizations, there is also immense value in users being aware of them, to help them be more critical in the decisions they make about the use of AI in their daily lives.

The future of user ethics lies not in passively following, but in actively shaping. Only through collaboration between users, organizations and developers can we create a digital environment where technology is not only efficient, but also ethical.

In an era where technology is intertwined with virtually every aspect of our lives, a renewed perspective on digital ethics is necessary. User ethics offers this perspective by not placing responsibility unilaterally on designers or users but recognizing that ethical action takes place in a complex interplay of systems, designs and human behavior. The paradox of technology use - simultaneously controlled and creative, limited and expanding - calls for an ethical framework that recognizes this tension. If we want to build a technological future centered on human values, we need to look beyond the illusion of total autonomy as well as complete determinism.

User ethics invites us not to abdicate responsibility, but to share it - not as a burden, but as a shared mission to make technology work for the good of all users, including the most vulnerable among us. The road to ethical technology starts not with perfect systems, but with engaged users who are willing to reflect critically, act consciously and work together to create a digital world where efficiency and ethics go hand in hand.


Author Bios:

Daniël E. Brouwer is a functional application management (functioneel beheer) expert, and Suzanne Galletly is Digital Skills Director at EXIN, where she is accountable for the design and positioning of EXIN’s certification portfolio for digital skills. Drawing on nearly 20 years of experience in the field of digital skills development, she helps ensure that EXIN is at the forefront of industry developments and contributes to a number of industry forums, including the Artificial Intelligence Skills Alliance (ARISA), the Digital4Sustainability Advisory Board, and the SIAM Community NL Board. Suzanne is passionate about everything concerned with the human side of digital, including lifelong learning, workforce transformation, and digital ethics.