Charlotte Franziska Unruh

Algorithms at Work and Human Rights

According to a survey of UK workers recently published by the Trades Union Congress, 60 per cent of workers think that they have been subject to surveillance and monitoring, and three in ten say that monitoring at work has increased since the start of the Covid-19 pandemic.

Increased monitoring is part of a broader trend towards digitalisation and the use of Artificial Intelligence (AI) technologies in the workplace. In industry, for example, large-scale data collection supports decision-making in production and logistics. Tracking goods and batches through the production process can help companies plan and centrally steer processes to make them more efficient. Employers can also use data about employees to support management decisions in areas such as recruitment and the organisation of day-to-day work. This data can be collected, for example, via email monitoring, video and voice recording, productivity apps, or wearable smart devices. Data-driven methods can be used to analyse skillsets in the workforce, predict team performance, or recommend optimal staffing schedules.

Ethical reflection on data collection and algorithmic systems at work is important. Algorithmic systems can have a significant impact on the wellbeing of workers by affecting hiring decisions, work organisation, and performance evaluation. Moreover, algorithmic systems used in the workplace can have effects on families and communities. For example, consider scheduling software that assigns workers' shifts based on real-time data. Such software can save costs, but, as noted by Pegah Moradi and Karen Levy (2020), it also shifts business risks arising from varying customer demand onto workers. For workers, irregular shifts can make it very difficult to arrange childcare or personal appointments. Further, artificial intelligence systems, including those used in workplaces, can require substantial energy to run, and their environmental effects may be significant (the environmental cost of AI is discussed by Aimee van Wynsberghe (2021)).


Managers have long supervised workers and used information to improve work processes. However, digital technology has dramatically increased the capacity for ubiquitous and far-reaching worker surveillance. It is often not transparent which data is being collected, how data influences decisions, and who can be held accountable for algorithmic decisions. In this sense, algorithmic management adds a new dimension to existing surveillance and optimisation practices.

The speed at which work has been digitalised in recent years has left little time for ethical analysis and public debate. We should discuss how we imagine the future of work, and include different perspectives in this discussion. (The need for debate and regulation of artificial intelligence technology has been recognised by lawmakers. For example, the European Commission has proposed a draft Artificial Intelligence Act that would regulate what kinds of artificial intelligence can be used in the EU.) Ethical theories and concepts can inform this discussion. In what follows, I show how ethical theorising about human rights might be used to evaluate digital surveillance and monitoring in the workplace.

Human rights are norms that protect the dignity and autonomy of all people (see Nickel 2021). Appeals to the human rights of workers and other stakeholders are increasingly made in the debate on corporate responsibility, and also in the ethics of artificial intelligence. (For example, in the report 'Ethically Aligned Design' by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, human rights are the first of eight general principles put forward.) What could it mean in practice to respect the human rights of workers in the context of workplace technology? In the following, I offer some thoughts drawing on a paper by Alexander Kriebitz and Christoph Lütge (2020), in which they discuss the obligations of companies regarding human rights in the context of AI. Kriebitz and Lütge distinguish scenarios for understanding human rights violations related to AI: scenarios in which the use of AI conflicts with human rights concerning its input, its output, or the intention behind its use. (A fourth scenario, which I will not discuss in this post, is the context in which AI is used.) I suggest that these scenarios can help us identify concerns that arise in the use of algorithmic systems at work.


The first of Kriebitz and Lütge's scenarios concerns situations in which the input to AI violates human rights. Such situations arise, for example, when the collection of data violates someone's right to privacy. In the workplace, data collection is sometimes necessary for administrative, legal, or organisational purposes (such as recording a worker's work hours). However, monitoring and surveillance at work today are often much more far-reaching and intrusive, for example when there is blanket monitoring of a worker's online activities or interactions, or when monitoring continues outside work hours.

A further scenario concerns the output of AI. The recommendations given by software can have unintended problematic consequences. Kriebitz and Lütge (2020, 98) give the example of biased hiring software that discriminates against women. Unintended rights violations might also occur elsewhere, such as in software that makes work decisions with the aim of optimising efficiency. Consider algorithms that set the pace for workers in warehouses: they might enforce performance targets that intensify work, leaving workers overly exhausted, stressed, and at risk of injury or illness. Moreover, algorithmic outputs can be flawed, with potentially tragic consequences. In the UK Post Office scandal, for example, faulty accounting software led to workers being wrongly prosecuted, and even imprisoned, for alleged theft. Accountability and remedy mechanisms are needed to ensure that workers can appeal algorithmic decisions.

Moreover, artificial intelligence can be used as an instrument by actors with problematic intentions, leading to human rights violations. An example mentioned by Kriebitz and Lütge (2020, 101) is software used to discriminate against trade union members in promotion processes. A further example might be algorithmic systems that predict where the likelihood of unionisation is high, which actors could use to take steps to actively prevent unionisation. In such cases, the use of artificial intelligence undermines the right of workers to form trade unions to represent their interests.

Does the risk of human rights violations through algorithmic technologies in the workplace affect all workers equally? There is reason to think that vulnerable workers, such as those in insecure employment, are especially affected. Data-driven management and monitoring first took hold in the gig economy, where workers take temporary and flexible jobs. Evaluations of the human rights impact of workplace technology should consider the justice implications that arise if risks fall disproportionately on workers who might already be disadvantaged.


Having said this, the use of AI in the workplace is not necessarily problematic. Sometimes, AI hardly affects human workers at all. An example might be software that predicts when machine parts need replacement, to prevent them from breaking during busy periods. More interestingly, some uses of artificial intelligence and similar technologies can offer benefits to workers. When done right, algorithmic decision-making can be compatible with workers' rights and might even have the potential to strengthen them. In a current research project at the Technical University of Munich, our research group investigates how workplace algorithms can take into account the needs and preferences of workers. We think that if workplace technology gives workers greater freedom in deciding how they work, and authority over which data they collect and share, then technology can strengthen the rights of workers rather than undermine them. Examples might be scheduling software that allows workers to enter their preferred shifts directly into the system, or productivity apps that workers can use for their own benefit without sharing data further. (Of course, much more needs to be said about how these and similar technologies could be designed and used responsibly. The point here is not that all such technologies would be ethical, but rather that some technologies can plausibly offer benefits to workers without violating their human rights.)

Human rights considerations can provide an important ethical perspective for debating the risks and chances of AI at the workplace. In particular, they can be used to draw ethical limits for permissible AI use, and to provide guiding principles for beneficial AI use. Human rights considerations are crucial for retrospective evaluation and responsible forward-looking design, use, and regulation of algorithmic systems.


Kriebitz, Alexander, and Christoph Lütge. 2020. ‘Artificial Intelligence and Human Rights: A Business Ethical Assessment’. Business and Human Rights Journal 5 (1): 84–104.

Moradi, Pegah, and Karen Levy. 2020. ‘The Future of Work in the Age of AI: Displacement or Risk-Shifting?’ In The Oxford Handbook of Ethics of AI, edited by Markus D. Dubber, Frank Pasquale, and Sunit Das, 269–88. Oxford University Press.

Nickel, James. 2021. ‘Human Rights’. In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Fall 2021. Metaphysics Research Lab, Stanford University.

Wynsberghe, Aimee van. 2021. ‘Sustainable AI: AI for Sustainability and the Sustainability of AI’. AI and Ethics 1 (3): 213–18.

Disclaimer: Any views or opinions expressed on The Public Ethics Blog are solely those of the post author(s) and not The Stockholm Centre for the Ethics of War and Peace, Stockholm University, the Wallenberg Foundation, or the staff of those organisations.

