By Maximilian Kiener

Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality



77% of our electronic devices already use artificial intelligence (AI). By 2025, the global AI market is estimated to grow to 60 billion US dollars. By 2030, AI may even boost global GDP by 15.7 trillion US dollars. And, at some point thereafter, AI may become the last human invention, provided it optimises itself and takes over research and innovation, leading to what some have termed an ‘intelligence explosion’. According to Google CEO Sundar Pichai, AI will then have a greater impact on humanity than electricity and fire did.


Some of these latter statements will remain controversial. Yet it is also clear that AI increasingly outperforms humans in areas that no machine had ever entered before, including driving cars, diagnosing illnesses, and selecting job applicants. Moreover, AI promises great advantages, such as making transportation safer, optimising health care, and assisting scientific breakthroughs, to mention only a few.


There is, however, a lingering concern. Even the best AI is not perfect, and when things go wrong, e.g. when an autonomous car hits a pedestrian, when Amazon’s Alexa manipulates a child, or when an algorithm discriminates against certain ethnic groups, we may face a ‘responsibility gap’, a situation in which no one is responsible for the harm caused by AI. Responsibility gaps may arise because current AI systems themselves cannot be morally responsible for what they do, and the humans involved may no longer satisfy key conditions of moral responsibility, such as the following three.


Even the best AI is not perfect, and when things go wrong [...] we may face a ‘responsibility gap’, a situation in which no one is responsible for the harm caused by AI.

First, many scholars argue that a key condition of responsibility is control: one can be responsible for something only if one had meaningful control over it. Yet AI systems afford very little control to humans. Once in use, they can operate at a speed and level of complexity that make it impossible for humans to intervene. Admittedly, people may be able to decide whether to deploy AI in the first place, but once that decision has been made, even justifiably, not much control remains; the mere decision to risk a bad outcome, if it is itself justified and neither negligent nor reckless, may not be sufficient for genuine moral responsibility. Another reason for the lack of control is the increasing autonomy of AI. ‘Autonomy’ here means the ability of AI systems not only to execute tasks independently of immediate human control, but also, via machine learning, to shape the very principles and algorithms that govern their operation; such autonomy significantly disconnects AI from human control and oversight. Lastly, there is the so-called problem of many hands: a vast number of people are involved in the development and use of AI, and each of them has, at most, only a very marginal degree of control. Hence, insofar as control is required for responsibility, responsibility for the outcomes of AI may be lacking.


Second, scholars have argued that responsibility has an epistemic condition: one can be responsible for something only if one could have reasonably foreseen or known what would happen as a result of one’s conduct. But again, AI makes it very difficult to meet this condition. The best AI systems tend to be the most opaque ones. We may understand what goes into an AI system as its input data, and what comes out as a recommendation or action, but often we cannot understand what happens in between. Deep neural networks, such as Google’s image recognition model ‘Inception v3’, can base a single decision on over 20 million parameters, which makes it impossible for humans to examine the decision-making process. In addition, the way AI systems process information and make decisions is becoming increasingly different from human reasoning, so that even scrutinising all the steps of a system’s internal workings wouldn’t necessarily yield an explanation that makes sense to a human mind. Finally, AI systems are learning systems that constantly change their algorithms in response to their environment, so that their code is in constant flux, a kind of technological panta rhei. For these reasons, we often cannot understand what an AI will do, why it will do it, and what may happen as a further consequence. And insofar as the epistemic condition of responsibility requires the foreseeability of harm with some degree of specificity, rather than only in very general terms (e.g. that autonomous cars ‘sometimes hit people’), meeting the epistemic condition presents a steep challenge too.


Third, some theorists argue that one is responsible for something when it reflects one’s quality of will, whether one’s character, one’s judgment, or one’s regard for others. On this view, control and foresight may not be strictly necessary, but even then, the use of AI poses problems. When an autonomous car hits a pedestrian, for instance, it may well be that the accident does not reflect the will of any human involved. We can imagine a case in which there is no negligence but only bad luck, so that the accident would not reflect poorly on anyone’s character, judgment, or regard for others.


[T]he debate surrounding responsibility gaps in AI is not just another case study in applied ethics, but an important opportunity for us to discuss our future moral frameworks and to determine the place of responsibility within them.

Thus, various approaches to responsibility suggest that no one may be morally responsible for the harm caused by AI. But even if this is correct, a further important question remains: why should we care about a responsibility gap in the first place? What would be so bad about a future without, or with significantly diminished, human responsibility?


To address this question, we need to distinguish between at least two central ideas about responsibility. The first explains responsibility in terms of liability to praise or blame.[1] On some of these views, being responsible for some harm means deserving blame for it. A responsibility gap would thus mean that no one could be blamed for the harm caused by AI. But would this be so bad? Of course, people may have the desire to blame and punish someone in the aftermath of harm. In addition, scholars argue that blaming practices can be valuable for us, e.g. by helping us to defend and maintain shared values.[2] Yet the question remains whether, in the various contexts of AI, people’s desire to blame really ought to be satisfied rather than overcome, and also what value blaming practices ultimately hold in these different contexts. Depending on our answers to these questions, we may conclude that a gap in responsibility as blameworthiness is not so troubling in some areas, while blame may still be of value in others.


The second idea identifies responsibility with answerability, where an answerable person is one who can rightly be asked to provide an explanation of their conduct.[3] Being answerable for something does not imply any liability to blame or praise; it is, at most, an obligation to explain one’s conduct to (certain) others. Blame would then be determined by the quality of one’s answer, e.g. by whether one has a justification or excuse for causing harm. This approach to responsibility centres on the idea of an actual or hypothetical conversation, based on mutual respect and equality, in which the exchanged answers are something we owe each other as fellow moral agents, citizens, or friends. Here, the question of a responsibility gap arises in a different way and concerns the loss of a moral conversation. Depending on our view of this matter, we may conclude that losing responsibility as answerability could indeed be a serious concern for our moral and social relations, at least in those contexts where moral conversations are important. In any case, the value and role of answerability may be quite different from the value and role of blame, and so addressing the challenge of responsibility gaps requires a nuanced approach too.


Hence, the use of AI invites us to reconsider the conditions as well as the role of moral responsibility. For this reason, the debate surrounding responsibility gaps in AI is not just another case study in applied ethics, but an important opportunity for us to discuss our future moral frameworks and to determine the place of responsibility within them.


Notes

[1] Cf. Pereboom, D. Free Will, Agency, and Meaning in Life. Oxford University Press, 2014.
[2] Cf. Franklin, C. ‘Valuing Blame’. In Coates, D. J. and Tognazzini, N. (eds), Blame: Its Nature and Norms, 207–223. Oxford University Press, 2013.
[3] Smith, A. ‘Responsibility as Answerability’. Inquiry 58(2), 2015: 99–126.


Maximilian Kiener holds a Leverhulme Early Career Fellowship in philosophy at the University of Oxford. His work focuses on consent, responsibility, and artificial intelligence. More information can be found on his personal website: https://maximilian-kiener.weebly.com/
