Generative AI and Emotional Outsourcing: Deceiving Others and Ourselves?
- Pascal L. Mowla

Communicating during a break-up is difficult enough for many couples, and this is often the case irrespective of the events which precede it. So too is saying sorry, or finding the right words to communicate an emotionally resonant message that sincerely reflects one’s sentiments. Fortunately, it appears as though humans need not trouble themselves with such cognitively burdensome nuisances any longer, for there is a new tool in town. One that not only mitigates the inconvenience of having to take the time required to author such communication, but also diminishes the need for the corresponding cognitive labour, self-reflection, and emotional processing.
As has been observed in much of the relevant literature, the rapid proliferation of generative AI has given rise to a host of concerns about outsourcing, plagiarism, and intellectual property rights, but much of the present debate has tended to focus on the impact that the emergence of such technology has on creative and academic labour. Less well treated, but no less troubling, is the phenomenon of what I refer to as emotional-authorial outsourcing. Unlike outsourcing in creative or academic contexts, where people prompt generative systems to produce text, analysis, or creative content, emotional-authorial outsourcing occurs whenever people use such systems to generate emotionally resonant content in place of authoring such communication themselves.
The case that first drew my attention to this phenomenon was not drawn from the proverbial armchair, but from a conversation with a close friend who confided in me a story about the final weeks of her relationship. After repeated infidelity, her partner began sending long, emotionally intricate letters, apologies suffused with vulnerability, self-awareness, and poetic remorse. They were, she admitted, persuasive, and for a time she believed their sincerity. Only later did he confess that the letters were largely the product of a large language model. The sentiments, in some loose sense, were supposedly his own, but the emotional-authorial labour was outsourced almost entirely.
After hearing my friend’s story, I began to collect others. A colleague of a friend’s sister prompted an LLM to produce an apology to coworkers. Another friend reported that their partner generated a heartfelt poem. An acquaintance fed transcripts of her partner’s communication into a language model for emotional analysis and behavioural advice. In fact, since ChatGPT’s launch in 2022, the phenomenon of emotional outsourcing to generative AI has become so prevalent as to merit reporting by the BBC and CNN in 2025 and 2026 respectively.
But what, if anything, is morally wrong with emotional-authorial outsourcing? I worry that by engaging in such outsourcing, we not only risk wronging others, but also risk wronging ourselves.
Deception of Others
Within the kinds of cases I wish to draw attention to, something like the following happens. Person A prompts a generative system to produce an emotionally resonant message (e.g., an apology, love letter, confession, etc.) and communicates it to Person B without disclosing its authorial source. Since B receives the message within an ordinary context where no contrary communicative norms are established, we may safely presume that A “warrants” (or guarantees) the truth of whatever it is that they are communicating. Unlike a game of poker in which players reasonably expect their opponents to deceive, ordinary communicative scenarios are typically governed by what philosophers refer to as the default warranty of truth. This presupposition of truth telling in ordinary contexts makes everyday communication possible. When communicating in such scenarios, it is plausible to assume that we guarantee not only the truth of what we are saying to others, but also the truth of who has said what.
When that presumption is quietly and intentionally violated, it results in deception or, at the very least, an attempt to deceive. Even if the sentiments expressed are not wholly fabricated, B is invited to believe something false about their source, namely that the emotional and authorial labour was performed by A. And where such messages are designed to influence reconciliation, forgiveness, or trust, the deception begins to look like manipulation. It shapes B’s emotional orientation and practical deliberation without supplying her with genuine reasons for action.
That alone should give us reason to be morally sceptical of emotional outsourcing to generative AI. But what struck me following these conversations was something more unsettling. In each case, concerns about deception, authenticity, or relational harm immediately arose. Yet beneath these worries lay another possibility: that sustained reliance on such systems might risk distorting how people understand themselves.
Self-Deception
In support of this worry, consider some of the recent empirical research on cognitive outsourcing and generative AI. In one study, participants who relied on large language models for writing tasks showed weaker memory retention, reduced cognitive engagement, and, most strikingly, a fragmented sense of authorship when contrasted with “brain-only” and “search-engine” groups who were also instructed to complete the same tasks. Indeed, some participants reported full ownership of their work despite displaying diminished self-monitoring and recall of what they actually contributed. The researchers describe this as a form of “cognitive debt”: a condition in which repeated reliance on generative systems gradually replaces the effortful processes required for independent thinking.
This, I believe, reveals not only a new way of deceiving others, but also a new way of deceiving ourselves.
To illustrate further, suppose someone repeatedly turns to AI to articulate apologies, declarations of care, or emotionally nuanced reflections. Over time, the outsourcing of expressive content may come to stand in for the person’s own labour. The desire to see oneself as caring, attentive, or articulate is satisfied, but without the corresponding exercise of the capacities that would make that self-conception accurate. The risk is that the person begins to treat the output, and others’ responses to it, as evidence of their own emotional-authorial capacities, intentions, and evaluative commitments.
Crucially, self-deception of the kind I have in mind need not involve consciously telling oneself a lie. Many philosophers instead believe that self-deception occurs when a person’s desire—for example, that they are remorseful, emotionally articulate, or attentive—motivates a biased pattern of inquiry. On this view, would-be self-deceivers selectively attend to evidence that supports the desired belief and neglect unwelcome evidence that contradicts it. The resulting false belief is not formed with an intention to deceive oneself, but through a kind of motivated omission.
Turning back to the original case, we can now see how such emotional-authorial outsourcing might provide the ideal conditions for such motivated inquiry to take place. The person intentionally prompts the system because doing so is reassuring insofar as it produces the kind of expression one wishes one were capable of producing, or willing to produce. But what risks being omitted during this process is the question of whether the output genuinely reflects one’s own capacities, intentions, or personal commitments. The convenience with which such outsourcing can occur at the click of a button expedites this process and may fragment the relationship between appearance and authorship. Over time, the person may come to believe that they are emotionally attentive, reflective, or sincere in ways that their unassisted behaviour would not support. The belief is sustained not by explicit falsehood, but by a pattern of motivated reliance.
Once such outsourcing is understood to pose a risk of AI-mediated self-deception, a question naturally follows regarding its ethical significance. Although many might be tempted by the view that such self-deception amounts to nothing more than imprudence or an epistemic vice, I believe there is reason to think that the problem runs deeper.
Moral Duties to Oneself
We are not momentary bundles of attitudes and dispositions, but temporally extended people whose projects, relationships, and self-conceptions unfold over time. Much of what matters to us—educational achievement, emotional maturity, artistic or professional competence—depends on sustained effort and on the preservation of certain higher-order capacities. Among these is the capacity for minimally accurate self-knowledge or, more straightforwardly, the ability to monitor, revise, and stand behind our commitments as ours.
When we repeatedly engage in patterns of motivated outsourcing that predictably erode this capacity, we do not merely risk embarrassing ourselves in ways that are epistemically troubling or imprudent. The greater worry, I suggest, is that we also risk wrongfully undermining the conditions of our own diachronic agency.
One way to frame this is in terms of duties we owe to our past- and future-selves. Some philosophers argue that our past- and future-selves have morally significant interests that can ground constraints on present action. After all, our past-selves invested time and effort in developing emotional or expressive capacities and pursuing personal commitments, whereas our future-selves depend on us to preserve the capacities necessary for pursuing personal projects and sustaining relationships.
When people predictably undermine these conditions through self-deceptive patterns of reliance, they risk disregarding the cross-temporal interests that structure their commitments and capacities across their lives. It is crucial to note that the wrong here is not paternalistic, for it does not impose “alien” values from the outside. It arises from within the person’s own evaluative outlook. If I care about being a sincere partner, an attentive friend, or emotionally articulate, then preserving the capacity to accurately understand whether I am in fact those things becomes a constraint on how I may act over time.
AI-mediated self-deception poses a moral risk to oneself when it begins to replace, rather than supplement, the labour through which such capacities are sustained. The facsimile of sincerity gradually stands in for the activity of being sincere, and the narrative of oneself as the author of one’s commitments, with the capacity to accurately monitor and revise them, drifts away from the reality of one’s practices.
At that point, the harm is not exhausted by deception of others. It is a failure of self-relation. One risks wronging one’s past-self, who invested in developing capacities that are now being eroded, and one’s future-self, who will inherit a diminished ability to situate, revise, and pursue commitments as their own.
As various companies seek to embed generative AI into popular communication platforms, we should not only be wary of undermining the integrity of our communication with others, but also of rupturing the integrity of our relationship to ourselves. Such systems enable practices that make it easier to appear emotionally fluent whilst quietly eroding the capacities that make such fluency genuine. And because self-deception obscures its own operation, the erosion may be difficult to detect from the inside.
The convenience is real. So are the risks.
Pascal L. Mowla is completing his DPhil in Politics at the University of Oxford and pursues engaged philosophy on issues of contemporary political and ethical significance.
Disclaimer: Any views or opinions expressed on The Public Ethics Blog are solely those of the post author(s) and not The Stockholm Centre for the Ethics of War and Peace, Stockholm University, the Wallenberg Foundation, or the staff of those organisations.