Sharing Atrocities: The Ethics of Posting, Tagging, and Live-Streaming Images of Mass Violence
- Paul Morrow

It’s an increasingly common experience: you open your laptop, tap your tablet, or check a notification on your phone and suddenly find yourself confronting images of atrocity. Across news sites and social media platforms, private chat groups and public gaming servers, photos and videos of assassinations, terrorist bombings, and extrajudicial killings abound. What remains rare is philosophical reflection on the ethics of posting, tagging, or otherwise sharing such graphic scenes.
In what follows, I distinguish four features of digital sharing of atrocities that complicate efforts to establish norms for this visual activity. Each of these features tends to undermine the content guidelines and community standards put forward over the past decade by major social media companies. Each ties in to larger philosophical debates in ethics and epistemology.
For all that images tax our normative capacities, it is still necessary to develop norms for dealing with them. The ethical stakes of sharing photos, films, and other images of atrocities are too high to do otherwise. Accordingly, I conclude this brief essay by proposing four norms for individuals and groups engaged in sharing pictures of intolerable harms.
1. Unforeseeable Audiences
A core feature of digital communication is the potential for messages to reach audiences extending far beyond their primary recipients.[1] In part, this broad reach results from human actions, as digital tools make it easy for individual users to forward, reshare, or otherwise recirculate the messages they receive. In part, it is a product of the algorithmic logic of digital media, under which contents that attract high levels of engagement are elevated and displayed to ever-expanding audiences.
Benign images can become subjects of such extended sharing in the wake of atrocities. Shortly after the October 7, 2023 Hamas attack on Israel, video footage of Egyptian paratroopers practicing maneuvers in Cairo went viral on the mistaken belief that the clips showed Hamas fighters launching out of Gaza.[2] Likewise, school photos and social media pictures of persons wrongly identified as perpetrators of mass shootings often outrun efforts to fact-check or verify such posts, with results that can be devastating.[3]
But it is not only innocent images that ramify across digital spaces. Perpetrator-created images also tend to reach extended and unexpected audiences. The live-stream video created by the Christchurch, New Zealand mass shooter in 2019 was shared thousands of times in the wake of that attack, and recently helped inspire an August 2025 school shooting in Minneapolis.[4] Photos and videos created by IS fighters in the 2010s motivated hundreds of men and women in Europe to travel to Syria or to carry out violent acts closer to home – including the 2015 Bataclan attack in Paris and the 2025 Yom Kippur attack in Manchester. In all of these cases, the logic of digital sharing conveyed graphic images to unforeseen, unforeseeable audiences.
2. Uncontrollable Contents
The documentary power of photography has long been the chief justification for its use in humanitarian and human rights advocacy.[5] But this documentary power depends on the ability of photos and films to transmit contents unaltered across temporal and geographic distances. With the emergence of new digital tools, this ability is open to question.
Advances in generative AI have already begun to undermine the photographic work of humanitarian organizations. In some cases, the damage has been self-inflicted – as when Amnesty International published an AI-generated image of police detaining a young woman at a protest in Colombia.[6] In other cases, the obfuscation originates with third parties – as when actors opposed to NGO efforts in Gaza falsely claimed that photos and videos of starving Palestinians were in fact carefully constructed fakes.[7]
Philosopher Regina Rini suggests that claims of this latter kind will become increasingly plausible as machine-generated images proliferate. She argues that digitally altered images tend to undermine the epistemic backstop which photographs and films once provided for high-stakes political speech.[8] While Rini’s concern is for viewers of images, a parallel problem exists for those who share images of atrocity. Whether such sharing is guided by a desire to inform, an intention to intervene, or an urge to express moral outrage, the uncontrollable contents of images shared in digital spaces can cause such actions to misfire, or even backfire.
3. Inscrutable Intentions
Individuals and groups who publish images of atrocities may be perfectly conscious of their intentions in doing so. Those intentions are, however, rarely so evident to the people who receive and recirculate such imagery. Discerning the intentions behind any given “image/text” is a difficult task, even setting aside doubts about the veracity of the scenes depicted. The stakes of this challenge increase in cases of digital sharing of images of atrocities.
One practical demonstration of this problem comes from Facebook’s Hateful Memes Challenge, conducted in 2020. In this contest for AI developers, Facebook sought solutions for the task of training machine-learning systems to recognize and correctly label posts that conveyed hateful or violent messages through non-reducible combinations of words and imagery.[9] Extra points were awarded to solutions that facilitated “zero-shot” detection of hateful memes, though the notion that AI systems can filter complex visual messages while consistently avoiding false positives and false negatives seems dubious.
A second practical demonstration of this challenge comes from a class I taught recently on Genocide and Justice. After asking students to research the use of political cartoons to address mass violence, I received a presentation that unwittingly included a cartoon featured in one of Iran’s notorious “Holocaust Cartoon” exhibitions. Failing to find a caption or other explanatory information in their cursory Google Image search, the students assumed that this image of a Jewish man seeing Hitler in the mirror was intended as a comment on the prevalence of antisemitism, rather than as a noxious equation of Jews with Nazis.[10]
4. Unreliable Platforms
Platforms depend on user-generated content, but the ends they pursue are rarely users’ own. At best, platforms are unreliable partners for humanitarian campaigns and human rights advocacy. At worst, they become complicit in violating the human rights of their users, notably in cases where users attempt to share images of atrocities.
In December 2021, Rohingya refugees sued Meta, Facebook’s parent company, for $150 billion, alleging that the platform had aided and abetted the genocide of this minority ethnic group in Myanmar. Nasir Zakaria, speaking for a Rohingya advocacy group based in Chicago, argued at the time that Facebook had done harm by preventing targeted communities from calling attention to or issuing warnings about the violence on the platform, including by actively removing photos and texts documenting killings.[11]
More recently, the U.S. Department of Homeland Security announced plans to review five years of social media history for incoming tourists to the US, having previously subjected visa applicants to such scrutiny.[12] The primary target of this proposed policy appears to be digital advocacy against Israel’s war in Gaza. It would be foolish to expect leading apps and platforms to resist this invasion of users’ privacy, especially since the same companies have already partnered with state security agencies in developing facial recognition and other surveillance technologies.
In my book Seeing Atrocities, published in 2025, I advised readers to prepare for the costs of digital advocacy. I did not expect to see this cautionary plea so thoroughly borne out in the weeks and months following publication.
5. Conclusion
If the audiences for digital sharing are unforeseeable, if the contents of such sharing are uncontrollable, if the intentions behind sharing are inscrutable, and if the platforms on which sharing occurs are unreliable, then it must be asked: why should we ever share images of atrocities? The positive reasons for such sharing can be found in those many cases, from Ukraine and Gaza to Charlottesville and Christchurch, in which images created and shared in digital spaces helped block attacks, convict perpetrators, or expose cover-ups.
The real question, I suggest, is not whether we should ever share images of atrocities, but how we should do so. By way of conclusion, I propose the following four norms for individuals and groups engaged in sharing images of atrocities:
DON’T assume the images you share stop with your network.
DO recognize that platforms follow their own prerogatives.
DON’T repost images without due consideration of their provenance and possible meanings.
DO prepare for the costs of digital advocacy.
Adopting these norms will not guarantee the success, or eliminate the hazards, of digital sharing of atrocities. It will, however, put that activity on a better ethical footing.
Paul Morrow is a Visiting Research Fellow in the School of Philosophy at University College Dublin and a 2026 NOMIS Fellow at the University of Basel. His most recent book, Seeing Atrocities, was published in September 2025 by Oxford University Press.
Notes
[1] Onora O’Neill, A Philosopher Looks at Digital Communication (New York: Cambridge University Press, 2022).
[2] Associated Press, “Video of parachute jumpers in Egypt mischaracterized as Hamas paratroopers during Israel attack,” October 10, 2023. Accessed online at: https://apnews.com/article/fact-check-parachute-hamas-egypt-777994192182.
[3] Anjali Huynh and Shannon Larson, “A Dangerous Road: Misinformation is Spreading about the Brown University Shooter,” The Boston Globe, December 17, 2025. Accessed online at: https://www.bostonglobe.com/2025/12/17/metro/brown-university-shooting-misinformation-republicans/.
[4] Mariana Olaizola Rosenblat and Luke Barnes, Digital Aftershocks: Online Mobilization and Violence in the United States, NYU Stern Center for Business and Human Rights, October 2025, p. 13.
[5] Heide Fehrenbach and Davide Rodogno (eds.), Humanitarian Photography: A History (New York: Cambridge University Press, 2015).
[6] Luke Taylor, “Amnesty International Criticised for Using AI-Generated Images,” The Guardian, May 2, 2023. Accessed online at: https://www.theguardian.com/world/2023/may/02/amnesty-international-ai-generated-images-criticism.
[7] Michael Wilner, “Israelis rebuff Trump, insisting images of starvation in Gaza are ‘fake’,” Los Angeles Times, July 28, 2025. Accessed online at: https://www.latimes.com/world-nation/story/2025-07-28/trump-rejects-israeli-denials-of-starvation-in-gaza.
[8] Regina Rini, “Deepfakes and the Epistemic Backstop,” Philosophers’ Imprint 20, no. 24 (August 2020).
[9] Meta, “Hateful Memes Challenge and dataset for research on harmful multimodal content,” May 12, 2020. Accessed online at: https://ai.meta.com/blog/hateful-memes-challenge-and-data-set/.
[10] For more detailed discussion of this example, see Paul Morrow, Seeing Atrocities: Ethics for Visual Encounters with Intolerable Harms (New York: Oxford University Press, 2025), p. 121.
[11] WBUR, “Rohingya sue Facebook alleging company didn't stop hate speech spread by Myanmar,” Here & Now, December 20, 2021. Accessed online at: https://www.wbur.org/hereandnow/2021/12/20/rohingya-myanmar-facebook.
[12] Camilo Montoya-Galvez, “Tourists from 42 countries will have to submit 5 years of social media history to enter U.S. under Trump plan,” CBS News, December 10, 2025. Accessed online at: https://www.cbsnews.com/news/us-tourists-social-media-history-5-years-trump/.
Disclaimer: Any views or opinions expressed on The Public Ethics Blog are solely those of the post author(s) and not The Stockholm Centre for the Ethics of War and Peace, Stockholm University, the Wallenberg Foundation, or the staff of those organisations.


