Yahya Sinwar, the leader of the Hamas militant organization, was killed by the Israeli military in the southern Gaza city of Rafah in October 2024. Given the role Sinwar played in the planning and execution of the October 7 terrorist attack, as well as his role in the development of Hamas's military wing, his killing was seen as a possibly game-changing victory for the Israeli prime minister, Benjamin Netanyahu.
But, for all sides in the conflict, debate quickly turned to the consequences of his death. Would it change the political possibilities for a resolution to the war in Gaza? And would it transform him into a powerfully symbolic martyr inspiring new generations of militants?
In my research and teaching at Lancaster University, I develop what could be described as "war futurism." It explores the possible futures ahead of us in times that might be shaped in dramatic and unpredictable ways by AI, climate emergencies, space wars and the technological transformation of the "cyborg" body.
In 2023, I wrote a book titled "Theorising Future Conflict: War Out to 2049." It included a fictional scenario involving a leader in a terrorist organization who was rumored to have been generated by AI as a means of producing a powerful figurehead for a group that was losing leaders to drone strikes.
Sinwar's death prompted me to think again about what the age of generative AI tools might mean for strategic thinking and planning within organizations losing key figures.
Will we soon see a real-life situation in which dead leaders are replaced by AI tools that produce virtual figures circulating through deepfake videos and online interactions? And could those figures be used by members of the organization for strategic and political guidance?
American cyberpunk author Rudy Rucker has written before about the possibility of producing what he calls a "lifebox", where a person could be simulated in digital worlds. Movies like the 2014 US science fiction thriller "Transcendence" have also explored the possibility of people being able to "upload" their consciousness into digital worlds.
Rucker's idea is not so much about uploading consciousness. It is instead about creating the simulation of a person based on a large database of what they've written, done and said.
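Stripped to its essentials, a lifebox is a retrieval system layered over a personal archive. As a purely illustrative sketch (not anything Rucker describes, or anything that has been built), the toy Python below "answers" a question by returning the stored passage that shares the most words with it; the Lifebox class and the sample passages are hypothetical stand-ins for the vast database a real lifebox would need.

```python
from collections import Counter

class Lifebox:
    """Toy lifebox: answers by retrieving the closest passage from a personal corpus."""

    def __init__(self, corpus):
        self.corpus = corpus
        # Bag-of-words index: one word-count table per stored passage.
        self.index = [Counter(text.lower().split()) for text in corpus]

    def respond(self, question):
        query = Counter(question.lower().split())
        # Score each passage by how many words it shares with the question.
        scores = [sum((query & passage).values()) for passage in self.index]
        best = max(range(len(scores)), key=scores.__getitem__)
        return self.corpus[best]

# Hypothetical usage: two lines standing in for a lifetime of writing.
box = Lifebox([
    "Strategy must adapt to the terrain, not the terrain to strategy.",
    "I have always believed negotiation follows strength, never precedes it.",
])
print(box.respond("what did you believe about negotiation"))
```

A real system would replace the word-matching with statistical language models, but the shape is the same: the "person" is whatever their archive can support.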
In his 2021 novel titled "Juicy Ghosts," Rucker explores the ethical and economic problems that could result from people producing lifeboxes to live on after their deaths. These range from how you might pay for your digital "life" after death to whether you would be able to control how your lifebox might be used.
The era of digital immortality
The possibility of an AI-assisted lifebox in the future isn't so far-fetched. Technological change is happening at a rapid pace and tools already exist that use AI for strategic planning and guidance.
The concerns surrounding the Israeli military's use of AI tools in the war in Gaza already give a sense of the ethical, legal and strategic challenges that might be ahead of us. In November 2023, for example, the military claimed it was using an AI-based system called Habsora -- meaning "the Gospel" in English -- to "produce targets at a fast pace."
It goes without saying that using AI to identify and track targets is vastly different to using it to create a digital leader. But, given the current speed of technological innovation, it's not implausible to imagine a leader generating a post-death AI identity in the future based on the history books that influenced them, the events they lived through, or the strategies and missions they were involved in. Emails and social media posts might also be used to train the AI as the simulation of the leader is being created.
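As a rough illustration of that training step, and nothing more, here is a hedged sketch using today's open-source tools: fine-tuning a small, off-the-shelf language model on a file of collected writings. The model choice ("gpt2"), the file name "leader_corpus.txt" and the settings are all illustrative assumptions, not a description of any real project.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # assumption: any small causal language model would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus: one document (speech, email, post) per line.
dataset = load_dataset("text", data_files={"train": "leader_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lifebox-model", num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives the standard next-token objective, not masked LM.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even a toy version of this pipeline makes the strategic stakes concrete: whoever curates the corpus and sets the training parameters effectively decides what the simulated "leader" will say.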
If the AI simulation works usefully and convincingly, we could arrive at a situation where it even becomes the leader of the organization. In some cases, deferring to the AI leader might make political sense, since a non-human, virtual leader can be blamed for strategic or tactical mistakes.
It could also be the case that the AI leader can think in ways that exceed human capacities, with greatly enhanced strategic, organizational and technical capabilities. This is a field that scientists are already exploring. The Nobel Turing Challenge initiative, for example, is working to develop an autonomous AI system that can carry out research worthy of winning the Nobel prize, and beyond, by 2050.
A virtual political or terrorist leader is, of course, currently only a scenario from a cyberpunk film or novel. But how long will it be before we begin to see leaders experiment with the emerging possibilities of digital immortality?
It may be the case that somewhere in the Kremlin, one of the many projects Putin is developing in preparation for his death is the exploration of an AI lifebox that could be used to guide the Russian leaders who follow him. He could also be exploring technologies that would enable him to be "uploaded" into a new body at the time of his demise.
This is probably not the case. But, regardless, strategic AI tools are likely to be used in the future -- the question will be who gets to design and shape (and possibly inhabit) them. There are also likely to be limits on the political and organizational significance of dead leaders.
Concerns may arise that hackers could manipulate or sabotage the AI leader, or that influence operations could subvert it in ways that erase all trust in the digital "minds" that persist after death. There could also be fears that the AI is developing political and strategic desires of its own.
And it may well be the case that these attempts at AI immortality will be seen as an unnecessary and unhelpful obstruction by whoever replaces figures like Sinwar and Putin. The immortal leader might remain simply a technological fantasy of narcissistic politicians who want to live forever.