The Dead Spoke. Nobody Asked If They Should.
By Lyndon Amoah | MNEME Studios
On 1 May 2025, a dead man walked into an Arizona courtroom.
Not literally. But close enough that the family wept, and the killer sat in silence watching a screen. Christopher Pelkey, shot dead in a road rage incident in Chandler, Arizona, in 2021, delivered his own victim impact statement. In his own voice. Looking directly into the camera.
Except it wasn’t him.
The words were written by his sister. The voice was reconstructed from old recordings. The face was generated from photographs. What appeared on that screen was an AI avatar: technically a version of Chris Pelkey, practically a performance scripted by the people who loved him.
Judge Todd Lang watched it and said: “I loved that AI. Thank you for that.”
Then he sentenced the man who killed Pelkey to ten and a half years, more than the prosecution had asked for.
I’ve been sitting with this story for weeks. I can’t stop thinking about it.
What Actually Happened
Pelkey’s sister Stacey Wales had spent two years preparing what she wanted to say in court. She had a running list. She’d held it together through two trials, told not to cry, not to react, not to emote. Finally, at sentencing, she could speak.
But when she sat down to write, she kept hearing her brother’s voice instead of her own.
So she did something no one had ever done in a US court before. She and her husband worked with a friend skilled in AI to reconstruct Pelkey: his face, his voice, his manner, using photographs and video footage. The avatar opened its statement with a disclosure: “Hello. Just to be clear for everyone seeing this, I’m a version of Chris Pelkey recreated through AI that uses my picture and my voice profile.”
It then expressed forgiveness to the man who shot him. It ended: “Well, I’m gonna go fishing now. Love you all. See you on the other side.”
The courtroom was silent.
Here is the tension I can’t shake: the words were not Christopher Pelkey’s. They were Stacey Wales’s version of what her brother would have said. He never consented to being recreated. He never wrote a script. He never said anything to Gabriel Horcasitas, because he never got the chance. His sister said it for him, through his face, through his voice, in a courtroom, on the record.
And the judge loved it.
The Ethical Problem Nobody Wants to Name
I work with archives and memory for a living. That’s the lens I bring to this. And what unsettles me most about the Pelkey case isn’t the technology. It’s the authorship problem.
Who actually spoke that day?
The avatar said: “I believe in forgiveness.” Wales wrote those words because she knew that’s what her brother would have believed. She may be entirely right. By all accounts, Pelkey was exactly that kind of person: generous, faith-driven, someone who wouldn’t want bitterness to outlast him.
But there is a difference between knowing someone well enough to speak about them, and speaking as them.
When a biographer reconstructs a historical figure’s likely views, we understand that as interpretation. When a family member writes a tribute at a funeral, we understand that as grief. But when the same words come through a moving, speaking, breathing version of the dead person’s actual face and voice, something shifts. The courtroom stops receiving an interpretation. It receives testimony. And testimony carries a different kind of weight.
The judge sentenced Horcasitas to more than the prosecution requested. His defence attorney immediately filed an appeal, arguing the judge may have “improperly relied” on the AI video. That appeal is ongoing.
The question underneath the legal argument is the one nobody is quite asking out loud: whose statement was that?
Why This Matters Beyond One Courtroom
The Pelkey case is extraordinary. But it is also a preview.
We are at the beginning of a period where the gap between archival material and AI reconstruction is closing faster than our ethical frameworks can keep up. A few recordings, a handful of photographs, some old social media posts: that is enough to build something that looks, sounds, and feels like a person.
This technology is already in documentaries: Anthony Bourdain’s voice was recreated for Roadrunner without disclosure to audiences watching the film; the filmmakers only acknowledged it afterwards, in a press interview. It is in entertainment: hologram concerts, posthumous advertising endorsements. It is being used by grieving families to hear their loved ones one more time. And now it has been used in a court of law to influence a criminal sentence.
Every one of those uses involves an archive: a body of material, in recordings, images and writing, that someone left behind. What almost never gets discussed is: who controlled that archive? Who decided how it was used? Who checked whether it was being used faithfully?
In the Pelkey case, his sister made those calls. She had his photographs. She had recordings of his voice. She controlled the script. She made the decision because she knew him and loved him, and probably got it right. But she made it alone, with no framework, no oversight, and no precedent.
Now think about the same technology in other hands. A corporation using an employee’s archived voice to endorse a product they never endorsed. A political campaign reconstructing a deceased supporter. A studio generating new performances from an actor’s archived footage without residuals, rights, or consent.
The technology is the same. The archive is the same. What changes is the intention.
The Part That Got Me
I’ll be straight about what landed hardest when I first read this story.
The avatar ended with: “I’m gonna go fishing now.”
That detail, that small, specific, human detail, is what makes AI reconstruction at once powerful and dangerous. It sounds like something a real person would say. It probably is something Christopher Pelkey would have said. Wales knew her brother. That line likely came straight from memory.
But it also means the judge received a portrait of this man: warm, forgiving, ready to go fishing. Not a record of documented facts. Not evidence in the formal sense. Character. And character, rendered in moving image and voice, persuades in ways a written description simply cannot.
The judge told the family: “You allowed Chris to speak from his heart, as you saw it.”
As you saw it.
That phrase is doing a lot of work. It acknowledges, graciously, that what the court watched was a mediated representation, not testimony. But it also leaves open a question worth sitting with: should a mediated representation carry legal weight in a criminal sentencing?
I don’t have a clean answer. I’m not sure anyone does yet.
What I Keep Coming Back To
The Pelkey case is not a story about technology doing something wrong. The family’s intentions were honourable. The execution was transparent: the avatar said what it was from the very start. And the outcome seems to have reflected who Pelkey genuinely was.
But it is a clear example of what happens when the tools for reconstructing the dead move faster than the rules for using them.
We are already in a world where a recording, a photograph, a written text can be turned into a speaking, moving, persuasive version of someone who can no longer speak for themselves. That someone might be a celebrity. A crime victim. A community elder. The founder of a small charity who spent forty years building something and wants it to outlast her.
In every one of those cases, the question is the same: who controls the archive, and what are the rules for using it?
Right now, in most contexts, the answer is: whoever has access. That is not a good enough answer.
Chris Pelkey got a sister who loved him and knew him well. Not everyone is that lucky.
Sources and further reading
- Associated Press, “AI-generated video gave victim a voice at his killer’s sentencing in Arizona”
- The Guardian, “AI of dead Arizona road rage victim addresses killer in court”
- The Washington Post, “Sister creates AI video of slain brother to address his killer in court”
- Judicature, “Victim ‘Speaks’ via AI, Sparking an International Conversation”
- Associated Press, “From AI avatars to virtual reality crime scenes, courts are grappling with AI in the justice system”
- The New Yorker, “The Ethics of a Deepfake Anthony Bourdain Voice”
- Reuters, “Proposed AI evidence rule highlights new challenges for federal practitioners”
Lyndon Amoah is the founder of MNEME Studios, an ethical AI consultancy working with museums, archives, heritage charities and oral history projects. He thinks about memory, consent and technology for a living.