When someone comes forward after witnessing a crime, the conversation that follows can often decide a person’s fate. Professionals trained in trauma response, psychology, and legal nuance know how to navigate that moment, when memory is fragile, emotion is raw, and one poorly asked question can distort the truth. Taking witness statements isn’t a task most people are qualified for, yet it seems AI might soon be trusted with it.
In the Netherlands, projects like AIWitness are already exploring how AI tools might be used to collect witness testimonies, promising speed, consistency, and relief for overstretched legal systems. But this opens up a world of questions. Can AI interpret emotion? Can it respond with empathy? What happens if it misunderstands someone’s words – and what happens if it leaks them?
At AI Heroes, we’ve been working with universities across Europe through the Erasmus+ Ethical Engineer project to help students confront dilemmas exactly like this. The goal isn’t to provide a rulebook, since there are no rules for technology that often has yet to be developed. Instead, it is to train the next generation of developers to adopt ethical principles and ask themselves the right questions. One of the cases we explore concerns the use of AI in witness testimonies and the ethical questions that arise from it.
What Is AIWitness?
AIWitness is a research project from Saxion University of Applied Sciences and the Dutch National Police. Its goal is to find out if AI can help collect witness statements in a way that’s faster, more consistent, and less demanding. It works by guiding witnesses through a series of questions, recording their answers, and using AI to highlight the most important details. It could work through voice or text, and might be used right after an incident, even when a police officer is not available.
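To make the idea more concrete, here is a minimal sketch of what such a guided-interview flow could look like. Everything in it – the question list, the Statement structure, and the highlight_details() placeholder – is our own illustrative assumption, not part of the actual AIWitness system, which would rely on far more capable language models rather than keyword cues.

```python
# A minimal sketch of a guided-interview flow: ask fixed questions,
# record free-text answers, and flag details for human review.
# The questions, data structures, and highlight_details() stub are
# illustrative assumptions, not the actual AIWitness design.

from dataclasses import dataclass, field
from typing import Callable, List

QUESTIONS = [
    "Where were you when the incident happened?",
    "What did you see or hear?",
    "Can you describe anyone who was involved?",
    "Is there anything else you think is important?",
]


@dataclass
class Statement:
    answers: List[str] = field(default_factory=list)
    highlights: List[str] = field(default_factory=list)


def highlight_details(answer: str) -> List[str]:
    """Placeholder for the AI step that extracts salient details.
    A real system might call a language model here; this sketch just
    flags sentences containing simple cue words."""
    cues = ("saw", "heard", "wearing", "car", "ran")
    return [s.strip() for s in answer.split(".") if any(c in s.lower() for c in cues)]


def run_interview(get_answer: Callable[[str], str]) -> Statement:
    """Ask each question in turn, record the answer, and collect highlights."""
    statement = Statement()
    for question in QUESTIONS:
        answer = get_answer(question)
        statement.answers.append(answer)
        statement.highlights.extend(highlight_details(answer))
    return statement


if __name__ == "__main__":
    # Canned answers stand in for a live witness in this demo.
    canned = iter([
        "I was at the bus stop. I saw a man in a blue jacket run past.",
        "I heard a loud bang and then shouting.",
        "He was tall and wearing a blue jacket.",
        "A grey car drove away very fast.",
    ])
    demo = run_interview(lambda q: next(canned))
    print("Key details flagged for review:")
    for detail in demo.highlights:
        print("-", detail)
```

Even in this toy form, the sketch shows where the hard questions live: what the system asks, what it chooses to record, and which details it decides are “important” enough to surface.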
The project recognises just how sensitive and complex these moments are. Giving a witness statement isn’t just answering some questions – it’s often emotional, confusing, and relies on trust and rapport. That’s why the goal of AIWitness is not simply building a tool, but testing where AI might truly be helpful and where it would be irresponsible to use it.

The New Interrogator?
Some see a clear use case. An AI can be available 24/7, never gets tired, and can speak any language. Statements can be taken right after an incident, when memories are still fresh. Police departments dealing with staff shortages get help with routine work, and AI offers consistency – no changes in tone, no missed questions, no human mistakes.
But that’s also where the doubts begin. Witness statements aren’t routine. They’re personal, often emotional. People hesitate, and sometimes go silent. They change their minds halfway through a sentence. And what the AI does, or doesn’t, pick up on in that moment could determine whether justice is ever reached.
What Gets Lost in Translation?
A witness might hesitate. They might say something that doesn’t quite make sense until you ask a follow-up question. They might respond differently based on how a question is phrased, or who’s asking it. Police officers pick up on these subtleties – tone, emotion, contradictions – and adjust their approach. A machine, even the most sophisticated one, follows a script. Empathy, intuition, the unspoken – that’s where the friction lies.
And there are deeper, structural concerns too. What if the AI misinterprets a statement? Who is accountable for that error? What if the model has been trained on insufficient data and unintentionally biases the results? What happens to the sensitive data collected, who stores it, and how securely?
Smarter Tools, Not Smarter Judges
When it comes to witness statements, empathy and human judgment aren’t just nice to have. Stripping them out of the process turns a moment of profound vulnerability into data points, and risks leaving people feeling disrespected and misunderstood at their most traumatic moments – moments where technical errors simply cannot be tolerated. And yet, AI may still play a useful role in the interview room. A study from the University of Pennsylvania shows how large language models can help assess the reliability of eyewitness accounts, especially in contexts where human judgment is most prone to fail.
People might sound sure of what they saw, but confidence doesn’t always equal accuracy. AI can help remove bias and variability in how responses are interpreted, reducing the weight often given to vague but overconfident statements. It can also spot patterns or inconsistencies in statements that may point to whether the testimony will hold up under legal scrutiny. The value would not come from letting AI handle trauma or decide anyone’s fate, but from using it to detect signals we might miss, without the same emotional or cognitive blind spots.
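To illustrate the kind of signal detection involved, here is a deliberately simple sketch of how accounts from different witnesses could be compared automatically and flagged for human review. The overlap measure and threshold are toy assumptions made for this example; the systems examined in the University of Pennsylvania study rely on large language models, not word counting.

```python
# A toy illustration of automated consistency checking between witness
# statements. The similarity measure (simple word overlap) and the
# threshold are assumptions for the sketch, not a real research method.

from itertools import combinations
from typing import Dict, List, Tuple


def word_overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets – a crude stand-in
    for a proper semantic comparison."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    if not set_a or not set_b:
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)


def flag_inconsistencies(statements: Dict[str, str],
                         threshold: float = 0.2) -> List[Tuple[str, str, float]]:
    """Return pairs of witnesses whose accounts overlap very little,
    so a human reviewer can look at them more closely."""
    flagged = []
    for (w1, s1), (w2, s2) in combinations(statements.items(), 2):
        score = word_overlap(s1, s2)
        if score < threshold:
            flagged.append((w1, w2, score))
    return flagged


if __name__ == "__main__":
    accounts = {
        "witness_a": "The car was red and drove off towards the station.",
        "witness_b": "A red car sped away in the direction of the station.",
        "witness_c": "I only heard shouting from inside the building.",
    }
    for w1, w2, score in flag_inconsistencies(accounts):
        print(f"Low agreement between {w1} and {w2} (score {score:.2f}) – review manually.")
```

The important design choice, even in a sketch like this, is that the output is a prompt for a human reviewer, not a verdict: the tool surfaces a signal, and a person decides what it means.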
This Is the Grey Area We Teach
The Ethical Engineer project exists precisely because these issues don’t have easy answers. That’s why we don’t hand students a set list of rules; we give them tough, real-life cases like AIWitness and ask them to think not just from the standpoint of the developers, but also of the witnesses, the police, and the critics. It’s not a matter of choosing a side, but of understanding the stakes.
So Where Do We Go From Here?
AI in legal contexts isn’t going away. If anything, it’s expanding into predictive policing and forensic analysis. At this point, the question is not whether we use AI at all, but how to use it in ways that do no harm and only complement our own strengths. That means asking questions early and preparing the people who will build these systems to think critically. Projects like The Ethical Engineer are one small step in that direction.