news 04.03.2026

Open data

Takeaways on AI-Facilitated Investigation: Opportunities and Risks

Speaker: Professor Tim Grant, Professor of Forensic Linguistics, Aston University

Advances in AI and large language models are transforming how large volumes of information can be processed and analysed. As these tools become more widely available, researchers, analysts and investigators are increasingly experimenting with ways to incorporate them into their work. At the same time, AI cannot replace the expertise, judgement and contextual understanding that investigators bring to a case. Instead, AI should be seen as a tool that can complement professional analysis, while human oversight remains essential to ensure that findings are interpreted accurately and responsibly.

Opportunities:

  • Managing large and complex datasets. Investigations into corruption and environmental crime often involve vast amounts of documentation, financial records and communications. AI tools can help review, organise and analyse these datasets more efficiently, helping investigators identify relevant information and navigate complex cases more effectively.
  • Supporting research and analysis. Large language models can assist with tasks such as summarising lengthy documents, extracting key information and highlighting potential patterns across datasets. These tools can provide preliminary insights that investigators can then explore and verify through further analysis.
  • Expanding access to specialised analytical tools. Some types of analysis that previously required specialised expertise, such as linguistic analysis of communications, may become more accessible with AI-assisted tools. This can help investigators explore new approaches and strengthen evidence analysis.

Risks:

  • Confident but inaccurate outputs. AI systems can produce responses that appear convincing but contain errors or misleading information. Without careful verification, these outputs could introduce inaccuracies into investigative work.
  • Overreliance on AI-generated results. Because AI outputs are often presented in clear and authoritative language, users may be tempted to trust them too readily. Maintaining a critical approach and verifying results remains essential.
  • Limited transparency raises accountability challenges. Many AI systems operate as “black boxes”, making it difficult to understand how they generate their conclusions. This lack of transparency raises important questions about reliability and accountability, particularly in investigative contexts where evidence and analysis must withstand scrutiny.

Useful resources:

Language and Online Identities: The Undercover Policing of Internet Sexual Crime

Cambridge Elements in Forensic Linguistics

Writing Wrongs podcast

The Idea of Progress in Forensic Authorship Analysis

The full recording of the March 4 presentation is available here: