How Law Enforcement Is Using Large Language Models (LLMs)
What to know
- Support for AI among public safety professionals rose to 90% in 2024, with agencies rapidly adopting large language models (LLMs) to streamline operations and improve engagement.
- LLMs are being used for tasks like report writing, intelligence analysis and multilingual communication, but concerns remain over data privacy, cybersecurity and model accuracy.
- Agencies are urged to implement training and policy frameworks to ensure responsible use, mitigate bias and maintain public trust as AI capabilities evolve.
A 2024 survey found that 90% of public safety professionals support AI adoption, a 55% increase from the previous year, highlighting a significant shift in attitudes toward integrating artificial intelligence (AI) into policing practices. As agencies begin to use AI, and more specifically large language models (LLMs), to enhance efficiency, streamline operations, and improve public engagement, they also need to consider the security, training, and policies required to use LLMs effectively.
From writing reports to processing intelligence and engaging the public, the applications are growing faster than most agencies can revise their protocols. As innovation continues to outpace policy, law enforcement leaders face increasing pressure to establish clear guidelines that ensure these tools are used responsibly, legally, and with public trust in mind.
Enhancing Efficiency With LLMs
LLMs are already being applied in several practical ways across law enforcement agencies, streamlining tasks and improving access to information. For example, they can assist with automated report writing by transcribing and summarizing body-worn camera footage—an approach currently in use by the Richland County Sheriff’s Department in South Carolina, which leverages AI to analyze 100% of its footage within minutes of upload.
LLMs are also helpful in intelligence gathering, enabling officers to summarize open-source information from platforms like social media and even generate scripts to monitor specific topics. On the analytical side, these tools let investigators query large, complex data sets, such as knowledge graphs, spreadsheets, or case files, in plain language, leading to faster insights into criminal networks. Additionally, agencies are exploring LLMs for community engagement, generating multilingual announcements and analyzing public sentiment to support more inclusive and responsive communication.
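As a rough illustration of the plain-language querying described above, the sketch below loads a hypothetical case spreadsheet with pandas and sends a natural-language question, along with a small excerpt of the data, to whatever LLM service an agency has approved. The file name, the columns, and the call_llm() helper are assumptions for illustration, not any specific agency system or vendor API.

```python
import pandas as pd

def call_llm(prompt: str) -> str:
    """Placeholder for an agency-approved LLM service.
    In practice this would call a vetted, access-controlled endpoint."""
    raise NotImplementedError("Connect this to your approved LLM provider.")

def ask_case_data(question: str, csv_path: str = "case_records.csv") -> str:
    # Load structured case data (hypothetical file and columns).
    df = pd.read_csv(csv_path)

    # Send only the column names and a few sample rows, not the full data set,
    # to limit exposure of sensitive records.
    excerpt = df.head(20).to_csv(index=False)

    prompt = (
        "You are assisting an investigator. Answer the question using only "
        "the data excerpt provided. If the excerpt is insufficient, say so.\n\n"
        f"Data excerpt (CSV):\n{excerpt}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

# Example of a plain-language question an analyst might ask:
# ask_case_data("Which addresses appear in more than one burglary report?")
```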
Privacy and Data Security Concerns
While LLMs offer clear operational benefits, their use in law enforcement introduces serious privacy and data protection challenges. Police data often includes personally identifiable information, witness statements, health records, and other sensitive content. Feeding this information into AI systems—especially cloud-based tools—raises questions about how the data is stored, who has access, and whether it’s being used for secondary purposes. If privacy safeguards aren't tightly controlled, there is a risk of exposing individuals to harm or violating legal protections surrounding criminal justice data.
In addition to privacy, cybersecurity remains a top concern. According to a 2024 survey, 84% of public safety agencies reported facing cybersecurity issues within the past year. Integrating LLMs into police workflows expands the attack surface available to adversaries. Without strong encryption, access controls, and routine audits, AI-enabled systems could become entry points for data breaches or manipulation. As agencies explore AI tools, they must prioritize security architectures that are purpose-built for sensitive law enforcement environments, not just repurposed from commercial use.
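One safeguard consistent with these concerns is to strip obvious personally identifiable information before any text leaves the agency's environment. The following is a minimal sketch, assuming simple regex-based masking of Social Security numbers, phone numbers, and email addresses; real deployments would need far more robust redaction tooling and human review.

```python
import re

# Illustrative redaction patterns only; production PII detection requires
# more thorough tooling and review than simple regular expressions.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[REDACTED-PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def redact(text: str) -> str:
    """Mask obvious PII patterns before text is sent to an external model."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

sample = "Witness reached at 555-867-5309 or jane.doe@example.com, SSN 123-45-6789."
print(redact(sample))
# Witness reached at [REDACTED-PHONE] or [REDACTED-EMAIL], SSN [REDACTED-SSN].
```

Keeping redaction on agency infrastructure, before any call to an external model, is the design point: the model never needs to see the raw identifiers to be useful.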
Accuracy and Reliability of AI Outputs
Even with their advanced capabilities, LLMs remain vulnerable to producing inaccurate or misleading information. This phenomenon, often referred to as “hallucination,” occurs when the model generates text that sounds correct but is factually wrong or unsupported. A 2024 study found that when asked legal questions, LLMs generated hallucinations in an astonishing 58% to 88% of cases, depending on the LLM. This underscores the risk of relying on these tools without human review or editing for tasks that require legal interpretation or factual accuracy.
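To make the need for human review concrete, the sketch below shows one possible, assumed gate an agency could apply to AI-drafted text: any statute citation the model produces is checked against a locally maintained list of verified codes, and anything unverified is flagged for correction. The citation format and statute list here are hypothetical.

```python
import re

# Hypothetical list of statute codes maintained and verified by the agency.
KNOWN_STATUTES = {"16-13-30", "16-11-37", "40-6-391"}

STATUTE_PATTERN = re.compile(r"\b\d{2}-\d{1,2}-\d{1,3}\b")

def review_flags(draft: str) -> list[str]:
    """Return reasons an AI-drafted report needs human correction."""
    flags = [
        f"Unverified statute citation: {code}"
        for code in STATUTE_PATTERN.findall(draft)
        if code not in KNOWN_STATUTES
    ]
    if not flags:
        flags.append("No automatic issues found; human review still required.")
    return flags

draft = "Suspect charged under 16-13-30 and 99-99-999."
for flag in review_flags(draft):
    print(flag)  # flags the fabricated 99-99-999 citation
```

A check like this catches only one narrow class of error; it is a supplement to, not a replacement for, an officer or analyst reading the draft.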
Bias is an equally important concern. Because LLMs are trained on large volumes of historical and publicly available text, including potentially biased or outdated data, they can replicate and reinforce existing disparities. This could skew analysis, generate problematic language, or unintentionally support discriminatory practices. For instance, if a model is used to summarize past arrest records, it might reproduce past systemic inequities without the context needed to interpret them. To use LLMs responsibly, agencies must build in human oversight, routine validation, and ongoing efforts to audit and mitigate bias in both training data and deployment.
The Need for Training and Policy Development
To ensure LLMs are used responsibly and effectively, law enforcement agencies must prioritize comprehensive training for officers, analysts, and leadership. Understanding what these tools can—and cannot—do is essential for avoiding misuse or overreliance. Training should cover both the technical capabilities of LLMs and their limitations, including the risks of inaccuracies, biases, and data mishandling.
Equally important is the development of robust policy frameworks to guide the implementation of LLMs. These policies should define where and how LLMs can be used, establish clear privacy safeguards, and outline procedures for oversight and auditing. Agencies will also need to address questions of authorship, data retention, and human review of AI-generated outputs. As noted by the National Policing Institute, thoughtful governance structures are crucial to ensuring that innovation does not outpace ethics.
Looking Ahead: The Future of LLMs in Policing
As LLM technology continues to advance, its potential role in law enforcement is expanding. One promising area is predictive analytics, where AI models can help forecast crime trends by analyzing historical data, social patterns, and real-time inputs. This capability could support more proactive policing—allocating resources more strategically, identifying high-risk areas, and anticipating potential incidents before they escalate. Similarly, LLMs can enhance decision-making in the field by providing officers with quick access to relevant procedures, legal guidelines, or contextual information during complex encounters.
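As a simplified illustration of that kind of field decision support, the sketch below runs a keyword-overlap search over a small, invented set of policy snippets and returns the best match, which could then be shown to the officer directly or summarized by an LLM. The titles and text are placeholders, not real policies.

```python
# Hypothetical policy snippets an agency might index for quick lookup.
POLICIES = {
    "Vehicle pursuit": "Pursuits require supervisor notification and continuous risk assessment.",
    "Domestic violence response": "Separate the parties, document injuries, and provide victim resources.",
    "Evidence handling": "Log, seal, and store all evidence before the end of shift.",
}

def best_match(query: str) -> tuple[str, str]:
    """Return the policy whose title and text share the most words with the query."""
    query_words = set(query.lower().split())
    scored = []
    for title, text in POLICIES.items():
        overlap = len(query_words & set((title + " " + text).lower().split()))
        scored.append((overlap, title, text))
    _, title, text = max(scored)
    return title, text

title, text = best_match("what is the procedure for a vehicle pursuit")
print(f"{title}: {text}")
```

In practice the lookup step matters as much as the model: grounding answers in the agency's own, current policy text reduces the chance of a hallucinated procedure reaching an officer in the field.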
Yet with these possibilities comes a clear need for caution. The rapid adoption of LLMs risks outpacing the legal, ethical, and operational safeguards necessary to ensure their responsible use. Without strong policies in place, there’s a danger of overreliance on AI outputs, misuse of predictive tools, or unintended violations of rights. To fully realize the benefits of these technologies, agencies must invest not only in the tools themselves but also in the frameworks that govern them, establishing clear limits, accountability mechanisms, and regular oversight to keep innovation aligned with public interest.

Toni Rogers
Toni Rogers is a freelance writer and former manager of police support services, including communications, records, property and evidence, database and systems management, and building technology. She has a master’s degree in Criminal Justice with certification in Law Enforcement Administration and a master’s degree in Digital Audience Strategies.
During her 18-year tenure in law enforcement, Toni was a certified Emergency Number Professional (ENP), earned a Law Enforcement Inspections and Auditing Certification, was certified as a Spillman Application Administrator (database and systems management for computer-aided dispatch and records management), and was a certified communications training officer.
Toni now provides content marketing and writing through her company, Eclectic Pearls, LLC.