AI in law enforcement faces accountability crisis
- Country: Australia
A new study offers a sharp critique of the integration of artificial intelligence (AI) technologies into Australian law enforcement, revealing critical shortfalls in how human perspectives, oversight, and ethics are factored into AI-driven policing tools across the country.
Titled “Unlocking Australia’s AI usage in law enforcement from human involvement perspective: a systematic literature review” and published in the journal AI & Society, the study offers the first comprehensive assessment of how AI has been applied in Australia’s law enforcement system and to what extent human actors influence its use. The authors highlight a troubling imbalance: while the Australian government and police agencies are embracing AI across domains - from retail theft prevention and courtroom assistance to child exploitation detection - academic attention remains sparse, and regulatory mechanisms are underdeveloped.
How is AI being used in Australian law enforcement?
The study finds that AI adoption in Australia spans four major domains: child exploitation detection, investigative operations, retail surveillance, and legal practice. However, the majority of insight into these practices originates not from peer-reviewed research but from grey literature - news articles, industry blogs, and official statements.
In child protection, AI is used to identify and classify both authentic and synthetic child abuse material, a task complicated by the rise of AI-generated images. Investigative uses include the deployment of cloud-based AI systems by the Western Australia Police and the Australian Federal Police (AFP) to mine and analyze digital evidence such as social media content, CCTV footage, and telecom data. In the retail sector, platforms such as Auror are being deployed by retailers including Bunnings and Woolworths to detect shoplifters and facilitate cooperation with law enforcement. Meanwhile, legal professionals increasingly turn to ChatGPT and other generative AI tools for drafting motions, summarizing documents, and preparing for witness cross-examinations.
Despite these developments, AI's application remains fragmented, and Australia's reliance on commercial, often unregulated tools such as Clearview AI's facial recognition software has drawn intense scrutiny from privacy advocates and civil rights organizations.
How are humans considered in AI policing?
A central focus of the study is the extent of human oversight and intervention in AI-driven law enforcement. The findings are sobering: only a fraction of the reviewed materials deeply consider the human-centric dimension of these technologies. Of the 56 documents reviewed, 21 explicitly address human intervention, 17 discuss oversight, and 22 mention ethical concerns; 7 make no mention of human involvement at all.
Human intervention is often limited to operational tasks such as verifying AI-generated translations or reviewing legal content for accuracy, while broader roles in decision-making, bias detection, and policy formation are applied inconsistently. The authors distinguish between “intervention,” where humans are actively involved in operating AI, and “oversight,” where humans review and monitor AI outputs. Both are vital, yet both are often lacking.
Public trust, the authors argue, is undermined by the opaque nature of AI decision-making. Citizens, legal practitioners, and even policymakers have limited insight into how AI tools are trained, what data they access, and how conclusions are drawn. The study stresses that ethical and legal concerns, particularly surrounding bias, accountability, and privacy, must be proactively addressed through transparent human–AI collaboration.
What are the key challenges and solutions?
The review categorizes the challenges of AI use in law enforcement into three main themes: ethical and legal issues, data privacy and transparency, and accuracy and accountability.
Ethical and legal concerns dominate the findings. AI tools can embed and amplify existing biases, particularly in predictive policing and facial recognition systems. Legal professionals also face the risk of citing fabricated AI-generated case law, a practice already resulting in high-profile disciplinary actions both in Australia and abroad.
Data privacy is another critical concern. The use of facial recognition software by Bunnings and Kmart was suspended following public backlash and an investigation by privacy watchdogs. Similarly, the use of Clearview AI by the AFP was discontinued due to public pressure.
Accuracy and accountability are perhaps most dramatically illustrated by the “Robodebt” scandal, where automated systems unlawfully pursued over 380,000 welfare recipients for non-existent debts, resulting in a $751 million payout.
Proposed solutions include:
- Legal frameworks like Australia’s AI Ethics Principles and the AI Action Plan
- Transparent AI testing and certification processes
- Adoption of technical tools such as black-box testing, explainable AI, differential privacy, and federated learning (a brief differential privacy sketch follows this list)
- Guidelines issued by institutions like the NSW Bar Council and Supreme Court of Victoria for responsible AI use in litigation
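To make one of these technical tools concrete, below is a minimal, illustrative sketch of the Laplace mechanism, a standard way of achieving differential privacy. The query, the epsilon value, and the counts are hypothetical and are not drawn from the study; the sketch only shows how calibrated noise can let an agency publish aggregate statistics without exposing any individual record.

```python
import numpy as np

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Return a noisy count satisfying epsilon-differential privacy.

    Adding or removing one person changes a simple count by at most
    `sensitivity` (here, 1), so Laplace noise with scale
    sensitivity / epsilon statistically masks whether any single
    individual appears in the underlying data.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: publish how many incidents a surveillance system
# flagged in a month without revealing any one person's involvement.
# Both numbers below are invented for illustration.
print(private_count(true_count=1342, epsilon=0.5))
# A smaller epsilon gives a stronger privacy guarantee but a noisier figure.
```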
However, the study underscores that technical fixes alone are insufficient. Broad regulatory alignment, public education, interdisciplinary research, and consistent human oversight are essential to mitigate AI’s risks in law enforcement.
A call for ethical, evidence-based policing
The authors argue that the growing reliance on AI within Australian law enforcement demands more than just software upgrades or efficiency boosts - it requires a fundamental rethink of ethical governance, human rights, and the role of public institutions in safeguarding digital justice.
They recommend increasing peer-reviewed academic work to address the current research gap and urge policymakers to develop unified, enforceable frameworks for AI regulation. Human–AI collaboration, they conclude, should not be treated as a secondary consideration but as a structural necessity for building safe, trustworthy, and equitable law enforcement systems.
In the absence of robust accountability and oversight mechanisms, AI's promise to improve law enforcement risks being eclipsed by its potential to erode public trust and civil liberties.
- FIRST PUBLISHED IN: Devdiscourse

