A.I.P.D.
AI Is Already Embedded in Law Enforcement, and Its Usage Is Only Growing.
One of the holy grails of policing has long been the ability to predict when a crime is about to occur, allowing authorities to proactively “flood the zone” either to deter potential offenders or, as imagined in the eerily prescient science-fiction thriller Minority Report, to intervene when someone is “taking a substantial step to commit a crime with the intent to commit the crime.”
Alternatively, police have sought to position themselves to respond immediately after a criminal act, while, for lack of a better metaphor, the “gun is still smoking.” In pursuit of these capabilities, police forces around the world have deployed a wide array of data-collection tools and analytics to help officers predict when and where crimes are likely to occur, or who is most likely to perpetrate them.
While there are several ways to describe these sought-after abilities, perhaps the most apt term is “predictive policing.” Despite their very different systems of government, the United States and China have not only pioneered the development of these methods but have also acted as “first movers” in normalizing their use to help the state “protect and serve” the public. Indeed, just as both nations have invested heavily in the development of AI technologies, they have also earmarked considerable funds for experimenting with and implementing predictive policing systems.
In the U.S., one of the most illustrative examples of the march toward AI-powered predictive policing is the Geolitica software suite, formerly known as PredPol. At its core, the software analyzes crime data collected by law enforcement agencies in specified areas, focusing on crime types, locations, and times, and presents its findings as predictions of when, where, and what type of crime is most likely to occur.
Departments then use these results to plan officer deployment at any given moment. Information that officers gather on the beat is fed back into the database, allowing the software to continuously update and refine its recommendations. It’s the same continuous-learning loop we’ve come to expect from the AI systems that are increasingly fundamental to contemporary information society.
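Geolitica has not published its model, so any reconstruction is speculative. Still, the predict-deploy-report loop described above can be sketched in a few lines of Python. Everything below is a deliberately naive illustration with invented names (CrimeRecord, grid_cell, hour_band), not Geolitica’s actual method: it simply ranks spatial-temporal bins by how often crimes were recorded there in the past.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class CrimeRecord:
    crime_type: str   # e.g. "burglary"
    grid_cell: str    # spatial bin, e.g. "cell_12_34"
    hour_band: int    # time-of-day bin: 0 = night, 1 = day, 2 = evening

def predict_hotspots(history: list[CrimeRecord], top_n: int = 3):
    """Rank (place, time, crime type) bins by historical frequency.

    A naive frequency model: bins with more past reports are
    predicted to be the likeliest sites of future crime.
    """
    counts = Counter((r.grid_cell, r.hour_band, r.crime_type) for r in history)
    return counts.most_common(top_n)

# The feedback loop: each new report from the beat is appended to the
# history, so the next round of predictions reflects the latest data.
history = [
    CrimeRecord("burglary", "cell_12_34", 2),
    CrimeRecord("burglary", "cell_12_34", 2),
    CrimeRecord("theft", "cell_07_19", 1),
]
print(predict_hotspots(history))
history.append(CrimeRecord("burglary", "cell_12_34", 2))  # new field report
print(predict_hotspots(history))  # rankings update as new data flows in
```

Real systems are more sophisticated, but the essential shape is the same: historical counts in, ranked hotspots out, new reports fed back.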
In China, law enforcement is leaning even further into the use of AI‑powered predictive policing to carry out its duties. According to China Daily — the Chinese government’s official English‑language newspaper — police departments across the country are establishing dedicated AI units staffed by teams of officers who apply artificial intelligence to years’ worth of accumulated data to “develop intelligent models focused on combating crime, maintaining public security, and enhancing community services.”
The report goes on to note that, thanks to these systems, the AI unit within the Suzhou City Police Department has been operating around the clock, achieving “a more than fivefold increase in efficiency” and performing work that would otherwise require a specialized team of five to six officers working continuously for two weeks.
While promising AI applications in criminal justice might suggest a path toward the near-perfect crime prevention and resolution depicted in Minority Report, the reality is far more flawed. The primary issue is that AI systems are not infallible: they are trained on historical data that often contains embedded biases. For example, in the case of tools like Geolitica, this data can include prejudiced police reports or witness statements that perpetuate long-held, inaccurate stereotypes about certain communities, locations, and types of crime.
Consequently, AI can not only misidentify priorities for law enforcement but also present its skewed conclusions with a misleading aura of objectivity. People are more likely to accept a conclusion when it comes from an advanced algorithm; after all, how could it possibly be wrong? This deference to AI answers is likely to grow stronger as the technology becomes more ubiquitous.
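To make this concrete, consider a minimal, hypothetical simulation (all numbers invented; no real system’s logic is claimed here). Two neighborhoods, A and B, have identical true rates of observable crime, but A starts with a larger historical record because it was patrolled more heavily in the past. A naive frequency-based predictor then sends patrols wherever the record shows the most crime:

```python
import random

random.seed(0)  # deterministic for illustration

# Identical true rates: the chance a patrol observes an incident is the
# same in both neighborhoods. Only the historical record differs.
TRUE_CRIME_RATE = 0.3
recorded = {"A": 10, "B": 5}  # biased starting record, not actual crime

for day in range(200):
    # Naive predictor: patrol wherever the record shows more crime.
    target = max(recorded, key=recorded.get)
    # Reports are only generated where patrols are sent, so the record
    # for the patrolled neighborhood grows while the other stays frozen.
    if random.random() < TRUE_CRIME_RATE:
        recorded[target] += 1

print(recorded)  # "A" has grown sharply; "B" is still at 5
```

Neighborhood B never gets patrolled again, so its record never grows, and the model’s preference for A looks increasingly “confirmed” by data the model itself generated. This runaway feedback loop is the misleading aura of objectivity in miniature.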
As in the U.S., the use of AI in predictive policing in China is deeply problematic. China’s political system enables far more expansive access to personal data, and its AI strategy encourages rapid deployment across governance and security. These factors make algorithmic policing not only widespread but also tightly integrated into mechanisms of social control. And because China’s systems are likewise trained on biased information, they subject vulnerable populations to an unwarranted degree of abuse.
Interestingly, or ominously, depending on your perspective, a recent OECD report finds that the development of many of China’s predictive policing systems has been facilitated by the Chinese state’s access to American technologies. This cross-border technological entanglement matters: while AI in criminal justice takes many forms, the parallel development of predictive-policing tools in both the U.S. and China suggests that this is one domain where algorithmic governance is likely to advance rapidly, and with profound consequences.