
Artificial Intelligence can now predict crime with 90% accuracy

We’re a long way from Asimov’s three laws but could Artificial Intelligence offer a real solution to safer cities? Deepika S investigates

In a world where Alexa can turn the lights on without you moving a finger, there’s seemingly nothing AI can’t do. Beyond technology of convenience, AI-powered analysis is now also making our streets safer.

An AI-based project led by Ishanu Chattopadhyay, Assistant Professor in the Department of Medicine at the University of Chicago, and Professor James Evans analyzed historical crime data available in the public domain to predict future events within distinct urban tiles roughly 1,000 feet across. The technology was demonstrated using data from eight US cities: Chicago, Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco. The new algorithm can predict crimes an entire week in advance with a 90% accuracy rate, according to a recent paper published in Nature Human Behaviour.

A step ahead of the crime

Researchers initially collated several years' worth of data points, from 2014 to 2016, and used them to predict crime levels in the weeks that followed by analyzing past trends. The database is updated each year from Chicago police records of violent and property-related crimes and the number of arrests arising from each incident. This data was then used to train AI models that show how variations in each parameter impact the others.
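To make that workflow concrete, here is a minimal, hypothetical sketch in Python of the tile-and-forecast idea described above: incidents are binned into roughly 1,000-foot grid tiles and weekly windows, and next week's count in each tile is estimated from that tile's recent history. This is not the team's published model; the file name, the column names ("latitude", "longitude", "date"), and the simple lagged Poisson regression are all assumptions made for illustration.

```python
# Hypothetical sketch of tile-based, week-ahead crime forecasting.
# Not the published method; column names and model choice are assumptions.

import pandas as pd
from sklearn.linear_model import PoissonRegressor

TILE_FEET = 1000
FEET_PER_DEGREE = 364_000  # rough conversion; good enough for a sketch

def to_tiles(df: pd.DataFrame) -> pd.DataFrame:
    """Assign each incident to a ~1,000-foot grid tile and a weekly window."""
    df = df.copy()
    df["tile_x"] = (df["longitude"] * FEET_PER_DEGREE // TILE_FEET).astype(int)
    df["tile_y"] = (df["latitude"] * FEET_PER_DEGREE // TILE_FEET).astype(int)
    df["week"] = pd.to_datetime(df["date"]).dt.to_period("W")
    return df

def weekly_counts(df: pd.DataFrame) -> pd.DataFrame:
    """Count incidents per tile per week."""
    return (df.groupby(["tile_x", "tile_y", "week"])
              .size()
              .rename("count")
              .reset_index())

def make_features(counts: pd.DataFrame, lags: int = 4) -> pd.DataFrame:
    """Use each tile's previous `lags` weekly counts to predict the next week."""
    counts = counts.sort_values("week")
    for k in range(1, lags + 1):
        counts[f"lag_{k}"] = (counts.groupby(["tile_x", "tile_y"])["count"]
                                     .shift(k))
    return counts.dropna()

# Hypothetical usage:
# incidents = pd.read_csv("chicago_crimes.csv")        # assumed input file
# feats = make_features(weekly_counts(to_tiles(incidents)))
# X, y = feats[[f"lag_{k}" for k in range(1, 5)]], feats["count"]
# model = PoissonRegressor().fit(X, y)                 # week-ahead count model
```

The published approach is considerably more sophisticated, but the basic loop the paragraph above describes is the same: aggregate past events within small spatial tiles, then forecast activity in the next time window.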

The technique allowed the team not only to predict crime levels with startling accuracy but also to replicate the results across seven other US cities. Moreover, when tested on datasets from a predictive policing experiment run by the Department of Justice, it outperformed the best alternative approach in 119 of 120 testing categories covering accuracy, crime pattern detection, and efficiency.

Although early efforts toward this initiative are commendable, such applications have raised controversy for reinforcing systemic biases in police enforcement and in the public psyche, and for associating some social groups with crime more strongly than others. Additionally, privacy is a huge concern for the many people whose activity these systems analyze in real time.

Reinforcing our biases

Ishanu Chattopadhyay told Insider that their model concluded that crimes in affluent neighborhoods resulted in a higher number of arrests than those in impoverished communities, indicating a bias in police responses to crimes.
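As a rough illustration of the kind of check that surfaces such a disparity, the sketch below compares how often reported crimes lead to an arrest in higher- versus lower-income neighborhoods. The file names and column names ("arrest", "median_income", "community_area") are assumptions rather than the study's actual fields, and splitting at the median income is just one simple way to form the two groups.

```python
# Hypothetical sketch of an arrest-rate disparity check.
# Column and file names are assumptions, not the study's actual data.

import pandas as pd

def arrest_rate_by_income(crimes: pd.DataFrame, income: pd.DataFrame) -> pd.Series:
    """Arrest rate per reported crime, split at the median neighborhood income."""
    df = crimes.merge(income, on="community_area")
    above_median = df["median_income"] >= df["median_income"].median()
    df["income_group"] = above_median.map({True: "higher-income",
                                           False: "lower-income"})
    return df.groupby("income_group")["arrest"].mean()

# Hypothetical usage:
# crimes = pd.read_csv("crimes.csv")   # one row per incident, boolean 'arrest'
# income = pd.read_csv("income.csv")   # one row per community area
# print(arrest_rate_by_income(crimes, income))
```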

Despite the influx of AI that assists in crime detection, detecting specific patterns of fraud committed by the same individual or group is still largely a manual task, as even advanced automated tools cannot decipher patterns from a series of incidents; they can only predict an average crime level. To better understand individual instances of crime, analysts must still apply their own judgment, reviewing past crime reports and comparing them with the present day.

Capturing your social identity

Another key application of AI in policing is being tested within the Los Angeles Police Department. The LAPD is collaborating with a data analytics company called Voyager Analytics, which has developed a system that uses AI to analyze social media profiles and detect emerging threats based on trends in an individual's social media activity. Its key focus is analyzing the digital footprints of people already on the radar to determine their involvement in crime rings or their intention to commit future crimes.

According to Meredith Broussard, a data journalism professor at New York University, it is a guilt-by-association system. Voyager, for its part, says that its data on individuals, groups, and pages allows its software to conduct real-time sentiment analysis, helping investigators find new leads in seemingly inconsequential interactions, behaviors, or interests.


The software can detect the people who most strongly identify with a given issue. The company's biggest case study analyzed how it could have revealed trends in the social media presence of Adam Alsahli, who was killed last year while attempting to attack a naval base in Texas. Alsahli's behavior on social media showcased the types of activity that would invite "proactive vetting and risk assessment", as his profile strongly indicated a fundamentalist mindset before the attack. The analysis found that 29 of Alsahli's 31 Facebook and Instagram posts contained Islamic messages, reflecting his pride in and identification with his Islamic heritage, and the software concluded from his social media activity that his mindset was extremist.

Despite their obvious strengths, systems like Voyager's are only as good as the data they process. In the Adam Alsahli case (which was later amended), for instance, many of the facets that Voyager treats as signals of fundamentalism could also qualify as free speech or otherwise lawful activity: parts of the man's posts essentially read like the social media profile of your average Muslim dad.

Privacy is a major concern for many, as the software digs into a person's online activities and transactions to detect any possible criminal intent. So while the nitty-gritty still needs finessing, along with unbiased and responsible analysis, the case points to an ongoing desire among police and national investigative agencies to advance their policing while eliminating the biases deeply embedded in the data these systems use.


The growing AI controversy

Previous efforts to use AI to predict crime have created controversy because they can reproduce racial bias from the patterns they analyze. A recent algorithm developed in Chicago created a list of the people most at risk of being involved in a shooting, whether as a victim or an offender. Details of the algorithm were initially kept secret, but when the list was finally released, it emerged that 56% of black men in the city between the ages of 20 and 29 featured on it. Such sweeping generalizations produce huge inaccuracies and erode public trust.

According to Chattopadhyay, efforts have been made to reduce the effect of this bias: the AI doesn't identify suspects; it only points to the potential sites where crimes are likely to occur. He adds, "Law enforcement resources aren't infinite, so the utility brought by AI would be an optimal measure to predict the possibility of homicides in cities. The data has been released into the public domain so other researchers can check its reasoning and possible conclusions."

“We created a digital twin of urban environments,” says Chattopadhyay. “If you feed it data from what happened in the past, it will tell you what will happen in the future. It’s not magical, and even with limitations, it works really well.”

Our technology keeps scaling up, and criminals undoubtedly use the internet and its limitless possibilities to commit crime, be it a remotely coordinated terrorist attack or an online fraud. So why should crime investigation lag behind? While AI in governance and administration is trying to catch up with the malicious use of technology, predicting crime with AI without betraying the public's trust or misidentifying a perpetrator still has a long way to go.

