Voyager Labs is reportedly collecting social media information to create profiles that can be used to identify people who pose security risks. It’s part of a growing effort to use artificial intelligence (AI) to investigate potential criminals. But some experts say the movement is rife with potential problems.

“It’s very hard to predict human behavior,” Matthew Carr, a security researcher at Atumcell Group, told Lifewire in an email interview. “We’re not even able to predict our own behavior, let alone someone else’s. I think it is possible that AI could be used in the future for this purpose, but we are a long way from being able to do so at present.”

Creating Profiles

As The Guardian recently reported, the Los Angeles Police Department looked into using Voyager Labs’ crime-prediction software. The company also announced that it had sealed a significant deal with a Japanese government agency.

The Japanese agreement provides the government agency with an AI-based investigation platform that analyzes massive amounts of information from any source, including open and deep data.

“I am glad that we are collaborating in the fight against terror and crime,” said Divya Khangarot, Managing Director APAC at Voyager Labs, in the news release. “Using Voyager Lab’s cutting-edge intelligence solutions, our clients gain unique capabilities to proactively identify and disrupt potential threats. We bring additional layers of deep investigative insights using a combination of AI, Machine Learning, and OSINT to uncover hidden trails, infringed information, and bad actors.”

Not So Smart? 

But in an email interview, Matt Heisie, the co-founder of Ferret.ai, which also uses AI to predict offenders, cast doubt on some of Voyager Labs’ claims.

“Is there as clear a link between, say, an arrest record and future criminal behavior, as there is a black spot on a test and the development of a tumor?” he said. “Think of all the potential confounds that went into that arrest—what neighborhood the person lived in, the quantity and quality, even biases, of the police in that area. The person’s age, their gender, their physical appearance, all of those have intersecting impacts on the likelihood of that person having an arrest record, entirely separated from their actual proclivity to commit the crime we’re attempting to predict.”

Defendants with better attorneys are more likely to be able to keep records from becoming publicly available, Heisie said, and some jurisdictions limit the release of mugshots or arrest records to protect the accused.

“All of this adds further bias to the algorithms,” he added. “The computer will learn based on the data you give it and incorporate all of the biases that went into that data collection into the learning and the interpretation.”

There have been several attempts to create crime-predicting AIs, often with scandalous results, Heisie said.

COMPAS, an algorithm law enforcement uses to predict reoffending, is often used in determining sentencing and bail. It has faced criticism since 2016 for racial bias, predicting that Black defendants posed a higher risk of recidivism than they actually did, and the reverse for white defendants.

Over 1,000 technologists and scholars, including academics and AI experts from Harvard, MIT, Google, and Microsoft, spoke out in 2020 against a paper claiming that researchers had developed an algorithm that could predict criminality based solely on a person’s face, saying that publishing such studies reinforces pre-existing racial bias in the criminal justice system, Heisie noted.

China is the largest and fastest-growing market for this type of technology, primarily due to widespread access to private data, with more than 200 million surveillance cameras and advanced AI research focused on this issue for many years, Heisie said. Systems such as CloudWalk’s Police Cloud are now used to predict and track criminals and pinpoint them for law enforcement.

“However, substantial biases are reported there as well,” Heisie said.

Heisie added that his company carefully curates the data that goes in and doesn’t use mugshots or arrest records, “focusing instead on more objective criteria.”

“Our AI learned from the curated data, but more importantly, it also learns from people, who themselves analyze, curate, and evaluate records, and tell us about their interactions with others,” he added. “We also maintain full transparency and free and public access to our application (as quickly as we can let them into beta), and welcome insight into our processes and procedures.”
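To make Heisie’s point about biased training data concrete, here is a minimal, hypothetical sketch; it is not Voyager Labs’ or Ferret.ai’s code, and the neighborhoods, arrest rates, and numbers are invented purely for illustration. Two simulated neighborhoods offend at exactly the same rate, but one is policed more heavily, so its offenses (and some non-offenses) are recorded as arrests more often. A standard classifier trained on those arrest labels then scores residents of the heavily policed neighborhood as higher risk.

```python
# Hypothetical illustration of bias inherited from arrest data (all values invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two neighborhoods with the SAME underlying offense rate...
neighborhood = rng.integers(0, 2, size=n)           # 0 = lightly policed, 1 = heavily policed
offended = rng.random(n) < 0.10                     # true behavior, identical across groups

# ...but different arrest rates: heavier policing records offenses more often,
# and produces more wrongful arrests. The label the model sees is the arrest, not the offense.
p_arrest = np.where(neighborhood == 1, 0.90, 0.30)  # chance an offense leads to an arrest record
p_false_arrest = np.where(neighborhood == 1, 0.05, 0.01)
arrested = (offended & (rng.random(n) < p_arrest)) | (~offended & (rng.random(n) < p_false_arrest))

# Train a "risk" model on the biased labels, using neighborhood as a feature.
X = neighborhood.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, arrested)

# The model scores the heavily policed neighborhood as higher risk,
# even though the underlying offense rate was identical by construction.
print("predicted risk, lightly policed:", model.predict_proba([[0.0]])[0, 1])
print("predicted risk, heavily policed:", model.predict_proba([[1.0]])[0, 1])
```

Because the label the model learns from is the arrest rather than the offense, it faithfully reproduces the policing pattern instead of the behavior it is supposed to predict, which is the failure mode Heisie describes.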