An international team of researchers says in a new paper that it's possible to determine which types of AI projects might need more regulation than others. The scientists used a model that blends concepts from biology and mathematics, part of a growing effort to discover what kind of AI can be hazardous.

"Of course, while the 'sci-fi' dangerous use of AI may arise if we decide so […], what makes AI dangerous is not AI itself, but [how we use it]," Thierry Rayna, the chair of Technology for Change at the École Polytechnique in France, told Lifewire in an email interview. "Implementing AI can be either competence enhancing (for example, it reinforces the relevance of human/worker's skills and knowledge) or competence destroying, i.e., AI makes existing skills and knowledge less useful or obsolete."

Keeping Tabs

The authors of the recent paper wrote in a post that they built a model to simulate hypothetical AI competitions and ran the simulation hundreds of times to try to predict how real-world AI races might play out (a simplified sketch of this kind of simulation appears at the end of this section).

"The variable we found to be particularly important was the 'length' of the race—the time our simulated races took to reach their objective (a functional AI product)," the scientists wrote. "When AI races reached their objective quickly, we found that competitors who we'd coded to always overlook safety precautions always won."

By contrast, the researchers found that long-term AI projects weren't as dangerous because the winners weren't always those who overlooked safety.

"Given these findings, it'll be important for regulators to establish how long different AI races are likely to last, applying different regulations based on their expected timescales," they wrote. "Our findings suggest that one rule for all AI races—from sprints to marathons—will lead to some outcomes that are far from ideal."

David Zhao, the managing director of Coda Strategy, a company that consults on AI, said in an email interview with Lifewire that identifying dangerous AI can be difficult. The challenge lies in the fact that modern AI systems are built on deep learning.

"We know deep learning produces better results in numerous use cases, such as image detection or speech recognition," Zhao said. "However, it is impossible for humans to understand how a deep learning algorithm works and how it produces its output. Therefore, it's difficult to tell whether an AI that is producing good results is dangerous because it's impossible for humans to understand what's going on."

Software can be "dangerous" when it's used in critical systems and has vulnerabilities that can be exploited by bad actors or cause it to produce incorrect results, Matt Shea, director of strategy at the AI firm MixMode, said via email. He added that unsafe AI could also result in the improper classification of results, data loss, economic impact, or physical damage.

"With traditional software, developers code up algorithms which can be examined by a person to figure out how to plug a vulnerability or fix a bug by looking at the source code," Shea said. "With AI, though, a major portion of the logic is created from data itself, encoded into data structures like neural networks and the like. This results in systems that are 'black boxes' which can't be examined to find and fix vulnerabilities like normal software."
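The paper's actual model isn't reproduced here, but the intuition behind the researchers' race experiment can be illustrated with a toy simulation. The sketch below is an assumption-laden simplification, not the published model: the progress rates, the per-step setback probability, and the functions run_race and win_rate are invented purely for illustration. It pits a competitor that always skips safety checks (faster progress, but a small chance of a disqualifying setback at each step) against a cautious one, and shows how the length of the race changes who tends to win.

```python
import random

# Toy illustration only: the mechanics and numbers below are assumptions,
# not the model from the paper. One competitor always skips safety checks
# (faster progress per step, but a small chance of a disqualifying setback
# each step); the other works cautiously.

def run_race(goal, step_risk=0.02):
    """Simulate one race to a fixed amount of progress and return the winner."""
    progress = {"reckless": 0.0, "cautious": 0.0}
    reckless_out = False
    while max(progress.values()) < goal:
        if not reckless_out:
            progress["reckless"] += 1.5          # skipping safety is faster...
            if random.random() < step_risk:      # ...but occasionally backfires
                reckless_out = True              # a setback knocks them out
        progress["cautious"] += 1.0
    return "cautious" if reckless_out else max(progress, key=progress.get)

def win_rate(goal, trials=1000):
    """Fraction of simulated races won by the safety-skipping competitor."""
    return sum(run_race(goal) == "reckless" for _ in range(trials)) / trials

if __name__ == "__main__":
    print("short race:", win_rate(goal=10))    # reckless competitor usually wins
    print("long race: ", win_rate(goal=500))   # accumulated risk catches up
```

With these made-up numbers, the safety-skipping competitor wins the large majority of short races and almost none of the long ones, the same qualitative pattern the researchers describe, though their published model is considerably more elaborate.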

Dangers Ahead? 

While AI has been pictured in films like The Terminator as an evil force that intends to destroy humanity, the real dangers may be more prosaic, experts say. Rayna, for example, suggests that AI could make us dumber.

"It can deprive humans from training their brains and developing expertise," he said. "How can you become an expert in venture capital if you do not spend most of your time reading startup applications? Worse, AI is notoriously 'black box' and little explicable. Not knowing why a particular AI decision was taken means there will be very little to learn from it, just like you cannot become an expert runner by driving around the stadium on a Segway."

Perhaps the most immediate threat from AI is the possibility that it could provide biased results, Lyle Solomon, a lawyer who writes on the legal implications of AI, said in an email interview.

"AI may aid in deepening societal divides. AI is essentially built from data collected from human beings," Solomon added. "[But] despite the vast data, it contains minimal subsets and would not include what everyone thinks. Thus, data collected from comments, public messages, reviews, etc., with inherent biases will make AI amplify discrimination and hatred."