The Criminal Potential of Artificial Intelligence

Murat Durmus (CEO @AISOMA_AG)
5 min read · Apr 27, 2021

This is an excerpt from the book “THE AI THOUGHT BOOK”.

AI can be implicated in crime in several ways. Most obviously, AI could be used as a tool for crime, its capabilities facilitating actions against real-world targets: predicting the behavior of people or institutions to discover and exploit vulnerabilities; generating fake content for extortion or to damage reputations; performing acts that human perpetrators cannot or will not perform themselves for reasons of danger, physical size, speed of response, and so on. Although the methods are new, the crimes themselves may be traditional in nature: theft, extortion, intimidation, terror. Alternatively, AI systems themselves may be the target of criminal activity: circumventing protective systems that stand in the way of a crime; evading detection or prosecution of crimes already committed; or causing trusted or critical systems to fail or misbehave in order to cause harm or undermine public trust.

AI could also provide the context for a crime. Fraudulent activity could depend on the victim believing that a certain AI functionality is possible when it is not, or that it is possible but is not being used for the fraud. Of course, these categories are not mutually exclusive. As in the adage about setting a thief to catch a thief, an attack on an AI system may itself require an AI system to carry it out, and the fraudulent simulation of nonexistent AI capabilities could be executed using other AI methods that do exist.

Crimes vary enormously. They may be directed against individuals or institutions, businesses or customers, property, government, the social fabric, or public discourse. They may be motivated by financial gain, the acquisition of power, or a change in status relative to others. They may enhance or damage reputations or relationships, change policy, or sow discord; such effects may be an end in themselves or a stepping stone to a broader goal. They may be committed to mitigate or avoid punishment for other crimes. They may be driven by a desire for revenge or sexual gratification, or to further religious or political goals. They may express nothing more than a nihilistic urge to destroy, vandalize, or commit violence for its own sake.

The extent to which AI can amplify this variety of criminal acts depends mostly on how deeply they are embedded in a computational environment: robotics is advancing rapidly, but AI is better suited to participating in a bank fraud than in a bar fight. This preference for the digital over the physical world is a weak defense, however, because today’s society is deeply dependent on complex computer networks, not only for finance and commerce but also for all forms of communication, politics, news, work, and social relationships. People now conduct large parts of their lives online, get most of their information there, and their online activities can make or break their reputations. This trend is likely to continue for the foreseeable future. Such an online environment, where data is property and information is power, is ideally suited for exploitation by AI-based criminal activity that can have significant real-world consequences. Moreover, unlike many traditional crimes, crimes in the digital domain are often highly reproducible: once developed, techniques can be shared, repeated, and even sold, opening up the potential for commercializing criminal techniques or providing “crime as a service.” This lowers technological barriers, as criminals can outsource the more challenging aspects of their AI-based crimes.

Listed below are some potential hazards.

Audio and video imitation

People have a strong tendency to believe their own eyes and ears, so audio and video evidence has traditionally been given a great deal of credibility (and often legal force), despite the long history of photo trickery. But recent developments in deep learning, mainly using GANs (see above), have greatly expanded the scope for generating fake content. Persuasive impersonations of targets following a fixed script can already be produced, and interactive impersonations are expected to follow. Delegates saw multiple criminal applications for such “deepfake” technologies that exploit people’s implicit trust in these media, including impersonating children in video calls to elderly parents to gain access to funds; impersonation over the phone to gain access to secure systems; and fake videos of public figures speaking or acting reprehensibly in order to manipulate support.

Audio/video impersonation was ranked as the most concerning of all the crime types considered, scoring high on all four dimensions. Combating it was considered difficult: researchers have shown some success with algorithmic detection of impersonation (Güera and Delp 2018), but this may not remain possible in the longer term, and there are many uncontrolled pathways through which fake material can spread. Changes in citizen behavior may therefore be the only effective defense. Such behavioral changes, for example a general distrust of visual evidence, could be considered indirect societal harms resulting from the crime, in addition to direct harms such as fraud or damage to reputation. If even a small fraction of visual evidence turns out to consist of convincing fakes, it becomes much easier to discredit genuine evidence, undermining criminal investigations and the credibility of political and social institutions that rely on trustworthy communication. Such tendencies are already evident in the discourse around “fake news.” Profit was ranked as the lowest of the four dimensions for this crime, not because the required investment is high (it is not), but because acquisitive copycat crimes are likely to be most easily targeted against individuals rather than institutions, while copycat crimes against society have an uncertain payoff.
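To make the detection side concrete, here is a minimal sketch in the spirit of the recurrent approach of Güera and Delp (2018), which extracts per-frame CNN features and passes them to an LSTM to classify a clip as real or fake. It assumes PyTorch and torchvision (0.13+) are available; the class name, the choice of ResNet-18, and the layer sizes are illustrative assumptions, not the authors’ exact architecture.

```python
# Hypothetical sketch of a frame-sequence deepfake detector:
# per-frame CNN features -> LSTM over time -> real/fake logits.
import torch
import torch.nn as nn
from torchvision import models

class DeepfakeDetector(nn.Module):
    def __init__(self, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # per-frame feature extractor
        feat_dim = backbone.fc.in_features        # 512 for ResNet-18
        backbone.fc = nn.Identity()               # drop the classification head
        self.cnn = backbone
        self.lstm = nn.LSTM(feat_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)     # logits: real vs. fake

    def forward(self, clips):                     # clips: (batch, frames, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))     # (batch * frames, feat_dim)
        feats = feats.view(b, t, -1)              # restore the time dimension
        _, (h_n, _) = self.lstm(feats)            # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])                 # classify from the final state

# Usage: two dummy 8-frame clips at 224x224 resolution.
model = DeepfakeDetector()
logits = model(torch.randn(2, 8, 3, 224, 224))    # -> shape (2, 2)
```

The key design point this illustrates is temporal modeling: individual frames of a deepfake can look convincing, so detectors of this family look for inconsistencies across frames rather than judging each frame in isolation. As the excerpt notes, generation methods keep improving, so any such detector is a moving target rather than a lasting defense.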

Driverless vehicles as weapons

Motor vehicles have long been used both as a means of transporting explosives and as stand-alone kinetic terrorist weapons, with the latter becoming increasingly common in recent years. Vehicles are much more readily available than firearms and explosives in most countries, and attacks using vehicles can be carried out with relatively little organizational effort by fragmented, quasi-autonomous, or “lone wolf” terrorists. While fully autonomous, AI-driven driverless vehicles are not yet available, numerous automakers and technology companies are working diligently to develop them, and some trials are permitted on public roads. More limited self-driving capabilities, such as assisted parking and lane guidance, are already in use. Autonomous vehicles would potentially enable an expansion of vehicular terrorism by reducing the need to recruit drivers and by allowing a lone wolf to carry out multiple attacks, even coordinating large numbers of vehicles at once. On the other hand, because driverless cars will almost certainly have extensive security systems that would need to be overridden, driverless attacks would face a higher barrier to entry than vehicle attacks do today, requiring both technological capability and organization.

End of the excerpt. For more, see “THE AI THOUGHT BOOK”.
