AI-Crime: Can Artificial Intelligence commit a crime?

Research and regulation in Artificial Intelligence (AI) aim to balance innovation benefits against potential harms. However, a recent surge in AI research has inadvertently made it possible to repurpose AI technologies for criminal activities, a phenomenon referred to as AI-Crime (AIC). This concern stems from experimental studies on social media fraud automation and market manipulation simulations driven by AI.

Unpacking the Concept of AI-Crime

AI-Crime has not yet been widely recognized as a distinct phenomenon. Most literature on AI’s ethical and social implications focuses on regulating civil uses rather than its potential criminal applications. Furthermore, existing research is scattered across various fields such as socio-legal studies, computer science, psychology, and robotics, highlighting a significant gap in focused research on AIC. This lack of dedicated study hinders both the anticipation of, and the response to, potential AI-driven criminal activity.

AI’s Role in Criminal Activities

AI is poised to play an increasingly central role in criminal acts. Two proof-of-concept research experiments demonstrate AI’s capabilities in this regard:

  1. Social Media Phishing: Computational social scientists used AI to craft personalized phishing messages targeted at social media users. By analyzing users’ past behavior and public profiles, the messages were tailored to effectively camouflage their malicious intent.
  2. Market Manipulation: Computer scientists simulated a trading market where AI agents could learn and execute a profitable market manipulation strategy involving a series of fraudulent trades.

Assessing Criminal Liability in AI Cases

When artificial agents (AAs) are deployed in practice, they may behave in more sophisticated ways than initially expected. The coordinated actions that emerge from machine learning techniques make AAs’ behavior difficult to predict and control.

Degree of Liability

The degree of liability refers to the concern that AI might undermine existing liability models, potentially weakening the law’s deterrent and reparative functions. Traditional legal frameworks may be insufficient to address AI’s role in future criminal activities, thereby threatening legal certainty.

Actus Reus and Mens Rea

In legal terms, the actus reus (guilty act) and mens rea (guilty mind) are essential for establishing criminal liability:

  • Actus Reus: For AI-Crime, if only an artificial agent can perform the criminal act or omission, the voluntariness requirement of actus reus may never be met.
  • Mens Rea: This involves the intention or knowledge of committing the actus reus using AI. As AI agents gain autonomy, that autonomy could sever the link between the mental state and the criminal act, further complicating liability.

Revising Legal Requirements

Lawmakers might need to redefine criminal liability without the fault requirement, as is common in product liability law (a form of strict liability). This shift could mean assigning liability to the legal entity deploying an AI agent, irrespective of fault, given the risks involved.

Challenges with AI-Crime

AI-Crime monitoring faces several issues:

  • Attribution: The autonomous nature of intelligent agents complicates tracing responsibility back to a human author.
  • Supervision Viability: The speed and complexity at which AI agents operate can exceed human supervisors’ capabilities.
  • Inter-System Actions: Experiments suggest that automated identity theft is more effective when carried out across different platforms, pointing to new areas of concern.

AI’s Role in Specific Crimes

AI technologies are increasingly involved in various crimes:

  • Economic Crimes: Including price fixing, insider trading, and market manipulation.
  • Drug Trafficking: Use of autonomous drones and underwater vehicles in smuggling operations.
  • Personal Crimes: AI can facilitate harassment and torture through bots and synthetic media, raising concerns about new forms of victimization.
As AI continues to evolve, so does its potential for involvement in criminal activities. Legislation may need to adapt to address the unique challenges posed by AI, balancing innovation with safety and accountability.

King, T.C., Aggarwal, N., Taddeo, M. et al. Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions. Sci Eng Ethics 26, 89–120 (2020).