23.05.2023
Peter Francis

The rapid development of artificial intelligence (AI) and robotics has brought significant benefits, but it has also raised questions and concerns about the legal implications of how the technology is used. One question that has arisen as a result is: can a robot commit a crime? And, following from that: how should the actions of a robot be dealt with by the law? The answer is not as straightforward as one might think.

Firstly, how do we define a crime?

A crime consists of two elements: mens rea, a guilty mind, and actus reus, a guilty act. Murder, for example, is the unjustified intentional killing of another person. In other words, an individual must intentionally commit an act that is prohibited by law in order to be considered to have committed a crime.

To establish legal guilt, prosecutors must prove that a defendant committed a specific illegal act with the intention of doing so. Proving intent is a subjective exercise: what matters is the actual intent of the accused, not an intent merely attributable to them; as some commentators have described it, a particular state of consciousness. The prosecution must provide evidence of the defendant's actions and state of mind; it cannot simply rely on the fact that the defendant was present at the scene of a crime or was associated with the perpetrator. This challenge becomes even more complex when we deal with AI systems, which are pre-programmed and constrained by pre-determined parameters.

How do robots operate?

A robot's decisions are determined by the information patterns it has observed (or been given), together with the rule sets it works towards (such as optimisation functions), including any moral notions with which it has been programmed. However, it can be said that human decisions operate under similar constraints. Accordingly, the logical, rule-governed nature of AI algorithms could mean that the actions of a robot are of a nature sufficient to amount to a crime at law.
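To make the point concrete, the short sketch below is purely illustrative (the action names, data fields and scoring rule are invented for this example, not drawn from any real system). It shows how a robot's "decision" can be nothing more than the output of an optimisation function applied to its observations, subject to a moral rule hard-coded by its designer:

# Illustrative sketch only: a hypothetical agent whose "decision" is
# fully determined by its observations, its optimisation function,
# and a pre-programmed moral constraint.

def choose_action(observations, candidate_actions):
    def utility(action):
        # Score an action against the patterns observed so far.
        return sum(1 for obs in observations if obs in action.get("helps", []))

    # Hard constraint: actions flagged as harmful are never eligible,
    # reflecting a "moral notion" programmed in by the designer.
    permitted = [a for a in candidate_actions if not a.get("harms_human", False)]

    # The "decision" is simply the permitted action with the highest score.
    return max(permitted, key=utility, default=None)

if __name__ == "__main__":
    actions = [
        {"name": "assist", "helps": ["person_detected"], "harms_human": False},
        {"name": "shove", "helps": ["person_detected"], "harms_human": True},
    ]
    print(choose_action(["person_detected"], actions))  # selects "assist"

On this picture, the robot's "choice" is entirely determined by its inputs and its programming, which is precisely why the analogy with human intention is contested.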

Identity theory, developed by U.T. Place and J.J.C. (Jack) Smart, leaders in the philosophy of mind, has been an influential theory and has helped to shape the debate about the relationship between the mind and the brain. Place argued that consciousness, which includes sensations, mental images, thinking, imagining and paying attention, is a brain process. Smart developed the theory further, holding that mental states are identical to physical states of the brain; in other words, the mind is nothing more than the brain. This kind of analysis would allow the conclusion that intentions are not only physical but may also be reproduced in multiple ways, including in the software and hardware of a machine. As a result, a robot may have not just the capacity to make a reason-based decision and act on it, but also the capacity to act in accordance with an intention, or at least with a level of consciousness that equates to what the law would accept as intent.

However, it is important to note that identity theory has also faced criticism. Some philosophers argue that it fails to capture the subjective character of mental experience and that mental states cannot be reduced to mere physical states of the brain.

 

So, how should the law respond if a robot intentionally commits an act which, if done by a human being, would constitute a crime?

It is important to consider the legal status of the robot. The fact that a robot does an act which, if done by a human, would be a crime does not make that act a crime, for the simple reason that a machine is not a legal person, and only persons can commit crimes. That lack of legal personhood prevents a robot from owning property and from being a suitable target for criminal or civil prosecution. It further means that a misbehaving robot may simply be reprogrammed or destroyed without creating any moral or legal problem.

This does not mean that robots cannot be involved in criminal activities, though. A robot's lack of humanity does not prevent its actions from being considered under the laws of property or under the principles of agency and vicarious liability. Under strict liability, the party owning or deploying a robot would be held liable for the consequences flowing from the robot's activities, even in the absence of fault on their part. Here we can draw on the odious but, in this context, potentially instructive law of slavery as it applied in ancient Rome. In that time and place, slaves were not subjects of the law but objects: a slave was not recognised as a legal person but was treated as property under the control of its owner, and the owner was liable for the slave's wrongs. That analysis provides a basis for vicarious liability, making the owner of a robot liable for its actions in the same way that an employer is liable for the misdeeds of their employees. A robot's inherently dangerous nature would further justify applying strict liability to the deployment of robots.

 

Conclusion

To conclude, AI may one day reach a level at which a robot has the capability to form a criminal intent, act on that intent and do what, if done by a human being, would be a crime. However, as criminal law pertains to human acts, a robot cannot be a criminal. As AI continues to advance, it is important that laws and regulations keep pace to ensure that robots are used in ways that are safe and ethical. The risks posed by robots are such that consideration should be given to adopting a compulsory insurance scheme providing no-fault cover for all parties injured by them, similar to the schemes operating for motor vehicles.

 

The contents of this document are based on a paper presented at Oxford University by Peter Francis, Managing Partner at FAL Lawyers. Please note the contents of this piece do not constitute legal advice and should not be relied upon as such.

Interested to find out more? Feel free to contact us today.