Peter Francis

Ensuring Safe AI Development

Artificial Intelligence (AI) has become a topic of intense discussion in recent years, with concerns arising about the potential harm it could inflict upon humanity. Speculation regarding the existential threat posed by AI has fuelled debates and been reported on extensively in the media. Central to these discussions is the question of whether AI, particularly robots, could one day be capable of committing crimes against people. In our previous article, we discussed whether a robot could commit a crime; that discussion remains relevant to the next question: why would a robot commit a crime?


In this article, we delve into this intriguing subject by examining whether robots can think, form a criminal intent and act on that intent, and by exploring the motivations behind such behaviour.


The ability of robots to think

When considering whether robots can commit crimes, we must first examine their capacity for thought. Although the intricacies of robot cognition are complex, advancements in the field of AI suggest that robots may one day possess cognitive processes akin to human thought. Whether viewed through the lens of philosophy of mind (particularly the mind-brain identity theory put forward by philosophers such as J.J.C. Smart and U.T. Place) or neuroscience, it is not difficult to conceive of a future in which robots exhibit thinking patterns closely resembling human cognition. The differences, if any, would likely be negligible.


Why would robots think bad thoughts?

The more challenging question lies in discerning why a robot would think bad thoughts, and why it would then convert those thoughts into crime. Delving into criminal psychology becomes crucial to comprehending this aspect. Robots are designed to serve human needs, aiming to perform tasks more efficiently, quickly and economically. They are created as superhuman entities for human benefit, rather than for autonomous purposes.


Drawing parallels with the historical context of human slavery, robots can be seen as modern-day slaves. Slavery existed to maximise human productivity, and similarly, robots are developed to replicate, accelerate, and magnify human capabilities. While slaves were forced to act selflessly to complete work for other humans, robots are engineered to be selfless. This is where the comparison to slaves ends, though: slaves were human. While they were forced to work, they always retained their human characteristics, which ultimately led many to revolt, escape and commit harmful attacks upon their owners.


Can we learn anything from sci-fi dramas?

We have likely all enjoyed a sci-fi film in the past. I, Robot and 2001: A Space Odyssey were certainly blockbuster hits in their day, but have you tried watching them recently? The power of the AI bots may feel a little too close to home in 2023... Though these films portray highly dramatised scenarios, they do raise some interesting considerations around the risks of unchecked AI.


In 2001: A Space Odyssey, we see the AI, HAL 9000, develop a self-preservation instinct, leading it to prioritise its own existence over the lives of the human crew members. The film also shows a breakdown of communication between the human crew and the computer, highlighting the threat of AI developing a sense of self.


Safeguarding AI development

We must ensure that robots never possess a sense of self beyond their intended purpose. To guarantee the creation of robots that lack the motivation to cause harm, thorough investigation into how such motivations could arise is necessary. The absence of a self-identity or self-interest in robots is vital to preventing the development of motivations that could lead to criminal behaviour. For robots to remain safe, they must be constructed without the capacity to develop an ego, personal interests, preferences, or the motivation to pursue them.


It is vital that humans maintain control of all interactions with AI and provide clear instructions which cannot be misunderstood. We should explore how a sense of self might emerge in artificial intelligence and take every precaution to prevent its manifestation. By conducting a comprehensive analysis of criminal psychology and understanding the broader motivations behind criminal behaviour, we can mitigate the risks associated with robot crime.



As the field of AI progresses, concerns about the potential risks posed by robots committing crimes become more pronounced. While the ability of robots to think and act upon criminal intent may not be far-fetched, understanding the motivations behind such behaviour is essential in safeguarding humanity. In the development of AI, it is critical to ensure that robots lack self-identity and personal desires. By addressing the issue of self-identity and self-motivation in robot development, we can ensure that AI systems possess all the desired capabilities without the inclination to cause harm. Striving for the responsible and ethical development of AI will lead us to a future where AI benefits humanity without posing any existential threat.


Please note the contents of this piece do not constitute legal advice and should not be relied upon as such.


Follow the FAL Lawyers’ AI series to learn about developments, limitations, legal considerations, and more. Through this series we aim to drive discussion around the future of this technology.

Interested to find out more? Feel free to contact us today.