05.04.2024
FAL

The legal framework surrounding deepfake technology in Australia presents a complex challenge, as legislation must adapt alongside a rapidly evolving landscape. Regulating deepfake content requires a comprehensive legal approach that addresses detection, dissemination control, and attribution. At present, Australia has no legislation specifically targeting the use of deepfake material; however, existing legal frameworks, including defamation, copyright, and consumer law, offer potential avenues of recourse for victims of deepfakes.


Deepfakes, which depict individuals in fabricated and often detrimental scenarios, raise concerns about reputational harm and therefore engage the principles of defamation law. Defamation law already encompasses digitally altered images, which suggests that extending its application to deepfakes is viable. Victims may pursue legal action against the disseminators of deepfake content, seeking compensation for damage to their reputation. Nevertheless, the effectiveness of defamation law in curbing the dissemination of deepfakes is impeded by its limited capacity to deliver swift injunctive relief. Australian courts do not readily grant injunctions in defamation cases, and interlocutory injunctions, which would be the most effective means of preventing the spread of online material, are particularly rare in such cases.[1] As it stands, the law cannot intervene quickly enough to be effective. This is exacerbated by the fact that deepfakes can take time to identify, given that they can originate anywhere and be difficult to detect. Tracing the source of deepfake content can also be a significant challenge, and may prevent a cause of action in defamation from being pursued at all.


Copyright law emerges as another potential avenue for addressing the ramifications of deepfake manipulation. Given that deepfake material typically originates from existing audiovisual content, the Copyright Act 1968 (Cth) could afford recourse to the original creators of the footage. Copyright owners retain the right to pursue remedies for infringement, including an account of profits, damages, and injunctive relief.

However, the applicability of copyright law is contingent on ownership of the original material, which poses challenges where victims lack ownership rights. Moreover, the intricate interplay of artificial intelligence and machine learning in deepfake creation complicates the determination of copyright ownership, as legal protection traditionally extends only to works with human authors. As deepfake technology advances, particularly as content is generated ‘in real time’ (as discussed in part two of our Deepfake Article Series), copyright law’s usefulness as a defence against deepfake misuse weakens.


The Australian Consumer Law (ACL) prohibits misleading or deceptive conduct in trade or commerce, offering a potential remedy for consumers misled by deepfake content. Section 29 of the ACL prohibits false or misleading representations concerning goods or services, which would capture scenarios where deepfakes exploit endorsements or attributes of products or services.[2] However, the efficacy of consumer law in addressing deepfake incidents is confined by its applicability solely within trade or commerce. Victims of non-commercial deepfakes may therefore find limited recourse under the ACL, underscoring the need for broader legal remedies.


As technology progresses at an unprecedented rate, legal frameworks struggle to adapt to emerging threats such as deepfake technology. While legislating in this domain is complex, jurisdictions worldwide have begun efforts to regulate deepfake content: notably, China, the European Union, and several US states have introduced legislative initiatives aimed at addressing this growing issue. In the meantime, individuals and businesses alike must navigate the existing legal frameworks to address instances of deepfake manipulation, underscoring the need for ongoing legal evolution in response to technological innovation.


[1] Ted Talas and Maggie Kearney (Ashurst), “Diving into the Deep End: Regulating Deepfakes Online” (2019) 38(3) Communications Law Bulletin.

[2] Under section 29: a person “must not, in trade or commerce, in connection with the supply or possible supply of goods or services… make a false or misleading representation that goods or services have sponsorship, approval, performance characteristics, accessories, uses or benefits.”

Interested to find out more? Feel free to contact us today.