Does New Zealand need a specific law for deepfakes?


This article is part of the sixth annual special Technology & Law edition of LawNews, put together by ADLS’ Technology & Law committee.

Have you seen the video where Elon Musk declares he is running for US president, that Tesla will start making flying cars and his new start-up will experiment on his own brain?


While you might be forgiven for believing Musk might say those things, the video is actually a deepfake fabricated by Hao Li, one of the fathers of deepfake technology, to warn of the dangers of his creation.

Deepfakes are hyper-realistic audio and/or video depictions of individuals which are, in fact, fake. While they appear to represent real events that have been captured by a microphone or camera, they are artificially constructed from existing photos, videos, and recordings by means of “deep learning” artificial intelligence (AI).

Just as Photoshop enables photos to be manipulated, deepfake technology allows video and audio to be created depicting people doing and saying things they never did or said.

It has been possible to create fake media for a while but only recently has the technology become more generally available.

Exacerbating this accessibility is the huge amount of data now freely available online from which deepfakes can be generated. This includes data about public figures mined from media coverage, as well as data about private individuals extracted from their social media accounts and personal blogs.

Legal remedies

A Law Foundation-commissioned report, Perception Inception: Preparing for deepfakes and the synthetic media of tomorrow, recently examined the extent to which New Zealand law is equipped to address harmful misuse of deepfake technology.

Here is a glimpse of how New Zealand law might already regulate certain uses of deepfakes.

Deepfakes threatening privacy and emotional wellbeing

Privacy law is an obvious candidate for protecting against the unauthorised creation of deepfakes of individuals, as it is directed to protecting an individual’s ability to control their personal information.

However, while a deepfake might look real, the events it depicts might never have happened. So how can it be “personal information”?

Interpretation of the Privacy Act 1993 suggests false information about identifiable individuals, including fictitious depictions, may still qualify as personal information; otherwise its provisions conferring rights to correct information would be meaningless.

Therefore the report’s authors conclude deepfakes should be regarded as personal information because they are “information [that purports to be] about an identifiable individual”.

Another issue: a person’s face and voice are generally public information.

Assuming deepfakes are synthesised using only publicly-available footage, how can they disclose any private information?

The authors of the report posit that the deepfake itself cannot be public information because it purports to depict events which never happened and were not “public” until the deepfake was created and published.

It will be interesting to see how privacy law evolves in this area. Does someone have a reasonable expectation that deepfakes will not be created? And would deepfakes be considered offensive to a reasonable and ordinary person?

Other statutes providing remedies against harmful deployments of deepfakes include:

The Defamation Act 1992. Does a deepfake harm an individual’s reputation?

The Harassment Act 1997. Has the deepfake been used for harassment? The broad wording of the Act means intentional appropriation of someone’s likeness in a way that causes distress is likely to be covered.

Committing crimes

Lying and deceiving are not illegal in their own right, but s 240 of the Crimes Act 1961 criminalises obtaining by deception any property, privilege, service, pecuniary advantage or benefit, or causing loss by deception.

This extends to using deepfakes to illegally obtain or cause loss – for example, by impersonating another by using a deepfake. Threatening to create or disclose a deepfake for blackmail is also criminalised by s 237 of the Act.

As the Crimes Act covers attempted crimes, even the use of blatantly unconvincing deepfakes can be criminalised.

The criminalisation of “revenge porn” using deepfakes under s 216G remains an open question. Section 216G criminalises intimate visual recordings made without consent.

But a sexually-explicit video of a person can be created using deepfake technology without any intimate visual recording of the victim being made – for example, by transplanting a victim’s face onto another person’s body where explicit footage of the other person’s body may have been captured with full consent.

Under the Fair Trading Act 1986 (FTA), “unfair trade” is broadly defined to encompass any unfair conduct, regardless of its form. Section 13 of the FTA also prevents the unauthorised use of someone’s image or identity to imply sponsorship, approval, endorsement or affiliation with advertised goods or services.

New Zealand law gives some protection against the use of deepfakes to spread fake news through legislation such as the Defamation Act, the Broadcasting Act and the Electoral Act.

A public figure who has been misrepresented using a deepfake has recourse under the Defamation Act. Traditional defences to defamation such as truth and honest opinion might be unsustainable where the deepfake is a construction purporting to depict a real event.

The Broadcasting Act 1989 regulates the use of deepfakes in radio and television; however, it is of limited application when it comes to the internet.

Section 197 of the Electoral Act 1993 (interfering with or influencing voters) and s 199A (publishing false statements to influence voters) may be of some assistance against the use of deepfakes to interfere with the democratic process, but they are restricted in their application.

Given their rapid onset and potential harm, some countries have moved to propose deepfake-specific laws. Does New Zealand need to follow suit?

The conclusion of the Law Foundation report was “probably not”, as our law is drafted in a broad and media-neutral manner.

Laws restricting deepfakes should be handled with extreme caution because, like all other audio-visual information, deepfakes are protected by the right to freedom of expression.

As the report suggests, making nuanced amendments to existing law where there are gaps is preferable.

Antonia Modkova is a computer scientist, lawyer and patent attorney, specialising in AI and the management of the intellectual property portfolio of Soul Machines.