AI‑Generated Content: When Is It Illegal in New Zealand?

Artificial intelligence (AI) tools can now create images, videos, audio, and written material that may look highly realistic or be completely synthetic. This content appears across social media, memes, advertising, gaming, entertainment, and everyday online interactions. Some AI‑generated content may appear to involve real people – even when it doesn’t. Apps, including deepfake or so‑called “nudify” tools, make it easy for anyone to generate content that can be harmful, abusive, or illegal.

A common misunderstanding is that AI‑generated or “fake” content cannot be illegal because it isn’t real. In New Zealand, it can.

Under the Films, Videos, and Publications Classification Act 1993, AI‑generated content is treated the same as any other content. What matters is what the content shows, not how it was made. Even if something is fictional, computer‑generated, or created as a joke, it can still be illegal.

This page explains how the classification law applies to AI‑generated content, what types of AI content can be illegal, why this matters, and what to do if you come across it.

Important: This information is provided to help you understand how the law applies to AI‑generated content. It is not legal advice.



What is AI‑generated content?

AI‑generated content is material created using artificial intelligence tools. This can include:

  • Images or videos created from text prompts
  • Deepfake videos or audio
  • Edited or altered photos
  • Synthetic or fictional characters
  • AI‑generated voices or music
  • Written content produced by AI tools

This content may look real even though it has been created by a person using a computer program or app.

Can AI‑generated content be illegal in New Zealand?

Yes. AI‑generated content can be illegal under New Zealand law.

New Zealand has a high threshold for what is considered illegal content. The Classification Act uses the term “objectionable” to describe content that is so harmful it is illegal to make, possess, or share.

Examples of objectionable content include:

  • Child sexual exploitation material (CSEM)
  • Sexual violence involving children or young people
  • Bestiality
  • Extreme violence
  • Terrorist or violent extremist content

The law does not require the people or animals depicted to be real. If AI‑generated or fictional material meets the legal tests for being objectionable, it is treated the same as real‑world depictions.

What does the law say about AI‑generated content?

The Films, Videos, and Publications Classification Act 1993 applies to all publications, regardless of how they are made or distributed.

This means:

  • It does not matter if the content is AI‑generated, animated, drawn, or computer‑generated
  • It does not matter if it was made using an app, AI model, filter, or editing tool
  • It does not matter if it was intended as a joke, meme, experiment, or test

If the content itself meets the legal criteria for being objectionable, it is illegal to make, possess, or share it.

What types of AI content does the law apply to?

The law applies equally to all of the following:

  • Real footage that has been manipulated
  • Deepfakes
  • AI‑generated images
  • CGI or animated content
  • Fictional or synthetic characters

With the growing availability of AI tools, people are increasingly experimenting with creating content. It’s important to understand that intent does not make illegal content legal. Even if content is created “just to see what the app can do”, it may still break the law.

Why is some AI content harmful?

Illegal content causes real harm, even when it is AI‑generated or fictional.

For example:

  • AI‑generated sexual content involving children normalises abuse and exploitation
  • Deepfake sexual images can humiliate, harass, or blackmail people
  • Violent extremist content can promote or support real‑world violence

The classification system exists to protect people – especially children and young people – while still respecting freedom of expression.

Where might people come across AI‑generated illegal content?

People can encounter AI‑generated illegal content in many ways, including:

  • Social media feeds or “For You” pages
  • Group chats or private messages
  • Apps that claim to “undress”, alter, or sexualise images
  • Search engine results or websites
  • Online forums or gaming platforms
  • Someone showing them content offline
  • Deliberate searching or accidental exposure

Some people create or share this material believing it is harmless because “it’s fake”. This belief is incorrect under New Zealand law.

Is it illegal to possess or share AI‑generated illegal content?

Yes. If AI‑generated content is classified as objectionable:

  • Making it is illegal
  • Possessing it is illegal
  • Sharing, sending, or uploading it is illegal

Serious penalties can apply for offences under the Classification Act.

What should I do if I come across AI‑generated illegal content?

If you come across content you think may be illegal:

  • Do not share it with others
  • Do not save or forward it
  • Report it as soon as possible

Where to report AI‑generated illegal content

You can report illegal or harmful content through the appropriate channels, depending on where you found it:

  • Online platforms: Use in‑platform reporting tools on social media sites, messaging apps, gaming platforms, or websites
  • Netsafe: For help with harmful online content, scams, or abuse
  • New Zealand Police: For content involving serious criminal harm

See also: How to Report Harmful or Illegal Online Content in New Zealand

What if I accidentally made AI‑generated illegal content?

If you believe you may have accidentally created illegal AI‑generated content:

  • Stop using or sharing the content immediately
  • Delete it from your devices and accounts where possible
  • Do not upload or distribute it
  • Seek legal advice if you are unsure about your obligations

About our role and how the law works

The Films, Videos, and Publications Classification Act 1993 balances freedom of expression with protecting people from harm – especially children and young people.

Every piece of content we see is different. To decide whether something is illegal, we must consider each publication on its own merits and carefully apply the Act, along with the principles of fairness and natural justice. Because there are serious penalties for offences under the Act, decisions about whether content is illegal are made by trained experts at the Classification Office, who follow a transparent and fair process guided by the Act.

You can read more about our classification process here.

If you are unsure about AI‑generated content, it is safer not to create, keep, or share it, and to seek guidance.