AI, Intellectual Property & The Alan Turing Institute Report


The rapid development of generative AI has triggered one of the most profound debates in modern technology: What happens to copyright and intellectual property when machines can create, remix, and regenerate content at scale?


The Alan Turing Institute’s report “Creative Grey Zones: Generative AI, Creative Practice, and Copyright” explores this tension in depth — showing how AI is pushing the boundaries of traditional IP frameworks and forcing creators, policymakers, and technologists to rethink what “authorship” means in the digital age.


I have highlighted IP risk in my AI risk lectures, especially when discussing data governance, training-set exposure, and output liability. I remember telling students that AI could reshape copyright as we know it. I also remember conversations with my girlfriend’s family — many of whom are artists — who expressed real fear that AI tools could undermine their rights, dilute their originality, or repurpose their work without permission.



✅ AI grey zones

There is no clear consensus on:

➡️ what constitutes “originality,”

➡️ who owns the rights to an AI-generated piece,

➡️ whether training on copyrighted works is permissible,

➡️ or how much human involvement is needed for copyright protection.


✅ Artists face ambiguity

Creators interviewed in the study expressed deep concerns about:

➡️ unauthorised use of their work in training datasets,

➡️ difficulty proving infringement when AI images resemble their style,

➡️ reputational damage if AI outputs mimic them,

➡️ economic displacement in creative industries.


✅ Human creativity

The boundaries between:

➡️ inspiration,

➡️ imitation,

➡️ appropriation,

➡️ and infringement

are becoming increasingly blurred.


✅ Copyright frameworks

The report notes that current copyright law was designed for human creators — not probabilistic models.

Regulators now face fundamental questions about:

➡️ dataset transparency,

➡️ licensing models for training data,

➡️ protectability of AI outputs,

➡️ and responsibility when things go wrong.


✅ Transparency

Across interviews, artists consistently requested:

➡️ visibility into what data AI systems are trained on,

➡️ mechanisms to opt in or opt out,

➡️ compensation models,

➡️ and accountability for misuse.
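
One way to make "opt in or opt out" operational is a preference registry consulted before any work enters a training corpus. The sketch below is illustrative only: the registry format, creator identifiers, and the opt-out default are my assumptions, not mechanisms the report prescribes.

```python
# Illustrative opt-out check run before ingesting a work into a training
# set. Registry format, identifiers, and the opt-out default are
# hypothetical; no standard registry is implied by the report.

OPT_OUT_REGISTRY = {
    # creator identifier -> declared preference
    "artist:jane-doe": "opt-out",
    "artist:acme-studio": "opt-in",
}

def may_ingest(creator_id: str, default: str = "opt-out") -> bool:
    """Return True only if the creator has affirmatively opted in.

    Defaulting to opt-out is a policy choice, not a legal requirement.
    """
    return OPT_OUT_REGISTRY.get(creator_id, default) == "opt-in"

if __name__ == "__main__":
    for creator in ("artist:jane-doe", "artist:acme-studio", "artist:unknown"):
        print(creator, "->", "ingest" if may_ingest(creator) else "skip")
```

Defaulting to opt-out when a creator's preference is unknown is the cautious design choice here; it is a policy decision, not something current law mandates.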


From a GRC (governance, risk and compliance) perspective:

✅ Training Data

➡️ Was copyrighted content used?

➡️ Was it licensed?

➡️ Can we prove it?
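
To make these three questions auditable, a dataset manifest can carry a licence and licence evidence for every item. The sketch below assumes a hypothetical record schema and licence labels; it is not a standard, just one way to operationalise "can we prove it?".

```python
# Minimal sketch of a training-data provenance record plus an audit pass
# that flags items an organisation could not defend. Field names and
# licence labels are illustrative assumptions, not a standard schema.

from dataclasses import dataclass

@dataclass
class DatasetRecord:
    source_url: str               # where the item was obtained
    licence: str | None           # e.g. "CC-BY-4.0"; None if unknown
    licence_evidence: str | None  # link or reference proving the licence

ALLOWED_LICENCES = {"CC-BY-4.0", "CC0-1.0", "commercial-licence"}

def audit(records: list[DatasetRecord]) -> list[DatasetRecord]:
    """Return records that fail the checklist: unlicensed, licensed
    outside the allowed set, or licensed without supporting evidence."""
    return [
        r for r in records
        if r.licence not in ALLOWED_LICENCES or r.licence_evidence is None
    ]

if __name__ == "__main__":
    records = [
        DatasetRecord("https://example.com/a.jpg", "CC-BY-4.0",
                      "https://example.com/a-licence"),
        DatasetRecord("https://example.com/b.jpg", None, None),
    ]
    for r in audit(records):
        print("flagged:", r.source_url)
```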

✅ Output Risk

➡️ Could generated content infringe someone’s rights?

➡️ Could it be “too similar” to a living artist’s style?

➡️ Does the organisation have policies to avoid derivative misuse?
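
A technical triage for the "too similar" question is to compare embeddings of generated outputs against a reference set of protected works and route high-similarity hits to human review. A minimal sketch, assuming embeddings already exist from some encoder; the 0.92 threshold is arbitrary, and a similarity score is a screening signal, not a legal test for infringement.

```python
# Illustrative "too similar?" screen: flag generated outputs whose
# embedding has high cosine similarity to a protected reference set.
# The encoder and threshold are assumptions; flagged items go to a
# human reviewer, they are not automatic infringement findings.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_similar(output_emb: np.ndarray,
                 reference_embs: dict[str, np.ndarray],
                 threshold: float = 0.92) -> list[tuple[str, float]]:
    """Return (reference_id, score) pairs above the review threshold."""
    scores = [(ref_id, cosine(output_emb, emb))
              for ref_id, emb in reference_embs.items()]
    return [(ref_id, s) for ref_id, s in scores if s >= threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    refs = {"artist-style-001": rng.normal(size=512)}
    # A near-copy of the reference embedding should be flagged.
    output = refs["artist-style-001"] + rng.normal(scale=0.01, size=512)
    print(flag_similar(output, refs))
```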

✅ Compliance

Emerging regulations, most notably the EU AI Act, increasingly require:

➡️ dataset transparency,

➡️ provenance tracking,

➡️ model documentation.
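
These three requirements lend themselves to a machine-readable documentation stub kept alongside the model. The schema below is my own illustration, not the EU AI Act's prescribed format or any official model-card standard; every field name and value is hypothetical.

```python
# Hypothetical machine-readable documentation stub covering the three
# compliance items above. The schema is illustrative only; it is not an
# official model-card format or a regulator-mandated template.

import json

model_documentation = {
    "model": {"name": "internal-genai-v1", "version": "1.0.0"},
    "dataset_transparency": {
        "sources_summary": "licensed stock archive + public-domain corpus",
        "copyrighted_content_used": True,
        "licences": ["commercial-licence", "CC0-1.0"],
    },
    "provenance": {
        "manifest_uri": "s3://example-bucket/manifests/v1.json",  # hypothetical
        "records_with_evidence_pct": 100,
    },
    "documentation": {
        "intended_use": "internal marketing imagery",
        "known_risks": ["style imitation of living artists"],
        "owner": "AI governance team",
    },
}

print(json.dumps(model_documentation, indent=2))
```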
