Researchers Found A Way To Prevent AI Stealing Creative Work: By Poisoning It

Low Boon Shen
3 Min Read
To say that AI has opened Pandora’s box is perhaps an understatement – many ethical questions are being raised, and sometimes challenged, as industries search for a balance between AI and human workers (the US is famously going through strikes in both the film and automotive industries over this very issue, at this very moment).

Image: OpenAI (AI-generated)

Artists are among the most affected here, as AI has gained fairly sophisticated capabilities in creating artwork, text, and even videos (albeit of very low quality in their current state), and most of the dispute centers on copyright. Creators argue that AI scraping data from the web constitutes copyright infringement, as it essentially replicates artists’ styles without explicit permission; Meta and OpenAI (creators of the ChatGPT and DALL-E models) are currently facing lawsuits as a result.

While companies like Adobe are currently establishing a nutrition label-like standard to help identify and combat the spread of illicit AI-generated imagery, researchers at the University of Chicago have created ‘Nightshade’ – named after the deadly plant. It works by literally poisoning the AI from the inside – introducing small amounts of altered pixels that are practically invisible to the human eye.
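Nightshade’s actual perturbations are carefully optimized to mislead a model during training, and its method is far more sophisticated than what can be shown here. Still, a minimal sketch (a hypothetical `perturb_pixels` helper, not Nightshade’s real algorithm) can illustrate the core idea of pixel changes too small for the human eye to register:

```python
# Toy illustration (NOT Nightshade's actual method): nudge each pixel
# channel by a tiny amount. Nightshade computes *targeted* perturbations
# designed to corrupt training; this only shows how small such changes are.
import random

def perturb_pixels(pixels, epsilon=2, seed=0):
    """Shift each 0-255 channel value by at most `epsilon` levels."""
    rng = random.Random(seed)
    return [min(255, max(0, p + rng.randint(-epsilon, epsilon)))
            for p in pixels]

original = [128] * 12          # a flat grey strip of channel values
poisoned = perturb_pixels(original)

# Every value moved by at most 2 levels out of 255 -- imperceptible to
# the eye, yet (at scale, with targeted placement) harmful to training.
assert max(abs(a - b) for a, b in zip(original, poisoned)) <= 2
```

The point of the sketch is only the scale: a change of one or two intensity levels out of 255 is invisible to a viewer, but systematically crafted versions of such changes, spread across many images, are what degrade the model’s training data.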

Image: Prof. Ben Zhao (University of Chicago)

When enough erroneous training data is introduced to a machine learning model, the results begin to go awry: in the diagram shown, prompts for dogs and hats morph into entirely different objects once enough poison samples are introduced. Removing the poison from a model is difficult – potentially prohibitively so – making this a potentially important technology for keeping AI from stealing creators’ work (and pushing companies to properly compensate them instead).

The technique is currently undergoing peer review, and team leader Prof. Ben Zhao has acknowledged that bad actors could use such a tool for nefarious purposes – though he noted that doing so would require thousands of corrupted samples.

(Note – cover image generated using DALL-E 3 model.)

Source: Engadget

Pokdepinion: One side effect of AI data scraping is a less open Internet, as platforms (aggressively) seek to protect their data. Hopefully this is the first step in restoring that openness.
