The recent announcement by a group of major tech companies about watermarking AI-generated content might have been greeted with a sigh of relief by many, but cybersecurity researchers are already suggesting this new approach has several flaws.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI met with the White House to discuss how they can help address the risks posed by the artificial intelligence they develop. They promised to invest in cybersecurity and in watermarking of AI-generated content.
“The companies pitched a technology called watermarking, which embeds a secret message into the code of the content,” says Cheriton School of Computer Science Professor Florian Kerschbaum, who is also a member of Waterloo’s Cybersecurity and Privacy Institute. “The idea is that the message cannot be removed unless the content is removed.”
But as Professor Kerschbaum points out, there are still uncertainties in the scientific foundations of watermarking. Malicious actors may be able to remove a watermark, a problem that has occupied scientists for decades.
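As a rough illustration of the idea, and of its fragility, consider a toy scheme (not the companies' actual technology, which is unspecified in the article) that hides a message in text using invisible zero-width characters. A determined actor who knows the trick can strip the characters and erase the mark without altering the visible content:

```python
# Toy steganographic watermark: hide message bits in zero-width characters.
# This is an illustrative sketch only, not any vendor's real scheme.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner encode bits 0 / 1

def embed(text: str, message: str) -> str:
    """Append one invisible bit-character after each word until bits run out."""
    bits = "".join(f"{b:08b}" for b in message.encode())
    marks = [ZW1 if bit == "1" else ZW0 for bit in bits]
    words = text.split(" ")
    return " ".join(w + (marks[i] if i < len(marks) else "")
                    for i, w in enumerate(words))

def extract(text: str) -> str:
    """Collect the invisible characters and decode them back into bytes."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8
    return bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8)).decode(errors="ignore")

def strip_watermark(text: str) -> str:
    """A trivial attack: deleting the invisible characters erases the mark."""
    return text.replace(ZW0, "").replace(ZW1, "")
```

The ease of `strip_watermark` is exactly the kind of weakness researchers worry about; robust schemes instead try to tie the mark to the content itself, so it cannot be removed without degrading the content.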
“The answers to some of the most important questions are somewhat unsatisfactory,” Professor Kerschbaum continues.
Watermarking is a decades-old technique, and non-digital watermarks predate computers. Digital watermarking, along with steganography, the practice of secretly embedding messages, last drew major attention when state intelligence services grew concerned that these techniques could be used to hide encrypted messages and render them undetectable.
Now, watermarking may prove useful for labelling benign uses of AI-generated content, since the content creator must cooperate by embedding the watermark.
- Read the full article on Waterloo News.