- Google's DeepMind is testing its new artificial intelligence (AI) technology, dubbed SynthID, which it says can label images that have been generated by AI. The labels, however, are invisible to the human eye so they don't spoil the picture.1
- The watermark is embedded in the pixels of the image, but DeepMind CEO Demis Hassabis says it won't change the "quality" or "experience" of the image. He added that it can also withstand attempts to remove the watermark, such as cropping or resizing.2
- SynthID, whose three levels of confidence are "detected," "possibly detected," and "not detected," is currently only available to a select group of Vertex AI customers who use Google's text-to-image diffusion model Imagen, which is similar to Midjourney and DALL-E.3
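The three confidence labels above amount to bucketing a detector's output. As an illustration only, the sketch below maps a hypothetical detection score in [0, 1] to SynthID's three labels; the threshold values and function name are assumptions, not Google's actual implementation.

```python
def classify_watermark(score: float, lo: float = 0.4, hi: float = 0.8) -> str:
    """Map a hypothetical watermark-detector score to one of
    SynthID's three confidence labels. The thresholds `lo` and
    `hi` are illustrative placeholders, not Google's values."""
    if score >= hi:
        return "detected"
    if score >= lo:
        return "possibly detected"
    return "not detected"

# Example buckets for three sample scores
print(classify_watermark(0.92))  # detected
print(classify_watermark(0.55))  # possibly detected
print(classify_watermark(0.10))  # not detected
```

The point of a middle bucket is that a robust watermark degrades gradually under edits like cropping or compression, so the detector's confidence falls on a spectrum rather than flipping cleanly between yes and no.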
- As worries over deepfake photos grow, most recently including fake mugshots of former President Trump, Hassabis appears to want SynthID to become an internet-wide standard for AI detection. Other companies are creating their own tools that use cryptographic metadata to tag AI content.2
- Meta and Amazon are among the other Big Tech companies promising to implement watermarks, with Meta announcing that it will add them to AI-generated videos from its unreleased Make-A-Video project.1
- In addition, China has banned AI-generated images that lack watermarks. Chinese companies such as Alibaba, which offers the text-to-image tool Tongyi Wanxiang, are already subject to the rule.3
- Narrative A, as provided by DeepMind. Google understands that pictures truly are worth a thousand words, which is why it's working to build a firewall between false images made by bad actors and the end users they're manipulating. The tech giant also understands the importance of not altering the style and beauty of creators' images, which makes this imperceptible watermark the perfect solution to the world's technological problems.
- Narrative B, as provided by MIT Technology Review. While watermarks will help combat AI misuse, questions remain about their efficacy and about how the public will interpret them. Most people still think of a watermark as a company's logo in the bottom-right corner of a picture, so Google and the other AI companies must communicate more clearly what these new watermarks are for and how they work. Furthermore, even sophisticated watermarks remain vulnerable to alteration, something that must be tackled before the world can trust Big Tech to moderate this issue.