In a significant move to promote responsible AI development, Google has open-sourced its SynthID text watermarking tool, letting developers watermark text generated by their own large language models and later detect it. The technology, part of Google's Responsible Generative AI Toolkit, aims to make AI-generated text more identifiable and to combat malicious uses such as spreading misinformation or generating non-consensual content.
SynthID works by embedding an invisible watermark in AI-generated text: imperceptible to human readers, but detectable by software. It does this by adjusting the probability scores of the tokens a model predicts, in a way that doesn't compromise the quality, accuracy, or creativity of the output. While it isn't a foolproof solution, SynthID is an important step toward more reliable AI identification tools.
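To make the idea of token-probability watermarking concrete, here is a minimal sketch of one well-known approach: a "green list" scheme in the style of academic watermarking research. This is not Google's actual SynthID algorithm (which uses a different, tournament-based sampling method), and the secret key, bias value, and vocabulary handling below are assumptions chosen purely for illustration.

```python
import hashlib
import random

# Illustrative token-probability watermark, NOT the real SynthID algorithm.
# The key, bias, and green-list fraction are hypothetical demo values.

SECRET_KEY = "demo-key"   # hypothetical watermarking key
GREEN_FRACTION = 0.5      # fraction of the vocabulary favored at each step
BIAS = 2.0                # score boost added to "green" tokens

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest()
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * GREEN_FRACTION)])

def watermark_scores(scores: list[float], prev_token: str,
                     vocab: list[str]) -> dict[str, float]:
    """Nudge the model's token scores toward the green list before sampling."""
    green = green_list(prev_token, vocab)
    return {tok: s + (BIAS if tok in green else 0.0)
            for tok, s in zip(vocab, scores)}

def detect(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens falling in their green list. Unwatermarked text
    should score near GREEN_FRACTION; watermarked text noticeably higher."""
    hits = sum(tok in green_list(prev, vocab)
               for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Because the nudges only shift probabilities rather than forcing specific words, the text remains fluent, yet anyone holding the key can run a statistical test over enough tokens to tell watermarked output from ordinary writing.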
The open-sourcing of SynthID comes as governments and regulatory bodies increasingly look into making AI watermarking mandatory. California is already exploring this option, and China has required it since last year. By making the technology available to developers, Google hopes to encourage more responsible AI development and empower users to make informed decisions about their interactions with AI-generated content.