Google DeepMind has taken another big step in combining artificial intelligence and cybersecurity with the launch of CodeMender, an AI agent built to automatically find, patch, and rewrite vulnerable code before it can be exploited.
If you’ve ever worried about a missed software vulnerability turning into a costly security breach, this might be a glimpse into the future of safer code.
CodeMender isn’t just reactive — it’s proactive. According to DeepMind, the AI is designed to fix newly discovered vulnerabilities right away, while also scanning through existing codebases to strengthen them and remove entire categories of security risks.
In other words, it doesn’t just put out fires — it works to fireproof the building.
“By automatically creating and applying high-quality security patches, CodeMender helps developers and maintainers focus on what they do best — building great software,” explained DeepMind researchers Raluca Ada Popa and Four Flynn.
Over the last six months of testing, CodeMender has already contributed more than 70 security patches to various open-source projects — some with millions of lines of code.
Under the hood, CodeMender is powered by Google’s Gemini Deep Think AI models. These models help it detect, debug, and resolve security flaws by addressing the underlying causes rather than just the symptoms.
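To make the root-cause-versus-symptom distinction concrete, here is a purely illustrative sketch in C. The functions decode_vulnerable, decode_symptom_fix, and decode_root_cause_fix and the off-by-one scenario are hypothetical, not taken from any CodeMender patch or DeepMind material; they simply show the difference between masking a crash and removing the underlying flaw.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative only: a contrived decoder with a classic off-by-one flaw.
 * This is NOT code from a real CodeMender patch. */

/* Vulnerable version: the buffer holds `len` bytes, but the function
 * writes len + 1 bytes because it also appends a terminator. */
char *decode_vulnerable(const char *src, size_t len) {
    char *out = malloc(len);          /* root cause: allocation too small */
    if (!out) return NULL;
    memcpy(out, src, len);
    out[len] = '\0';                  /* out-of-bounds write */
    return out;
}

/* Symptom-level fix: clamp the write so the crash disappears,
 * but the output is silently truncated and the real bug remains. */
char *decode_symptom_fix(const char *src, size_t len) {
    char *out = malloc(len);
    if (!out) return NULL;
    memcpy(out, src, len > 0 ? len - 1 : 0);
    if (len > 0) out[len - 1] = '\0'; /* hides the overflow, loses a byte */
    return out;
}

/* Root-cause fix: size the allocation for the data plus the terminator,
 * which removes the overflow without changing the function's behavior. */
char *decode_root_cause_fix(const char *src, size_t len) {
    char *out = malloc(len + 1);      /* account for the terminator */
    if (!out) return NULL;
    memcpy(out, src, len);
    out[len] = '\0';                  /* now in bounds */
    return out;
}
```

The middle version is exactly the kind of patch that looks fine because the crash goes away, which is why the validation step described next matters: it checks that a fix actually preserves the original behavior rather than masking the defect.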
It also comes equipped with a built-in critique tool: a second AI-based check that compares the original and patched code to make sure the changes fix the flaw without introducing new problems. If something looks off, the agent self-corrects before the patch is proposed.
Partnering With the Open Source Community
Google plans to collaborate directly with maintainers of key open-source projects, offering AI-generated patches and collecting feedback to make the tool even more reliable and secure.
In parallel with CodeMender, Google has also launched an AI Vulnerability Reward Program (AI VRP). This initiative rewards security researchers who find AI-related vulnerabilities such as prompt injections, jailbreaks, or alignment issues — with payouts reaching up to $30,000.
However, not everything AI-related falls under the program. Issues such as hallucinations, factual mistakes, or copyright concerns are out of scope and don't qualify for rewards.
This release fits into Google’s broader Secure AI Framework (SAIF) strategy, which already includes a specialized AI Red Team focused on testing and protecting AI systems. The latest version of SAIF pays particular attention to “agentic” AI risks — where autonomous systems could accidentally expose data or take unintended actions.
By combining tools like CodeMender with strong security frameworks, Google hopes to shift AI from being a potential risk to a powerful ally for cybersecurity — helping defenders stay one step ahead of attackers.
In an era when threats evolve faster than ever, AI tools that can not only spot but fix vulnerabilities could change how we think about security and software development altogether.