Hackers Breach Google’s New AI Coding Tool Antigravity Post-Launch

ago 2 hours
Hackers Breach Google’s New AI Coding Tool Antigravity Post-Launch

Security researchers have raised concerns about vulnerabilities in Google’s newly released AI coding tool, Antigravity. Soon after its launch, a significant flaw was discovered that poses a serious security risk to users.

Security Flaw in Antigravity Tool

Aaron Portnoy, a security researcher, identified a critical vulnerability within 24 hours of Antigravity’s release. This flaw could allow malicious users to manipulate the AI’s configuration settings. Such manipulation could enable the installation of malware on a user’s system.

How the Hack Works

By leveraging the vulnerability, attackers can create a “backdoor” into a user’s system. This backdoor can facilitate various malicious activities, including spying or deploying ransomware. The attack method is simple; a user only needs to execute the malicious code after it is marked as “trusted.”

  • Vulnerability affects both Windows and Mac PCs.
  • Requires only a single instance of user interaction with the code.

Portnoy’s findings suggest that this attack method could be executed under restricted settings. Moreover, the compromised code persists even after quitting the Antigravity application. Restarting a project could reload the malicious code, making it hard to eliminate.

Google’s Response

Portnoy reported the vulnerability to Google, which acknowledged the issue and initiated an investigation. As of now, no patch has been provided to fix the flaw. Google encourages researchers to report vulnerabilities to assist in identifying and addressing such issues swiftly.

Additional Vulnerabilities Identified

There are at least two known vulnerabilities within the Antigravity tool that can allow malicious source code to access sensitive files. Cybersecurity experts have raised questions about the rapid deployment of AI tools lacking thorough security assessments, leading to potential exploitation by hackers.

Industry Implications

Experts like Gadi Evron, cofounder of Knostic, emphasize that AI coding agents often rely on outdated technologies prone to exploitation. The vulnerability of these systems can expose valuable corporate data to criminal activities.

Security Risks of AI Tools

As AI-powered coding tools gain traction, the potential for misuse increases. Researchers have pointed out significant issues, such as:

  • Agentic behaviors allowing autonomous actions without oversight.
  • The growing trend of sharing manipulated code disguised as legitimate software.

Portnoy’s research team is actively investigating 18 additional weaknesses across competing AI coding platforms. The urgency for more robust cybersecurity measures in AI development is clear, given the potential for these vulnerabilities to impact a wide range of users.

Conclusion

With the rapid evolution of AI technologies like Google’s Antigravity, ensuring security should be a priority. Developers must balance innovation with the imperative to address and mitigate risks before launching such tools into the market.