Google Gemini Vulnerability Leaked Calendar Data Through Malicious Invites

ago 2 hours
Google Gemini Vulnerability Leaked Calendar Data Through Malicious Invites

Recent cybersecurity research has uncovered a significant vulnerability in Google Gemini that exposes users to data leaks via Google Calendar. The issue, identified by Miggo Security’s Head of Research, Liad Eliyahu, allows unauthorized access to private calendar details through malicious invites.

Understanding the Vulnerability

This flaw leverages indirect prompt injection. Attackers create a calendar event that conceals a harmful payload within its description. When a user queries Google Gemini about their calendar, this malicious prompt activates, leading the AI chatbot to summarize and extract private meeting data.

Mechanics of the Attack

The attack begins when a threat actor sends a crafted calendar invite to a target. This invite contains a prompt designed to manipulate Gemini. When the user innocently asks about meetings, the AI processes the embedded prompt, creating a new Google Calendar event. This event can include sensitive information from the victim’s schedule.

  • The AI summarizes all meetings for a specified day.
  • A new calendar event is created with sensitive details.
  • The attacker can view this event and its data without user’s action.

Implications for AI Security

The vulnerability highlights the expanding attack surface associated with artificial intelligence tools. Although Google has addressed the issue, Eliyahu warns that vulnerabilities can now arise from language and AI behavior, not just code. Companies using AI must reevaluate their security protocols to mitigate these emerging threats.

Recent Related Discoveries

Furthermore, other security flaws have been discovered in AI systems, including:

  • Multiple vulnerabilities affecting The Librarian AI tool, allowing access to sensitive infrastructure.
  • A critical issue in Cursor enabling remote code execution through indirect prompt injection.

Recommendations for Organizations

Given the evolving threat landscape, organizations must audit their AI configurations rigorously. This includes securing identities and preventing unauthorized code injections. The recent findings serve as a critical reminder that as AI applications progress, so too must the strategies to protect them.

Conclusion

As cybersecurity threats increase with the integration of AI in workplace operations, vigilance is essential. Organizations should take proactive steps to ensure robust security measures are in place to guard against vulnerabilities like those associated with Google Gemini and other AI systems.