Key highlights:
Google has patched a critical Gmail vulnerability that allowed hackers to extract corporate data using artificial intelligence. The attack, known as GeminiJack, potentially exposed up to 2 billion Gmail users and highlighted a dangerous new class of AI-driven security threats.
Unlike traditional breaches, this attack required no phishing clicks, no malware downloads, and no visible warning signs – raising concerns that existing security models may not be prepared for the AI era.
How the GeminiJack attack bypassed traditional security
The GeminiJack attack relied on a deceptively simple technique. Hackers embedded hidden instructions inside ordinary Google Docs, calendar invitations, or emails. These documents appeared harmless to human readers and triggered no alerts from conventional security systems.
According to Noma, the cybersecurity firm that discovered the flaw, the mechanism exploited how AI systems interpret information. “If an attacker can influence what AI reads, they can influence what AI does,” the company explained.
When employees used Gemini Enterprise to perform routine searches, such as requesting budget data, the AI automatically retrieved the relevant document. If that document contained hidden malicious prompts, the AI would unknowingly execute them, exposing sensitive corporate information in the process.
Why is prompt injection so dangerous
Security experts classify GeminiJack as a prompt injection attack, a growing threat unique to AI-powered systems. Unlike phishing or malware, prompt injection hides malicious instructions inside content designed specifically for AI assistants rather than human users.
If employees rely on AI tools to read emails, documents, or invitations on their behalf, attackers can exploit that trust by planting commands that only the AI will interpret. The result is a silent attack that bypasses many of today’s defenses.
Why security experts are alarmed
The UK’s National Cyber Security Centre (NCSC) has warned organizations not to treat AI assistants as inherently trusted systems. Instead, they should be viewed like human employees – with limited access and constant oversight.
If an organization would not give a human assistant unrestricted access to emails, passwords, and financial data, the same caution should apply to AI.
A flow chart that depicts the User Alignment Critic: a trusted component that vets each action before it reaches the browser. Source: Google
Although Google has fixed the specific GeminiJack vulnerability, experts caution that this incident may represent only the beginning. Government agencies fear that AI-driven attacks could become more damaging than previous cyber threats, as AI tools gain deeper access to sensitive systems.
Concerns about AI access are not new. When Google previously faced scrutiny over AI and Gmail data, the core issue was not whether the system was being trained on emails, but that it could access them at all. While Google denies using Gmail data for training, enabling AI features still allows systems to read user content.
Noma warns that similar attacks are inevitable as AI tools expand across Gmail, Google Docs, and Calendar. Artificial intelligence, once seen primarily as a productivity tool, is now emerging as a new layer of access – and a new point of failure.
Ultimately, protecting data in the AI era will depend less on patching individual vulnerabilities and more on carefully managing how much access AI systems are granted. For many organizations, the challenge is no longer preventing break-ins, but deciding how much trust to place in the machines already inside.
Kraken: Best crypto exchange for security & reliability
- Buy, sell, and trade 400+ cryptocurrencies with industry-leading security
- Spot, Futures & Margin trading – leverage up to 5x for advanced traders
- Earn rewards with staking on top cryptocurrencies
- 24/7 customer support and high liquidity for fast trades
- Regulated in the US with strong compliance and security measures
- 13+ million users worldwide
Disclaimer: This content is for informational purposes only and does not constitute financial, investment, or other advice. Nothing on this page is a recommendation or solicitation. Always seek independent professional advice before making investment decisions. Some links may earn us a commission at no extra cost to you.
Source:: A Gmail Flaw, 2 Billion Users, and a New Kind of AI Attack
