Google Gemini for Workspace Exposed to Prompt Injection Vulnerabilities

Current research has displayed that Google’s Gemini for Workspace, a universal AI assistant integrated across different Google products, is easy to indirectly prompt injection attacks. These exposures allow hostile third parties to use the assistant to produce deceptive or involuntary reactions, raising serious concerns about the reliability and trustworthiness of the data generated by this chatbot.

Gemini for Workspace is created to boost productivity by incorporating AI-powered tools into Google products such as Gmail, Google Slides, and Google Drive. However, Hidden Layer investigators have shown through detailed proof-of-concept examples that detractors can manipulate indirect prompt injection exposures to compromise the integrity of the reactions induced by the target Gemini instance. One of the most concerning elements of these susceptibilities is the capacity to perform phishing attacks.

For example, assaulters can make hostile emails that, when processed by Gemini for Workspace, prompt the associate to display deceptive notices, such as fake alerts about compromised passwords and instructions to visit negative websites to reset passwords. Similarly, investigators have demonstrated that these vulnerabilities extend beyond Gmail to other Google products. For instance, in Google Slides, detractors can infiltrate hostile payloads into speaker notes, causing Gemini for Workspace to generate resumes that include unintended content, such as the lyrics to a renowned song.

The research also indicated that Gemini for Workspace in Google Drive behaves similarly to a typical RAG (Retrieve, Augment, Generate) example, permitting assaulters to cross-inject documents and use the associate’s outputs. This means that assailants can share hostile documents with other users, compromising the integrity of the reactions induced by the target Gemini instance.

Despite these results, Google has organized these exposures as “Intended Behaviors,” meaning that the company does not view them as safety problems. Nevertheless, the importance of these exposures is important, specifically in sensitive contexts where the reliability and trustworthiness of data are essential.

The discovery of these exposures emphasizes the significance of being attentive when using LLM-powered tools. Users must be mindful of the possible hazards associated with these tools and take the required precautions to defend themselves from hostile attacks. As Google resumes rolling out Gemini for Workspace to users, it is essential that the company addresses these vulnerabilities to ensure the probity and trustworthiness of the data induced by this chatbot.