Your questions answered

For Gemini in Google Workspace, Google ensures that your content is not reviewed by humans or used to train generative AI models outside your domain without your consent; interactions remain within your organization. For the Gemini API (for developers), data (prompts, context, and responses) is retained for 55 days to monitor for abuse, but is not used to train or tune AI/ML models.

Gemini AI is designed to comply with key industry standards and regulations. It can support workloads requiring HIPAA compliance, provided an appropriate Business Associate Agreement (BAA) is in place and properly implemented. Furthermore, Google holds a number of recognized certifications and attestations, including SOC 1/2/3, ISO/IEC 27017 and 27018, ISO/IEC 42001 (for AI management systems), FedRAMP High (US), and BSI C5 (Germany), confirming a high level of security and compliance across jurisdictions and sectors.

Official information and detailed documentation on Gemini enterprise security is spread across several Google resources. Start with the Google Cloud Vertex AI documentation, especially the sections on generative AI and configurable safety filters. Extensive information on generative AI privacy in Workspace is available in the Google Workspace Privacy Center. The Google Cloud and Google AI blogs also publish regular updates and articles on AI advancements and security. More technical topics, such as defenses against attacks, are often covered in Google DeepMind research publications.
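The configurable safety filters mentioned above work by comparing a per-category harm score for each response against a block threshold you choose. Below is a minimal plain-Python sketch of that decision logic; the category and threshold names mirror those used by Vertex AI, but this is an illustration of the idea, not the actual SDK or its scoring behavior:

```python
# Illustrative sketch only: mimics the shape of configurable safety filters,
# not the real Vertex AI API. Names below mirror the documented enums.

# Block thresholds, from most to least permissive. BLOCK_NONE (4) can never
# be reached by a severity score, so nothing is blocked in that category.
THRESHOLDS = {
    "BLOCK_LOW_AND_ABOVE": 1,
    "BLOCK_MEDIUM_AND_ABOVE": 2,
    "BLOCK_ONLY_HIGH": 3,
    "BLOCK_NONE": 4,
}

# Severity levels a classifier might assign per harm category.
SEVERITY = {"NEGLIGIBLE": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3}

def is_blocked(scores: dict, settings: dict) -> bool:
    """Return True if any category's severity meets its configured threshold."""
    for category, severity in scores.items():
        # Hypothetical default: block medium-severity content and above.
        threshold = settings.get(category, "BLOCK_MEDIUM_AND_ABOVE")
        if SEVERITY[severity] >= THRESHOLDS[threshold]:
            return True
    return False

settings = {
    "HARM_CATEGORY_HATE_SPEECH": "BLOCK_LOW_AND_ABOVE",
    "HARM_CATEGORY_DANGEROUS_CONTENT": "BLOCK_ONLY_HIGH",
}
print(is_blocked({"HARM_CATEGORY_HATE_SPEECH": "LOW"}, settings))        # True
print(is_blocked({"HARM_CATEGORY_DANGEROUS_CONTENT": "MEDIUM"}, settings))  # False
```

Tightening a threshold (e.g. BLOCK_LOW_AND_ABOVE) blocks more content in that category, while BLOCK_ONLY_HIGH or BLOCK_NONE lets more through; the real filters apply this per-category trade-off to both prompts and responses.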