As businesses increasingly adopt generative AI tools like Microsoft’s Azure OpenAI Service and Copilot offerings, the tech giant has outlined how it aims to protect customer data privacy in these AI solutions. In a blog post published on March 28, Microsoft assured customer organizations using Azure OpenAI Service and Copilot that their privacy remains protected under its existing privacy policies and contractual commitments. It says that:
- The organization’s data is not used to train OpenAI’s models or other foundation models unless the organization permits it. Customer data used in Microsoft’s generative AI solutions, including Azure OpenAI Service and the Copilot services, is not made available for AI training, a practice that data privacy experts from the Mozilla Foundation have contested in the past. Critically, Microsoft clearly states in this blog that it does not share customer data with third parties like OpenAI without permission, nor use this data to train OpenAI’s foundation models.
- If organizations use their data to fine-tune AI models, any resulting fine-tuned solutions will only be available to them, not shared externally.
- Access controls and enterprise data permissions continue to apply to generative AI outputs to restrict access internally.
- Data like prompts and responses generated through Azure OpenAI Service and Copilot remain private to the customer organization.
Protection from copyright infringement
The blog further explains how organizations using Azure OpenAI and Microsoft Copilot services are protected from third-party copyright lawsuits through the 2023 Customer Copyright Commitment. Microsoft says it will defend customers using Azure OpenAI and Copilot and pay any resulting adverse judgments or settlements from such copyright infringement lawsuits, “as long as the customer has used the guardrails and content filters” available in the products.
Protection of sensitive data
In the blog post, Microsoft Chief Privacy Officer Julie Brill wrote that Microsoft aims to protect customer data through Microsoft Purview, which lets its corporate customers “discover risks” associated with AI usage, including sensitive prompts.
Customers using Azure OpenAI and Copilot can protect their “sensitive data with sensitivity labels and classifications.” Copilot only summarizes content that users are already permitted to access, the blog post revealed.
“When sensitive data is included in a Copilot prompt, the Copilot-generated output automatically inherits the label from the reference file. Similarly, if a user asks Copilot to create new content based on a labelled document, the Copilot generated output automatically inherits the sensitivity label along with all its protection, like data loss prevention policies”, the Chief Privacy Officer explained in the blog post.
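The inheritance rule Brill describes can be pictured as a simple policy: generated output carries forward the most restrictive label among the documents it drew from. The sketch below is purely illustrative, assuming a hypothetical data model (`SensitivityLabel`, `Document`, `label_for_output`) invented for this example; it is not the Microsoft Purview API.

```python
# Illustrative sketch of sensitivity-label inheritance.
# All names here are hypothetical, not the Microsoft Purview API.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SensitivityLabel:
    name: str
    priority: int  # higher number = more restrictive label

@dataclass
class Document:
    content: str
    label: Optional[SensitivityLabel] = None

def label_for_output(sources: list) -> Optional[SensitivityLabel]:
    """Output inherits the most restrictive label among its source documents."""
    labels = [doc.label for doc in sources if doc.label is not None]
    return max(labels, key=lambda lbl: lbl.priority, default=None)

confidential = SensitivityLabel("Confidential", 3)
public = SensitivityLabel("Public", 1)

sources = [Document("Q3 revenue figures", confidential),
           Document("Press release draft", public)]
inherited = label_for_output(sources)
print(inherited.name)  # Confidential
```

Under this policy, a summary drawing on one Confidential file and one Public file is itself labeled Confidential, so downstream protections such as data loss prevention policies keyed to that label would still apply to the generated text.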