Here’s Why Congress Banned Microsoft’s Copilot AI – SlashGear

“The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services,” said the office of Catherine Szpindor, the House’s Chief Administrative Officer. Data leaks have dogged generative AI tools ever since OpenAI’s ChatGPT arrived. In May last year, The Wall Street Journal reported that Apple had warned employees against using tools like ChatGPT and GitHub’s Copilot coding assistant. After uncovering internal data leaks, Samsung likewise barred staffers from using ChatGPT.

A few months later, in June, researchers at San Francisco-based Robust Intelligence demonstrated how Nvidia’s NeMo Framework AI software could be tricked into revealing private information and bypassing its safety guardrails. The U.S. Federal Trade Commission also opened an investigation into whether ChatGPT put consumers’ data security at risk. In November last year, researchers at Northwestern University uncovered methods to trick custom GPTs into revealing confidential information.

A month later, leaked documents obtained by Platformer revealed both hallucination problems with Amazon’s Q chatbot and that it had leaked confidential data, including the location of data centers, unreleased features, and discount programs. In another incident, ChatGPT exposed personal details such as users’ banking information, forcing OpenAI to briefly take the service offline. With such a shaky track record, it’s no surprise that Microsoft’s Copilot has been banned from official machines, even though Congressional staffers can’t be stopped from using these tools on their personal devices.
