Microsoft's Copilot and OpenAI's ChatGPT are causing concerns about workplace security.

Worries about AI in the workplace are growing.

June 5, 2024

The use of artificial intelligence in the workplace is rapidly increasing, with advancements in generative AI tools such as OpenAI's ChatGPT and Microsoft's Copilot. However, along with these developments come concerns about privacy and security issues.

A recent report by Wired revealed that a new Microsoft tool called Recall has been labeled a potential "privacy nightmare" because it takes screenshots of a user's laptop every few seconds. The news has caught the attention of the UK regulator, the Information Commissioner's Office, which has asked the technology giant for more details about the product's safety before it ships on Microsoft's Copilot+ PCs.

OpenAI's ChatGPT has also raised eyebrows: the company has demonstrated a screenshot feature in its upcoming macOS app. Privacy experts have expressed concern that sensitive data could be captured, particularly in a workplace setting.

Cam Woollven, group head of AI at risk management firm GRC International Group, said that many generative AI systems act as "big sponges". "They soak up huge amounts of information from the internet to train their language models," Woollven explained, warning that this creates a risk of sensitive data being inadvertently exposed.

Furthermore, AI companies are constantly seeking more data to train their models, which makes it attractive for them to collect as much information as possible. This raises concerns about sensitive information ending up in the wrong hands.

Not only is there a risk of sensitive data being exposed, but AI systems are also vulnerable to hacking. As Woollven pointed out, if an attacker gains access to the large language model powering a company's AI tools, they could potentially steal data, manipulate outputs, or even spread malware.

While these risks may seem daunting, both businesses and individual employees can take steps to protect themselves. One crucial measure is to avoid feeding sensitive information into these platforms. Lisa Avvocato, vice president of marketing and community at data firm Sama, advises against using the tools for confidential material. Instead, she suggests keeping prompts generic: ask for a proposal template for budget expenditure, for example, rather than supplying the actual budget details of a sensitive project. That way, the AI produces a first draft and the sensitive information is added afterwards.

In light of these concerns, the House of Representatives has banned the use of generative AI platforms like Microsoft's Copilot among its staff members. The decision was made by the Office of Cybersecurity, citing the risk of House data being leaked to unapproved cloud services. As we continue to explore the capabilities of AI, it is crucial to prioritize privacy and security to prevent potential risks.

