Microsoft error sees confidential emails exposed to AI tool Copilot

February 20, 2026   02:59 pm

Microsoft has acknowledged an error that caused its AI work assistant to access and summarise some users’ confidential emails.

The tech giant has pushed Microsoft 365 Copilot Chat as a secure way for workplaces and their staff to use its generative AI chatbot.

But it said a recent issue caused the tool to surface information to some enterprise users from messages stored in their drafts and sent email folders - including those marked as confidential.

Microsoft says it has rolled out an update to fix the issue, and that it “did not provide anyone access to information they weren’t already authorised to see”.

However, some experts warned the speed at which companies compete to add new AI features meant these kinds of mistakes were inevitable.

Copilot Chat can be used within Microsoft programs such as Outlook and Teams - which handle email and chat - to get answers to questions or summarise messages.

“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop,” a Microsoft spokesperson told BBC News.

“While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access,” they added.

“A configuration update has been deployed worldwide for enterprise customers.”

The blunder was first reported by tech news outlet Bleeping Computer, which said it had seen a service alert confirming the issue.

It cited a Microsoft notice saying “users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat”.

The notice added that a work tab within Copilot Chat had summarised email messages stored in a user’s drafts and sent folders, even when they had a sensitivity label and a data loss prevention policy configured to prevent unauthorised data sharing.

Reports suggest Microsoft first became aware of the error in January.

Its notice about the bug was also shared on a support dashboard for NHS workers in England - where the root cause is attributed to a “code issue”.

A section of the notice on the NHS IT support site implies the health service has been affected.

But the NHS told BBC News the contents of any draft or sent emails processed by Copilot Chat would remain with their creators, and that patient information had not been exposed.

‘Data leakage will happen’

Enterprise AI tools such as Microsoft 365 Copilot Chat - available to organisations with a Microsoft 365 subscription - often have stricter controls and security protections in place to prevent sharing of sensitive corporate information.

But for some experts, the issue still highlights risks of adopting generative AI tools in certain work environments.

Nader Henein, data protection and AI governance analyst at Gartner, said “this sort of fumble is unavoidable”, given the frequency of “new and novel AI capabilities” being released.

He told BBC News that organisations using these AI products often lack the tools needed to protect themselves and manage each new feature.

“Under normal circumstances, organisations would simply switch off the feature and wait till governance caught up,” Henein said.

“Unfortunately the amount of pressure caused by the torrent of unsubstantiated AI hype makes that near-impossible,” he added.

Cyber-security expert Professor Alan Woodward of the University of Surrey said it showed the importance of making such tools private-by-default and opt-in only.

“There will inevitably be bugs in these tools, not least as they advance at break-neck speed, so even though data leakage may not be intentional it will happen,” he told BBC News.

Source: BBC

--Agencies
