Samsung employees accidentally leaked confidential information to ChatGPT, the chatbot developed by OpenAI, while using it for work. The company’s semiconductor division had allowed engineers to use ChatGPT to check source code, but three separate incidents followed in which sensitive information was unintentionally shared with the chatbot: one employee pasted confidential source code into the chat, another shared code with ChatGPT and asked it to optimize it, and a third uploaded a recording of an internal meeting to have it converted into notes for a presentation. (Source)

This leak is a real-life example of the privacy risks that experts have long warned about. Sharing confidential legal documents or medical information with a chatbot for summarization or analysis could breach GDPR. In fact, Italy has already banned ChatGPT over these concerns.
Samsung has taken steps to address the issue: it has limited ChatGPT upload capacity to 1024 bytes per person, launched an investigation into the employees involved in the leak, and is considering building its own internal AI chatbot to prevent similar incidents. However, the leaked data is unlikely to be recovered, since ChatGPT’s data policy allows conversations to be used to train its models unless users request to opt out. ChatGPT also explicitly warns users not to share sensitive information in conversations.
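As a rough illustration of what a per-prompt size cap like this might look like, here is a minimal Python sketch of a client-side guard that rejects prompts larger than 1024 bytes before they are sent to any external service. The function names and the error type are hypothetical; this is not Samsung’s actual control or any vendor’s API.

```python
# Hypothetical client-side guard: block prompts over a fixed byte budget
# before they leave the corporate network. Illustrative only; the limit
# mirrors the 1024-byte cap reported in the article.

MAX_PROMPT_BYTES = 1024


class PromptTooLargeError(Exception):
    """Raised when a prompt exceeds the allowed upload size."""


def check_prompt_size(prompt: str, limit: int = MAX_PROMPT_BYTES) -> str:
    """Return the prompt unchanged if it fits within the byte budget."""
    size = len(prompt.encode("utf-8"))
    if size > limit:
        raise PromptTooLargeError(
            f"Prompt is {size} bytes; limit is {limit} bytes."
        )
    return prompt


if __name__ == "__main__":
    try:
        check_prompt_size("Summarize this short snippet.")
        print("Prompt accepted.")
        check_prompt_size("x" * 2000)  # simulated oversized paste
    except PromptTooLargeError as err:
        print(f"Blocked: {err}")
```

A size cap like this is a blunt instrument, of course: it limits how much can leak in a single prompt, but it does not detect whether the content itself is confidential.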
This incident is a cautionary tale for anyone using chatbots such as ChatGPT: be careful about what information you share. Samsung has learned this lesson the hard way, and others should take note.