Privacy Implications and Data Security in the Age of AI: Lessons from Samsung’s Data Leaks
Recently, reports surfaced alleging that Samsung employees inadvertently leaked sensitive company data on three separate occasions. The first incident involved an employee who copied source code from a flawed semiconductor database into ChatGPT while seeking help identifying a fix. In the second, an employee shared confidential code while trying to troubleshoot defective equipment, and in the most recent, an employee submitted an entire meeting’s discussion to the chatbot and asked it to generate the minutes.
OpenAI’s frequently asked questions (FAQ) section clearly advises users not to disclose sensitive information in conversations with ChatGPT. For these employees, however, ChatGPT’s practical value evidently outweighed the associated security concerns.
Samsung reportedly had a policy in place banning the use of generative artificial intelligence (AI) tools on its premises, yet the leaks occurred within weeks of that ban being lifted. These incidents underscore the persistent risk of employee negligence in data security and data handling, a problem unlikely to disappear in the foreseeable future.
Privacy Implications of AI
The privacy implications of a data security lapse are a critical consideration when using AI, because most generative AI services, including ChatGPT, do not guarantee data privacy.
So while AI offers tremendous potential in fields such as data analysis and decision-making for better business outcomes, the trade-off is that sensitive information often has to be shared to extract those benefits, raising concerns about its protection.
This becomes particularly concerning for businesses that have contractual obligations to ensure privacy and confidentiality for their clients. When businesses share client, customer, or partner information with an AI, there is a risk that the AI may utilize such information in ways that businesses cannot reliably predict.
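One practical safeguard, sketched below, is to redact obvious personal identifiers before any text leaves the company’s systems. This is a minimal illustration only: the PII_PATTERNS table and the redact_pii helper are hypothetical, and a real deployment would need far more robust detection, such as a dedicated data-loss-prevention tool.

```python
import re

# Hypothetical patterns for two common identifiers; real systems
# would cover many more (names, IDs, addresses, account numbers).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +1 (555) 010-9999."
print(redact_pii(prompt))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```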
Interestingly, OpenAI’s Terms of Use read: “You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services.”
Let’s consider a hypothetical example to illustrate the implications of the terms. Imagine that a company, XYZ Corp, decides to use OpenAI’s services and implements a chatbot to interact with its customers. The chatbot collects and processes customer data, including personal information, to provide personalized recommendations and assistance.
Now, suppose that due to a programming error or an employee’s mistake, the chatbot unintentionally discloses sensitive customer information to unauthorized individuals. As a result, several customers file legal claims against XYZ Corp, seeking compensation for the privacy breach and any damages they may have suffered.
In this scenario, the indemnification clause means that if the affected customers also bring claims against OpenAI over the breach, XYZ Corp would have to defend OpenAI and bear the associated legal costs, along with any damages awarded to the customers, in addition to defending itself.
Understanding Corporate Liability
With time, we will get a clearer picture of the ramifications, if any, of generative AI programs. At present, however, businesses without a clear privacy-centered strategy remain exposed to financial and reputational risk.
A possible step to keep careless employees from spilling sensitive data into an AI is for companies to limit what employees can put in a prompt. For instance, when Samsung learnt of the leaks, it tried to control the damage by putting in place an “emergency measure” limiting each employee’s ChatGPT prompt to 1024 bytes.
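A cap like that is straightforward to enforce at an internal gateway that sits between employees and the AI service. The following is a minimal sketch of the idea; the forward_to_chatbot relay function is a hypothetical stand-in for the actual call to the service, and the 1024-byte figure simply mirrors Samsung’s reported limit.

```python
MAX_PROMPT_BYTES = 1024  # mirrors Samsung's reported emergency limit

def forward_to_chatbot(prompt: str) -> str:
    """Hypothetical placeholder for the real call to the AI service."""
    return f"(sent {len(prompt)} characters to the chatbot)"

def submit_prompt(prompt: str) -> str:
    # Measure the encoded size: multi-byte characters count toward
    # the byte limit even when the character count is smaller.
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise ValueError(f"Prompt is {size} bytes; the limit is {MAX_PROMPT_BYTES}.")
    return forward_to_chatbot(prompt)

print(submit_prompt("Summarize this meeting agenda."))  # small prompt passes
# submit_prompt("x" * 2000) would raise ValueError
```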
Other companies have gone a step further and banned chatbot AIs outright. Instead of blanket bans, though, organizations could limit which employees have access to chatbots. This approach allows businesses to strike a balance between leveraging the benefits of chatbot technology and safeguarding data privacy.
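One way such gating might work is a simple role allowlist checked before a chatbot session is opened. The sketch below is illustrative only: the role names and the has_chatbot_access helper are assumptions, and a real deployment would pull roles from the organization’s identity provider.

```python
# Hypothetical allowlist of roles cleared to use the chatbot.
APPROVED_ROLES = {"security-trained-engineer", "legal-reviewed-analyst"}

def has_chatbot_access(user_roles: set[str]) -> bool:
    """Grant access only if the user holds at least one approved role."""
    return bool(user_roles & APPROVED_ROLES)

print(has_chatbot_access({"security-trained-engineer"}))  # True
print(has_chatbot_access({"intern"}))                     # False
```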
The Human Factor
It is essential to approach the integration of generative AI with the same level of preparedness as the introduction of any new software in the company. Before granting employees access to chatbots and allowing them to input data, it is crucial to establish clear expectations and provide comprehensive training. That training should cover essential topics such as data privacy, confidentiality, and the inherent risks of generative AI tools.

The importance of regular training, extending beyond AI to the broader realm of data protection, cannot be overstated. By conducting periodic sessions, businesses can ensure that employees stay current on best practices and security protocols. This ongoing investment equips employees with the knowledge and skills to handle sensitive information responsibly, reducing the likelihood of accidental data breaches.