
Summary:
Pasting confidential material into AI tools can trigger privacy problems for businesses and create discoverable records that can cause legal headaches later on. A written AI policy is important for defining which tools are acceptable to use in the workplace and how to use them safely.
AI chatbots are now quite powerful and very easy to use, even if they aren't always accurate. This has led many business owners to incorporate them into their operations, but doing so can carry risk if the company doesn't take adequate measures to protect the privacy of its data. Draft contracts, payroll records, customer complaints, pricing sheets, legal concerns, business strategy notes… feeding this type of information into an AI might provide useful answers to whatever questions are on your mind at the moment. Unfortunately, it can also put your data at risk of being stored and used in ways that are adverse to your business.
Who Is Using Whom?
Most AI models store all the data input by users, ostensibly for the purposes of training the company’s AI products to improve their future performance. Depending on the terms of service, this data can be shared with third-party vendors, government and law-enforcement agencies, and data-mining researchers. There is a small but growing number of AI products that are designed with security measures that prevent them from storing and sharing data in this way, but the free tools that most consumers use don’t typically offer that kind of protection.
Businesses Are Responsible for Keeping Their Data Secure
California has many laws intended to protect consumers by giving them more control over who has their data and how it is used; however, these laws do not protect businesses that voluntarily disclose their data by running it through AI. Public AI platforms are treated as third parties, so running data or searches containing private information through these types of AI models may amount to third-party disclosure. Consequently, information that might normally be protected as a trade secret may lose that protection if it is input into certain AI platforms, since voluntarily sharing it undermines the company's obligation to make reasonable efforts to maintain its secrecy. Similarly, sharing information with an AI may waive the attorney-client privilege that would otherwise protect it, and that information can be subject to the rules of discovery like other digital business records.
Pick Your AI Partners Carefully
Paying for an AI service can feel unpalatable when so much is available for free, but the lack of security controls in public AI models can make "free" an expensive proposition. Whichever AI you choose, vet the terms of its data processing agreement carefully so that you understand how your data will be managed and can confirm that it is adequately protected.
Put Guardrails in Writing
A written AI policy gives employees usable rules before bad habits form. It should name approved tools and list the types of information that must not be shared with AI unless the model is secure enough to handle sensitive data (e.g., personal information, privileged communications, trade secrets, pricing models, source code, and so on). It should also require human review of output to ensure accuracy, set retention rules, and route new AI tools through leadership, IT, and counsel for review prior to adoption.
Protect Growth Before the Prompt Is Sent
The technology may be very new, but the issues outlined above are surely familiar to any seasoned business owner – the need to choose vendors carefully, understand contractual terms thoroughly, and provide workplace guidance to employees to keep the business operating smoothly and safely. Integrated General Counsel, P.C. helps companies come to grips with their contracts, employment policies, corporate management, intellectual property, and legal disputes. We can help you implement a practical AI policy too. Contact Integrated General Counsel, P.C. at (925) 399-1529 for a free consultation.
FAQ: Businesses Using AI Chats
Are AI chats discoverable in California litigation?
They may be. California discovery rules cover electronically stored information in a party’s possession, custody, or control, and civil subpoenas may seek electronically stored information as well.
What should employees keep out of AI prompts?
High-risk categories include customer personal information, employee records, contract drafts, pricing, product roadmaps, privileged legal communications, internal investigation facts, and trade secrets. A written policy should list those categories with specific examples so employees aren’t left to guess what is sensitive and what isn’t.
What should an AI policy cover first?
Start with approved tools, blocked data categories, human review of output, retention rules, and an internal approval path for new AI vendors or sensitive use cases.


