
Mar 28, 2024

Written By: Jeremy L. Kahn, Principal at Berman Fink Van Horn

In April 2023, a series of inadvertent leaks of sensitive information by Samsung employees highlighted the risks of using generative AI. One Samsung engineer input confidential source code and asked ChatGPT to check it for errors. A second employee shared additional code with ChatGPT and requested “code optimization.” A third created minutes of a confidential company meeting by feeding a word-for-word transcript into ChatGPT. In each instance, the employee inadvertently disclosed Samsung’s confidential information to the public.

How could Samsung have prevented this? With a well-written AI policy and employee training specifically on the acceptable uses of generative AI.

It doesn’t matter what industry you are in; what happened at Samsung could happen at your business. Generative AI programs such as ChatGPT, Google Gemini, Microsoft Copilot, and others are changing the way people do business across almost every industry. As just a few examples:

  • Retailers can use it to analyze customer data and preferences and provide more personalized shopping experiences.
  • Medical professionals can use generative AI to assist in medical image analysis and interpretation.
  • Manufacturers can use it in quality control and defect detection.
  • Musicians can generate song lyrics.
  • Financial investors can use AI to analyze historical market data and identify trends.
  • Those in tech can use it to optimize or even create code.
  • And I may know a fellow attorney or two who have used it to draft a demand letter or cross-examination.

The list goes on and on. Because its use cases seem infinite, it is hard to think of any industry that generative AI will not affect.

So why does your business need an AI policy?

If you are subject to any privacy laws, you need an AI policy. Privacy laws affect a vast array of businesses. Medical professionals are regulated by HIPAA. Financial firms are regulated by the Gramm-Leach-Bliley Act. Lawyers are regulated by their states’ Rules of Professional Conduct, which in every state require lawyers to protect their clients’ confidential information. Companies doing business in California are subject to the California Consumer Privacy Act (CCPA). And companies doing business in Europe or with European customers are subject to the GDPR.

If your business is legally obligated to protect its customers’ private information, then you need an AI policy. That is because information inputted into an AI chatbot is no longer private. First, any information inputted is disclosed to the company that owns the generative AI system, such as OpenAI for ChatGPT or Google for Gemini. Second, many generative AI systems continue to “learn” based on the information that users input. That means that information one user inputs as a prompt can make its way into the answer generated to another user—without the first user ever knowing about it.

So, for example, a doctor who uses ChatGPT to assist in forming a diagnosis would likely violate HIPAA—and be subject to significant liability and penalties—if the doctor inputted her patient’s protected health information. So might a psychologist who asked Microsoft Copilot to summarize her therapy notes. An underwriter at a bank or other lending institution might run afoul of the Gramm-Leach-Bliley Act if he inputted a potential borrower’s personal information into an AI chatbot to assist in underwriting the loan.

Without training employees on a clear AI acceptable-use policy, businesses risk their employees violating privacy laws when using generative AI, even when the employee is earnestly trying to help the company. It may not occur to employees that information inputted into a non-human chatbot is not secure, that inputting it may violate privacy laws, or that it may be disclosed to other users worldwide. It is not enough for a business to have a general policy regarding the privacy laws that affect its industry (e.g., a medical office with a HIPAA policy). Businesses should revise such general policies to expressly incorporate rules regarding the use of AI, or draft a separate AI policy, so employees are on notice.

If you have a publicly posted privacy policy, you need an AI policy. “But my business is not subject to any privacy laws,” you say. You still need an AI policy if your business has a publicly posted privacy policy, and many businesses do: simply scroll to the bottom of almost any company’s website and there will be a link to one. That matters because the Federal Trade Commission, which is empowered under the Federal Trade Commission Act to sue businesses that engage in “unfair or deceptive acts or practices,” has taken the position that a business’s violation of its own posted privacy policy is an unfair or deceptive act or practice. The logic makes sense. A business posting its privacy policy is telling customers how it will or will not use their personal information. It would be unfair or deceptive for the business to then turn around and use a customer’s personal information in a contrary way. So even if your business is not subject to a specific privacy law, you need to make sure your employees are not using generative AI in a way that conflicts with your own privacy policy. The way to do that is to draft, and then train employees on, an AI acceptable-use policy.

If your company has trade secrets or other confidential information, you need an AI policy. Besides the privacy of their customers’ information, many companies have their own confidential information that is valuable and competitively sensitive. Companies with trade secrets or other confidential information likely have policies or employee contracts that govern their employees’ use of such information. But it may not be clear to employees that inputting that information into ChatGPT is a disclosure. After all, it was not clear to the three Samsung employees discussed above, even though they seemingly were trying to use ChatGPT for Samsung’s benefit. Additionally, once a trade secret is disclosed, a business can’t un-ring the bell. A court may very well find that the information is no longer a trade secret entitled to the protections and remedies that designation affords. It is therefore critical for businesses with confidential information, especially trade secrets, to adopt an AI policy that protects that information from disclosure.

If accuracy is important to your business, you need an AI policy. Almost any business wants to be accurate. While generative AI systems are impressive at speedily providing information and content, they are far from infallible. In fact, it is not uncommon for them to “hallucinate,” meaning they may generate a response to a query that seems plausible but is factually incorrect or inapposite to the context. Worse, it is not always obvious when this occurs.

Generative AI programs have generated citations to fake news articles where the newspaper is real, but the article does not exist. Lawyers have gotten in trouble for having ChatGPT draft legal briefs that cited fake, non-existent cases. Even the technology news website CNET was embarrassed into issuing corrections to dozens of articles written using ChatGPT after multiple “facts” in the articles proved inaccurate.

Publishing inaccurate information can have negative consequences, from embarrassment to legal liability. Publishing inaccurate information about a person can lead to a defamation lawsuit. An inaccurate advertisement can lead to liability for misrepresentations to consumers. And putting aside the legal ramifications, no company wants to lose the trust of its customers.

Because AI can generate inaccurate responses, businesses need policies that make employees aware of the risks of inaccurate information and that ensure factual information is verified rather than blindly relied upon.

If you hire employees, you need an AI policy. Unless you’re the sole employee of your own business, your business hires employees. Generative AI can be helpful in developing job descriptions, sorting résumés, and identifying promising candidates. But again, an AI policy is necessary to direct the appropriate uses of AI in hiring decisions. A generative AI program is only as good as the data it is trained on, and AI programs have been found to exhibit biases, usually not deliberately. For example, if an AI program sees that all or most of a company’s employees are white males, it may inadvertently reinforce the selection of white males as likely successful candidates. Bias can even arise from attempts to correct for it: Google’s efforts to avoid bias and incorporate diversity principles into Gemini resulted in an embarrassing overcorrection, where prompts for pictures of historical popes returned images of Black women and Native Americans as popes. The bottom line is that even inadvertent biases inherent in an AI program’s model may lead to biased results.

That has particular ramifications in the employment context. Businesses risk being sued for discrimination if their employment decisions are biased. Some jurisdictions, such as New York City, have passed laws that prohibit the use of AI in hiring decisions unless the AI system has undergone a bias audit. Once again, businesses can avoid trouble by developing and training their employees on an AI policy.

If your business generates any content, you need an AI policy. Most businesses generate content in at least some way. The risks of inaccuracies are discussed above. But there are also risks of unknowingly committing copyright infringement, or even losing copyright protections for your own works.

The data that generative AI systems are trained on includes numerous copyrighted works, such as books, songs, films, and other such content. A generative AI system could create a response that includes copyrighted material without the user even knowing it. A user who publishes that content could then be sued for copyright infringement. Indeed, many authors have sued OpenAI (the creator and owner of ChatGPT) for copyright infringement, and it is possible that users themselves could be sued as well.

On the other side of the coin, content creators could risk losing copyright protection for their own works. Let’s say an editor inputs an author’s manuscript into an AI program to check for grammar or change the tone. That manuscript is now part of the data the AI program is trained on and could be incorporated into responses to other users. Now suppose another user asks the same AI program to revise a short story she wrote, and the program adds a character from the original author’s manuscript. Which work gets copyright protection may depend on who submits to the Copyright Office first. (Note that there is also a host of issues regarding how much AI use is too much for the Copyright Office to grant copyright protection, but that could be a whole other article.)

Issues like these underscore the need for content-creating businesses to have an AI policy that advises employees about the appropriate and inappropriate uses of AI.


Generative AI is a revolutionary tool with the power to increase your business’s productivity and efficiency. But as with any shiny new technology, there are also significant risks. Companies or their employees who misuse AI risk a Samsung-esque disclosure of confidential information, fines for violating their customers’ privacy rights, or lawsuits for copyright infringement. As with the helpful use cases, the parade of potential horribles also goes on and on. It is therefore critical for businesses to implement an AI acceptable-use policy and train their employees on the appropriate and inappropriate uses of AI.

Jeremy L. Kahn is a thoughtful and strategic litigator, with a creative approach. He enjoys crafting strategies to resolve difficult and legally challenging problems, always seeking to achieve his clients’ desired results in an efficient manner.

As a Principal at Berman Fink Van Horn, Jeremy advises a broad range of clients in business disputes, including in trial and appellate courts. His complex commercial litigation and appellate practice focuses on business disputes, including contractual matters and business torts, insurance recovery, fraud, deceptive trade practices, partnership and shareholder matters, banking and consumer finance cases, real estate, and trade secret and noncompete litigation. Jeremy handles his clients’ matters from the start of the dispute, through litigation, and, if necessary, through the appellate process. He has successfully handled numerous appeals in both state and federal appellate courts.
