The Risks (and Rewards) of AI for In-House Counsel and Corporate Boards
The world has seen a rapid rise in generative AI over the past few months, especially since the introduction of ChatGPT and other innovative products. While these new chatbots are intriguing for many reasons, it is crucial to understand the legal implications of the technology and to implement controls to mitigate its risks.
June 13, 2023 at 12:21 PM
6 minute read
It seems difficult to believe that it has been only a few months since ChatGPT and other generative AI products hit the public consciousness. Among the many uses: students tinker with the technology to write essays, lawmakers use it to help draft legislation, lawyers use it to guide their research, writers leverage it to turn words into prose, and next-generation programmers create code with it.
If properly developed and implemented, generative AI holds the promise of enhancing productivity and increasing efficiencies across businesses. The downside? The technology also comes with a fair amount of risk.
- We have all seen incorrect results generated by chatbots. A mantra from computer science's earliest days still holds: junk input produces junk output. Today's models are no different. They draw on information available on the internet to create their outputs, so it is no surprise that they produce erroneous results. Sometimes the errors are so pronounced, and the chatbot behaves so erratically, that people call these events "hallucinations." These technologies must be implemented with care, precision and consistency.
- Bias is, regrettably, always a concern with any technology trained by humans. Any bias the trainers hold, or that is embedded in the methods used to create the training data or work product, will invariably be present in the output.
- Copyright infringement claims are a risk. The way generative AI chatbots sample materials to generate their work product inevitably leads to a fair amount of copying, and most of the information on the internet is owned by someone (and copyright protected). Lawsuits have already begun, with Stability AI (the creator of Stable Diffusion) being sued for using images without permission, and others are bound to follow. It is impossible to know where the output from an AI chatbot really came from, so care must be taken when using the work product it creates.
- Use of generative AI may result in contract breaches. Practically all websites (including those scanned by AI chatbots) post Terms of Use. Those terms usually prohibit scraping or using bots or crawlers to extract information from the sites, yet that is precisely what the AI does when sampling information, and that can expose the AI user to breach of contract claims. In addition, if a chatbot is asked to develop software code, it may incorporate open source components without notifying the user, which can lead to loss of control over proprietary code and to breaches of the open source licenses under which those components are provided.
- Confidentiality is a serious concern. Because AI chatbots learn from the tasks they perform, and later perform similar tasks for other users drawing on those same inputs, anything entered into the chatbot may end up being reproduced for a third-party user (in whole or in part). Even if it is never reproduced, the input remains in the AI's memory, potentially indefinitely, outside the control of the business.
- Privacy concerns abound. Many laws today impose transparency requirements, obliging businesses to disclose what personal information they collect and process, among other things. It can be quite challenging even to understand what a generative AI technology will do with data, which the business must know to make these disclosures correctly. Most of these laws do not stop at transparency (disclosure of the what, how, when, why and with whom of data collection and processing); they also require that data be deleted upon a consumer's request. This is true in many jurisdictions, including California, the European Economic Area and the United Kingdom. Given the structure and operation of these AI tools, it is not clear that the companies operating them will be able to comply with such requests, which complicates the privacy landscape for companies using the technology.
- Data security issues also abound. We have already seen one user's chat history produced to a different user as output, data breaches, and even fake apps (where threat actors pose as ChatGPT or other generative AI products in an effort to get people to download harmful code). These technologies are also proving useful to threat actors in their attempts to subvert business security controls.
- Different inputs to an AI chatbot can lead to different answers, as the brief sketch after this list illustrates. The team using the technology will need to be trained to interact with it properly.
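To make that variability concrete, here is a minimal sketch in Python. It assumes the OpenAI Python SDK (version 1.0 or later), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name; none of these are prescribed by the article, and any chatbot interface would show similar behavior. The same question is asked twice with different sampling temperatures, which can produce noticeably different answers.

```python
# Minimal sketch: the same question, asked under different sampling settings,
# can yield different answers. Assumes the OpenAI Python SDK (>= 1.0) and an
# API key in the OPENAI_API_KEY environment variable; the model name below is
# illustrative only.
from openai import OpenAI

client = OpenAI()
question = "Summarize the key contract risks of using generative AI."

for temperature in (0.0, 1.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
        temperature=temperature,  # higher values increase variability
    )
    print(f"temperature={temperature}:")
    print(response.choices[0].message.content)
    print()
```

Even with identical settings, rephrasing the question can change the answer, which is one reason user training on how to prompt the tool matters.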
Boards of directors have a duty to understand the risks associated with new technologies and to evaluate their impact on the business. Before allowing these AI tools to be implemented, the board should take the risks into consideration and put controls in place to mitigate them. For example:
- To reduce confidentiality and privacy risks, a board might choose to use its own instance of the AI chatbot, one not shared with anyone else. Training that instance to do what the business needs will take longer and cost more, but it will be more secure.
- To guard against bias, a board should strongly consider human supervision of the work product by a team trained in diversity, equity and inclusion methodologies.
- To improve the tool's efficiency and effectiveness, the board might insist on sending personnel for specialized training so they learn how the AI is programmed, how best to get accurate results and how to detect faulty ones.
- To prevent disclosure of confidential business data and reduce cyber incidents, a board might implement policies and controls that restrict employees from certain activities (like downloading apps) and prohibit entering business confidential information into chatbots; a simple illustration of such a control follows this list.
- Boards would also be wise to assign a responsible officer, knowledgeable in privacy risks, to guide and supervise the use of these technologies, so that controls can be implemented to protect individual privacy and comply with the law, particularly as the privacy and data security legislative landscape grows ever more complex in the U.S. and abroad.
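For readers who want a sense of what such a control could look like in practice, the following is a hypothetical sketch of a pre-submission filter that blocks prompts containing terms the business has flagged as confidential before they are sent to any external chatbot. The flagged terms, function names and blocking behavior are illustrative assumptions, not a recommended or complete implementation; real controls would typically rely on enterprise data-loss-prevention tooling.

```python
# Hypothetical sketch of a simple pre-submission control: screen prompts for
# terms the business has flagged as confidential before they reach an external
# generative AI service. The flagged terms and the blocking behavior are
# illustrative assumptions only.
CONFIDENTIAL_TERMS = {"project titan", "q3 forecast", "customer list"}

def is_safe_to_submit(prompt: str) -> bool:
    """Return False if the prompt contains any flagged confidential term."""
    lowered = prompt.lower()
    return not any(term in lowered for term in CONFIDENTIAL_TERMS)

def submit_prompt(prompt: str) -> str:
    if not is_safe_to_submit(prompt):
        # Block the request and remind the employee of the policy.
        return "Blocked: this prompt appears to contain confidential business information."
    # Otherwise, forward the prompt to the approved chatbot service (omitted here).
    return "Submitted to the approved AI service."

if __name__ == "__main__":
    print(submit_prompt("Draft a vendor NDA template."))
    print(submit_prompt("Summarize the Q3 forecast for Project Titan."))
```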