It is hard to believe that it has only been a few months since ChatGPT and other generative AI products hit the public consciousness.  Among the many uses, students tinker with the technology to write essays, lawmakers use it to help draft legislation, lawyers use it to guide their research, writers use it to turn rough ideas into polished prose, and a new generation of programmers uses it to write code.

If properly developed and implemented, generative AI holds the promise of enhancing productivity and increasing efficiencies across businesses. The downside? The technology also comes with a fair amount of risk.

  • We have all seen incorrect results generated by chatbots.  The mantra from computer science's earliest days still holds: junk input produces junk output.  Today's models are no different. They draw on information available on the internet to create their outputs, so it is no surprise that they sometimes produce erroneous results.  Sometimes the errors are so pronounced, with the chatbot confidently asserting things that are simply untrue, that people call these outputs "hallucinations."  These technologies must be implemented with care, precision and consistency.
  • Bias is, regrettably, always a concern with any technology trained on human-created data.  Any bias held by the people involved, or embedded in the methods and data used to build the model, will invariably be reflected in the output.
  • Copyright infringement claims are a risk.  The way generative AI chatbots sample existing materials to generate their work product inevitably involves a fair amount of copying, and most of the information on the internet is owned by someone (and protected by copyright).  Lawsuits have already begun, with Stability AI (the creator of Stable Diffusion) being sued for using images without permission, and other suits are bound to follow.  Because it is often impossible to know where an AI chatbot's output really came from, care must be taken when using the work product it creates.
  • Use of generative AI may result in contract breaches.  Practically all websites (including those the AI draws from) post Terms of Use, and those terms usually prohibit scraping or using bots or crawlers to extract information from the site.  Yet that is precisely what the AI does when sampling information, which can expose the AI user to breach of contract claims.  In addition, if the chatbot is asked to develop software code, it may incorporate open source components without notifying the user.  That can lead to loss of control over proprietary code and to breach of the open source licenses under which those components are made available.
  • Confidentiality is a serious concern.  Because many AI chatbots use the prompts and materials submitted to them to further train their models, anything input to the chatbot may later surface, in whole or in part, in output delivered to a third-party user.  Even if it never resurfaces, the input may be retained in the provider's systems, potentially indefinitely, outside the control of the business.
  • Privacy concerns abound.  Many laws today impose transparency requirements, obligating businesses to disclose, among other things, what personal information they collect and how they process it.  It can be quite challenging even to understand what a generative AI tool will do with data, making it difficult for a business to get those disclosures right.  Most of these laws also go beyond transparency (i.e., disclosure of the what, how, when, why and with whom of data collection and processing) and require that personal data be deleted upon a consumer's request.  That is true in many jurisdictions, including California, the European Economic Area and the United Kingdom.  Given how these AI tools are built and operated, it is not clear that the companies running them can comply with such requests, which complicates the privacy landscape for any company using the technology.
  • Data security issues also abound.  We have already seen one user's chat history produced as output to a different user, data breaches, and even fake apps in which threat actors pose as ChatGPT or other generative AI products to trick people into downloading harmful code.  These technologies are also proving useful to threat actors seeking to subvert business security controls.
  • Different inputs to the AI chatbot can lead to different answers.  The team using the technology will need proper training in how to interact with it effectively.

Boards of directors have a duty to understand the risks associated with new technologies and to evaluate their impact on the business. Before allowing implementation of these new AI tools, a board should therefore take the risks into consideration and implement controls to mitigate them.  To reduce confidentiality and privacy risks, a board might choose to use a dedicated instance of the AI chatbot that is not shared with anyone else; training it to do what the business needs will take longer and cost more, but it will be more secure.  To address bias, a board should strongly consider human review of the work product by a team trained in diversity, equity and inclusion methodologies.  To improve the tool's efficiency and effectiveness, the board might send personnel for specialized training so they understand how the AI works, how to obtain accurate results, and how to detect faulty ones.  A board might also adopt policies and controls to prevent disclosure of confidential business data and to reduce cyber incidents, for example by restricting activities such as downloading unapproved apps and by prohibiting the input of business confidential information into chatbots.  Finally, boards would be wise to assign a responsible officer knowledgeable in privacy risks to guide and supervise the use of these technologies, so that controls can be implemented to protect individual privacy and comply with applicable law, particularly as the privacy and data security legislative landscape grows ever more complex in the U.S. and abroad.