An Ethical Framework for Artificial Intelligence
This column is the first of a two-part series on creating an ethical AI policy framework for the implementation of AI supported applications.
June 08, 2020 at 12:00 PM
8 minute read
"Great power involves great responsibility."
—Franklin D. Roosevelt
China has a population of approximately 1.4 billion people, and the Chinese government is reportedly using a combination of artificial intelligence (AI) and facial recognition software to monitor their movements and online activities. Even more troubling, China is using the same technology to track and control a Muslim minority group, the Uyghurs. In doing so, China has subverted the potential of artificial intelligence to impose a form of racist social control.
Yet AI also offers new opportunities to enhance business productivity and enrich the personal lives of individuals. Without broad agreement on the ethical implementation of AI, that still untapped potential can be corrupted.
This first installment is based on the groundbreaking work of dozens of expert IT lawyers who contributed to the book Responsible AI, published by the International Technology Law Association in 2019. We have previously considered the technological elements of AI, facial recognition, and personal privacy issues in our recent columns published here, which may provide useful background for those new to the subject of AI. See "Artificial Intelligence: The Fastest Moving Technology," NYLJ (March 9, 2020); "Waking Up to Artificial Intelligence," NYLJ (Feb. 10, 2020).
Ethical Purpose
Organizations that develop AI systems have a great responsibility to understand how a system will be used and to ensure that its implementation will not harm society. AI system developers should require that the purpose of each software implementation be identified in reasonable detail, and they must ensure that the purposes of new AI systems are ethical and not intentionally harmful.
As national governments come to recognize the full potential of AI for both good and harm, regulatory statutes and rules will follow. Laws that regulate AI should promote ethical uses that do not cause harm, avoid unreasonable disruptions, and do not promote the distribution of false information.
AI is already being used in the workplace to automate, accelerate, or eliminate routine administrative tasks. Organizations that develop or deploy AI systems should consider the net effect of any implementation on their employees and their work. In some instances, workers will be displaced by automated systems. To build understanding and acceptance of AI systems among their employees, businesses should allow the affected workers to participate in the decision-making process.
AI systems and automation usually increase efficiency and, as a result, workers are replaced by these systems. To promote continued productivity, governments should consider creating programs that help displaced workers learn new, useful skills. Similarly, governments should promote educational policies that prepare children with the skills they will need for the emerging economy, including life-long learning.
The implementation of AI systems may also have an adverse impact on the environment. When developing AI systems, organizations should assess their environmental impact. Governments should enact statutes or rules that ensure complete and transparent investigations of any adverse or unanticipated environmental effects of AI systems.
Unfortunately, AI has been recognized as creating strategic advantages in weapons systems. The use of lethal autonomous weapon systems (LAWS) should respect international principles of humanitarian law, including, for example, the Geneva Conventions of 1949. LAWS can be both accurate and deadly. As such, they should remain under human control and oversight in every situation in which they are used in a conflict.
The recent, very public policy disputes over posts on Twitter and Facebook reveal how AI may be used to weaponize false or misleading information. Companies that develop or deploy AI systems to promote or filter information on Internet platforms, including social media, should take measures to minimize the spread of false or misleading information. These systems should provide a means for users to flag potentially false or harmful content, and government agencies should publish clear guidelines identifying prohibited content while respecting the rights and equality of individuals.
Transparency and Explainability
Transparency refers to the duty of every business and government entity to inform customers and citizens when they are interacting with AI systems. At a minimum, users should be provided with information about what the system does, how it performs its tasks, and the specifications and/or data used to train the system. The goal of transparency is to avoid creating an AI system that functions as an opaque "black box."
Explainability refers to the duty of organizations using an AI decision-making process to provide accurate information, in human-understandable terms, as to how the decisions or outcomes were reached. For example, if an AI system is used to process a mortgage loan application, the applicant should be able to find out the factors supporting the credit decision, including credit ratings, the quality and location of the house, and recent comparable sales in neighboring areas.
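By way of illustration only, the sketch below shows one way a lender's system could surface the factors behind a credit decision in plain terms. The feature names, weights, and approval threshold are all hypothetical, and a real underwriting model would be far more complex:

```python
# Illustrative only: a toy linear credit-scoring model that reports
# how much each factor contributed to the outcome. All feature names,
# weights, and the approval threshold below are hypothetical.

FACTOR_WEIGHTS = {
    "credit_rating": 0.5,      # normalized 0-1 credit score
    "property_quality": 0.2,   # appraiser's quality rating, 0-1
    "location_score": 0.15,    # neighborhood market score, 0-1
    "comparable_sales": 0.15,  # support from recent comparable sales, 0-1
}
APPROVAL_THRESHOLD = 0.6

def explain_decision(applicant: dict) -> dict:
    """Return the decision plus each factor's contribution to it."""
    contributions = {
        factor: weight * applicant[factor]
        for factor, weight in FACTOR_WEIGHTS.items()
    }
    score = sum(contributions.values())
    return {
        "approved": score >= APPROVAL_THRESHOLD,
        "score": round(score, 3),
        # Sorted so the applicant sees the most influential factors first.
        "factors": dict(sorted(contributions.items(),
                               key=lambda kv: kv[1], reverse=True)),
    }

print(explain_decision({
    "credit_rating": 0.8, "property_quality": 0.7,
    "location_score": 0.6, "comparable_sales": 0.5,
}))
```

The point of the sketch is not the scoring method but the output: every decision is accompanied by a human-readable account of the factors that produced it.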
Transparency tends to preserve public trust in AI systems and to demonstrate that the decisions made by an AI system are fair and impartial.
Transparency and explainability become increasingly important as AI systems make consequential decisions involving sensitive personal or financial data. The degree of transparency should meet the reasonable expectations of the average user. For this reason, transparency and explainability should be built into the design of any AI system.
Fairness and Non-Discrimination
The design of AI systems is a human endeavor and necessarily incorporates the knowledge, life experiences, and prejudices of the designers. Companies that develop or deploy AI systems should make users aware that these systems reflect the goals, and the potential biases, of the developers. As has been studied in other contexts, implicit bias is part of the human condition, and AI system developers may incorporate these values into the methods and goals of a new AI system. In addition, AI systems are often "trained" on large data sets. For example, an AI system assisting in loan decisions might have been trained on a data set indicating that a certain racial or ethnic minority has a higher-than-average loan default rate. Screening for such bias is necessary for a fair system.
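One concrete screening technique is to compare approval rates across groups, a "disparate impact" check. The sketch below is a minimal illustration with hypothetical data; the 0.8 cutoff reflects the four-fifths rule of thumb long used in U.S. employment-discrimination analysis:

```python
# Illustrative only: screening model outcomes (or training data) for
# disparate impact across groups. The data and group labels are hypothetical.

def disparate_impact(outcomes: dict) -> dict:
    """For each group, return its approval rate divided by the highest
    group's approval rate. Ratios below 0.8 (the four-fifths rule of
    thumb) warrant further review."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items()}

# 1 = loan approved, 0 = denied, grouped by a (hypothetical) demographic.
sample = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 1, 0, 0, 1, 0, 0],  # 37.5% approved
}
ratios = disparate_impact(sample)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, "review needed for:", flagged)
```

A flagged ratio does not prove discrimination, but it identifies where the training data or the model's decisions deserve closer human scrutiny.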
The decisions made by AI systems must be at least as fair and non-discriminatory as decisions made by humans. As such, fairness should be prioritized in the design of an AI system's algorithms and in the training data it uses. Without attention to fairness, AI systems have the potential to perpetuate and amplify bias, which could have a broad social impact. To minimize these risks, AI systems with a significant social impact should be independently reviewed and periodically tested.
Safety and Reliability
AI systems already control a wide variety of automated equipment and will have a broader impact when autonomous vehicles are in common use. Whether in the factory or on the highway, AI systems will pose a potential danger to individuals. As to safety, AI system developers must ensure that AI systems perform correctly, without harming users, resources, or the environment. It is essential to minimize unintended consequences and errors in the operation of any system.
These AI-controlled systems must also operate reliably. Reliability refers to consistency of performance, i.e., the probability that a system performs its function without failure, within its parameters, over an extended period of time. Organizations that develop or deploy AI systems in conjunction with a piece of equipment must clearly define the principles underlying its operation and the boundaries of its decision-making powers. When safety is a priority, the appropriate government agency should require the testing of AI systems to ensure reliability, and the systems should be trained on data sets that are as "error-free" as possible. When an AI system is involved in an incident with an unanticipated, adverse, or fatal outcome, it should be subject to a transparent investigation.
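To make that definition of reliability concrete, here is a minimal sketch assuming the standard exponential failure model from reliability engineering, under which the probability of failure-free operation over time t is R(t) = e^(-t/MTBF). The MTBF figure is hypothetical:

```python
# Illustrative only: reliability as the probability of failure-free
# operation over a period, assuming a constant failure rate
# (exponential model). The MTBF figure below is hypothetical.
import math

def reliability(hours: float, mtbf_hours: float) -> float:
    """R(t) = exp(-t / MTBF): probability of zero failures in `hours`."""
    return math.exp(-hours / mtbf_hours)

MTBF = 10_000  # hypothetical mean time between failures, in hours
for t in (100, 1_000, 5_000):
    print(f"P(no failure in {t:>5} h) = {reliability(t, MTBF):.3f}")
```

Quantifying reliability this way gives regulators and developers a shared, testable benchmark rather than a vague assurance of consistency.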
The possibility of personal injury and the resulting potential liability raise a host of legal concerns. Legislators should consider whether the current legal framework, including product liability law, requires adjustment to meet the unique characteristics of AI systems.
For a more detailed review of the above issues, the book Responsible AI can be purchased from the International Technology Law Association.
Peter Brown is the principal at Peter Brown & Associates. He is a co-author of "Computer Law: Drafting and Negotiating Forms and Agreements" (Law Journal Press).