An Ethical Framework for Artificial Intelligence
This column is the first of a two-part series on creating an ethical AI policy framework for the implementation of AI supported applications.
June 08, 2020 at 12:00 PM
"Great power involves great responsibility."
—Franklin D. Roosevelt
China has a population of approximately 1.4 billion people, and the Chinese government is reportedly using a combination of artificial intelligence (AI) and facial recognition software to monitor its citizens' movements and online activities. Even more troubling, China is using the same technology to track and control the Uyghurs, a Muslim minority group. China has subverted the potential of artificial intelligence to impose a form of racist social control. Yet AI also offers new opportunities to enhance business productivity and enrich the personal lives of individuals.
Without broad agreement on the ethical implementation of AI, its still-untapped potential can be corrupted.
This series is based on the groundbreaking work of the dozens of expert IT lawyers who contributed to the book Responsible AI, published by the International Technology Law Association in 2019. We have previously considered the technological elements of AI, facial recognition, and personal privacy issues in recent columns, which may provide useful background for those new to the subject. See "Artificial Intelligence: The Fastest Moving Technology," NYLJ (March 9, 2020); "Waking Up to Artificial Intelligence," NYLJ (Feb. 10, 2020).
Ethical Purpose
Organizations that develop AI systems have a great responsibility to understand how a system will be used and to ensure that its implementation will not harm society. AI system developers should require that the purpose of any software implementation be identified in reasonable detail, and they must ensure that the purposes of new AI systems are ethical and not intentionally harmful.
As the full potential of AI for both good and harm is recognized by national governments, some regulatory statutes or rules will follow. Laws that regulate AI should promote ethical uses that do not cause harm, avoid unreasonable disruptions, and do not promote the distribution of false information.
AI is already being used in the workplace to support automation and to speed up or eliminate routine administrative tasks. Organizations that develop or deploy AI systems should consider the net effects of any implementation on their employees and their work. In some instances, workers will be displaced by automated systems. To build employees' understanding and acceptance of AI systems, businesses should allow the affected workers to participate in the decision-making process.
AI systems and automation usually increase efficiency, and, as a result, workers are replaced by these systems. To keep the workforce productive, governments should consider creating programs that help displaced workers learn new, useful skills. Similarly, governments should promote educational policies that prepare children with the skills they will need for the emerging new economy, including lifelong learning.
The implementation of AI systems may also have an adverse impact on the environment. When developing AI systems, organizations should assess the environmental impact of these new systems. Governments should enact statutes or rules that ensure complete and transparent investigations of any adverse or unanticipated environmental impacts of AI systems.
Unfortunately, AI systems have been recognized as creating strategic advantages in weapons systems. The use of lethal autonomous weapon systems (LAWS) should respect international principles of humanitarian law, including, for example, the Geneva Conventions of 1949. LAWS can be both accurate and deadly. As such, LAWS should always remain under human control and oversight in every situation where they are used in a conflict.
The recent, very public policy disputes over posts on Twitter and Facebook reveal how AI may be used to weaponize false or misleading information. Companies that develop or deploy AI systems to promote or filter information on Internet platforms, including social media, should take measures to minimize the spread of false or misleading information. These systems should provide a means for users to flag potentially false or harmful content. Government agencies should provide clear guidelines that identify prohibited content while respecting the rights and equality of individuals.
Transparency and Explainability
Transparency refers to the duty of every business and government entity to inform customers and citizens when they are interacting with AI systems. At a minimum, users should be provided with information about what the system does, how it performs its tasks, and the specifications and/or data used to train it. The goal of transparency is to avoid creating an AI system that functions as an opaque "black box."
Explainability refers to the duty of organizations using an AI decision-making process to provide accurate information, in human-understandable terms, about how a decision or outcome was reached. For example, if an AI system is used to process a mortgage loan application, the applicant should be able to find out the factors supporting the credit decision, including credit ratings, the quality and location of the house, and recent comparable sales in neighboring areas.
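To make that duty concrete, the minimal sketch below shows one way a lender's system could report the factors behind a credit decision. It is an illustration only, not any actual lender's method: the linear scoring model, feature names, weights, and approval threshold are all hypothetical.

```python
# A minimal sketch (hypothetical model, not any lender's actual system) of how
# an AI credit decision could be explained in human-understandable terms.

FEATURE_WEIGHTS = {
    "credit_score": 0.5,       # each applicant value is normalized to 0..1
    "property_quality": 0.2,
    "location_score": 0.15,
    "comparable_sales": 0.15,
}
APPROVAL_THRESHOLD = 0.6       # hypothetical cutoff for approval

def explain_decision(applicant):
    """Return the decision plus each factor's contribution to the score."""
    contributions = {
        name: weight * applicant[name]
        for name, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    decision = "approved" if score >= APPROVAL_THRESHOLD else "declined"
    lines = [f"Application {decision} (score {score:.2f})."]
    # List factors from most to least influential, so the applicant can see
    # exactly what supported (or undermined) the outcome.
    for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        lines.append(f"  {name.replace('_', ' ')}: {value:+.2f}")
    return "\n".join(lines)

print(explain_decision({
    "credit_score": 0.8,
    "property_quality": 0.7,
    "location_score": 0.6,
    "comparable_sales": 0.5,
}))
```

Even a simple report like this turns an opaque score into a list of factors an applicant can review and, if necessary, dispute.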
Transparency tends to preserve public trust in AI systems and to demonstrate that the decisions made by an AI system are fair and impartial.
Transparency and explainability become increasingly important as an AI system makes consequential decisions involving sensitive personal or financial data. The transparency of a system should meet the reasonable expectations of the average user. For this reason, transparency and explainability should be built into the design of any AI system.
Fairness and Non-Discrimination
The design of AI systems is a human endeavor and necessarily incorporates the knowledge, life experiences, and prejudices of the designers. Companies that develop or deploy AI systems should make users aware that these systems reflect the goals and potential biases of the developers. As has been studied in other contexts, implicit bias is part of the human condition, and AI system developers may incorporate these values into the methods and goals of a new AI system. In addition, AI systems are often "trained" by reviewing large data sets. For example, an AI system assisting in loan decisions might have been trained on a data set indicating that a certain racial or ethnic minority has a higher-than-average loan default rate. Screening for such a bias is necessary for a fair system.
The decisions made by AI systems must be at least as fair and non-discriminatory as those made by humans. As such, fairness should be prioritized in both the algorithms and the training data used in the design of AI systems. Without attention to fairness, AI systems have the potential to perpetuate and amplify bias, with broad social consequences. To minimize these issues, AI systems with a significant social impact should be independently reviewed and tested periodically; one simple screening test is sketched below.
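One common screen for the kind of bias described above is the "disparate impact" ratio: compare the favorable-outcome rate of the least-favored group to that of the most-favored group, and flag ratios below 0.8 (the "four-fifths rule" long used in U.S. employment law). The group labels and sample outcomes in this sketch are hypothetical.

```python
# A minimal sketch of one common bias screen: the disparate impact ratio
# (the four-fifths rule). Group labels and outcomes below are hypothetical.

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group's approval rate to the highest's.
    `outcomes` maps a group label to a list of 1 (approved) / 0 (denied)."""
    rates = {group: sum(r) / len(r) for group, r in outcomes.items()}
    return min(rates.values()) / max(rates.values())

outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 approved = 75%
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 approved = 37.5%
}
ratio = disparate_impact_ratio(outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 with these sample data
if ratio < 0.8:  # the conventional four-fifths threshold
    print("Potential adverse impact: review the model and its training data.")
```

A failing ratio does not by itself prove unlawful discrimination, but it signals that the model and its training data deserve the kind of independent review recommended above.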
Safety and Reliability
AI systems currently control a wide variety of automated equipment and will have a broader impact when autonomous vehicles are in common use. Whether in the factory or on the highway, AI systems will pose a potential danger to individuals. As to safety, AI system developers must ensure that AI systems perform correctly, without harming users, resources, or the environment. It is essential to minimize unintended consequences and errors in the operation of any system.
These AI-controlled systems must also operate reliably. Reliability refers to consistency of performance, i.e., the probability of performing a function without failure, and within the system's parameters, over an extended period of time. Organizations that develop or deploy AI systems in conjunction with a piece of equipment must clearly define the principles underlying its operation and the boundaries of its decision-making powers. When safety is a priority, the appropriate government agency should require the testing of AI systems to ensure reliability. The systems should be trained on data sets that are as "error-free" as possible. When an AI system is involved in an incident with an unanticipated, adverse, or fatal outcome, it should be subject to a transparent investigation.
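As a simple illustration of what "probability of performing a function without failure over a period of time" means in practice, the sketch below applies the common constant-failure-rate model, under which reliability decays exponentially with operating time. The failure rate and evaluation window are hypothetical numbers chosen for illustration.

```python
import math

# A minimal sketch, assuming the common constant-failure-rate model:
# R(t) = exp(-failure_rate * t). The numbers below are hypothetical.
failure_rate = 1e-4   # assumed failures per operating hour
hours = 10_000        # evaluation window in operating hours

reliability = math.exp(-failure_rate * hours)
print(f"Probability of {hours} hours of failure-free operation: {reliability:.1%}")
# With these numbers: exp(-1.0), roughly 36.8%
```

Regulators that require reliability testing are, in effect, asking developers to measure that failure rate empirically rather than assume it.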
The possibility of personal injury and the resulting potential liability raise a host of legal concerns. Legislators should consider whether the current legal framework, including product liability law, requires adjustments to meet the unique characteristics of AI systems.
For a more detailed review of the above issues, the book Responsible AI can be purchased from the International Technology Law Association.
Peter Brown is the principal at Peter Brown & Associates. He is a co-author of "Computer Law: Drafting and Negotiating Forms and Agreements" (Law Journal Press).