An Ethical Framework for Artificial Intelligence—Part II
In Peter Brown's Technology Law column, he discusses distrust in AI systems and how this distrust may become a roadblock to the successful implementation of this useful and innovative technology.
July 29, 2020 at 11:02 AM
8 minute read
"Public trust is a vital condition for artificial intelligence to be used productively." —Sir Mark Walport
In January 2020, Robert Julian-Borchak Williams, a Detroit resident and father of two, was identified by a police computer using facial recognition software as the likely shoplifter of several expensive watches. Williams, who is a Black man, was detained and held in jail overnight until the investigating officers realized that the software's identification was based on a photo that simply did not look like Williams. He was eventually released, but only after the personal and emotional damage was done. The New York Times reported that the prosecutor assigned to the case stated, "We apologize. This does not in any way make up for the hours that Mr. Williams spent in jail."
In this example, the artificial intelligence (AI) used in the facial recognition software was either flawed or had been "taught" to recognize faces from an inadequate set of sample faces. Indeed, one of the criticisms of facial recognition software is that it perpetuates the racial bias found in the real world. Examples like Williams's experience generate distrust of AI systems, and this distrust may become a roadblock to the successful implementation of this useful and innovative technology. How could this situation have been avoided?
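One concrete safeguard, before a system like this is deployed, is to measure its error rates separately for each demographic group. The sketch below is a minimal illustration of that idea in Python; the match results, group labels and counts are hypothetical stand-ins, not data from any real system.

```python
# Minimal sketch: compare a face-matching system's false-match rate
# across demographic groups before deployment. All records and group
# labels below are hypothetical, for illustration only.
from collections import defaultdict

# Each record: (group, system_predicted_match, actually_same_person)
results = [
    ("group_a", True, False),   # a false match
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_b", True, True),
    ("group_b", True, False),   # a false match
    ("group_b", False, False),
    # ... in practice, thousands of labeled comparisons per group
]

false_matches = defaultdict(int)
different_person_pairs = defaultdict(int)

for group, predicted_match, same_person in results:
    if not same_person:
        different_person_pairs[group] += 1
        if predicted_match:
            false_matches[group] += 1

for group in sorted(different_person_pairs):
    rate = false_matches[group] / different_person_pairs[group]
    print(f"{group}: false-match rate {rate:.0%}")
```

A large gap in false-match rates between groups is exactly the kind of flaw that led to the misidentification described above, and it is detectable long before anyone is arrested on the system's say-so.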
This column is the second in a series on creating an ethical AI policy framework for the implementation of AI-supported applications. It is based on the groundbreaking work of dozens of expert IT lawyers who contributed to the book "Responsible AI," published by the International Technology Law Association in 2019. The first part of this series was published on June 9.
Accountability
Organizations that develop AI systems should identify an individual or individuals responsible for the ethical implementation of those systems. At every step, someone should be accountable for any faults, property damage or bias attributable to the AI system. The names of these individuals should be available on request so that they can be held accountable for the system's results or failures. At the same time, the accountable individual or individuals should have the authority to monitor, modify or control the AI system.
Any organization that creates, deploys or uses AI systems should be cognizant of the practices and principles that define a responsible AI system. The organization should identify and implement a policy framework suited to its industry or market sector. These practices should include a process to assess the AI system's impact on individuals, business, society and the environment.
Private corporations have been at the forefront of implementing AI systems, but in that leading role they need to create appropriate methods to evaluate and control AI. Clear roles for the ethical use of AI should be defined within each organization, including specific internal controls. In considering the decision-making role of an AI system (e.g., evaluating loan applications), the corporation should determine the acceptable level of risk in using AI. The organization must consider where human intervention is appropriate to review a specific decision and what level of human involvement is required. In managing an AI system, there should be accountability for data quality, the use of various data sets for training, minimizing any potential bias, validation of the system's training and periodic quality reviews.
To assess accountability, one must start at the beginning. The development of intelligent algorithms involves identifying the data needed to perform a function, writing the code, obtaining "clean" data to train the system and testing the results. Generally, the developers of an AI system are liable only when they are negligent or create a negative, unforeseen outcome. Developers should be subject to a continuing series of assessments or audits so that potential harm can be avoided. In some contexts, critical systems should include a "kill switch" to override the system when it endangers human life or acts erratically. A system with a high degree of autonomy, such as an autonomous vehicle, demands a higher level of accountability from the organization that develops or uses it.
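To make the kill-switch idea concrete, the sketch below shows one common pattern: running the system's control loop under a stop signal that either a human operator or an automated safety check can trigger. The control logic, safety limit and timing here are hypothetical placeholders, not a description of any real product.

```python
# Minimal kill-switch sketch: an autonomous control loop that a human
# override or an automated safety check can halt at any time. The
# action logic and safety limit are hypothetical placeholders.
import threading
import time

kill_switch = threading.Event()  # set to halt the system immediately

def propose_next_action() -> float:
    return 0.5  # stand-in for the AI system's decision logic

def safety_check(action: float) -> bool:
    """Automated guard: reject any action outside a safe operating limit."""
    return abs(action) <= 1.0  # hypothetical limit

def control_loop() -> None:
    while not kill_switch.is_set():
        action = propose_next_action()
        if not safety_check(action):
            kill_switch.set()  # the system halts itself on a dangerous action
            break
        print(f"executing action {action}")
        time.sleep(0.1)

worker = threading.Thread(target=control_loop)
worker.start()

time.sleep(0.3)    # the system runs autonomously...
kill_switch.set()  # ...until a human operator pulls the kill switch
worker.join()
print("system halted by human override")
```

The essential design point is that the override sits outside the AI's own decision path: the human operator (or the automated guard) can stop the system regardless of what the model proposes to do next.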
For AI systems that interact with consumers or impact individuals, the corporation should consider a system for effective customer relationship management. The organization must have a strategy for communicating with consumers about the role of the AI system. This should include a transparent disclosure of the AI's use, its ethical evaluation, an opt-out option for individuals and the possibility of human review of material AI decisions.
Government entities, including the police and military, have already begun to incorporate AI systems in their operations. Governments operate within a framework of national legislation and international law. Given the greater powers of government to impact the lives of individuals and the operation of private companies, government entities should consider how they may be held accountable for AI systems they implement.
There may come a time when government entities create their own AI applications. For the foreseeable future, however, governments will deploy and use AI systems created by the private sector. Government entities should consider the same standards of accountability and obligations outlined above for private companies. Given the greater responsibilities of government bodies, such as the police, they should be subject to an even higher standard of conduct. In particular, government organizations must be sensitive to national and international standards for human rights. As noted in the first article of this series, China appears to be misusing AI technology to continuously monitor its entire population and to oppress certain ethnic minorities.
Government administrative agencies have an obligation to give individuals due process in their activities and decisions. Issues of procedural and substantive fairness must be considered in the use of AI systems. When implementing these systems, government administrators have an enhanced obligation to understand and review the information used to train AI systems, to ensure that the information represents a broad and fair cross-section of the community. This helps avoid incorporating social bias into the AI system. When an AI system is transparent, citizens will have sufficient information to understand and challenge administrative decisions made through the use of AI.
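As a concrete illustration of that review, the sketch below compares the demographic composition of a training dataset against reference population figures and flags under-represented groups. The records, group labels and reference proportions are hypothetical; a real review would use the agency's actual training data and census figures.

```python
# Minimal sketch: check whether training data is representative of the
# community an agency serves. All records, group labels and reference
# proportions below are hypothetical.
from collections import Counter

training_records = [
    {"group": "group_a"}, {"group": "group_a"}, {"group": "group_a"},
    {"group": "group_a"}, {"group": "group_a"},
    {"group": "group_b"},
    {"group": "group_c"}, {"group": "group_c"},
    # ... in practice, the full training set
]

# Reference proportions for the community served (e.g., from census data)
reference = {"group_a": 0.40, "group_b": 0.35, "group_c": 0.25}

counts = Counter(record["group"] for record in training_records)
total = sum(counts.values())

for group, expected in reference.items():
    observed = counts.get(group, 0) / total
    flag = "  <-- under-represented" if observed < 0.8 * expected else ""
    print(f"{group}: {observed:.0%} of training data vs. "
          f"{expected:.0%} of community{flag}")
```

A gap like the one this flags is an early, reviewable signal of the social bias the agency is obligated to avoid, and it can be documented as part of the due-process record.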
Open Data and Fair Competition
Many AI systems learn to execute the functions they perform, or to formulate the decisions they make, by being "trained" through the review of large quantities of relevant data. Access to such data is critical for innovative businesses developing AI systems. However, in certain business sectors the data most valuable for training is controlled by a small number of players. This lack of access to data may become a barrier to entry for certain types of companies.
For some, the control of large quantities of useful data is an opportunity to monetize a valuable asset. For example, Facebook and Amazon have accumulated vast quantities of freely collected consumer data, which they use to promote and target advertising. Generally, the training data for AI systems is acquired through licensing from the entities that own and control the data. There is some fear that large companies will restrict access to data in order to limit competition. Further, not all information can be easily disclosed: considerations of privacy, data protection rules and other factors may limit access to critical training data.
It has been suggested that, if data is restricted or monopolized for the benefit of a few companies, antitrust laws could be used to make vital data available to AI product developers. Governments should review their competition laws to ensure there is sufficient access to data to encourage vibrant competition. This heavy-handed approach can be avoided if dominant businesses are encouraged to share data voluntarily with innovative businesses.
As a result of the 2013 Federal Open Data Policy, the federal government has various programs providing access to public data sets in machine-readable formats. Over 300,000 datasets, which can be freely used, are available at Data.gov. Such sharing of data ultimately benefits consumers.
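For readers who want to explore, Data.gov's catalog is built on the open-source CKAN platform and exposes a public search API. The sketch below queries it for datasets matching a keyword; the endpoint path and response fields follow CKAN's documented conventions, but should be verified against the current API documentation before being relied on.

```python
# Minimal sketch: search Data.gov's public catalog for datasets.
# Data.gov runs on CKAN; the endpoint and response shape below follow
# CKAN conventions and should be checked against current documentation.
import json
import urllib.request

url = (
    "https://catalog.data.gov/api/3/action/package_search"
    "?q=facial+recognition&rows=5"
)

with urllib.request.urlopen(url) as response:
    data = json.load(response)

print(f"{data['result']['count']} matching datasets; first five:")
for dataset in data["result"]["results"]:
    print("-", dataset["title"])
```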
The use of open-source software has helped propel many smaller innovative companies. Similarly, the government should support an open data initiative in the public and private sectors to encourage open access to data. This might include structures for sharing and exchanging data. Private sector organizations could also encourage and promote open data in their areas of business as a way to spur innovation.
A more detailed review of the above issues and the book "Responsible AI" can be found at the International Technology Law Association's website at https://www.itechlaw.org/ResponsibleAI/access.
Peter Brown is the principal at Peter Brown & Associates. He is a co-author of "Computer Law: Drafting and Negotiating Forms and Agreements" (Law Journal Press).