
Artificial Intelligence

"Public trust is a vital condition for artificial intelligence to be used productively." —Sir Mark Walport

In January 2020, Detroit resident and father of two Robert Julian-Borchak Williams was identified by a police computer using facial recognition software as the likely shoplifter of several expensive watches. Williams, who is a black man, was detained and held in jail overnight until the investigating officers realized that the software's identification was based on a photo that simply did not look like Williams. He was eventually released, but only after the personal and emotional damage was done. The New York Times reported that the prosecutor assigned to the case stated, "We apologize. This does not in any way make up for the hours that Mr. Williams spent in jail."

In this example, the artificial intelligence (AI) used as part of the facial recognition software was either flawed or had been "taught" to recognize faces based on an inadequate set of sample faces. Indeed, one of the criticisms of facial recognition software is that it perpetuates the racial bias found in the real world. Examples like Williams' experience generate distrust of AI systems, and that distrust may become a roadblock to the successful implementation of this useful and innovative technology. How could this situation have been avoided?

This column is the second of a series on creating an ethical AI policy framework for the implementation of AI-supported applications. It is based on the groundbreaking work of dozens of expert IT lawyers who contributed to the book "Responsible AI" published by the International Technology Law Association in 2019. The first part of this series was published on June 9.


Accountability

Organizations that develop AI systems should identify an individual or individuals responsible for the ethical implementation of those systems. At every step, an individual should be accountable for any faults, property damage or bias inherent in the AI system. The names of these individuals should be available on request so that they can be held accountable for the results or failures of the AI system. At the same time, the accountable individual or individuals should have the authority to monitor, modify or control the AI system.

Any organization that creates, deploys or uses AI systems should be cognizant of the practices and principles that define a responsible AI system. The organization should identify and implement a policy framework suited to its industry or market sector. Those practices should include a process to assess the AI system's impact on individuals, businesses, society and the environment.

Private corporations have been at the forefront of implementing AI systems, but in that leading role they need to create appropriate methods to evaluate and control AI. Clear roles for the ethical use of AI should be defined within each organization, including specific internal controls. In considering the decision-making role of an AI system (e.g., evaluating loan applications), the corporation should determine the acceptable level of risk in using AI. The organization must also consider where human intervention is appropriate to review a specific decision and what the level of human involvement should be. In managing an AI system, there should be accountability for data quality, the use of various data sets for training, the minimization of potential bias, validation of the system's training and periodic quality reviews.

To assess accountability, one must start at the beginning. The development of intelligent algorithms involves identifying the data needed to perform a function, writing the code, obtaining "clean" data to train the system and testing the results. Generally, the developers of an AI system are liable only when they are negligent or create a negative, unforeseen outcome. Developers should be subject to a continuing series of assessments or audits so that potential harm can be avoided. In some contexts, certain critical systems should include a "kill-switch" to override a dangerous situation when a system endangers human life or acts erratically. A system with a high degree of autonomy, such as an autonomous vehicle, demands a higher level of accountability from the organization that develops or uses it.
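
The book does not prescribe a technical design for such an override, but as a rough, hypothetical sketch, a "kill-switch" can be a supervisory check that halts an autonomous process whenever a monitored safety signal crosses a threshold set by the accountable individual. All names, thresholds and simulated readings below are illustrative assumptions, not drawn from the source.

```python
# Hypothetical illustration of a "kill-switch" supervising an autonomous loop.
# The class names, threshold and simulated readings are illustrative only.

class KillSwitchEngaged(Exception):
    """Raised when the supervisory check decides the system must stop."""


class SafetyMonitor:
    def __init__(self, max_anomaly_score: float):
        # Threshold chosen and owned by the accountable individual.
        self.max_anomaly_score = max_anomaly_score

    def check(self, anomaly_score: float) -> None:
        # Halt the system if its behavior drifts past the agreed limit.
        if anomaly_score > self.max_anomaly_score:
            raise KillSwitchEngaged(
                f"anomaly score {anomaly_score:.2f} exceeds limit "
                f"{self.max_anomaly_score:.2f}"
            )


def run_autonomous_loop(monitor: SafetyMonitor, anomaly_scores) -> None:
    for step, score in enumerate(anomaly_scores):
        monitor.check(score)  # supervisory check before every action
        print(f"step {step}: acting normally (score {score:.2f})")


if __name__ == "__main__":
    monitor = SafetyMonitor(max_anomaly_score=0.8)
    try:
        run_autonomous_loop(monitor, [0.1, 0.3, 0.95, 0.2])
    except KillSwitchEngaged as reason:
        print(f"kill-switch engaged: {reason}")  # system halted; humans take over
```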

For AI systems that interact with consumers or affect individuals, the corporation should consider a system for effective customer relationship management. The organization must have a strategy for communicating with consumers about the role of the AI system. This should include transparent disclosure of the AI system's use, its ethical evaluation, an opt-out option for individuals and the possibility of human review of material AI decisions.

Government entities, including the police and military, have already begun to incorporate AI systems into their operations. Governments operate within a framework of national legislation and international law. Given the greater powers of government to affect the lives of individuals and the operations of private companies, government entities should consider how they may be held accountable for the AI systems they implement.

There may come a time when government entities create their own AI applications. For the foreseeable future, however, governments will deploy and use AI systems created by the private sector. Government entities should consider the same standards of accountability and obligations outlined above for private companies. Given the greater responsibilities of government bodies such as the police, they should be subject to an even higher standard of conduct. In particular, government organizations must be sensitive to national and international standards for human rights. As noted in the first article of this series, China appears to be misusing AI technology to continuously monitor its entire population and to abuse certain ethnic minorities.

Government administrative agencies have an obligation to give individuals due process in their activities and decisions. Issues of procedural and substantive fairness must be considered in the use of AI systems. When implementing these systems, government administrators have an enhanced obligation to understand and review the information used to train AI systems, ensuring that it is representative of a broad and fair cross-section of the community. This will help avoid incorporating social bias into the AI system. When an AI system is transparent, citizens will have sufficient information to understand and challenge administrative decisions made through the use of AI.


Open Data and Fair Competition

Many AI systems learn how to execute their functions or formulate their decisions by being "trained" on large quantities of relevant data. Access to such data is critical for innovative businesses seeking to develop AI systems. However, in certain business sectors the data most valuable for training is controlled by a small number of players, and this lack of access may become a barrier to entry for certain types of companies.

For some, the control of large quantities of useful data is viewed as an opportunity to monetize a valuable asset. For example, Facebook and Amazon have accumulated vast quantities of freely collected consumer data that they use to promote and target advertising. Generally, the training data for AI systems is acquired through licensing from the entities that own and control the data. There is some fear that large companies will restrict access to data in order to limit their competition. Further, not all information can be easily disclosed: considerations of privacy, data protection rules and other factors may limit access to critical training data.

It has been suggested that, if data is restricted or monopolized for the benefit of a few companies, antitrust laws could be used to make vital data available to AI product developers. Governments should review their competition laws to make sure there is sufficient access to data to encourage vibrant competition. This heavy-handed approach can be avoided if dominant businesses are encouraged to voluntarily share data with innovative businesses.

As a result of the 2013 Federal Open Data Policy, the federal government has various programs that make public data sets available in machine-readable formats. Over 300,000 datasets that can be freely used are available at Data.gov. Such sharing of data would ultimately benefit consumers.
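
As a small illustration of how these datasets can be reached programmatically, the sketch below searches the Data.gov catalog. It assumes the standard CKAN "package_search" endpoint on which catalog.data.gov is built, and the search term is purely illustrative; it is not drawn from the column's source material.

```python
# Minimal sketch: searching the Data.gov catalog for machine-readable datasets.
# Assumes the standard CKAN "package_search" action API at catalog.data.gov;
# the query term "consumer complaints" is illustrative only.
import json
import urllib.parse
import urllib.request


def search_datagov(query: str, rows: int = 5):
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    url = f"https://catalog.data.gov/api/3/action/package_search?{params}"
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    # Each dataset record lists downloadable resources (CSV, JSON, etc.).
    return [
        (dataset["title"], len(dataset.get("resources", [])))
        for dataset in payload["result"]["results"]
    ]


if __name__ == "__main__":
    for title, resource_count in search_datagov("consumer complaints"):
        print(f"{title} ({resource_count} resources)")
```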

The use of open-source software has helped propel many smaller innovative companies. Similarly, the government should support open data initiatives in the public and private sectors to encourage open access to data. This might include structures for sharing and exchanging data. Private sector organizations could also encourage and promote open data in their areas of business as a way to spur innovation.

A more detailed review of the above issues and the book "Responsible AI" can be found at the International Technology Law Association's website at https://www.itechlaw.org/ResponsibleAI/access.

Peter Brown is the principal at Peter Brown & Associates. He is a co-author of "Computer Law: Drafting and Negotiating Forms and Agreements" (Law Journal Press).