Data Privacy

As governments lift "shelter-at-home" orders, employers face difficult decisions about how to keep their employees safe. This calculus will involve evaluating artificial intelligence (AI) solutions to limit occupational exposure to COVID-19. Use of AI, in combination with other measures, could speed the safe return of employees to the workforce.

Of course, willingness to share health data will impact AI's effectiveness. The public's tolerance for sharing personal data for the greater good may be heightened in the short term as we continue developing harm-reduction technologies to fight the spread of the virus, but a long-term view of employee privacy will be imperative for continued successful integration of AI in the workplace.


Emerging Technologies Help Employers Safely Return Employees to Work

One way AI can assist in successful return-to-work strategies is through contact tracing and predictive modeling to help halt the spread of the virus. AI-driven algorithms can scour meeting invites, email traffic, business travel records, and GPS data from employer-issued computers and cell phones to give employers advance warning, so they can steer employees away from danger zones or quickly contain a potential outbreak at a location.
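As a rough illustration of the contact-tracing idea, the sketch below builds a contact graph from meeting attendance records and flags employees who shared a meeting with someone who later reported symptoms. The data structures and names are hypothetical assumptions for illustration, not any vendor's actual implementation.

```python
from collections import defaultdict

# Hypothetical meeting records: (meeting_id, list of attendee IDs).
meetings = [
    ("m1", ["alice", "bob", "carol"]),
    ("m2", ["bob", "dave"]),
    ("m3", ["carol", "erin"]),
]

def build_contact_graph(meetings):
    """Map each employee to the set of co-attendees across all meetings."""
    contacts = defaultdict(set)
    for _, attendees in meetings:
        for person in attendees:
            contacts[person].update(a for a in attendees if a != person)
    return contacts

def exposed_contacts(contacts, reported_cases):
    """Return employees who shared a meeting with a reported case."""
    exposed = set()
    for case in reported_cases:
        exposed.update(contacts.get(case, set()))
    return exposed - set(reported_cases)

contacts = build_contact_graph(meetings)
print(exposed_contacts(contacts, {"bob"}))  # alice, carol, dave (set order may vary)
```

A real system would weight contacts by duration and proximity rather than treating all co-attendance equally, but the underlying graph traversal is the same.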

Other AI tools used by employers help test, diagnose, and otherwise monitor employee health. One platform is a "fitness for duty" application encompassing a digital health survey, which asks employees for personal information such as health status or recent travel. The data from these programs may be used to build analytical models, such as a public dashboard for employers to monitor the spread of COVID-19 within their company. These programs assist employers in managing continuity of business and navigating the uncertainty of COVID-19.

Biometric data also fuels social distancing and heat-detection cameras, some of which are paired with facial-recognition software employers can use to track and identify individuals suspected of being unwell. For instance, one company has created camera software that rings a buzzer or alerts security staff when two people stand less than six feet apart. Another company has created an AI-camera solution that can scan groups to detect and identify anyone with an elevated temperature in real time. Such platforms can help keep employees safe and spare organizations from slowly checking people one by one for symptoms of COVID-19.
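The core distance check in such a system reduces to comparing pairwise distances between detected people. The sketch below assumes the vision pipeline has already converted camera detections into floor coordinates in feet, which is the hard part in practice; everything here is a simplified illustration.

```python
import math
from itertools import combinations

# Hypothetical detections: person ID -> (x, y) floor position in feet,
# as might be estimated from a calibrated overhead camera.
positions = {
    "p1": (0.0, 0.0),
    "p2": (4.0, 3.0),    # 5 ft from p1
    "p3": (20.0, 20.0),
}

def distancing_violations(positions, min_distance_ft=6.0):
    """Return pairs of people standing closer than the minimum distance."""
    violations = []
    for (a, (ax, ay)), (b, (bx, by)) in combinations(positions.items(), 2):
        if math.hypot(ax - bx, ay - by) < min_distance_ft:
            violations.append((a, b))
    return violations

for a, b in distancing_violations(positions):
    print(f"ALERT: {a} and {b} are less than 6 ft apart")
```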

Further, the effectiveness of many AI technologies will depend on utilizing connectivity that extends beyond the scope of cellular networks. As a result, innovators are turning to Bluetooth and other alternatives to address this issue. This will expand the scope of the information that may be gathered by AI.


Adoption of COVID-19 AI-Related Technologies and Privacy Considerations

While the use of these tools can help reboot our economy, their use by employers raises privacy compliance issues, as the tools involve the collection, use, aggregation, analysis, and disclosure to third parties of highly personal information, such as biometric data, personally identifiable information, or geolocation data. A poll of about 2,000 people found that more than half of Americans support anonymized government smartphone tracking; even so, data privacy remains one of the biggest challenges facing companies.

One of the threshold issues to consider when implementing AI in the workplace is whether the data will be pseudonymized, deidentified, or aggregated. On May 7, 2020, U.S. senators formally introduced the "COVID-19 Consumer Data Protection Act," which contains protections for personal health, geolocation, and proximity data during the pandemic. The bill proposes a series of requirements regarding the disclosure, consent, collection, and destruction of personally identifiable data that is collected for COVID-19-related purposes by specific entities. Moreover, the bill exempts information that is aggregated, deidentified, or publicly available from the definition of "covered data."

Data that is not personally identifiable is also exempt under currently enacted laws. For example, the EU General Data Protection Regulation (Regulation (EU) 2016/679) (GDPR) makes the distinction between pseudonymous data and anonymous data in its definition of what is "personal data" governed by the law. The GDPR does not restrict the use of anonymous data. Further, within the United States, the California Consumer Privacy Act (CCPA)—which many view as an American state variation of the GDPR—permits businesses to collect, use, retain, sell, or disclose information that is deidentified or aggregated, subject to meeting technical specifications.
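To make the distinction concrete, the sketch below contrasts pseudonymizing a record (replacing the identifier with a keyed hash, which remains re-identifiable by whoever holds the key and so is still "personal data" under the GDPR) with aggregating records (reporting only group-level counts). The record format and key are assumptions for illustration only.

```python
import hashlib
import hmac
from collections import Counter

# Hypothetical employee screening records.
records = [
    {"employee_id": "E1001", "site": "SF", "symptomatic": True},
    {"employee_id": "E1002", "site": "SF", "symptomatic": False},
    {"employee_id": "E1003", "site": "NY", "symptomatic": False},
]

SECRET_KEY = b"rotate-and-store-separately"  # hypothetical key

def pseudonymize(record):
    """Replace the identifier with a keyed hash. The key holder can
    re-identify the record, so this is still personal data under the GDPR."""
    token = hmac.new(SECRET_KEY, record["employee_id"].encode(), hashlib.sha256)
    return {**record, "employee_id": token.hexdigest()[:16]}

def aggregate(records):
    """Report only site-level counts; no individual-level data remains."""
    return Counter(r["site"] for r in records if r["symptomatic"])

print([pseudonymize(r) for r in records])
print(aggregate(records))  # Counter({'SF': 1})
```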

In addition, if employers are collecting personally identifiable data, they should carefully review with counsel their privacy disclosures and related policies to determine whether employees have already consented to the collection of the personal data the new AI technology will use, or whether new consent is required. Laws governing consent for the collection of personally identifiable data currently vary by the type of data collected and by state. For example, under the CCPA, an employer would need to provide new notice before collecting new categories of personal information from its employees. Further, if personal information is used for a new, previously undisclosed purpose, the employee must provide "explicit" consent. While the CCPA does not apply to information governed by HIPAA or its state equivalents, it does apply to biometric information, internet activity, geolocation data, and audio, electronic, visual, thermal, or similar information. Other states are considering similar laws.

To address privacy concerns, some companies are developing technologies that integrate cryptography and decentralized networks to permit users to limit and control the disclosure of their data. For example, Coalition, a global contact-tracing app for smartphones (with a dedicated hardware device, Nodle MI, forthcoming), encourages individuals to self-report their COVID-19 status. Using cryptography, the app provides each individual with a temporary, anonymized ID tied to their cell phone, and it collects and locally stores the user's interactions with other anonymized IDs via Bluetooth. If a person self-reports an infection, the app uses decentralized, local cloud technology to alert users who came into contact with that individual. Participants are never identified; only their anonymous IDs are used, and innovators are working to further anonymize the process by developing an anonymous token ID system. Because the network is local and decentralized, based on proximity and interactions, there is no need to collect and store location data, build movement profiles, or retain identifiable features of users' contact information and end devices. Efforts such as this demonstrate the power of advanced technologies to reduce risk during this pandemic while permitting individuals to remain the agents of their own data.
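The sketch below illustrates the general pattern such decentralized apps follow, not Coalition's actual protocol: each device broadcasts short-lived random IDs, logs the IDs it hears locally, and checks the published IDs of self-reported cases against its own local log. All class and method names are hypothetical.

```python
import secrets

class Device:
    """Simplified model of a decentralized proximity-tracing device.
    A generic sketch of the pattern, not Coalition's actual protocol."""

    def __init__(self):
        self.my_ids = []        # temporary anonymous IDs we have broadcast
        self.heard_ids = set()  # IDs observed nearby via (simulated) Bluetooth
        self._rotate_id()

    def _rotate_id(self):
        """Generate a fresh random ID; real apps rotate these frequently."""
        self.current_id = secrets.token_hex(8)
        self.my_ids.append(self.current_id)

    def encounter(self, other):
        """Two devices in proximity exchange current anonymous IDs."""
        self.heard_ids.add(other.current_id)
        other.heard_ids.add(self.current_id)

    def check_exposure(self, reported_ids):
        """Compare published IDs of self-reported cases against the local log.
        No location data or real identity ever leaves the device."""
        return bool(self.heard_ids & set(reported_ids))

alice, bob, carol = Device(), Device(), Device()
alice.encounter(bob)                    # Alice and Bob were in proximity
reported = bob.my_ids                   # Bob self-reports; his IDs are published
print(alice.check_exposure(reported))   # True
print(carol.check_exposure(reported))   # False
```

Note that the exposure check runs entirely on the user's device; only the anonymous IDs of self-reported cases are ever shared, which is what lets such designs avoid central location databases.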


Recommended AI Implementation Practices

Before implementing machine learning solutions in the workplace, employers should be sure they understand how the AI collects information, why that information is collected, and how it will be stored. Questions employers should consider include whether the technology collects the right data or too much data, how long the data will be stored, and who has access to the collected data (e.g., the developer, the government, or other third parties). Only after fully understanding these aspects can employers accurately analyze the risks and legal implications of deploying a COVID-19 AI platform.

Further, just because a product has multiple functionalities does not mean an employer should use all of them. For example, an employer should consider whether it is necessary to track all employee movements, as opposed to simply tracing contacts. Another crucial aspect of vetting the technology is understanding the security mechanisms embedded in the system, including the use of encryption, pseudonymization, and anonymization, where appropriate. Companies should have their IT security teams analyze the AI program as part of the vetting process.

Another important step in deploying these AI COVID-19 technologies is creating procedures and implementing policies governing their use. For instance, employers should designate a limited team with access to the collected information and exclude all others. Similarly, employers should develop strict confidentiality guidelines around the use of any information collected through the technologies. Companies should also regularly audit the technology to ensure it is not creating an adverse impact on any protected class.
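One common way to audit for adverse impact is the EEOC's "four-fifths rule," under which a favorable-outcome rate for any group below 80% of the highest group's rate warrants scrutiny. The sketch below applies that test to hypothetical clearance decisions made by a screening tool; the group labels and counts are invented for illustration.

```python
def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose favorable-outcome rate falls below the threshold
    (default 80%) of the highest group's rate, per the EEOC four-fifths rule.
    `outcomes` maps group -> (favorable_count, total_count)."""
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Hypothetical clearance decisions from an AI screening tool, by group.
outcomes = {
    "group_a": (90, 100),  # 90% cleared to return
    "group_b": (63, 100),  # 63% cleared to return
}

flagged = four_fifths_check(outcomes)
print(flagged)  # group_b ratio is about 0.7, below 0.8 -> warrants review
```

The four-fifths rule is a screening heuristic rather than a legal conclusion, so flagged results should prompt review with counsel, not automatic decisions.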

Finally, companies using employment-related AI should continuously monitor laws and regulations in the jurisdictions in which they do business and ensure they comply with those that apply to them, as well as confer with counsel who can help navigate the various nuances of each law. Striking a balance between maintaining employees' privacy and safeguarding their health will be difficult for employers, but it can be achieved with the right technology and suitable understanding of the applicable laws and regulations.

 

Natalie Pierce is a Shareholder at Littler in the San Francisco office. She is Co-Chair of the firm's Robotics, AI and Automation Practice Group.

Julie Stockton is an Associate at Littler in the firm's San Francisco office.

Courtney Chambers is an Associate at Littler in the firm's San Francisco office.