Welcome back for another week of What's Next, where we report on the intersection of law and technology. This week, employment lawyers sound off on the legal ramifications of Google and Silicon Valley's politically and socially charged cultures. Plus, as kids around the country head back to school, they will be joined by artificially intelligent algorithms. And the telecom industry's robocall agreement might not keep spoofers and scammers out of our pockets after all. Let's chat: Email me at [email protected] and follow me on Twitter at @a_lancaster3.


|

Political and Social Unrest in Silicon Valley

Wired's September cover story maps out the political and social forces eating Google from the inside out. The company has created a culture of dissent, where employees are encouraged to come to work without having to check their politics at the door, leading to a pressure cooker environment filled with protests, viral memos, toxic message boards and Googler-on-Googler doxing. As Google and the rest of Silicon Valley co-opt the parlance of activists, employees and companies have faced the legal repercussions of a hostile work environment.

Kelly Dermody, a partner at Lieff Cabraser Heimann & Bernstein, has a front-row view of the legal fallout at Google. The managing partner of Lieff Cabraser's San Francisco office and chair of the firm's employment practice group is leading a pending gender class action against the tech giant over claims it systematically pays women less for the same job.

Google represents the modern workforce, Dermody said. "You have many different cultures within Google, so there's the same problems that are historic legacy problems of gender bias or race bias alleged to exist there, and then there are the folks who have historically been in the majority culture, men, typically, who say they don't like the diversity and inclusion thing, because 'I feel like it tramples on my rights.' I don't find those to be very credible arguments that they bring. They bring them with great confidence, but I don't find them especially credible."

Dermody said Google failed to manage the backlash of its culture of dissent, which resulted in the harassment and doxing of team members. "They did not weed out the employees who were hostile to an inclusive workforce, which seems to be at odds with Google's stated mission," she said. "I think they got to a place where they lost control of the conversation around that. The camps that have been allowed to exist—there's a lot of emboldened behavior by people who do not have any interest in women in the workplace, and that's a real problem for a workplace that has women in it."

Another legal trend that has emerged from Silicon Valley's culture of dissent is a reckoning with forced arbitration. After some women said Google mishandled sexual harassment allegations against some of its senior leadership, including a $90 million payout to Android co-founder Andy Rubin, who was accused of coercing a female employee into performing oral sex, 20,000 employees walked out. A week later, the company announced that employees would no longer be bound by arbitration agreements.

Dermody said it's no small thing to get a company to walk back from mandatory arbitration. "A lot of companies have seen this as a get-out-of-jail-free-card of a policy," she said. "It ensures a complete lack of transparency, no public accountability; it severely changes the balance of power in litigation. Arbitrators, as good as they can be, are by definition people who are quite experienced and demographically tend to come from an overwhelmingly white, male, straight background." This is a problem, she said, because the demographic of neutrals doesn't reflect diverse jury pools of courts.

If there had been transparency around how Google dealt with the sexual harassment allegations, Dermody said she doubts Rubin and others would have been given golden parachutes. "Because it was done secretly, it looks like Google is doing the so-called right thing, even though the right thing has attached to it a great deal of benefit for the people being so-called disciplined."

On Friday, Google announced new policies barring employees from discussing politics and other "disruptive" topics. The company told employees that they would be held accountable for what they say in the office.

Dermody said it's not uncommon for companies to want to remove politics entirely from the workplace with content-neutral prohibitions. However, it's difficult to police those policies because of the type of issues that could be construed as political, she said. "Everything about someone's identity if they're not straight, white, male and without any disability—everything about that person's existence is going to be potentially political, because of the climate we're in and the organized, institutional attacks on historically marginalized communities," she said.

Good luck with that, Google, she said. 

Harmeet Dhillon of Dhillon Law Group in San Francisco agrees that Google's new policy might cause some friction.

Dhillon represented former Google engineer James Damore when he sued the company after he was terminated for a viral 10-page memo called "Google's Ideological Echo Chamber." In the dispatch, Damore claimed "Google's left bias has created a politically correct monoculture that maintains its hold by shaming dissenters into silence," and that diversity initiatives might not entirely close the gender gap in tech. He suggested that some biological differences between men and women might explain why there are fewer female leaders in Silicon Valley.

With the new community guidelines, Dhillon said Google is essentially telling employees to censor themselves, which she asserts will have a chilling effect on speech on the company's internal platforms.

"From the National Labor Relations Act point of view, prohibiting someone from talking about their working conditions is prohibited," she said.

However, Dhillon alleges that Google wants to bar controversial speech while still having access to employees' communications. "Google's problem is that they want to know what people are saying, mine it and use it," she said.

Dhillon said that the whole policy is a hot mess, and that it will undoubtedly be litigated. "Not necessarily by clients such as those I've represented previously, but people on the left who are complaining that they are being denied equitable working conditions," she said. "The notable internal protests have been against the Pentagon Project Maven contract, the Dragonfly project, gender disparities and gigantic exit packages for accused harassers. Are employees not supposed to talk about those issues at work? Some would say those are political in nature. If someone is disciplined for talking about those issues at work, Google could get sued."

Dhillon said that if Google had a clear policy on what is considered political content, it would help mitigate the litigation risk. Absent such a clear policy, controversy will reign, she said.

One potential explanation for why Google's new policy shifts away from politics, Dhillon said, is to distance the company from political affiliation amid accusations that it manipulated election results to support Hillary Clinton in the 2016 presidential election, and to avoid a repeat of those accusations in the 2020 election and beyond. Although Dhillon doesn't foresee growth in the kind of cases like the one brought by James Damore, she does see rising interest in lawsuits challenging speech policies in the workplace.

"My phone rings everyday with people who want to litigate these issues," she said.


|

AI in Schools

Notebooks? Check. Pencils? Check. Calculator? Check.

Artificially intelligent algorithm? Check? Increasingly, back to school also means a return to algorithms that help predict violence, drop-out rates and where kids should go to school. An industry around artificially intelligent tools that leverage data to help under-resourced schools make decisions is growing, according to a report this month from the Center for Democracy & Technology (CDT). The digital policy nonprofit's report outlines some of the privacy, accuracy and transparency issues involved with using AI algorithms in schools.

When it comes to anticipating school violence, for instance, many AI programs focus on monitoring social media content or apps such as search and email for key words and phrases that could signal violent intentions. Some people argue that accessing social media content is not a privacy problem because it's all public, said Natasha Duarte, a policy analyst with CDT's privacy & data team. "When it comes to privacy, there's a big difference between looking at someone's public social media page on a one-off basis, and systematically monitoring everything that a person posts and occasionally flagging it for someone who has the authority to take disciplinary action against them or otherwise affect their education experience or opportunities."
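As a rough illustration of the kind of systematic monitoring Duarte describes, here is a minimal sketch of keyword-based flagging. The watchlist phrases, names and scoring are purely hypothetical; the vendors' actual systems are proprietary and far more sophisticated.

```python
# Minimal sketch of keyword-based flagging as these products are
# described: scan each post for watchlist phrases and surface any
# matches for review by someone with disciplinary authority.
# The watchlist and posts below are illustrative, not real.

WATCHLIST = {"bring a weapon", "hurt someone"}  # hypothetical phrases

def flag_posts(posts):
    """Return (author, text) pairs whose text contains a watchlist phrase."""
    flagged = []
    for author, text in posts:
        lowered = text.lower()
        if any(phrase in lowered for phrase in WATCHLIST):
            flagged.append((author, text))
    return flagged

posts = [
    ("student_a", "Excited for the game Friday!"),
    ("student_b", "I'm going to bring a weapon to school"),
]
print(flag_posts(posts))  # only student_b's post is flagged
```

The privacy distinction Duarte draws maps directly onto this loop: a one-off glance at a public page is a single lookup, while a tool like this runs over everything a student posts, continuously.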

These types of products could complicate compliance with the Family Educational Rights and Privacy Act. "When a student posts something on social media, it doesn't necessarily fall under FERPA," Duarte said. "However, once a school collects information and puts it in a file, then parents might have the right to access that information."

The law also provides little protection from companies and vendors of these AI algorithms. "In the U.S., we don't have a comprehensive privacy law that protects people from companies and commercial entities collecting and using information that we might have a privacy interest in," Duarte said. Right now, the broadest protection against these actors is through terms of service and developers' API terms of use.

This type of monitoring could also violate the First Amendment. Although students have historically had limited free speech rights in schools, the First Amendment does not stop at the schoolhouse gate, Duarte said. "It certainly burdens children's free expression and could have a chilling effect on what they say if they know that someone is systematically monitoring them." Kids experiencing homelessness, or those who have had past contact with the criminal justice system, could feel heightened scrutiny, she said.

Some state and local governments have started to legislate how public entities use algorithms and automated decision-making. New York and Vermont have created multidisciplinary task forces to look at fairness and privacy issues involved with the technology. Those efforts, along with bills that have cropped up federally and in states like New Jersey, could shape how these tools are used in schools, she said.

"We don't want data that is potentially negative to be following students around and unnecessarily risking their opportunities," she said. "We want everyone to be really thoughtful when they start combining data sets, to be thinking on the front end, 'Once we have this data, how long are we going to keep it and when are we going to get rid of it?' It's important to make sure that young people are not pigeonholed so early in their lives when something is not clearly an imminent threat to public safety."


|

Telecom's Robocall Agreement Might Not Stop Scammers, Spoofers

Last week, the telecommunications industry announced an agreement to crack down on robocalls. Without a deadline or consequences for carriers who do not adhere to the agreement, some have called it an "empty and unenforceable" deal.

Here's what Verizon, AT&T, Sprint and nine other companies promised in the agreement:

➞ Offer call-blocking technology at no additional cost
➞ Allow free access to additional call-blocking and labeling tools
➞ Use technology to authenticate that callers are legitimate
➞ Scan networks for robocalls and quickly work with authorities

Becca Wahlquist, partner and head of the TCPA practice group at Snell & Wilmer in Los Angeles, says the agreement will likely do little to stop actual robocalls.

Wahlquist said the agreement blurs the line between robocalls and calls from legitimate businesses that use customer-provided information to contact their customers. She is worried that the agreement's mention of recordings from places such as schools and town halls could confuse people about what a robocall actually is.

"It's not a robocall just because it's not someone punching 10 numbers into a phone," she said. 

Wahlquist defines robocallers as slippery, shadowy and illegitimate entities that dial sequential numbers or spoof numbers resembling the recipient's own, doing anything to get people to pick up the phone. Since these robocallers generally disappear without a trace, she said, it's usually large companies and carriers with deep pockets that get strapped with litigation over automated customer calls.

"I'm in litigation where people are calling my clients robocallers, because they're calling customer-provided numbers," Wahlquist said. 

Often, it's a simple case of a reassigned number or transfer to a family member, she said.

"I'm concerned this is going to make people think that legitimate calls could be lumped in with these, and it could increase even further the barrage of TCPA litigation that companies are facing."


Facebook Chairman and CEO Mark Zuckerberg speaks at a Facebook developer conference in San Jose, California, on May 1, 2018. Photo: Marcio Jose Sanchez/AP
|

On the Radar

Judge Postpones Facebook Privacy Feature A Houston court has blocked Facebook from launching its new tool that would allow U.S. users to delete their browsing data captured on the platform. Judge Tanya Garrison granted a Jane Doe plaintiff a temporary restraining order on the tool, called the Off-Facebook Activity app, so the woman can pursue allegations that a sex trafficker targeted her on the site. Facebook says the order is without basis. Read more from Angela Morris here.

Appeals Court to Weigh in on Amazon's Third-Party Liability  A federal appeals court has agreed to revisit en banc its decision to hold Amazon accountable for defective, third-party products. Last month, the U.S. Court of Appeals for the Third Circuit created a potential divide between federal appeals courts on whether the online retailer can be considered a seller. On Amazon's request, the court will revisit Oberdorf v. Amazon, a case where a woman suffered permanent vision loss after buying a faulty leash from the site via a third-party dealer that has disappeared. Read more from Max Mitchell here.

Qualcomm Gets Its Stay After much agency intervention, a federal appeals court stayed the injunction U.S. District Judge Lucy Koh of the Northern District of California issued requiring Qualcomm to license its chips to competitors on FRAND terms. On Friday, the U.S. Court of Appeals for the Ninth Circuit approved Qualcomm's request for a temporary stay, flagging that the Federal Trade Commission and Department of Justice's Antitrust Division have not come to an agreement on whether Qualcomm has abused antitrust laws. Read more from Ross Todd here.


Thanks for reading. We will be back next week with more What's Next.