Welcome back to What's Next, where we report on the intersection of law and technology. Today, we chat with Stanford's Jennifer King about the Internet of Things and what it means for the future of our privacy. In other news, the botnets are back, and they're going after a new target. Also on the agenda, IBM is feeding photographers' Flickr images into its facial recognition system, and there's a debate over whether that qualifies as a copyright violation. All of that and more, below.

➤➤ Would you like to receive What's Next as an email? Sign up here.



The Internet of Things Is Here. And Privacy Has Some Catching Up to Do

Regulators are nibbling. Privacy advocates are wary. And meanwhile consumers are being presented with an ever-widening array of internet-connected gadgets that promise to simplify daily life. (A voice-activated mirror? An egg tray that notifies you when you're running low? Seriously!)

For this week's Q&A, we catch up with Jennifer King, director of consumer privacy at Stanford Law School's Center for Internet and Society, and ask what has her attention when it comes to the Internet of Things. One issue she's watching is who bears responsibility for privacy in a world where there are fewer screens and more voice activation.

Do you have a smart home device or other internet-connected devices in your home?

I do not, on purpose. I have been testing smart speakers for research purposes, but I do not use them in my normal life. I have one IoT device, which is a picture frame that I can connect to the internet, but I don't let it connect.

What is the biggest concern you have with IoT as it relates to consumer privacy?

The notion that consumer defaults are set for maximum information collection. People have information collected about them that they're not aware of, that they can't control at all, and that happens without much visibility. The smart TV case where the FTC settled charges with Vizio is a really good example. Here's an object that most of us are completely familiar with. We have all these expectations around what a TV is supposed to do and how it's supposed to operate. And suddenly it's made into a smart TV, and the FTC argued that Vizio provided pretty poor notice to consumers that the TV was literally tracking everything that people watched. That was not done in a way that I think most people understood or would have been happy about.

You should, at the very least, be able to turn that functionality off. This is not a free subscription service where the implicit or explicit trade-off is that you're paying with your data. This is an actual physical object that you purchased, so you should arguably have some control over it.

Is there something that's different about engaging with a physical object versus a website that impacts the privacy calculation?

One of the things that I find very fascinating about IoT is that you don't have an interface, meaning there's no screen, for the most part, to interact with, to configure, to present a privacy policy, or to help the user configure their privacy settings. Today, when we do have IoT technology, even something like an Amazon Echo or Google Home device, we are still using apps to configure those things. But we're interacting with them using our voice. That's just a whole game changer. As it is, we do a pretty poor job trying to communicate privacy through interfaces, but once we get rid of the visual interface, the challenges become even more daunting.

A lot of what we'll have to do right depends on how we design these different products, so privacy by design becomes a really important concept. The M.O. that we've had for a long time of just getting the product launched and then saying, “Oh, does it have privacy issues? We'll go back later and fix those.” You're not going to be able to do that with IoT. And it's not just a legal compliance issue; it's a huge customer trust issue. Companies ignore that at their peril.

I think we are dragging the industry kicking and screaming into a world of more forethought and privacy by design rather than privacy after the fact, but I'd say we're still in the transition phase. To the extent that your business model relies on customers not reading notices and not understanding what they're doing, I think that is shifting and will change.

What will we be talking about next year in the area of IoT?

The public conversation is so dominated by AI that it's going to squeeze all the oxygen out of the room for other issues. IoT is kind of the sleeper in that case. It hasn't become any less important. It's just that a lot of the current hand-wringing and focus has been on AI and, to some degree, robotics and automation, while at the same time no one has stopped making these devices, and they're going to become more popular and more pervasive.

One of the things I haven't seen discussed is how the GDPR has affected the IoT market. … I am wondering if we are in a waiting period where we're trying to see how the different member states enforce the GDPR and its opt-in provisions, for example. There have been lots of consumer complaints in general. When we finally see decisions coming out of the EU, it will be interesting to see if that will be a game changer.

—Vanessa Blum



Dose of Dystopia: Rise of the Botnets

You might say that privacy and security of IoT devices are two sides of the same coin. Those of you paying attention to the infosec world no doubt recall the “Mirai” botnet, a network of exploited internet-of-things devices that in 2016 launched what is regarded as the largest distributed denial of service (DDoS) attack in history. Well now, reports Ars Technica, the malware has been updated to take advantage of devices that are more commonly seen inside enterprises, rather than home devices—which could give attackers access to much higher-speed networks.

Ars cites security firm Palo Alto Networks as identifying 11 new exploits in the latest strain of the Mirai malware, including those targeting “WePresent WiPG-1000 Wireless Presentation systems” and “LG Supersign TVs.” Reporter Dan Goodin notes that these devices “are intended for use by businesses, which typically have networks that offer larger amounts of bandwidth than Mirai's more traditional target of home consumers.”

So why is that important? Well, aside from potentially giving attackers more ammo to hit their target of choice, it raises the lingering question of to what extent manufacturers should be liable for leaving their devices vulnerable. The Federal Trade Commission's ongoing lawsuit against router and IoT camera manufacturer D-Link centers on that very issue (although it didn't involve Mirai); the two parties were in settlement discussions as of two weeks ago.

One can imagine that if companies start seeing their networks used as DDoS bazookas, that issue might bubble to the forefront for regulators—and the courts—again sometime soon.

—Ben Hancock



AI's 'Uncharted Territory'

Rolling out an artificial intelligence system is a bit more difficult than booting up software and saying, “Go.” For one, AI requires training: data that provides the basis for the AI system to further perfect itself. This training can run into all sorts of issues if not managed correctly, from the classic “garbage in, garbage out” problem to bias creeping into the system. But as some photographers are learning, training AI systems can raise intellectual property questions as well.

Last week, NBC News reported that IBM has been taking images from photo-sharing website Flickr to train its facial recognition technology. What it didn't do, though, was tell the photographers their images were being used. Naturally, the photographers were none too happy about the situation, but privacy experts who spoke with Legaltech News agreed on one thing: They may not have much of a leg to stand on for a copyright claim.

“Even if they did have a copyright claim, IBM would have a decent fair use claim, another issue they may rely on,” said Dunlap Bennett & Ludwig partner David Ludwig. “IBM may come out and say, 'We are not copying the photos to sell them, we are using this to create an amazing AI tool.'”

Attorney Carolyn Wright added that most photographers don't register their photos with the U.S. Copyright Office, which means they can recoup only actual damages—a license fee, usually under $100, plus any provable profits the company earned from their image. So the question then becomes, especially with privacy front and center in the current public conversation: What risk are companies willing to assume for this data?

There's no easy answer, said attorney Sara Hawkins, as these problems are “uncharted territory.” She added, “It's really important. We have that data, but the question is how do we allow that data to be accessed and do we allow that data to be accessed for free?”

—Zach Warren



A Breach in the Blockchain

Legaltech News looked at how blockchain security may not be all it's cracked up to be. Reporter Victoria Hudgins quotes Phillips Nizer partner and former New York State Department of Financial Services Deputy Superintendent Patrick Burke: “With anything involving software, anything involving anything online, it's always an IT security risk. So while the blockchain itself is generally pretty impregnable except for the '51% attacks,' the software written around the blockchain is as susceptible as any other software.”

For instance, hackers in January stole $38,000 worth of cryptocurrency after exploiting a weakness in an adult entertainment company's smart contract. Also in January, crooks got away with $1.1 million after rewriting cryptocurrency Ethereum Classic's transaction history by taking over more than half the network's computing power—a.k.a. the 51% attack. Read more here.
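For readers curious why controlling more than half of a network's computing power matters, here is a minimal, hypothetical simulation (plain Python, not tied to Ethereum Classic or any real blockchain client) of a longest-chain network: a miner whose share of total mining power exceeds 50% will, given enough time, almost always build a longer private fork than the rest of the network combined, and publishing that fork replaces the accepted transaction history.

```python
import random

def attacker_wins(attacker_share, blocks=1000, seed=0):
    """Toy longest-chain model: each new block is found by the attacker
    with probability equal to their share of total mining power."""
    random.seed(seed)
    honest, secret = 0, 0  # lengths of the public chain and the attacker's private fork
    for _ in range(blocks):
        if random.random() < attacker_share:
            secret += 1
        else:
            honest += 1
    # Nodes follow the longest chain, so if the private fork ends up longer,
    # publishing it replaces ("rewrites") the honest transaction history.
    return secret > honest

for share in (0.30, 0.45, 0.55):
    wins = sum(attacker_wins(share, seed=s) for s in range(200))
    print(f"{share:.0%} of mining power -> rewrites history in {wins/200:.0%} of trials")
```

In this sketch, an attacker with 45% of the mining power almost never catches up, while one with 55% succeeds nearly every time; that threshold is why the attack carries the "51%" name.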

—MP McQueen




On the Radar:

Facebook Settles: The social media giant has agreed to change its advertising practices after it was smacked with lawsuits accusing it of discrimination. Facebook first faced scrutiny over its ads after ProPublica published a story in October 2016 that detailed how the company allowed advertisers to target users based on race. This week's settlement also includes nearly $5 million in damages and court fees. Read more from Ellis Kim and Ross Todd here.

An AI Challenge: Harvard and MIT are awarding $750,000 to seven organizations working to promote open access to government data, or to detect deepfakes. The award is part of an initiative the two Boston-area schools launched two years ago to offer a broad perspective on AI and ethics. Read more from Victoria Hudgins here.

Car Comms: Rapid advances in autonomous vehicle technology mean regulators need to get ready for roadways with fewer human drivers. Adopting, implementing and refining the rules for how wireless spectrum is assigned, used and protected from interference will be crucial for autonomous vehicles to communicate with us and with each other. To learn more, check out this item from Dentons' Eric Tanenblatt and Todd Daubert here.


That's it from us this week. We'll be back next week with more What's Next.