Photo: Diego M. Radzinschi

Earlier this month, the Washington Post reported that representatives from top tech companies such as Facebook, Google and Twitter met with Trump administration officials about the possibility of developing a tool that could scan social media posts to help predict mass shootings.

One day later, CNN ran a story about a draft executive order from the White House that would create new rules governing how and when social media platforms remove or block content, while assigning crucial roles in enforcing those proposals to both the Federal Communications Commission and the Federal Trade Commission.

Both stories broke at a time when U.S. regulators are still trying to clarify their relationship with social media platforms. But heavier government involvement with the way that content is published and utilized on those sites is not entirely out of the question, provided it stays just this side of free speech and censorship concerns.

And that may be easier said than done.

"There's a lot that we haven't figured out about basic human-to-human, face-to-face interaction with how the law plays out, much less than online," said Christopher Ballod, a partner at Lewis Brisbois Bisgaard & Smith. "It's fraught with a lot of issues."

Many of those issues stem from privacy, a perennial thorn in the side of social media that would seem especially relevant were the government to actually collaborate with tech companies on a tool that scanned posts, photos or videos in an effort to predict violence.

Yet, assuming the tool in question took the shape of an analytics solution, it wouldn't necessarily run afoul of privacy law as it's conceived in the United States.

Jarno Vanto, a partner in the privacy and cybersecurity group at Crowell & Moring, said that the U.S. typically considers content a user uploads to a social media site to be public from the moment they hit "share." This stands in contrast to many European countries, which would consider such information to be personal data and thus subject to certain rights.

"If you put in place analytics tools [in the U.S.] that would then analyze the type of content posted on social media platforms." Vanto said. "That would more likely be allowed in terms of even if it were a government tool because it would not filter certain kinds of content."  

However, if the same analytic solution were ever to cross the line from serving as a law enforcement notification system into a tool that automatically censored posts by virtue of their content, there could be trouble. But trouble for whom?

"When we look at government bodies engaging in the vetting of content, I'm 100% sure that that would be challenged on a First Amendment basis," Vanto said.

Because they are private entities, social media companies tend to have more freedom to restrict or edit content as they see fit. Also, Section 230 of the Communications Decency Act says that providers will not be held liable for actions taken in good faith to "restrict access to or availability of" content deemed obscene, filthy or excessively violent, among other things.

Per Vanto, that protection from liability would typically hold unless the social media platform made its own edits to a piece of material or published original content itself.

According to CNN, however, the draft executive order developed by the White House would ask the FCC to "find that social media sites do not qualify for the good-faith immunity if they remove or suppress content without notifying the user who posted the material, or if the decision is proven to be evidence of anticompetitive, unfair or deceptive practices."

The "unfair" element could be a sticking point. For example, the Communications Act of 1934 required broadcasters to provide equal time to all candidates for a public office and failure to do so might have brought them into conflict with the FCC.

However, Ballod, at Lewis Brisbois, pointed out that the internet is arguably more accessible to the average person than a television studio. Still, so long as the FCC isn't targeting individual posts, there's a chance it could wind up wielding some influence over social media platforms as a whole.

"If the position were taken that a social media platform does little or nothing to prevent disinformation, misinformation, abuse, deceit, fraud and dangerous content I think then you have a leg to stand on," Ballod said.