California's new bot law went into effect on July 1, which means it's now officially illegal to use undeclared bots to incentivize a sale or influence an election. The parameters of the legislation are fairly narrow, targeting bots deployed “with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving.”

In other words, it's more of a statement than a law likely to see much enforcement action. But that statement should still be of interest to the social media giants that dominate the California landscape and provide an inadvertent platform for bots.

“Doesn't it seem like it's a message to Twitter, to Facebook, to Google: 'We stopped one step short, we're at your doorstep. If you want us to knock on the door, then don't police the bots,'” said Christopher Ballod, partner at Lewis Brisbois.

That door knock would ostensibly be legislation assigning the responsibility for monitoring or identifying deceptive bots to the platforms themselves. As the law stands now, the parties deploying the bots are accountable for properly declaring them.

Holding social media companies responsible for policing their own platforms might indeed be more effective than trying to intimidate actors who are already working outside of the law. Even if perpetrators were caught in the act of using bots to, say, influence an election, they likely exist beyond the reach of California authorities.

“A lot of that activity is from foreign countries, so we're not going to go to war over it,” Ballod said.

So why didn't California's law take a firmer hand with social media and other platforms? Jessica Lee, a partner and co-chair of the privacy, security and data innovations practice at Loeb & Loeb, said early drafts of the bill explored placing more of the onus for bots on platforms, but those considerations didn't make the final cut.

One reason might be that no one could quite figure out how platforms would manage such a monumental task. In December 2018 alone, for example, Twitter challenged 22,185,461 accounts potentially associated with spam or platform manipulation.

Lee believes that adding legally enforceable fines to the mix would provoke extreme pushback.

“We're talking about platforms that have millions and millions of accounts. There doesn't seem to be a great solution right now with regards to how to moderate [bots],” she said.

While the sheer volume of bots on platforms such as Twitter certainly poses a challenge, there are also more nuanced issues pertaining to the First Amendment that make an overly aggressive stance problematic.

According to Lee, bots disseminating opinions—even if those opinions are discriminatory or hateful—are theoretically protected by the First Amendment. There's also the nebulous gray area between what counts as an opinion and what is outright false.

“I think there's still a lot of back and forth about how to deal with the tension between the First Amendment and this issue of content moderation,” Lee said.

Given that unresolved tension and the California bot law's inherent limitations, whether it will inspire similar efforts in other states is an open question. Ballod thinks it unlikely, citing California's unique position as a hub for social media and tech companies like Facebook, Twitter and Google. Plus, the law was originally passed back in October 2018.

“That's an awful long time for no states to jump on the bandwagon,” he said.

Lee, on the other hand, thinks the national conversation around the integrity of political advertising could help a variation on the California law gain traction elsewhere.

“I think because we're leading up to an election season, we're going to see a lot more activity, both regulatory activity and then sort of external pressure to make sure that we don't have a repeat of what happened in 2016,” she said.