There could be no time like the present for anyone hoping to see amendments made to Section 230 of the Communications Decency Act.

The long-standing provision buffering tech companies from liability for third-party content posted to their sites has been the subject of much debate recently, whether from members of the U.S. House Energy and Commerce Committee concerned over similar language appearing in U.S. trade agreements or from U.S. senators seeking amendments to address alleged tech company bias against conservatives.

But political momentum still may not be enough to resolve some of the thornier questions surrounding how online content can feasibly be moderated at the federal or even the state level.

"If I was placing bets, I wouldn't want to bank my house or anything on seeing any of these things move forward," said Jessica Lee, a partner and co-chairwoman of the privacy, security and data innovations practice at Loeb & Loeb. "I think we'll see something eventually, but it's really the timing that's up in the air."

However, it's hard to imagine the timing getting much better than now for amendments to Section 230. Lee thinks that the right atmosphere for a change is certainly in place, citing concerns over the integrity of the upcoming 2020 presidential election, the rise of bullying and hate crimes online, and the mounting backlash against big tech as factors.

"I think that if it's going to happen, now is the time that it's going to happen. Will it happen? I think you still have some challenges," Lee said.

Liz Harding, a shareholder at Polsinelli, referred to those hurdles as "the law of unintended consequences."

She agrees that the political will currently exists to make changes to the ways in which online platforms are held accountable for the content posted on their sites, but identified some practical concerns likely to fetter progress.

For starters, big-picture concerns about censorship and the inadvertent erosion of free speech could temper lawmakers' enthusiasm. On the micro level, there's the complicated process of actually defining parameters around what constitutes hate speech or content likely to provoke violence, both of which may ultimately be easier to recognize than political bias.

"Determining what should be censored from publication is extremely subjective. What is offensive to you may not be to me.  What you see as political bias may be different to my interpretation," Harding said.

So are new rules around how online platforms moderate content a lost cause?

Despite some of the complications at play, the interest in gaining some kind of traction on the problem remains. Lee raised the possibility that individual states could step in to fill the gap left by the absence of a federal standard with laws of their own, similar to how California has taken the initiative on privacy with the forthcoming California Consumer Privacy Act.

However, it's not a given that a state-centric approach would be greeted kindly at the federal level.

"I think the problem with tackling this at the state level is that of preemption and whether any proposed state law could effectively serve to change the Section 230 protections for platforms," Harding said.

Even so, continued public and political pressure could still yield progress. Lee indicated that tech companies might attempt to get ahead of the issue by proposing their own content moderation solution.

Still, some healthy skepticism may apply. "If they put something out or they put a solution out, it has to be something that's going to pass the smell test and not just be, like, some tech-shine," Lee said.