Twitter revealed its plans to curb hate speech and violent threats on its platform Tuesday amid increasing international calls to regulate content moderation on social media.

In a blog post, the San Francisco-based company's vice president of services, Donald Hicks, and its senior director of product management for health, David Gasca, said Twitter plans to update its community rules “in the next few weeks” and will begin “experimenting” with an option to hide tweet replies in June.

The company will also upgrade its content moderation technology to remove more abusive posts before they're reported. Twitter began removing unreported posts within the past year, and, according to Tuesday's post, automated technology now proactively removes 38 percent of abusive content.

“People who don't feel safe on Twitter shouldn't be burdened to report abuse to us,” Hicks and Gasca said in the post. Twitter did not immediately provide additional comment.

Twitter's post comes as politicians at home and abroad re-examine social media platforms' liability for hateful and violent content posted on their sites, sparked in part by a shooter's viral Facebook livestream of the murder of 50 people at mosques in New Zealand last month. Social media sites struggled to contain the video's spread across platforms.

Shortly after the shooting, Australia passed a law threatening fines and jail time for social media companies and their executives if they do not “ensure the expeditious removal” of “abhorrent violent material” on their platforms. Last week, New Zealand Privacy Commissioner John Edwards told Radio NZ that “regulating, as Australia has done just in the last week, would be a good interim way to get [platforms'] attention.”

And content moderation concerns aren't just being raised in the Asia-Pacific region. The Toronto Star and BuzzFeed reported last week that Canada is considering regulating tech platforms. Democratic Institutions Minister Karina Gould told BuzzFeed that “self-regulation is not yielding the results that societies are expecting these companies to deliver.”

The U.S. House Judiciary Committee held a hearing last week on the spread of white supremacy online, with representatives from Google and Facebook testifying about their companies' efforts to address hate speech.

Louisiana Rep. Cedric Richmond hinted at the hearing that regulation could come if companies don't improve their content moderation strategies. Section 230 of the Communications Decency Act currently protects platforms, in most instances, from being held liable for harmful content posted on their sites.

“Figure it out because you don't want us to figure it out for you,” Richmond said.
