Facebook took another step to reduce hate speech on its global platform this week, announcing an expansion of its artificial intelligence translation tools.

While the company's content moderation policies—toward both hate speech and fake news—have come under scrutiny in the U.S., Facebook's decisions about which posts to take down can be even more complicated in non-English-speaking countries.

If content moderators can't read a post's language, spotting hate speech is difficult, if not impossible. That issue resurfaced last month when Menlo Park, California-based Facebook booted a senior government official in Myanmar off the platform, for the first time, over hate speech inciting violence.