Protesters plan to gather Friday outside Meta's San Francisco offices to demand that the company stop publicly releasing the model parameters for its Llama 2 artificial intelligence system under an open-source framework, citing safety risks.

Organizer Holly Elmore said that releasing the parameters, which determine how a model behaves, lets bad actors strip away its safety features and use the technology for everything from generating racist or hateful content to planning phishing campaigns and cyberattacks.