
Facebook Launches 'One Strike' Policy for Facebook Live

Frederic Legrand - COMEO / Shutterstock.com


In an effort to regulate Facebook Live, Facebook has announced that it will implement a ‘one strike’ policy to prevent users from abusing the platform.

Facebook’s announcement was made two months after the social media network’s streaming service was used by an Australian man to broadcast a mass shooting that left 50 dead at two mosques in Christchurch, New Zealand.

Facebook’s VP of Integrity, Guy Rosen, said in an official statement released Tuesday:

“Today we are tightening the rules that apply specifically to Live. We will now apply a ‘one strike’ policy to Live in connection with a broader range of offenses. From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense.”

According to Rosen, anyone who violates specific Facebook rules, including the platform’s Dangerous Individuals and Organizations policy, will be temporarily restricted from using Facebook Live.

Preventing People From Abusing Facebook Live

After the devastating mass shooting in New Zealand, calls for tech companies to regulate the spread of extremist content online have intensified. According to Rosen, one of the many challenges the company has faced since the Christchurch shooting was “a proliferation of many different variants of the video of the attack.”

Rosen said that, in most cases, edited versions of the gruesome video were shared unintentionally, which made them hard for Facebook’s systems to detect. The Facebook executive admitted that despite the company’s best efforts to control the spread of the video, this particular area required further investment in research.

To stop people from using Facebook Live to spread hate, Facebook is now working with the University of Maryland, Cornell University, and the University of California, Berkeley to develop new techniques for detecting manipulated media content.

“This work will be critical for our broader efforts against manipulated media, including deepfakes. We hope it will also help us to more effectively fight organized bad actors who try to outwit our systems as we saw happen after the Christchurch attack.”



Chelle Fuertes

Chelle is the Product Management Lead at INK. She's an experienced SEO professional as well as UX researcher and designer. She enjoys traveling and spending time anywhere near the sea with her family and friends.
