
Tuesday, March 19, 2019

Facebook says the original New Zealand shooter video was viewed about 4,000 times before removal

Facebook released new figures about its attempts to stop the spread of videos after a shooter livestreamed his attacks on two mosques in Christchurch, New Zealand, last Friday, killing 50 people.

In a blog post, Facebook vice president and deputy general counsel Chris Sonderby said that the video was viewed fewer than 200 times during the live broadcast, during which no users reported it. Including those views, the video was viewed about 4,000 times in total before it was removed from Facebook. It was first reported 29 minutes after it started streaming, or 12 minutes after it had ended. Sonderby said a link to a copy was posted to 8chan, the message board that played a major role in the video’s propagation online, before Facebook was alerted to it.

Before the shootings, the suspect, a 28-year-old white man, posted an anti-Muslim and pro-fascism manifesto. Sonderby said the shooter’s personal accounts had been removed from Facebook and Instagram, and that the company is “actively identifying and removing” imposter accounts.

Facebook’s new numbers come one day after the company said it had removed about 1.5 million videos of the shooting in the first 24 hours after the attack, including 1.2 million that were blocked at upload and therefore never available for viewing. That means it failed to block the remaining 20 percent, or 300,000 videos, which made it onto the platform and could be watched.
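As a quick check on those figures, here is the arithmetic made explicit (a minimal sketch in Python; the inputs are the rounded totals Facebook reported):

    # Facebook's reported (rounded) takedown figures for the first 24 hours.
    total_removed = 1_500_000      # all videos of the attack removed
    blocked_at_upload = 1_200_000  # stopped before anyone could view them

    slipped_through = total_removed - blocked_at_upload
    share = slipped_through / total_removed

    print(slipped_through)   # 300000 videos made it onto the platform
    print(f"{share:.0%}")    # 20% of the removed total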

Both sets of figures, while meant to provide transparency, seem unlikely to quell criticism of the social media platform’s role in spreading violent videos and dangerous ideologies, especially since Facebook Live launched three years ago. They call into question why the platform still relies so heavily on user reports despite its AI and machine learning-based moderation tools, and why removals don’t happen more quickly, especially during a crisis (even routine moderation takes a deep psychological toll on the human monitors tasked with filling in the gaps left by AI). They also underscore the challenges of moderation on a platform of Facebook’s scale (it now claims more than 2 billion monthly users).

Sonderby also said that the company has hashed the original Facebook Live video to help detect and remove other visually similar videos from Facebook and Instagram. It has also shared more than 800 visually distinct videos related to the attack through a database it shares with members of the Global Internet Forum to Counter Terrorism (GIFCT). “This incident highlights the importance of industry cooperation regarding the range of terrorists and violent extremists operating online,” he wrote.
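Facebook hasn’t published the details of that hashing system, but the general approach behind shared-hash databases is perceptual hashing: reduce each frame to a compact fingerprint that survives small visual changes, then compare fingerprints rather than raw files. Below is a minimal sketch using a generic “average hash” in Python with the Pillow imaging library; the algorithm, match threshold, and file names are illustrative assumptions, not Facebook’s actual implementation:

    # Minimal perceptual-hash sketch: a generic "average hash" stand-in
    # for whatever Facebook and GIFCT members actually use.
    from PIL import Image

    def average_hash(path, hash_size=8):
        # Shrink to an 8x8 grayscale thumbnail, then record one bit per
        # pixel: is it brighter than the thumbnail's mean brightness?
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = "".join("1" if p > mean else "0" for p in pixels)
        return int(bits, 2)

    def hamming_distance(h1, h2):
        # Number of differing bits; a small distance means the images are
        # visually similar even if the files differ byte-for-byte.
        return bin(h1 ^ h2).count("1")

    # Hypothetical frames: one from the known video, one from a new upload.
    known = average_hash("known_frame.png")
    candidate = average_hash("uploaded_frame.png")
    if hamming_distance(known, candidate) <= 5:
        print("near-duplicate: block or queue for human review")

In a real pipeline, many frames of each video would be hashed and matched against the shared database, so a clip can be flagged even when only a portion of it matches a known video.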

Other online platforms, however, have also struggled to stop the video’s spread. For example, uploaders were able to use minor modifications, like adding watermarks or altering the size of clips, to stymie YouTube’s content moderation tools.
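Part of the reason such trivial edits work is that exact-match filters compare byte-level fingerprints: re-encoding or resizing a frame changes a cryptographic hash completely, so only identical copies are caught. A short illustration (the file name is hypothetical):

    # Why exact-match filters miss slightly modified re-uploads: any
    # pixel-level change yields a completely different digest.
    import hashlib
    import io
    from PIL import Image

    def exact_hash(img):
        # Byte-level SHA-256 of the re-encoded image.
        buf = io.BytesIO()
        img.save(buf, format="PNG")
        return hashlib.sha256(buf.getvalue()).hexdigest()

    original = Image.open("original_frame.png")  # hypothetical file
    # The kind of minor edit uploaders used: shrink the frame slightly.
    modified = original.resize((original.width - 10, original.height - 10))

    print(exact_hash(original)[:16])  # two entirely different digests,
    print(exact_hash(modified)[:16])  # so the modified copy sails through

Run the same two frames through a perceptual hash like the one sketched above, by contrast, and their fingerprints will typically differ by only a few bits, which is why platforms have been moving toward similarity-based matching.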



from TechCrunch https://ift.tt/2FmRt6o
