
Zuckerberg attempts to make Facebook a safer place – but is it enough? Here’s what the experts think

Alex Bartley Catt

Alex Bartley Catt is the founder of artificial intelligence company Spacetime.

Do you believe the latest changes made by Facebook are enough of an attempt to moderate harmful behaviour on the platform?

I’d like to define ‘harmful behaviour’ first, in an effort to get some clarity around this issue. Looking back at what happened with the Christchurch live stream, I see two main issues. The first is the ease with which the live stream was uploaded to Facebook and how long it ran before being taken down (it never was; it simply ended). The second is the ease with which other internet users duplicated and distributed the video during and after the live stream.

The first issue, the act of live streaming itself, is something Facebook will manage better by banning users from Live for a period of time if they have broken one of Facebook’s rules. I think this is a decent idea and will limit the amount of trolling online. What it won’t do is stop anyone planning to make a scene from doing exactly that. Case in point: the Christchurch event would still have been live-streamed. Of course, Facebook could go a step further and ban live-streaming outright. The problem with this is the nature of the internet and the wealth of social media and live-streaming alternatives available. Again, this would reduce the amount of harmful behaviour on Facebook but would not have stopped the Christchurch video being made available live, online. My issue with the above is that we are potentially removing people’s ability to share ideas online without removing the worst behaviour of all. Not to mention we are leaving it up to Facebook to decide right vs. wrong when their platform spans multiple countries and cultures.

The second issue, distribution and sharing of the already live-streamed content, is a less straightforward problem to solve. Yes, the $7.5 million investment in video analysis tech will help, but as Facebook mentioned in a release shortly after the Christchurch event, these AI systems require training data to accurately detect harmful content. This training data is limited and hard to get, considering most of it is banned and illegal to make or share, much like the Christchurch live stream. What was more interesting from Facebook was the claim that “during the entire live broadcast, we did not get a single user report.” This changes the framing of the problem dramatically. How did the supposed thousands of live stream viewers not think to flag or report the video and help Facebook moderate harmful content? It makes me question how we might shift the culture to reduce the sharing of harmful content, rather than relying on technology to save the day.

When will technology be able to manage hate crime on social media?  

No doubt technology has a role in managing harmful activity online, and a lot of this work goes on unseen. We also don’t see the thousands of people who manipulated and re-uploaded the live stream footage. Facebook mentioned there were 900 variations of the video (making it harder for AI to detect), which people attempted to upload 1.5 million times; Facebook automatically blocked 1.2 million of those attempts with AI. Again, this is not strictly a technology and social media issue. It’s an issue every one of us will have to face up to, including traditional media and government. Believe it or not, most of the footage I personally saw of the live stream was not on social media but on local and international news media websites.
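To make the ‘variations’ point concrete: an exact file hash changes completely when a video is re-encoded, cropped or watermarked, which is why one original can spawn hundreds of uploads that a simple blocklist misses. The sketch below is a minimal illustration, not Facebook’s actual matching pipeline; it assumes the third-party Pillow and imagehash libraries, and shows how a perceptual hash with a distance threshold can still flag lightly edited copies.

```python
# A simplified sketch of near-duplicate detection with perceptual hashes.
# Not Facebook's matching pipeline; it only illustrates why a re-encoded
# or cropped copy evades an exact-hash blocklist while a perceptual hash
# can still flag it. Requires the Pillow and imagehash packages.
from PIL import Image
import imagehash

def frame_fingerprint(path: str) -> imagehash.ImageHash:
    """Perceptual (difference) hash of a single video frame image."""
    return imagehash.dhash(Image.open(path))

def looks_like_known_frame(candidate_path: str,
                           blocklist,
                           max_distance: int = 8) -> bool:
    """True if the frame is within max_distance bits of any blocked hash.

    Exact hashes (e.g. SHA-256) change entirely after re-encoding, so a
    distance threshold over perceptual hashes is what lets one original
    still be matched against hundreds of "variants"."""
    h = frame_fingerprint(candidate_path)
    return any(h - blocked <= max_distance for blocked in blocklist)

# Usage sketch: hash frames sampled from the original video, then compare
# frames of each new upload against that blocklist.
# blocklist = [frame_fingerprint(p) for p in original_frame_paths]
# flagged = looks_like_known_frame("upload_frame_0001.png", blocklist)
```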

Cassie Roma

Cassie Roma is head of content marketing at The Warehouse Group.

Do you believe the latest changes made by Facebook are enough of an attempt to moderate harmful behaviour on the platform?

I do not believe that the latest changes made by Facebook are enough to moderate or even stem the flow of harmful behaviour by bad actors on their platform. Until Facebook’s business strategy moves from being a money-making machine to being a platform for good, we’re all a little hamstrung as users by the whims of advertisers. Most of these whims are based on marketing success metrics, metrics that need to be extricated from the conversation around safety at scale. The first step towards positive change will be putting humanity, not dollars, first when thinking about data, privacy, & the glut of content that can be targeted at communities that are vulnerable to fake news & harmful content/hate speech.

What further action would you like to see enacted by Mark Zuckerberg?

Wow, this is a BIG question. There’s so much that needs to be enacted. Truly investing in better ways to spot harmful behaviour & false information spreading widely would be a start. Cracking down on hate speech, hate groups, & the leaking of private consumer information to advertisers would be a trio of awesomeness for me when tackling some of their biggest issues.  

What remains the biggest safety/security challenge for Facebook?

Investing in safety & security, even if it means advertisers spending less on the platform. Facebook’s bigwigs need to take a hit to make a difference.

What will it take before technology is able to manage hate crime on social media?  

Rules & regulations – by governing boards & by governments. With strong guardrails & regulations in place, platforms like Facebook & other big players need to be regulated by independent entities. We cannot charge social media platforms with being our moral gatekeepers.

Bron Thomson

Bron Thomson is the founder and CEO of Springload, a Wellington-based digital agency.

Do you believe the latest changes made by Facebook are enough of an attempt to moderate harmful behaviour on the platform?

The funding for research into video analysis and the “one strike” ban policy are both to be applauded, but they won’t be a complete solution. Facebook have long had a community policy which forbids the kind of content we saw posted during the Christchurch attack; however, as we’ve seen, there was no automated process in place to detect and remove objectionable content.

Computer vision, a field of AI, has been available as a commodity technology for some time, with some major providers offering the ability to detect violent content. This clearly hadn’t been implemented at Facebook to detect and automatically block this sort of content.

Facebook have blamed a “lack of training data”, but we know that off-the-shelf services already exist. It would have been preferable for Facebook to have been actively blocking this kind of content already.
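As a hedged illustration of the kind of commodity service being referred to (this is not what Facebook runs internally, and the service choice and confidence threshold here are my own assumptions), Amazon Rekognition exposes an image moderation endpoint that returns labels such as graphic violence:

```python
# One example of commodity "violent content" detection: Amazon Rekognition's
# image moderation endpoint, called via boto3. Illustrative only; the 60%
# confidence threshold is an arbitrary choice for this sketch.
import boto3

rekognition = boto3.client("rekognition")

def moderation_labels(image_bytes: bytes, min_confidence: float = 60.0):
    """Return moderation label names (e.g. graphic violence) found in an image."""
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    return [label["Name"] for label in response["ModerationLabels"]]

# Usage sketch: sample frames from a live stream and queue the broadcast
# for review if violence-related labels come back above the threshold.
# with open("frame.jpg", "rb") as f:
#     print(moderation_labels(f.read()))
```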

What further action would you like to see enacted by Mark Zuckerberg?

Censorship is one part of the problem, but the bigger concern is the echo chamber effect that concentrates ideology within social media networks. This amplifies certain extremist views: when people share and like content that appeals to the fast-thinking brain, material that is not hate speech in itself can still normalise an ideology which leads to the development of hate. I would like to see Facebook do an in-depth study into their algorithms and ensure that the echo chamber effect is better understood and combatted. It’s also up to the users of Facebook to report this sort of content, and the responsibility of Facebook to act on those reports immediately.
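To show the echo chamber mechanism in the simplest possible terms (a toy sketch, not Facebook’s News Feed algorithm; every name here is invented), a ranker that scores posts purely by overlap with what a user has already engaged with will keep surfacing more of the same:

```python
# A toy feed ranker illustrating the echo-chamber feedback loop.
# Not Facebook's News Feed algorithm; it only shows how ranking purely by
# similarity to past engagement narrows what a user sees over time.
from collections import Counter

def engagement_profile(liked_posts):
    """Topic counts from posts the user has already liked or shared."""
    profile = Counter()
    for topics in liked_posts:
        profile.update(topics)
    return profile

def rank_feed(candidate_posts, profile):
    """Order candidate posts by overlap with the user's existing interests.

    The top-ranked posts are the ones most like what was already consumed,
    so each new engagement strengthens the profile and pushes unfamiliar or
    dissenting content further down the feed."""
    return sorted(candidate_posts,
                  key=lambda topics: sum(profile[t] for t in topics),
                  reverse=True)

# Usage sketch:
# profile = engagement_profile([{"politics", "immigration"}, {"politics"}])
# feed = rank_feed([{"gardening"}, {"politics", "immigration"}], profile)
# The politics post outranks the gardening post, and engaging with it
# makes the imbalance stronger the next time the feed is ranked.
```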

What remains the biggest safety/security challenge for Facebook?

Even before Christchurch, Facebook had admitted that they are good at censoring child exploitation and terrorism content, but not hate speech. Improving in this area will continue to be a challenge. A bigger challenge will be overcoming fake content, which continues to grow in sophistication. The potential for sensational fake content to skew election and referendum outcomes is a major concern, and one for which there is unlikely to be an easy solution.

Troy Rawhiti-Connell

Troy Rawhiti-Connell is a multi-hatted social and digital media communication strategist. Currently, Rawhiti-Connell is head of copy at The Warehouse Group.

Do you believe the latest changes made by Facebook are enough of an attempt to moderate harmful behaviour on the platform?

I don’t think Facebook is being ambitious enough in their efforts to confront harmful behaviour. The one-strike proposal starts off well, but when you add the “no context” rider to the policy, it sounds like a very specific remedy that doesn’t address a broader spread of online behaviours. It proposes to limit the path from communication to execution only at two points: first, posting terror content, and then creating terror content within the Live platform. All Facebook appears to propose is barring potential murderers and terrorists from access to one content channel within their platform, for a limited time. That’s not good enough.

Effective moderation would be instant and permanent removal from all of Facebook’s products. Is that a line the company is willing or able to walk? I’m not in their boardroom or sitting with their community managers or engineers, so I don’t know, but that course of action is probably about as effective a solution as Facebook might be able to provide. They’ve created the problem, so they must do their best to solve it.

What further action would you like to see enacted by Mark Zuckerberg?

Mark Zuckerberg and his team need to show leadership. Not the kind of leadership that’s written on a business card or LinkedIn profile, but the kind of leadership that delivers what Prime Minister Jacinda Ardern has gathered support for with the Christchurch Call: an effective and lasting solution to limiting or even eliminating hate speech and content on social platforms. Yes, it’s a big ask – but this is a big problem.

The barriers to success are huge, and possibly insurmountable, but nothing less than Facebook’s absolute best efforts to protect Facebook users, the Facebook staff who have to wade through the muck, and those who would be victims of mass murder and terror attacks should be acceptable to anyone.

If it’s easier to ask forgiveness than to seek permission, then I’d recommend an instant and permanent ban instead of a limited-time ban under that one-strike policy. Open it up to appeals rather than sending this weak-willed message that a little bit of time in the naughty corner is all it takes to stop someone from committing or inciting harm. Anyone who believes it will is fooling themselves.

I’d also recommend that Mark Zuckerberg and Facebook take the time to really know their role, both globally and in the United States. They don’t just operate a communications platform; they have a voice that is among the world’s most powerful. What I love about some of our larger New Zealand companies is that they’ve begun to develop points of view on issues that matter to New Zealanders, and it’s really brave to do that because the power is given to customers or users to decide whether their values align with those of the company.

Facebook sits on the sideline, only acting when prodded, and even then only acting to a bare minimum. What if Mark Zuckerberg actually stood up and said “I’ve had it with the number of shootings we’ve had in our schools, theatres, clubs, and places of worship. I’ve had it with the fake news and the astroturfing campaigns, and I’ve had it with the hatred that has been allowed to grow on a platform I helped to grow and that my name is forever linked with.” No, it’s not an easy thing to say, especially in the United States, but unless he feels otherwise then I’d be glad to hear him speak loud and make commitments.

Speech is free, but you could say something valuable, Mister Zuckerberg. Please take the opportunity that is afforded to you and to all Americans under your own Constitution. If, indeed, that is how you really feel. Is this the outcome you imagined when you were doing all-nighters on campus?

What remains the biggest safety/security challenge for Facebook?

The biggest safety and security challenges for Facebook are the people who build it and the people who use it, and that will continue for as long as their proposed safety measures are so limited in their scope and efficacy. Those who really want to do harm to others online find ways to step around protocols, moderation lists, and the like just as easily as a motivated reader can step around a paywall. And what about those who don’t understand they’re doing harm? How do you protect anyone from the uncle or aunt or workmate who is reposting something from their preferred news or talk source and saying they’re “just saying what people think” without understanding the impact? On the surface, it looks like a long comparison to draw, but it’s at the heart of the contention that “that isn’t us” isn’t true.

When we say things that harm, and believe things that harm, and don’t challenge and defeat things that harm, then I’m afraid that is us.

When will technology be able to manage hate crime on social media?

I’m ever the optimist, but I think the likely outcome is ongoing management, embarrassment, and then more management, rather than the complete elimination of hate crimes on social media. Even the biggest and most powerful companies are the sums of the same parts that all other organisations are made of. Issues of resource allocation, internal and stakeholder politics, and all the other things that sometimes see us flailing in the dark affect the big and mighty too. Facebook, Twitter, and the other global social media providers continually trip over themselves in the quest for better services because, in the end, they’re flailing in the dark too. All the data in the world won’t help if you’re unable or unwilling to put it to work.

Technology is no solution for harmful human behaviour. We’ve been hating and killing each other in the name of difference for thousands of years. Our platforms give us unprecedented opportunities to converse with and learn from each other, but also to hate and fear each other to the point of extremism.

In the end, the only way to effectively manage hate crime on social media might be to manage social media out of existence. That is unreasonable, and so it’s on all of us to manage ourselves. It starts with us. But it would be fantastic if the big fish could give us a hand.

