Brand Safety: How Facebook Uses AI To Ensure A Safe Advertising Environment

Social networks offer great opportunities for advertisers, but hate posts and rampant misinformation can tarnish the user experience. Facebook cleans up, supporting its human reviewers with an effective tool: artificial intelligence.

“Will our advertising appear in a safe environment? We don’t want to show up next to questionable content.” This is what advertisers ask when they are looking for a suitable medium for their offerings. Brand safety is a central quality requirement in marketing: Nobody wants to see their brand associated with harmful content.

And in the age of fragmented web content, the concern about a safe advertising environment is even more pressing: Images, texts, videos, and ads from a dozen sources can be imported and displayed side by side on a single website.

Viewed in this way, social networks such as Facebook and Instagram are the most diverse environment imaginable. Anyone with Internet access – a group that keeps growing worldwide – can contribute content and thereby frame an advertisement that appears there. The network is where people get information, exchange ideas, meet friends, share passions, and interact. It is these strengths that make Facebook and Instagram popular platforms for advertisers.

What Can Be Done Against Hate Postings And False Information?

But wherever people gather, their problems follow. In addition to valuable and friendly posts, hatred, discrimination, and misinformation appear. This harmful content not only casts a shadow over advertising; it also threatens the productive and helpful communication among users that Facebook wants to promote.

The podcast Das Facebook Briefing discusses the extent of hate speech online and where action is still needed on the part of the platform operator, the legislature, and the judiciary, among others with Bundestag member Renate Künast.

With their community standards, Facebook and Instagram have created a framework to protect users and keep harmful content off the platforms. In doing so, they follow the strategy of “remove, reduce, inform”: Harmful content that violates the user agreement is removed. Inappropriate content that stays within the boundaries of the agreement has its distribution reduced. And users are given contextual information so they can decide for themselves whether they want to click, read, or share something.
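The three tiers can be pictured as a simple triage function. The sketch below is purely illustrative: the `triage` helper, the thresholds, and the `violation_score` input are hypothetical stand-ins, not Facebook’s actual moderation pipeline.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Action(Enum):
    REMOVE = "remove"   # violates the community standards: take the content down
    REDUCE = "reduce"   # borderline: keep it, but rank it lower in feeds
    INFORM = "inform"   # acceptable: show it, optionally with added context


@dataclass
class ModerationResult:
    action: Action
    context_note: Optional[str] = None


# Hypothetical thresholds; a real system combines many signals plus human review.
REMOVE_THRESHOLD = 0.9
REDUCE_THRESHOLD = 0.6


def triage(violation_score: float, has_context_note: bool) -> ModerationResult:
    """Map a classifier's violation score to the remove/reduce/inform policy."""
    if violation_score >= REMOVE_THRESHOLD:
        return ModerationResult(Action.REMOVE)
    if violation_score >= REDUCE_THRESHOLD:
        return ModerationResult(Action.REDUCE)
    note = "See fact-check for more context" if has_context_note else None
    return ModerationResult(Action.INFORM, context_note=note)


print(triage(0.95, False))  # -> REMOVE
print(triage(0.70, False))  # -> REDUCE
print(triage(0.10, True))   # -> INFORM, with a context note attached
```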

Artificial Intelligence Helps With Content Review

The enormous number of posts that have to be checked every day cannot be managed with human labor alone. That is why Facebook uses tools based on artificial intelligence: complex AI networks are trained to support human reviewers.

The AI often recognizes insults and inflammatory messages quickly and clearly. Irony, and seemingly harmless images that are discriminatory depending on the cultural context, cause difficulties. But artificial intelligence keeps getting better. “The more subtext and context play a role, the greater the technical challenges,” says Ram Ramanathan, Director of the AI Product Management Team at Facebook. “This is exactly why the big leap in the further development of AI in recent years has been so significant: AI could learn what characterizes such content.”

AI Recognized Over 96 Percent Of Hate Messages

The current Community Standards Enforcement Report documents this learning progress. Of the 25.2 million pieces of hate speech content that Facebook removed in the first quarter of 2021, 96.8 percent were proactively detected by AI before any user reported them. During the same period, the prevalence of hate speech was also dramatically reduced.

On Facebook, only 6 out of every 10,000 content views contained hate speech. This metric is vital for advertisers who want to gauge the risk of an ad being displayed alongside harmful content.
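For readers who want to check the arithmetic behind those two figures, a few lines suffice; the numbers are simply the ones quoted above from the report.

```python
# Figures quoted above from the Q1 2021 Community Standards Enforcement Report.
removed_total = 25_200_000   # pieces of hate speech content removed
proactive_rate = 0.968       # share detected by AI before a user report

proactively_detected = removed_total * proactive_rate
print(f"Detected by AI first: {proactively_detected:,.0f} pieces")  # ~24,393,600

# Prevalence: 6 hateful views per 10,000 content views.
prevalence = 6 / 10_000
print(f"Prevalence: {prevalence:.2%} of views")  # 0.06%
```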

Facebook has also made great strides in using artificial intelligence to detect illegal content and remove it more quickly, before users report it. The new systems can already recognize hate speech in 45 languages across Facebook’s platforms. This makes the review process efficient and ensures that human reviewers do not have to expose themselves to the most harmful content.
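Facebook has not published its production classifiers, but the general idea of a single multilingual model scoring text in many languages can be sketched with the Hugging Face transformers library. The model checkpoint below is a hypothetical placeholder, not Facebook’s system.

```python
# pip install transformers torch
from transformers import pipeline

# Placeholder checkpoint: any multilingual text classifier fine-tuned for
# hate-speech detection could be plugged in here; this is NOT Facebook's model.
classifier = pipeline(
    "text-classification",
    model="your-org/multilingual-hate-speech-model",  # hypothetical name
)

# One shared model can score posts in many languages without per-language code.
posts = [
    "What a wonderful day!",          # English
    "Quel beau paysage.",             # French
    "Ein Beispielsatz auf Deutsch.",  # German
]
for post in posts:
    result = classifier(post)[0]
    print(f"{result['label']:>12}  {result['score']:.2f}  {post}")
```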

Incorrect Information Is Flagged

Once the AI models have identified potential misinformation, it is shown to independent fact-checkers in their queue of items to review and rate. As soon as a piece of content is classified as false by the fact-checkers, Facebook reduces its distribution, marks it with a warning, and informs people by giving them more context. The checked content is still accessible, but it is displayed further down in the news feed and behind a warning label. Duplicates and similar versions of the article are also detected by powerful models and flagged accordingly.
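The article does not detail these matching models. As a rough illustration, a near-duplicate check can be sketched with a toy lexical similarity score; real systems rely on learned embeddings, and the threshold and claim list here are invented for the example.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Crude lexical similarity: overlap of lowercase word sets."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


# Hypothetical store of claims already rated false by fact-checkers.
debunked_claims = [
    "miracle cure heals the virus in one day",
]

SIMILARITY_THRESHOLD = 0.5  # illustrative value, not a production setting


def inherits_warning(new_post: str) -> bool:
    """A near-duplicate of a debunked claim gets the same warning label."""
    return any(
        jaccard_similarity(new_post, claim) >= SIMILARITY_THRESHOLD
        for claim in debunked_claims
    )


print(inherits_warning("this miracle cure heals the virus in one day"))  # True
print(inherits_warning("local bakery opens a second shop"))              # False
```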

“This method has proven to be extremely effective,” says Ramanathan. “Our internal study has shown that content flagged with warnings about false reports was skipped and not viewed in 95 percent of cases.” Since the beginning of the pandemic, an enormous amount of misinformation on medical topics has been shared.

Facebook and Instagram have removed more than 16 million pieces of content worldwide because, according to health experts, they demonstrably contained false information about Covid-19 or vaccinations.

The video series “Let Me Explain,” which breaks down complex topics on the social network, offers more insight into Facebook’s approach to countering misinformation about the Covid-19 vaccination.

A Tough Nut For AI: Deepfakes And Hate Memes

The latest challenges for AI include deepfakes and hate memes. Deepfakes are artificially manipulated videos and images that cannot be recognized as fakes with the naked eye. Hate memes combine images and text, each of which seems harmless on its own; together, however, they produce a hurtful or discriminatory message.
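This is why hate memes are hard: a detector has to evaluate both modalities together, not one at a time. A bare-bones late-fusion sketch in PyTorch shows the shape of the idea; the architecture and dimensions are illustrative choices, not the models from Facebook’s challenges.

```python
import torch
import torch.nn as nn


class MemeClassifier(nn.Module):
    """Toy late-fusion model: separate image and text encoders, joined head.

    The point is that the classifier sees BOTH modalities at once, so a
    harmless image plus a harmless caption can still be scored as hateful
    in combination. All dimensions are arbitrary illustrative choices.
    """

    def __init__(self, image_dim: int = 512, text_dim: int = 300, hidden: int = 128):
        super().__init__()
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit: how hateful is this meme?
        )

    def forward(self, image_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.image_encoder(image_feats), self.text_encoder(text_feats)], dim=-1
        )
        return self.head(fused)


# In practice the features would come from pretrained vision/language models;
# random tensors stand in for them here.
model = MemeClassifier()
image_feats = torch.randn(4, 512)  # batch of 4 image feature vectors
text_feats = torch.randn(4, 300)   # matching caption feature vectors
logits = model(image_feats, text_feats)
print(torch.sigmoid(logits).squeeze(-1))  # hatefulness scores in [0, 1]
```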

Both media types require extensive and differentiated AI training in their own way. In its Hateful Memes Challenge and Deepfake Challenge, Facebook lets AI research teams compete against each other to identify and block the contributions of the other teams. The strategies developed there are also intended to help combat harmful content in the wild.

Communicate Securely Thanks To AI

Ramanathan does not see free speech restricted by the work of the moderators and the AI. On the contrary. “We take the position that people on our platforms can openly express their opinions,” says the researcher. “However, this does not apply if someone injures others or causes them harm.”

Companies, too, should be able to participate without being associated with harmful or false content. The ultimate goal is therefore to provide a communication platform that is as safe as possible. “AI isn’t the only solution to problematic content,” says Ramanathan. “But it allows us to react faster and more effectively than with human labor alone.”

How post review and fact-checking work on Facebook is also the subject of an episode of the podcast Das Facebook Briefing, in which Max Biederbeck, Head of the German Fact-Checking Team at AFP, and Guido Bülow, Head of News Partnerships Central Europe at Facebook, discuss the basics of the collaboration. It’s worth listening to.

Thanks to AI support, Facebook and Instagram are platforms on which users and advertisers can communicate helpfully and securely. At Facebook AI Research, you can learn how open-source tools and neural networks will help create places where people like to spend time in the future.

