Our work to fight online predators
December 1, 2023

Preventing child exploitation is one of the most important challenges facing our industry today. Online predators are determined criminals who use multiple apps and websites to target young people. They also test each platform’s defenses and learn to adapt quickly. That’s why we work hard to stay ahead of them. In addition to developing technology that roots out predators, we employ specialists dedicated to online child safety and share information with our industry peers and law enforcement agencies.

We take recent allegations about the effectiveness of our work very seriously, and we created a task force to review existing policies; examine the technology and enforcement systems we have in place; and make changes that strengthen our protections for young people, ban predators and remove the networks they use to connect to each other. The task force took immediate action to strengthen our protections, and our child safety teams continue to work on additional measures. Today, we’re sharing an overview of the task force’s efforts to date.

An overview of Meta’s child safety task force

Meta’s child safety task force focused on three areas: recommendations and discovery, limiting potential predators and removing their networks, and strengthening our enforcement.

Recommendations and discovery

We make recommendations in places like Reels and Instagram Explore to help people discover new things in our apps, and people use features like Search and hashtags to find things they might be interested in. Because we make suggestions to people in these places, we have safeguards in place to ensure we don’t suggest anything that might be inappropriate or that might violate our rules. We have sophisticated systems that proactively find, remove or refrain from recommending content, groups and pages, among other things, that violate our rules or that may be inappropriate to recommend to others. Our child safety task force improved these systems by combining them and expanding their capabilities. This work is ongoing, and we expect it to be fully implemented in the coming weeks.
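
As a simplified illustration of how a recommendation-time safeguard like this can work, here is a minimal sketch in Python. The term list, score field and threshold are hypothetical stand-ins for illustration, not Meta’s actual systems:

```python
# Minimal sketch of a recommendation-time safety filter.
# The term list, candidate fields and threshold are hypothetical.

BLOCKED_TERMS = {"term_a", "term_b"}   # stand-in for a central term list
NON_RECOMMENDABLE_SCORE = 0.7          # hypothetical classifier cutoff

def is_recommendable(candidate: dict) -> bool:
    """Return True only if a candidate post/group/page passes safety checks."""
    text = candidate["caption"].lower()
    # Drop candidates whose text matches any blocked term.
    if any(term in text for term in BLOCKED_TERMS):
        return False
    # Drop candidates a classifier flags as potentially inappropriate to
    # recommend, even if they don't outright violate the rules.
    if candidate["safety_score"] >= NON_RECOMMENDABLE_SCORE:
        return False
    return True

def filter_recommendations(candidates: list[dict]) -> list[dict]:
    """Keep only candidates that are safe to suggest to people."""
    return [c for c in candidates if is_recommendable(c)]
```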

This is how we did it:

  • We expanded the existing list of child safety related terms, phrases and emojis for our systems to find. We have many sources for these terms, including nonprofit organizations and online safety experts, our specialized child safety teams that investigate predatory networks to understand the language they use, and our own technology that finds misspellings or spelling variations of these terms.
  • We also started using new techniques to find new terms. For example, we use machine learning technology to find connections between terms we already know may be harmful or violate our rules and other terms used at the same time. These can be terms searched for in the same session as offending terms, or other hashtags used in a caption containing an offending hashtag (a simplified sketch of this co-occurrence approach follows this list).
  • We combined our systems so that when new terms are added to our central list, they will be acted upon on Facebook and Instagram at the same time. For example, we may submit Instagram accounts, Facebook groups, pages and profiles to content reviewers, limit these terms from returning results in Facebook and Instagram Search, and block hashtags that include these terms on Facebook and Instagram.
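
To make the co-occurrence technique described above concrete, here is a hedged sketch of how candidate terms might be mined from search sessions or captions. The seed list, data shapes and review threshold are assumptions for illustration only, not Meta’s actual pipeline:

```python
# Illustrative co-occurrence mining for candidate harmful terms.
# KNOWN_BAD, the session format and min_count are hypothetical.
from collections import Counter

KNOWN_BAD = {"bad_hashtag_1", "bad_hashtag_2"}  # hypothetical seed list

def candidate_terms(sessions: list[set[str]], min_count: int = 25) -> list[str]:
    """Count terms that co-occur with known-bad terms in the same search
    session or caption, and surface frequent ones for human review."""
    cooccurrence = Counter()
    for terms in sessions:            # each session/caption = a set of terms
        if terms & KNOWN_BAD:         # session touches a known-bad term
            for term in terms - KNOWN_BAD:
                cooccurrence[term] += 1
    # Frequent co-occurring terms become candidates for review,
    # not automatic enforcement.
    return [t for t, n in cooccurrence.most_common() if n >= min_count]
```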

Limit potential predators and remove their networks

We have developed technology that identifies potentially suspect adults, and we review more than 60 different signals to find these adults, such as if a teen blocks or reports an adult, or if someone repeatedly searches for terms that might indicate suspicious behavior. We already use this technology to limit potentially suspicious adults from finding, following, or interacting with teens, and we’ve expanded it to prevent those adults from finding, following, or interacting with each other.
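
As an illustration of how signal-based limits like these can be implemented, here is a minimal sketch. The post only says that more than 60 signals are reviewed; the specific signal names, weights and threshold below are invented for the example:

```python
# Hedged sketch of signal-based scoring for potentially suspicious accounts.
# Signal names, weights and the threshold are invented for illustration.

SIGNAL_WEIGHTS = {
    "blocked_by_teen": 3.0,
    "reported_by_teen": 4.0,
    "suspicious_search_terms": 2.0,
}
SUSPICION_THRESHOLD = 5.0  # hypothetical cutoff

def suspicion_score(signals: dict[str, int]) -> float:
    """Combine observed signal counts into a single weighted score."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * count
               for name, count in signals.items())

def restrict_if_suspicious(account: dict) -> None:
    """Limit a potentially suspicious adult account from finding, following,
    or interacting with teens (and, per the expansion, with each other)."""
    if suspicion_score(account["signals"]) >= SUSPICION_THRESHOLD:
        account["can_follow_teens"] = False
        account["recommendable"] = False
```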

This is how we did it:

  • On Instagram, potentially suspicious adults will be blocked from following each other, won’t be recommended to each other in places like Explore and Reels, and won’t see each other’s comments on public posts, among other things.
  • On Facebook, we use this technology to better find and address certain groups, pages and profiles. For example, Facebook groups in which a certain percentage of members exhibit potentially suspicious behavior will not be suggested to others in places like Groups You Should Join. Groups whose membership overlaps with groups that were removed for violating our child safety policies will not appear in Search (see the sketch after this list). As a result of this work, since July 1, 2023, we have removed more than 190,000 groups from Search.
  • We also employ specialists with backgrounds in law enforcement and online child safety to find and remove predatory networks. These specialists monitor evolving behaviors exhibited by these networks – such as new coded language – not only to remove them, but to inform the technology we use to proactively find them. Between 2020 and 2023, our teams disrupted 32 abusive networks and removed more than 160,000 accounts associated with those networks.
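
The membership-overlap check mentioned in the list above might look something like the following sketch. The overlap threshold and data structures are assumptions for illustration, not Meta’s actual protocol:

```python
# Illustrative check for membership overlap with groups already removed
# for child safety violations. The threshold is hypothetical.

REMOVED_GROUP_MEMBERS: set[str] = set()  # members of previously removed groups
OVERLAP_THRESHOLD = 0.5                   # hypothetical fraction

def hide_from_search(group_members: set[str]) -> bool:
    """Return True if a group's membership overlaps heavily with removed
    groups and the group should therefore not appear in Search."""
    if not group_members:
        return False
    overlap = len(group_members & REMOVED_GROUP_MEMBERS) / len(group_members)
    return overlap >= OVERLAP_THRESHOLD
```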

Strengthening our enforcement

The task force also made a number of updates to strengthen our reporting and enforcement systems, and found new ways to root out and ban potentially violating accounts. In August 2023 alone, we disabled more than 500,000 accounts for violating our child sexual exploitation policies.

  • We announced our participation in Lantern, a new program from the Tech Coalition which allows tech companies to share a variety of signals about accounts and behaviors that violate their child safety policies. Lantern participants may use this information to conduct investigations on their own platforms and take action.
  • We audited our systems and fixed technical issues we found, including a software issue that was unexpectedly closing user reports.
  • We improved the systems we use to prioritize reports for content reviewers. For example, we use technology designed to find child-exploitative imagery to prioritize reports that may contain it (see the sketch after this list).
  • We’ve introduced additional ways to proactively find and remove accounts that may violate our child safety policies. For example, we send Instagram accounts that exhibit potentially suspicious behavior to our content reviewers, and we automatically disable those accounts if they display enough of the 60+ signals we monitor. More than 20,000 accounts were identified and automatically removed in August 2023 as a result of this method.
  • We provided new guidance and tools to help content reviewers understand the latest behaviors and terms used by predators, in many different languages. For example, content reviewers will now see information about coded terms used in posts they review to understand the subtext of those terms and how they are used by predators. This will help content reviewers better recognize this behavior and take action.
  • We’ve made improvements to better find and remove Instagram accounts and Facebook profiles that may be linked to those who violate our child safety policies — and to prevent them from creating new accounts from their device. Since the beginning of August, we have automatically blocked more than 250,000 devices on Instagram for violating our child safety policies, and device blocking improvements have led to more than 10,000 additional enforcements on Instagram and Facebook per day.
  • We improved our proactive detection of potentially suspicious Facebook groups and updated our protocols and review tools so our reviewers can remove more groups that violate our policies. Since July 1, 2023, we’ve reviewed and removed 16,000 groups for violating our child safety policies.
  • After launching a new automated enforcement effort in September, we saw five times as many automatic deletions of Instagram Lives that contained adult nudity and sexual activity.
  • We removed over 4 million Reels per month, across Facebook and Instagram globally, for violating our policies.
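
As a final illustration, the report-prioritization step described in the list above could be sketched as a simple priority queue keyed on a classifier score. The classifier and the score field are assumptions for illustration:

```python
# Sketch of prioritizing user reports by a child-safety image classifier
# score, so the highest-risk reports reach reviewers first.
# The "csae_image_score" field is a hypothetical classifier output.
import heapq

def build_review_queue(reports: list[dict]) -> list[tuple]:
    """Build a queue ordered so the highest child-safety risk comes first."""
    queue: list[tuple] = []
    for report in reports:
        # Higher classifier score = higher risk; negate for a min-heap.
        heapq.heappush(queue, (-report["csae_image_score"], report["id"], report))
    return queue

def next_report(queue: list[tuple]) -> dict:
    """Pop the highest-priority report for review."""
    _, _, report = heapq.heappop(queue)
    return report
```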

