It’s no secret that Meta’s Facebook and Instagram have been repeatedly linked to harm among teens and children. Psychological harm aside, the platforms can also make young users more susceptible to online predators – a concern that Meta is now trying to address. To make its platforms safer, Meta has begun rolling out features that limit adult interaction with children.
WHAT ARE META’S NEW SAFETY FEATURES?
To keep children and teens from being approached by strangers, Meta is adding extra filters to its ‘People You May Know’ suggestions. Any account reported as suspicious will be removed from these suggestions, so children and teens will no longer see it. An account that has been reported as suspicious will also be blocked from sending direct messages to children or teens.
Additionally, when the platform detects a minor interacting with an adult, it will display a pop-up asking whether they actually know the person they are speaking to. This should help young users identify the accounts they need to keep an eye on.
Meta is also partnering with the National Center for Missing and Exploited Children (NCMEC) to better protect its young users. This is a welcome change, as the company has been criticized multiple times in the past for failing to protect children. Of course, while this is good news for Indian audiences, countries with TikTok are still likely to see cases of predators contacting young children.