
Meta has announced new measures designed to increase the safety of teenage Instagram users. Signaling a desire to balance safety with privacy and avoid being overly invasive, Instagram says that it will start to notify parents “if their teen repeatedly tries to search for terms related to suicide or self-harm within a short period of time”.
The platform says that by providing this information, as well as advice about what to do next, it is empowering parents to take an active role in their teenagers’ safety.
Instagram says: “In the coming weeks, Instagram will start notifying parents using supervision if their teen repeatedly tries to search for terms related to suicide or self-harm within a short period of time. This is the latest protection for Teen Accounts and Instagram’s parental supervision features”.
The announcement continues:
We understand how sensitive these issues are, and how distressing it could be for a parent to receive an alert like this. The vast majority of teens do not try to search for suicide and self-harm content on Instagram, and when they do, our policy is to block these searches, instead directing them to resources and helplines that can offer support. These alerts are designed to make sure parents are aware if their teen is repeatedly trying to search for this content, and to give them the resources they need to support their teen.
Starting next week, Instagram will notify both parents and teens that such searches will result in further alerts.
The alerts will be sent to parents via email, text, or WhatsApp, depending on the contact information available, as well as through an in-app notification. Tapping the notification opens a full-screen message explaining that their teen has repeatedly tried to search Instagram for terms associated with suicide or self-harm within a short period of time. Parents will also have the option to view expert resources designed to help them approach potentially sensitive conversations with their teen.
The safety measures build on existing protections and were developed in consultation with relevant experts.
There are also plans to bring similar alerts to Instagram’s AI features, as the company explains:
We work to block searches for terms clearly associated with suicide and self-harm, including terms that violate our suicide and self-harm policies. This means we don’t show any results and instead direct people to resources and local organizations that can help. We also direct people to resources and helplines when their searches aren’t clearly related to suicide and self-harm, but mental health more broadly. We’ll continue to alert the emergency services when we become aware of anyone at imminent risk of physical harm — actions that have saved lives.
We’re launching these alerts on Instagram search first, but we know teens are increasingly turning to AI for support. While our AI is already trained to respond safely to teens and provide resources on these topics as appropriate, we’re now building similar parental alerts for certain AI experiences. These will notify parents if a teen attempts to engage in certain types of conversations related to suicide or self-harm with our AI. This is important work and we’ll have more to share in the coming months.
