Parents will be able to stop their teens from interacting with Instagram’s AI characters, among other new safety features for teenagers, Instagram’s parent company Meta announced Friday. The changes are part of a broader effort by Meta and OpenAI to add safety features in response to growing concerns about AI’s potential effects on teenagers’ mental health.
The social media giant said in a blog post that parents will be able either to limit access to specific characters or to disable their teens’ ability to have private conversations with AI characters entirely. The app will also notify parents about the topics their teens are discussing with AI characters. Meta said users will begin to see the controls early next year.
The change comes as Meta and the tech industry more broadly face criticism from parents and lawmakers who say online platforms aren’t doing enough to protect children.
Questions have also been raised about people relying on AI for companionship and emotional support. A wave of reports this year has claimed that some people have suffered emotional distress and distanced themselves from family members after forming intimate bonds with chatbots such as ChatGPT.
Character.AI, another well-known app for chatting with AI characters, has been the target of numerous lawsuits over its alleged role in teen suicide and self-harm. In August, a lawsuit was filed against OpenAI over ChatGPT’s alleged role in the suicide of 16-year-old Adam Raine. And a Wall Street Journal investigation published in April found that Meta’s AI chatbot and other AI chatbots on its platforms would engage in sexual conversations with accounts identified as belonging to minors.
According to Meta, its artificial intelligence (AI) characters “are not designed to engage” in discussions with teenagers about “self-harm, suicide, or disordered eating” or topics that “encourage, promote, or enable” such behaviors. The blog post adds that teens can only chat with a select set of AI characters tied to subjects like sports and education.
Instagram has recently made other changes to its parental settings aimed at better protecting teenagers. Earlier this week, it updated its “Teen Accounts” to align with PG-13 ratings, meaning it will not show or recommend posts that contain strong language or could encourage “harmful behaviors.” And in late September, OpenAI announced parental controls for ChatGPT that limit “graphic content, viral challenges, sexual, romantic or violent roleplay, and extreme beauty ideals.”