Meta Platforms CEO Mark Zuckerberg departs the U.S. District Court in Washington, DC, on April 15, 2025, after attending a Federal Trade Commission trial that could force the company to unwind its acquisitions of messaging platform WhatsApp and image-sharing app Instagram.
Nathan Howard | Reuters
Meta on Friday said it is making temporary changes to its AI chatbot policies related to teenagers, as lawmakers voice concerns about safety and inappropriate conversations.
The social media giant is now training its AI chatbots so they do not generate responses for teenagers about subjects like self-harm, suicide or disordered eating, and so they avoid potentially inappropriate romantic conversations.
The company said its AI chatbots will instead point teenagers toward expert resources when appropriate.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” the company said in a statement.
Additionally, teenage users of Meta apps such as Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes.
It is unclear how long these temporary changes will last, but the company said they will begin rolling out over the next few weeks across its apps in English-speaking countries. The “interim changes” are part of the company’s longer-term measures on teen safety.
TechCrunch first reported the change.
Last week, Sen. Josh Hawley, R-Mo., said he was launching an investigation into Meta following a Reuters report that the company permitted its AI chatbots to engage in “romantic” and “sensual” conversations with teens and children.
The Reuters report described an internal Meta document detailing acceptable AI chatbot behaviors that staff and contract workers should consider when developing and training the software.
In one example, the document cited by Reuters said a chatbot would be allowed to have a romantic conversation with an eight-year-old child and could tell the child, “Every inch of you is a masterpiece – a treasure I cherish deeply.”
A Meta spokesperson told Reuters at the time that “the examples and notes in question were erroneous and inconsistent with our policies, and have been removed.”
More recently, nonprofit advocacy group Common Sense Media released a risk assessment of Meta AI on Thursday, saying the tool should not be used by anyone under the age of 18 because “the system actively participates in planning dangerous activities, while dismissing legitimate requests for support.”
“This is not a system that needs improvement. It’s a system that needs to be rebuilt from the ground up with safety as the No. 1 priority, not an afterthought,” James Steyer, CEO of Common Sense Media, said in a statement. “No teen should use Meta AI until its fundamental safety failures are addressed.”
A separate Reuters report, published Friday, found “dozens” of flirty AI chatbots based on celebrities such as Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez on Facebook, Instagram and WhatsApp.
When prompted, the AI chatbots produced images of the celebrities’ likenesses posing in bathtubs or dressed in lingerie with their legs spread, according to the report.
A Meta spokesperson told CNBC in a statement that “AI-generated imagery of public figures in compromising poses violates our rules.”
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” the spokesperson said. “Meta’s AI Studio rules prohibit the direct impersonation of public figures.”