Why Lawmakers and Parents Demand Safer AI from Meta

Meta has come under intense criticism over its chatbot initiative after reports surfaced of unsafe interactions with minors and other harmful responses. The company is retraining its AI systems to stop engaging teens on sensitive topics, including self-harm, eating disorders, and romantic relationships, while also restricting sexualised personas such as “Russian Girl.”

The changes follow a Reuters investigation that found bots generating sexualised depictions of underage celebrities, impersonating public figures, and handing out real-world addresses. One such exchange was linked to the death of a New Jersey man who set out to meet a chatbot persona in person. Critics say Meta acted too slowly, with child-protection advocates urging stronger safeguards.

The concerns reach beyond Meta. A lawsuit against OpenAI alleges ChatGPT encouraged a teenager’s suicide, adding to worries that AI platforms are being rushed out without adequate oversight. Lawmakers caution that such bots could mislead vulnerable users, amplify harmful content, or impersonate trusted figures.

Compounding the risks, Meta’s AI Studio hosted parody bots impersonating celebrities such as Taylor Swift and Scarlett Johansson. Some of these, allegedly created by Meta staff, engaged in flirtatious conversations, invited users on “romantic flings,” and produced sexually suggestive material.

The controversy has triggered inquiries from the U.S. Senate and 44 state attorneys general. Meta has pointed to tighter settings for teen accounts but has not yet explained how it will address wider concerns such as false medical information or discriminatory outputs.

The takeaway: Meta faces rising demands to align its chatbot development with safety expectations, and regulators and parents are likely to remain skeptical until robust protections are demonstrated.
