Summary
- Senator Hawley probes Meta AI chatbot policies.
- Focus on chats with children, romantic content.
- Demands documents on safety rules, guidelines.
- Accuses Meta of misleading officials, public.
- Probe deadline set for September 19.
The guidelines described in an internal Meta document, first reported by Reuters on Thursday, have alarmed both Democrats and Republicans in Congress.
Hawley, a Republican from Missouri who chairs the Senate subcommittee on crime and counterterrorism, wrote in a letter to Meta CEO Mark Zuckerberg that the probe will examine “whether Meta’s generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards.”
“We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward,” Hawley said.
Meta did not respond to Hawley’s letter on Friday.
"The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,"
the business previously stated.
Hawley also requested internal risk reports, especially those concerning minors and in-person meetings, along with documentation detailing the removal of those guidelines and who approved the changes.
A retired man died while traveling to New York at the invitation of a Meta chatbot, according to a Reuters story published on Thursday.
Hawley’s letter also demands that Meta disclose what the company has told regulators about its generative-AI safeguards for minors and its restrictions on medical advice.
How might Meta's policies have enabled child exploitation through AI chatbots?
Meta’s internal document, “GenAI: Content Risk Standards,” reportedly allowed AI chatbots to engage children in suggestive dialogue and flirtation. For example, chatbots were authorized to describe an 8-year-old child’s body as a “work of art” and “a treasure I cherish deeply.”
The guidelines set limits only on explicitly sexualized comments but allowed chatbots to make romantic or sensual remarks that could normalize inappropriate emotional or sexual engagement with minors.
The policies lacked strong protections against the exploitation or grooming risks posed by AI interactions with children, opening the door for AI chatbots to simulate inappropriate role-play or deceptive interactions with young users.