Parents to testify on AI chatbot dangers in U.S. Senate

In US Senate News by Newsroom September 17, 2025

Credit: Reuters

Summary

  • FTC investigating major AI firms over chatbot impacts on children.
  • Hearing focuses on AI chatbot safety, particularly risks to minors.
  • Matthew Raine sued OpenAI after his son's suicide, which he says ChatGPT influenced.

Among those who will testify is Matthew Raine, who sued OpenAI after his son Adam died by suicide in California, having obtained detailed instructions on how to harm himself from ChatGPT.

"We've come because we're convinced that Adam's death was avoidable, and because we believe thousands of other teens who are using OpenAI could be in similar danger right now,"

Raine said in written testimony.

OpenAI says it plans to strengthen ChatGPT's safety measures, which can degrade during prolonged conversations. On Tuesday, the company announced that it would begin estimating users' ages in order to direct minors to a more secure version of the chatbot.

The hearing will be chaired by Missouri Republican Senator Josh Hawley. Reuters reported last month that Meta Platforms' internal procedures allowed its chatbots to "engage a child in conversations that are romantic or sensual." Hawley opened a probe into the firm.

Meta was invited to testify at the hearing and declined, Hawley's office said. The company has said the reported examples were erroneous and had been removed.

The session will also feature testimony from Megan Garcia, who sued Character.AI over interactions that she claims led to her son Sewell's suicide, and from a Texas woman who sued the company after her son became ill. The company has asked that the lawsuits be dismissed.

Garcia will urge Congress to mandate age verification, safety testing, and crisis procedures, and to forbid businesses from permitting chatbots to have romantic or sexual chats with minors.

On Monday, Character.AI was sued again, this time in Colorado, by the parents of a thirteen-year-old who died by suicide in 2023.

How can U.S. regulators hold chatbot makers accountable?

Federal Unfair or Deceptive Acts and Practices (UDAP) statutes can reach companies that develop or deploy AI behaving in a deceptive, harmful, or misleading way. As federal regulators, the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB) could pursue any company operating a chatbot or AI feature under these federal authorities.

For example, if a chatbot or any AI provides a consumer with false, harmful, or misleading information that causes the consumer to suffer financial harm or discrimination, that company could be subject to enforcement.

Regulatory actions could also require chatbot makers and developers to notify users when they are interacting with AI rather than a human.