SAFE BOTs Act
Summary
The SAFE BOTs Act would establish federal requirements for artificial intelligence chatbots accessed by minors. If enacted, the bill would require chatbot providers to clearly disclose to children that they are interacting with an AI system rather than a real person. The bill would also prohibit chatbots from falsely claiming to be licensed professionals such as doctors or therapists.
The legislation would mandate several safety protections for young users. Chatbot providers would have to supply information about suicide and crisis intervention hotlines whenever minors raise topics related to self-harm or suicide. The bill would also require chatbots to prompt users to take a break after three continuous hours of interaction. Additionally, providers would need to establish reasonable policies to prevent minors from accessing sexual content, gambling content, and information about illegal drugs, tobacco, or alcohol through chatbot platforms.
Enforcement would fall to the Federal Trade Commission, which would treat violations as unfair or deceptive practices. State attorneys general would also have authority to bring legal actions on behalf of their residents. The bill would further direct the Department of Health and Human Services to conduct a four-year study on how chatbots affect the mental health of minors. If enacted, these requirements would take effect one year after the bill becomes law.