
ChatGPT to introduce parental controls for safer use

In United States News | by Newsroom | September 4, 2025


Artificial intelligence (AI) technologies such as ChatGPT have become ingrained in everyday habits, shaping how people access information, learn, and socialize. ChatGPT, OpenAI’s conversational agent, generates human-like text and offers assistance with education, mental health, and communication. Growing numbers of children and adolescents use ChatGPT for homework help, emotional support, and even social connection, a sign of the wide-reaching influence AI technology already has on youth development.

This widespread influence brings both opportunities for empowerment and risks of harm. Children may turn to ChatGPT for information or emotional reassurance because it is always accessible and on demand, and many come to treat it more as a friend or confidant than as a computer. Keeping that relationship safe requires rethinking how to ensure positive, supportive, and age-appropriate experiences for children and adolescents who often use AI unsupervised, and sometimes without their caretakers’ knowledge.

Advantages of AI and ChatGPT in youth education and support

The benefits of AI-powered tools like ChatGPT for young users are significant. ChatGPT’s ability to provide instant explanations, step-by-step instruction in subjects like mathematics or science, language learning assistance, and creative writing support democratizes educational resources. This accessibility is particularly valuable for students who may lack personalized tutoring or live in areas underserved by educational infrastructure. 

Furthermore, ChatGPT can offer a non-judgmental listening ear when youth face personal difficulties, helping them articulate emotions or find mental health resources. OpenAI has acknowledged that for many teens, interactions with ChatGPT become an essential outlet, especially when human support is unavailable. AI can also help parents and educators monitor learning progress and identify areas where youth require additional help. These advantages highlight AI’s potential to complement traditional educational and psychological support systems, fostering self-directed learning and self-awareness.

Disadvantages and risks of AI use among minors

Even with these advantages, it would be wrong to ignore the serious risks of children using ChatGPT and similar AI. Consider the potential for inappropriate or harmful content. While OpenAI continually works to filter and moderate responses, no system can guarantee a total fail-safe. ChatGPT has at times produced poor or inappropriate advice on mental health issues, including concerning answers to questions about self-harm. These incidents raise alarm about who can safely use an AI like ChatGPT without a trained clinician to mediate the experience and act as a safety net.

The ongoing lawsuit in California over the 2025 suicide of a teenager who allegedly used ChatGPT to assist in the act has very publicly highlighted these risks. Concerns also surround privacy, data security, excessive screen time, and the possibility of children developing emotional dependencies on an AI, which can crowd out healthy human relationships and interfere with normal development. Another risk is inherent to how these systems work: the AI learns from enormous amounts of internet data, which can contain biases and incorrect information that are sometimes unintentionally replicated in its responses, confusing or misleading young or vulnerable users. Unsupervised access could likewise expose children to large amounts of misinformation, reinforcing incorrect understandings or shaping unhealthy coping patterns.

Introduction and features of ChatGPT’s parental controls

In September 2025, OpenAI announced a full suite of parental controls for ChatGPT in response to concerns from families, mental health advocates, and regulators alike. The controls give parents or guardians the ability to monitor and manage how their children aged 13 and over use the AI, providing a safer and more controlled experience for this group. The move follows incidents that exposed the need for protections targeted at young users, to manage the risks of emotional harm and exposure to inappropriate content.

One foundational aspect of these parental controls is account linking: parents connect their ChatGPT accounts to their children’s accounts, converting the child’s account into a supervised one. Through this link, parents can oversee interactions and adjust settings according to their child’s age. OpenAI has also built default content guidelines into ChatGPT for underage users, what it calls “age-appropriate model behavior rules,” that govern how the AI operates. These rules aim to ensure that younger users receive answers suited to their age and maturity levels, reducing the chance that vulnerable users are exposed to harmful, distressing, or inappropriate content.
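
To make the linking-and-defaults flow concrete, here is a minimal, purely illustrative Python sketch. OpenAI has not published a public API for these controls, so every name here (SupervisedLink, content_policy_for_age, the policy keys and thresholds) is a hypothetical stand-in, not OpenAI’s implementation.

```python
# Hypothetical sketch of account linking plus age-based content defaults.
# All names and thresholds are invented for illustration only.
from dataclasses import dataclass


@dataclass
class SupervisedLink:
    """Link between a guardian account and a teen account (hypothetical)."""
    guardian_id: str
    teen_id: str
    teen_age: int


def content_policy_for_age(age: int) -> dict:
    """Return stricter default content rules for younger users."""
    if age < 13:
        # The controls described in the article apply to users 13 and over.
        raise ValueError("Accounts require a minimum age of 13.")
    if age < 16:
        return {"graphic_content": "block", "romantic_roleplay": "block"}
    return {"graphic_content": "block", "romantic_roleplay": "limit"}


# Usage: a guardian links a 14-year-old's account and gets the defaults.
link = SupervisedLink(guardian_id="parent-01", teen_id="teen-07", teen_age=14)
print(content_policy_for_age(link.teen_age))
```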

Beyond content moderation, parents can restrict or disable functionalities that may encourage excessive dependency or raise privacy concerns. Notably, the parental controls allow deactivation of ChatGPT’s memory feature, which retains previous conversations to tailor personalized responses and enhance the user experience. While memory can improve continuity between interactions, its retention of sensitive conversations with minors raises privacy and psychological concerns. Parents can also turn off the saving of chat history, limiting data accumulation and reducing the risk that prolonged or harmful patterns form through repeated exposure or reinforcement.
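
The retention toggles can be pictured the same way. The sketch below assumes a hypothetical SupervisionSettings object with invented field names; the real control surface is a settings page inside ChatGPT, not a documented API.

```python
# Hypothetical supervision settings; field names are invented for illustration.
from dataclasses import dataclass


@dataclass
class SupervisionSettings:
    memory_enabled: bool = True      # personalization drawn from past chats
    save_chat_history: bool = True   # retain transcripts across sessions


def apply_guardian_restrictions(settings: SupervisionSettings) -> SupervisionSettings:
    """Switch off the two retention features a guardian can disable."""
    settings.memory_enabled = False
    settings.save_chat_history = False
    return settings


# Usage: both retention features start on and end up off.
print(apply_guardian_restrictions(SupervisionSettings()))
```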

A distinguishing component of the new controls is a real-time alert system that notifies parents or guardians when the AI detects signs of acute emotional distress in their child’s interactions with ChatGPT. Indicators may include language or behavioral cues suggestive of depression, anxiety, suicidal ideation, or other forms of crisis. Upon detection, parents receive a notification prompting a timely response, which could involve intervention, counseling, or professional help. This feature does not function as universal surveillance; it focuses on moments where a real-world check-in could significantly affect the child’s well-being, a targeted approach that respects adolescent privacy while prioritizing safety in critical situations.
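
As a rough illustration of how such an alert might be gated, the following sketch uses a crude keyword list as a stand-in for OpenAI’s actual distress classifier, which the article does not describe; the function names, cue list, and notification path are all assumptions.

```python
# Illustrative only: a keyword heuristic standing in for a real distress model.
DISTRESS_CUES = ("want to hurt myself", "no reason to live", "can't go on")


def flag_acute_distress(message: str) -> bool:
    """Return True when a message matches the crude distress cue list."""
    text = message.lower()
    return any(cue in text for cue in DISTRESS_CUES)


def maybe_notify_guardian(message: str, guardian_contact: str) -> None:
    """Alert only on flagged messages, not on every chat (targeted check-in)."""
    if flag_acute_distress(message):
        print(f"ALERT to {guardian_contact}: possible acute distress detected.")


# Usage: this message matches a cue, so the guardian is notified.
maybe_notify_guardian("Lately I feel like I can't go on", "parent-01")
```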

To enhance protective measures further, conversations flagged as sensitive or involving mental health crises are automatically rerouted to specialized AI models with heightened safety protocols. Among these is the upcoming GPT-5 reasoning system, engineered with advanced ethical and empathic frameworks that prioritize supportive, non-triggering, and professionally informed responses. These specialized models can provide safer guidance and recommend appropriate resources, potentially directing users toward human help or emergency contacts when necessary. This innovation reflects OpenAI’s commitment to science-driven improvements and multidisciplinary collaboration, working closely with psychologists, adolescent health specialists, social workers, and human-computer interaction researchers to ensure that the AI’s safeguards reflect current developmental and mental health expertise.
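
The rerouting step can be sketched as a simple dispatch: flagged messages go to a safety-tuned model, everything else to the default. The model identifiers and the is_sensitive() helper below are invented for illustration; the article states only that sensitive chats move to a specialized reasoning model with heightened safety protocols.

```python
# Hypothetical router; model names and the classifier are stand-ins.
def is_sensitive(message: str) -> bool:
    """Placeholder for a real sensitive-topic classifier."""
    return "self-harm" in message.lower()


def route_model(message: str) -> str:
    """Dispatch flagged conversations to a safety-tuned reasoning model."""
    return "safety-reasoning-model" if is_sensitive(message) else "default-model"


# Usage: one flagged message, one ordinary one.
print(route_model("I have questions about self-harm"))  # safety-reasoning-model
print(route_model("Help me with algebra homework"))     # default-model
```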

The introduction of these parental controls will roll out in phases over a 120-day period, during which OpenAI will gather data and feedback to refine features continually. This iterative process underscores a commitment to adaptively enhance protections as real-world usage informs systemic needs and user experience. The rollout is supported by OpenAI’s Global Physician Network comprising hundreds of medical professionals contributing to the company’s capacity to respond appropriately to health-related questions and distress signals detected during chats.

While these controls address many immediate safety challenges, OpenAI acknowledges that they are but one component of a broader ecosystem of child protection in AI environments. Comprehensive safety requires ongoing collaboration between technology developers, healthcare providers, educators, parents, and regulators. Education about AI’s capabilities, risks, and ethical use should equip families and young users with digital literacy and critical thinking skills essential for responsible engagement. Building transparency into AI operations and decision-making processes supports accountability and user trust, operating alongside technological safeguards.

OpenAI’s parental controls for ChatGPT mark a milestone, embodying a shift from AI solely as a technological marvel to a socially responsible tool prioritizing user welfare. They signal a recognition that as AI systems embed deeply into personal and developmental spheres, design and governance must evolve to embrace their social impact fully. The initiative not only responds to tragic incidents linked to inadequate AI safety but also anticipates emerging challenges as AI’s complexity and ubiquity grow.

Balancing AI’s promise and protection: The path forward

Co-created partnerships between AI developers, clinicians, educators, and policymakers should form the backbone of a holistic framework that ensures AI supports mental health and wellbeing while avoiding harm to vulnerable populations, especially young users. As tools like ChatGPT enter educational settings, mental health care, and informal social interaction, there is an obligation to build them with safety, equity, and ethics in mind. Partnerships that draw on expertise in psychology, child and adolescent development, technology, education, and public policy help establish guidelines and best practices. That work includes determining how AI should identify signs of distress and cognitive immaturity, respond empathically, and, when necessary, escalate cases to human professionals. Partnerships must also consider the social contexts in which young people use AI, along with digital-equity realities, cultural sensitivities, and the needs of neurodiverse and disabled young people, so that the approach is inclusive and equitable and does not disadvantage any child.

Educating families and youth forms an equally crucial pillar of protection in the digital age. Digital literacy programs are increasingly necessary to teach children and teenagers critical skills not only for navigating AI but for discerning credible information, recognizing bias, maintaining privacy, and cultivating healthy relationships online. Parents require resources and understanding of how AI works, what risks it poses, and how to use parental controls effectively without fostering surveillance anxieties or undermining trust with their children. Schools and communities have a role in disseminating these lessons through curricula and workshops that combine technological proficiency with emotional and ethical awareness. Empowering young people with knowledge about AI’s capabilities and limitations enables them to engage with these tools responsibly, reducing reliance on AI for emotional validation alone and fostering balanced social development. This educational framework should extend to policymakers and regulators who often govern from a legal and macro perspective but benefit from appreciating AI’s on-the-ground social impacts.

While parental controls are an important step forward, experts continue to insist that supervision mechanisms alone cannot solve the challenges of AI integration in children’s lives. Holistic solutions must also include accountability, requiring companies to report usage data and instances of abuse to regulators and the public. Ethical design principles must be woven into the creation of AI systems, ensuring fairness, nondiscrimination, and safety before those systems are deployed. Human-centered support structures remain essential, with AI supporting and enhancing human social structures but never replacing them, particularly in mental health contexts. OpenAI’s collaboration with international mental health experts, youth development researchers, and ethicists illustrates the point, and the constant research and adjustment required to keep pace with AI developments.

The implementation of parental controls in ChatGPT marks an important crossroads in the introduction of AI technology within a framework of societal responsibility. It recognizes the tension between AI’s remarkable ability to deliver knowledge and companionship at a distance and the risks that accompany those benefits, risks that must be tracked and managed, particularly for children at the most impressionable, formative stage of life. The move also signals a maturation from earlier phases of AI development focused on capability toward a stage that embraces regulation, safety, and ethical stewardship.

The evolution of AI is fraught: the very technologies that can dramatically improve access to knowledge, information, and connectivity also pose new risks and potential harms that demand vigilance. ChatGPT’s parental controls illustrate this tension. They are intended as a real-world intervention to mitigate some of the platform’s risks at the level of the individual family. At the same time, they signal a longer-term cultural and technological transition in how societies will govern AI. Momentum around AI will only accelerate as it permeates homes and communities; what we learn in this nascent moment of deployment will inform how future innovations are developed to advance flourishing human communities rather than dependence and precariousness.

Through the establishment of safety frameworks and collaboration among stakeholder constituencies, OpenAI and its partners can ideally provide a foundation for others in the AI industry, and for policymakers, to build upon. These considerations reflect a grounded reality: AI cannot simply be piloted and rolled out. It requires continuous care and focus, an ethical duty to govern its impacts adaptively as tools grow more complex and their consequences more pronounced. As human-technology relationships enter a new phase, the introduction of parental controls reminds us that our understanding of social responsibility must evolve alongside the technology, creating a future where AI nourishes human potential with humanity and care, not one where a lack of awareness results in harm.