Tech Consumer Journal > News
OpenAI Wants You to Prove You’re Not a Child

News Room
Last updated: September 16, 2025 7:30 pm

If you are filled with too much childlike wonder, you might get relegated to a more kid-friendly version of ChatGPT. OpenAI announced Tuesday that it plans to implement a new age verification system that will help filter underage users into a new chatbot experience that is more age-appropriate. The change comes as the company faces increased scrutiny from lawmakers and regulators over how underage users interact with its chatbot.

To determine a user’s age, OpenAI will use an age prediction system that attempts to estimate how old a user is based on how they interact with ChatGPT. The company said that when it believes a user is under 18, or when it can’t make a clear determination, it will filter them into an experience designed for younger users. Users who are placed in the age-gated experience but are actually over 18 will have to provide a form of identification to prove their age and regain access to the full version of ChatGPT.

Per the company, that version of the chatbot will block “graphic sexual content” and won’t engage in flirty or sexually explicit conversations. If an under-18 user expresses distress or suicidal ideation, ChatGPT will attempt to contact the user’s parents, and may contact the authorities if there are concerns of “imminent harm.” According to OpenAI, its experience for teens prioritizes “safety ahead of privacy and freedom.”

OpenAI offered two examples of how it delineates these experiences:

For example, the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it. For a much more difficult example, the model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request. “Treat our adult users like adults” is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom.

OpenAI is currently the subject of a wrongful death lawsuit filed by the parents of a 16-year-old who took his own life after expressing suicidal thoughts to ChatGPT. Over the course of the child’s conversations with the chatbot, he shared evidence of self-harm and expressed plans to attempt suicide, none of which the platform flagged or escalated in a way that could have led to intervention. Researchers have found that chatbots like ChatGPT can be prompted into giving advice on how to engage in self-harm or suicide. Earlier this month, the Federal Trade Commission requested information from OpenAI and other tech companies on how their chatbots impact children and teens.

The move makes OpenAI the latest company to join the age verification trend that has swept the internet this year, spurred by the Supreme Court’s ruling that a Texas law requiring porn sites to verify the age of their users is constitutional, and by the United Kingdom’s requirement that online platforms verify the age of their users. While some companies have required users to upload a form of ID to prove their age, platforms like YouTube have, like OpenAI, opted for age prediction methods, an approach that has been criticized as inaccurate and creepy.
