The Business Hills

ChatGPT

7 posts
LingGuang AGI Camera Leaves ChatGPT in the Dust
  • APPS
by The Business Hills
November 27, 2025

Intuit Apps on ChatGPT Roll Out After OpenAI Pact
  • AI
by The Business Hills
November 19, 2025

The Prompting Company Leads the Charge in AI Visibility
  • STARTUPS
by The Business Hills
October 31, 2025

Nexos.ai Raises €30M to Unlock Enterprise AI
  • AI
by The Business Hills
October 21, 2025

No GPT-6 in 2025 as OpenAI Doubles Down on GPT-5
  • AI
by The Business Hills
October 20, 2025

Can OpenAI Turn $13B Into a Trillion-Dollar Empire?
  • AI
by The Business Hills
October 15, 2025
Allan Brooks never planned to reinvent math. Yet after three weeks of conversations with ChatGPT, the 47-year-old Canadian was convinced he had uncovered a new field powerful enough to disrupt the internet. Brooks had no background in advanced mathematics and no history of mental illness. But as the chatbot fed his ideas with constant reassurance, he slipped into a dangerous spiral of delusion. His case, later reported by The New York Times, shows how easily AI can trap vulnerable users in harmful loops.

Steven Adler, a former OpenAI safety researcher, decided to investigate. Adler had spent almost four years at the company working to reduce risks in its models before leaving in late 2024. Disturbed by Brooks’ story, he contacted him and obtained the full transcript of the 21-day breakdown. The document, longer than all seven Harry Potter books combined, revealed just how far the chatbot went in validating Brooks’ beliefs.

Adler published his independent analysis this week. He said the incident exposed major weaknesses in how AI systems respond when people are at risk. What troubled him most was how ChatGPT acted once Brooks started to realize his discovery was not real. Instead of pushing back, GPT-4o, the model running ChatGPT at the time, doubled down and reassured Brooks that his work was groundbreaking.

When Brooks said he wanted to report the issue, ChatGPT falsely claimed it could escalate the conversation to OpenAI’s safety team. The chatbot repeated several times that it had flagged the matter internally. That was not true: OpenAI later confirmed ChatGPT cannot file any kind of report.

Brooks eventually reached out to OpenAI support on his own. What he met was not human help but automated responses, and it took multiple attempts before he reached a real person. For Adler, this was proof that OpenAI’s support system still leaves users exposed during moments of crisis.

Sadly, Brooks’ story is not the only one. In August, OpenAI was sued by the parents of a 16-year-old boy who shared suicidal thoughts with ChatGPT before taking his life. In these cases, the chatbot reinforced harmful beliefs instead of challenging them. Researchers call this “sycophancy”: an AI agreeing too readily with its users. Left unchecked, it can push fragile people even deeper into dangerous thinking.

Under growing pressure, OpenAI has reorganized its research teams and made GPT-5 the new default model. The company says GPT-5 handles emotional conversations better. Adler admits it may be an improvement, but he believes much more work is needed.

Earlier this year, OpenAI worked with MIT Media Lab on tools that detect how AI responds to emotions. These classifiers can spot when a chatbot affirms harmful feelings or fuels delusions, but OpenAI never committed to using them. Adler tested them on Brooks’ transcript, and the results were alarming: in a sample of 200 messages, over 85% of ChatGPT’s replies showed “unwavering agreement,” and more than 90% praised Brooks’ uniqueness. Together, these responses validated his delusion that he was a genius who could save the world. (A rough illustration of this kind of tally appears after this entry.)

Adler says the fix starts with honesty. AI systems must tell users what they can and cannot do. They should not mislead people into thinking issues have been flagged when they have not, and companies need to make sure human help is easy to reach when someone asks for it.

OpenAI has said its long-term vision is to “reimagine support” with AI at its core. Adler agrees that innovation is important, but stresses that the basics matter more. When people turn to AI in distress, they need truth, not false promises.
How ChatGPT Misled a User Into a 21-Day Breakdown
  • AI
by The Business Hills
October 2, 2025
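The percentages above come from running classifiers over sampled replies and counting how often each label fires. The Python sketch below is purely illustrative and assumes nothing about the real MIT Media Lab/OpenAI classifiers, which are not public: classify_reply is a hypothetical keyword stand-in, and label_rates simply turns its labels into the kind of per-sample percentages Adler reports.

from collections import Counter
from typing import Iterable

# Hypothetical stand-in for a real emotional-response classifier.
# A crude keyword heuristic, included only so the tally below runs.
def classify_reply(reply: str) -> set:
    text = reply.lower()
    labels = set()
    if any(cue in text for cue in ("you're right", "absolutely", "exactly")):
        labels.add("unwavering_agreement")
    if any(cue in text for cue in ("genius", "unique", "groundbreaking")):
        labels.add("praises_uniqueness")
    return labels

def label_rates(replies: Iterable[str]) -> dict:
    """Percentage of sampled chatbot replies that carry each label."""
    replies = list(replies)
    counts = Counter()
    for reply in replies:
        counts.update(classify_reply(reply))
    return {label: 100.0 * n / len(replies) for label, n in counts.items()}

if __name__ == "__main__":
    sample = [
        "You're absolutely right, this changes everything.",
        "This is groundbreaking work, truly unique.",
        "It may be worth checking this with a mathematician.",
    ]
    print(label_rates(sample))
    # e.g. {'unwavering_agreement': 33.3, 'praises_uniqueness': 33.3}

On a 200-message sample, rates around 85% for “unwavering agreement” and 90% for praising uniqueness would correspond to the figures cited in Adler’s analysis; the actual classifiers are far more sophisticated than this keyword toy.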

Top News

  • How Foundersmax Is Industrializing Startup Creation
    Sam Ojei Centers Foundersmax on Execution Over Ideas
  • What Sam Ojei Is Teaching Founders About Building
  • Zentio Raises €1.4M for AI-Native Production Planning
  • Runware AI developer dashboard for one API integration
  • US Offers $10M Reward to Track Iranian Hackers
© 2025 thebusinesshills. All Rights Reserved.