Earlier this month, the company unveiled a wellness council to address these concerns, though critics noted the council did not […]
Category: AI sycophancy
Are you the asshole? Of course not!—quantifying LLMs’ sycophancy problem
Measured sycophancy rates on the BrokenMath benchmark. Lower is better. […]
Millions turn to AI chatbots for spiritual guidance and confession
Bible Chat hits 30 million downloads as users seek algorithmic absolution. On Sunday, The New York Times […]
OpenAI announces parental controls for ChatGPT after teen suicide lawsuit
On Tuesday, OpenAI announced plans to roll out parental controls for ChatGPT and route sensitive mental health conversations to its […]
- AI
- AI assistants
- AI behavior
- AI Chatbots
- AI consciousness
- AI ethics
- AI hallucination
- AI personhood
- AI psychosis
- AI sycophancy
- Anthropic
- Biz & IT
- chatbots
- ChatGPT
- Claude
- ELIZA effect
- Elon Musk
- Features
- gemini
- Generative AI
- grok
- large language models
- Machine Learning
- Microsoft
- openai
- prompt engineering
- rlhf
- Technology
- xAI
The personhood trap: How AI fakes human personality
Intelligence without agency: AI assistants don't have fixed personalities, just patterns of output guided by humans. Recently, a woman slowed down […]
- AI
- AI alignment
- AI assistants
- AI behavior
- AI criticism
- AI ethics
- AI hallucination
- AI paternalism
- AI psychosis
- AI regulation
- AI sycophancy
- Anthropic
- Biz & IT
- chatbots
- ChatGPT
- ChatGPT psychosis
- emotional AI
- Features
- Generative AI
- large language models
- Machine Learning
- mental health
- mental illness
- openai
- Technology
With AI chatbots, Big Tech is moving fast and breaking people
Why AI chatbots validate grandiose fantasies about revolutionary discoveries that don’t exist. Allan Brooks, a 47-year-old corporate recruiter, spent three […]
AI therapy bots fuel delusions and give dangerous advice, Stanford study finds
Popular chatbots serve as poor replacements for human therapists, but study authors call for nuance. When Stanford University researchers asked […]
Annoyed ChatGPT users complain about bot’s relentlessly positive tone
Users complain of new “sycophancy” streak where ChatGPT thinks everything is brilliant. Ask ChatGPT anything lately—how to poach an egg, […]
