Wednesday, October 8, 2025

ChatGPT Struggles With This Particular Question

Nearly two months after the release of GPT-5, which was billed as giving ChatGPT "PhD level" intelligence and bringing it closer to artificial general intelligence (AGI), the OpenAI chatbot still stumbles over basic questions.

On the ChatGPT subreddit, users noticed that asking whether there is an "NFL team whose name doesn't end with the letter 's'" sends the bot into a convoluted spiral, its vaunted "reasoning" abilities visibly faltering.

"Yes — there are two NFL teams whose names don't end with an 's,'" ChatGPT replies, before listing two teams that do: "Miami Dolphins? ❌ end with s. Green Bay Packers? ❌ ends with s." The listing is baffling, since both names plainly end with "s."

It then appears ready to resolve the question, only to falter again: "The only two teams that don't end with 's' are: Miami Dolphins ❌ no (wait, ends with s)," catching its own mistake mid-answer before pressing on: "Hold up, let's do this carefully. Actually, the correct answer is: Washington Commanders ❌ ends with s. Chicago Bears ❌ ends with s."

In the Reddit example, ChatGPT goes on for paragraphs without arriving at the truth — that there are no such teams. Like a student padding a word count, it adds irrelevant information, pretending deep thought with phrases like “Hold up, let’s do this carefully.”
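For the record, the question is trivial to settle programmatically: a few lines of Python over the 32 current team names confirm that none qualify.

```python
# The question ChatGPT fumbles is a one-liner to verify in code.
# The 32 current NFL team names (as of the 2025 season).
teams = [
    "Bills", "Dolphins", "Patriots", "Jets",
    "Ravens", "Bengals", "Browns", "Steelers",
    "Texans", "Colts", "Jaguars", "Titans",
    "Broncos", "Chiefs", "Raiders", "Chargers",
    "Cowboys", "Giants", "Eagles", "Commanders",
    "Bears", "Lions", "Packers", "Vikings",
    "Falcons", "Panthers", "Saints", "Buccaneers",
    "Cardinals", "Rams", "49ers", "Seahawks",
]

# Collect any team name that does NOT end with "s".
non_s = [t for t in teams if not t.endswith("s")]
print(non_s)  # prints [] : every team name ends in "s"
```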

Eventually, it promises “the correct answer (for real this time),” intending to list “two teams” that don’t end with “s,” but lists three that do instead.

Other users shared exchanges in which ChatGPT eventually lands on the correct answer, but only after a long, confusing ramble. Testing produced similarly odd results.

This isn't the first time the bot has tripped over a simple question or melted down spectacularly. Earlier this month, users found that asking about a mythical seahorse emoji sent ChatGPT into a logical tailspin: despite no such emoji officially existing, the bot insisted that it did, illustrating how far AI models will go to please their users.

Sycophancy isn't the only issue. GPT-5 routes basic prompts to a lightweight model and harder questions to a more capable one. The lightweight model may be fielding questions it shouldn't, rather than deferring to the advanced one, a dynamic that contributed to the disappointment and even anger that greeted GPT-5's launch. OpenAI's decision to cut off access to older models, later reversed, only added to the frustration.
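OpenAI has not published how GPT-5's router works, but the dispatch pattern described above can be sketched as a toy heuristic. Everything here, including the model names and the trigger phrases, is hypothetical and purely illustrative.

```python
# Toy sketch of a prompt router: cheap heuristics decide whether a
# prompt goes to a lightweight model or a heavier "reasoning" model.
# Purely illustrative; GPT-5's actual routing logic is not public.

def looks_hard(prompt: str) -> bool:
    """Crude difficulty heuristic: flag prompts involving counting,
    negation, or exhaustive checks over a set of items."""
    triggers = ("how many", "every", "none", "doesn't end", "prove", "list all")
    return any(t in prompt.lower() for t in triggers)

def route(prompt: str) -> str:
    # Hypothetical model names, for illustration only.
    return "reasoning-model" if looks_hard(prompt) else "lightweight-model"

print(route("What's the capital of France?"))
# prints lightweight-model
print(route("Is there an NFL team whose name doesn't end in s?"))
# prints reasoning-model
```

The failure mode the article describes corresponds to this classifier returning a false negative: a question that needs careful reasoning slips through to the lightweight path.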

Regardless, it's a weak excuse. If an AI needs its most powerful resources just to handle simple questions, it's hard to believe it's on a path to surpassing human intelligence.
