Users of ChatGPT up in arms over GPT-5 launch - OpenAI counters allegations that the new model's accuracy and capabilities are subpar
In a recent turn of events, OpenAI has unveiled its latest AI model, GPT-5, to much anticipation and controversy. While the model has topped several AI benchmarks, user feedback has been less than positive.
A quick glance at the ChatGPT subreddit reveals a sea of complaints about GPT-5's responses. Some users highlight the model's impressive response times and its ability to pass certain technical tests, but such praise is largely overshadowed by criticism. Although GPT-5 is billed as capable of smart, efficient reasoning, users claim it has lost some of its creative spark, frequently gives inadequate answers, sidesteps emotional and sensitive topics, and performs noticeably worse than its predecessors.
OpenAI has expanded options for GPT-5 and increased messaging limits for the more capable "Thinking" version. The company claims that the model listens to user intent and adjusts its responses accordingly. However, this has done little to quell the growing discontent among users.
In response to the backlash, OpenAI CEO Sam Altman admitted that GPT-5 had recently been behaving in a "dumb" way, an issue he claims is now fixed. Nevertheless, some users continue to share examples of GPT-5's errors that cast the model in a poor light, and a petition has been started demanding that OpenAI keep GPT-4 available to users.
Despite the negative feedback, OpenAI has managed to top the charts of several AI benchmarks. That success might be short-lived, however. The AI field is entering a maturation phase, with more focus on real-world applications, efficiency, and reliability rather than on ever-bigger, flashier models.
This shift is reflected in the growing skepticism about the pace and substance of AI progress. Experts, such as Gary Marcus, argue that the practical usefulness of improvements in models like GPT-5 is limited, and meaningful advances beyond a "virtual chat-buddy" remain elusive. They highlight that the rate of progress on benchmarks is slowing and models struggle to generalize beyond their training data, which is a fundamental bottleneck.
Industry leaders acknowledge that scaling may continue for several years, but eventually, fundamental limits may emerge. Gartner’s 2025 Hype Cycle places generative AI in the "Trough of Disillusionment," a phase characterized by public and industry skepticism following initial hype. This suggests a shift from excitement to more pragmatic, incremental improvements and addressing real-world deployment challenges like hallucinations, bias, and regulatory compliance.
Moreover, OpenAI could lose market share in the competitive AI thunderdome, with updates from heavy hitters such as Anthropic and Meta on the horizon. The GPT-5 launch has not kept public sentiment on OpenAI's side, and competitors could edge out the supposedly leading model.
In conclusion, while AI progress has not stopped, substantial gains from model scaling alone appear increasingly difficult to achieve. The field is shifting toward real-world applications, efficiency, and reliability, and it will be interesting to see how OpenAI and other AI companies navigate this new landscape.