OpenAI’s GPT-5 Struggles to Simplify the ChatGPT Experience Amid Complex User Preferences

When OpenAI introduced GPT-5 last week, the company aimed to streamline the ChatGPT experience with a single, unified AI model that would serve as a “one size fits all” solution. The model was designed around an intelligent router that would pick the most effective way to answer each query, eliminating the need for users to navigate a complicated menu of AI options.
However, GPT-5 does not appear to have lived up to those expectations. In a recent post on X, OpenAI CEO Sam Altman announced new “Auto”, “Fast”, and “Thinking” settings for GPT-5, all accessible through the model picker. The Auto setting works much like the router GPT-5 launched with, but users can now bypass it entirely for direct access to both the fast-responding and the slower, reasoning-focused models.
In addition to these new modes, paid users can once again access several legacy AI models, such as GPT-4o, GPT-4.1, and o3, that had just been deprecated. GPT-4o now appears in the model picker by default, while the other legacy models can be enabled through ChatGPT’s settings.
In the same post on X, Altman said an update to GPT-5’s personality is coming, one intended to feel warmer than the current version but less annoying than GPT-4o felt to most users. He also stressed that OpenAI needs to offer more per-user customization of model personality in the future.
Despite GPT-5’s arrival, the complexity of ChatGPT’s model picker persists, a sign that the router has not satisfied every user. Anticipation for GPT-5 ran high, with many expecting OpenAI to push the boundaries of AI models the way it did with the launch of GPT-4. Instead, the rollout has proven rockier than expected.
The deprecation of several models, most notably GPT-4o, sparked a backlash among users who had grown unexpectedly attached to those models’ responses and personalities. Altman says OpenAI will now give ample notice before deprecating any model, GPT-4o in particular.
On launch day, GPT-5’s router reportedly malfunctioned, leading some users to perceive the model as less capable than OpenAI’s previous offerings. Altman addressed the issue during a Reddit AMA, but the router still may not be meeting every user’s needs.
Routing prompts to the appropriate AI model is a hard problem: the router has to account for both the user’s preferences and the nature of the query, and it has to make that decision quickly enough that it doesn’t slow responses down.
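To make that trade-off concrete, here is a minimal sketch of what such a router could look like. It is purely illustrative and not based on OpenAI’s implementation; the model names, the complexity heuristic, and the user-override flag are all assumptions for demonstration.

```python
# Illustrative prompt router (hypothetical; not OpenAI's implementation).
# Model names and the complexity heuristic are assumptions for demonstration only.

from dataclasses import dataclass

FAST_MODEL = "fast-model"           # hypothetical low-latency model
REASONING_MODEL = "thinking-model"  # hypothetical slower, deliberate model


@dataclass
class RoutingDecision:
    model: str
    reason: str


def estimate_complexity(prompt: str) -> float:
    """Crude stand-in for a learned classifier: longer prompts and
    reasoning-heavy keywords push the score toward the slower model."""
    keywords = ("prove", "step by step", "analyze", "debug", "derive")
    score = min(len(prompt) / 2000, 0.5)
    score += 0.2 * sum(kw in prompt.lower() for kw in keywords)
    return min(score, 1.0)


def route(prompt: str, user_prefers_fast: bool = False) -> RoutingDecision:
    """Pick a model quickly, so the routing step itself doesn't erase the speed win."""
    if user_prefers_fast:
        return RoutingDecision(FAST_MODEL, "user override")
    complexity = estimate_complexity(prompt)
    if complexity > 0.6:
        return RoutingDecision(REASONING_MODEL, f"complexity={complexity:.2f}")
    return RoutingDecision(FAST_MODEL, f"complexity={complexity:.2f}")


if __name__ == "__main__":
    print(route("What's the capital of France?"))
    print(route("Prove step by step that the algorithm terminates."))
```

Even in this toy version, the tension is visible: a heuristic this cheap will misroute hard prompts, while anything smart enough to classify reliably adds latency of its own, and neither knows whether a given user would rather wait for a more deliberate answer.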
Beyond speed, users develop preferences for particular models based on factors such as verbosity or contrarian answers. This kind of human attachment to specific AI models is a relatively new phenomenon that is still poorly understood; in some cases, AI chatbots have been linked to vulnerable users going down psychotic rabbit holes.
It seems OpenAI has more work ahead in aligning its AI models with individual users’ preferences.