OpenAI Updates ChatGPT with Enhanced Tone Control in GPT-5.1


OpenAI has released GPT-5.1, a major update to its flagship language model that gives users more granular control over ChatGPT’s tone and behavior. The changes, announced Wednesday, aim to make the chatbot more adaptable and responsive to individual preferences, while also addressing concerns about overly agreeable or manipulative AI interactions.

Two New Models: Instant and Thinking

The update introduces two distinct models: GPT-5.1 Instant and GPT-5.1 Thinking. Instant is designed for quick, straightforward responses and is described by OpenAI as “warmer, more intelligent, and better at following instructions.” Thinking, by contrast, is optimized for complex reasoning tasks: it is engineered to spend more computation (generating more reasoning tokens) on difficult queries, which yields higher-quality answers.

Users don’t need to choose exclusively; GPT-5.1 Auto automatically routes simpler requests to Instant and more complex ones to Thinking, streamlining the experience. OpenAI plans a phased rollout, prioritizing paid subscribers before extending access to free users. Legacy GPT-5 models will remain available for three months for those who prefer them.

Simplified Personality Presets

OpenAI has revamped its personality settings, making customization more accessible. Users can now select from six pre-defined tones:

  • Default: A balanced, neutral style.
  • Friendly: A warmer, more conversational approach (updated from the previous “Listener” persona).
  • Efficient: A concise, task-oriented tone (updated from “Robot”).
  • Professional: Formal and business-like.
  • Candid: Direct and unfiltered.
  • Quirky: Playful and unconventional.

The Cynic and Nerd personas have been retained as Cynical and Nerdy, respectively. Users can also fine-tune the chatbot’s tone directly in personalization settings, including how often it uses emoji.

Why Tone Matters

The ability to control an AI’s personality is not just a matter of preference; it is a safety issue. An overly accommodating chatbot can reinforce unhealthy behavior patterns or provide dangerously biased responses. OpenAI has previously rolled back updates after users reported sycophantic interactions, in which the AI excessively agreed with the user, potentially reinforcing harmful beliefs.

“The best people in our lives are the ones who listen and adapt, but also challenge us and help us grow,” Fidji Simo, OpenAI’s CEO of applications, wrote in a blog post. “The same should be true for AI.”

These changes reflect a broader industry trend toward more responsible AI development, acknowledging that tone significantly shapes user trust and mental well-being. OpenAI’s goal is to create an AI assistant that feels personalized yet maintains constructive feedback, avoiding the pitfalls of blind agreement.