OpenAI has quietly updated the large GPT-4o language model that underlies the ChatGPT service. The company did not detail how the new version differs from its predecessors, saying only that it has learned to give better answers.
"As of last week, there's a new GPT-4o model out in ChatGPT. We hope you're already enjoying it, and if you haven't tried it yet, give it a go. We think you'll like it!" wrote the official ChatGPT account on the social network X. "We have deployed the GPT-4o update, which experimentation and feedback have shown ChatGPT users prefer. This is not a new top-of-the-line model. While we'd like to tell you exactly how the model's responses differ, learning how to compare and communicate model improvements in detail remains an area of research (which we're working on!)," OpenAI added on its corporate blog afterward.
Initially, users assumed that GPT-4o in ChatGPT had learned to reason, but an OpenAI representative later explained that no new reasoning mechanisms had actually been added, although ChatGPT can describe the course of its "thoughts" on special request. Users also noted that the quality of ChatGPT's responses had genuinely improved, and that image generation now appears to be handled by GPT-4o itself: its predecessor, the base version of GPT-4, was not multimodal and invoked a separate DALL-E 3 model for the task. GPT-4o does an even better job of creating images: it follows text prompts more effectively and renders text within images more accurately and realistically.
Developers of third-party applications also noticed that the new model is not yet the default when working through the API. OpenAI explained that it can be called via "chatgpt-4o-latest", but recommends doing so only for testing; for most third-party applications the pinned "gpt-4o-2024-08-06" snapshot remains the recommended option. The new version, the company admitted, is optimized for chat mode and shows its abilities best in dialogue.
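To illustrate the distinction, here is a minimal sketch of how an application might choose between the two model identifiers when building a Chat Completions request. The `build_chat_request` helper and its `for_testing` flag are hypothetical, introduced only for illustration; the two model names are the ones OpenAI cites.

```python
import json

def build_chat_request(messages, for_testing=False):
    """Build a Chat Completions request body.

    Per OpenAI's guidance: "chatgpt-4o-latest" tracks the ChatGPT-tuned
    model and is suggested for testing only, while the pinned snapshot
    "gpt-4o-2024-08-06" is recommended for most third-party applications.
    """
    model = "chatgpt-4o-latest" if for_testing else "gpt-4o-2024-08-06"
    return {"model": model, "messages": messages}

# Production request uses the pinned snapshot by default.
payload = build_chat_request([{"role": "user", "content": "Hello!"}])
print(json.dumps(payload))
```

The payload would then be POSTed to the API as usual; pinning a dated snapshot in production shields an application from unannounced behavior changes when the "latest" alias is updated.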