Apple will teach Siri to hold better dialogues with users

The Siri voice assistant was considered quite advanced when it first came to market, but against the backdrop of modern chatbots it has begun to lose ground to competitors in its ability to sustain coherent dialogues with the user. Apple is preparing to address this shortcoming, according to sources familiar with the company’s plans.

Image Source: Apple

The next generation of the Siri assistant, according to Bloomberg, will be built on more advanced large language models (LLMs), which will enable it to carry on a genuine dialogue. Siri will also be able to process more complex requests faster. This work is part of an effort to update Apple’s software in light of the rapid development of competing artificial intelligence systems. At the moment, even the latest Apple Intelligence suite lags behind its rivals in many respects, to say nothing of the Siri service introduced 13 years ago.

In Apple’s internal communications, the new generation of the voice assistant appears under the designation “LLM Siri”. Large language models will eventually become the foundation of Siri’s voice interface. An improved version is expected to be introduced in 2025 with the iOS 19 (Luck) and macOS 16 (Cheer) operating systems, and the new Siri will reach customer devices no earlier than spring 2026. Given this lead time of roughly a year and a half, the schedule may yet be revised.

The developers face the task of making the exchange between the user and Siri feel more like a conversation between two people, with OpenAI’s ChatGPT and Google’s Gemini serving as benchmarks. Apple’s new voice interface will also work better with third-party applications, and features such as writing and summarization will improve as well.

Some changes to Siri will already arrive within iOS 18: the assistant will learn to better understand context while the user is working on the smartphone. In iOS 18, Apple’s first-generation LLM is used to analyze a request and route it either directly to Siri or to third-party large language models or applications. By the time iOS 19 is released, Apple expects to have a new-generation proprietary LLM that handles user data and requests end to end, following the example of ChatGPT.

Next month, Apple will add ChatGPT to the Apple Intelligence ecosystem, partly compensating for the lengthy effort to create its own solution in this area. Later, users will be able to choose between chatbots from different developers, such as Google Gemini. Moving to end-to-end processing with in-house LLMs should eventually provide a higher degree of protection for user information, although Apple does not intend to abandon integration with third-party chatbots entirely. In preparation for these plans, the company is reorganizing its specialized divisions and continues to hire specialists in the relevant fields.
