OpenAI has introduced new features that simplify building applications on top of artificial intelligence. Developers can now use an online tool to create voice-based AI solutions with a single set of instructions, instead of chaining together several separate services.
OpenAI gets most of its revenue from businesses that use the company's neural networks to build their own AI applications. Expanding the tools for creating such products makes sense as competition intensifies, with companies like Google shipping algorithms in their products that can process multiple types of information, including text, images and video.
Creating a voice assistant traditionally requires developers to go through at least three stages: converting audio into text, processing the request and generating a text response, and converting that response back into audio. Alongside the new voice capabilities, OpenAI introduced a tool for fine-tuning large language models after their initial training. This approach should improve the quality of the responses that developer-built algorithms generate for both text and image queries. The fine-tuning phase can also incorporate feedback from human evaluators who rate how well the algorithm answers.
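The three stages above can be sketched as a simple chain of functions. This is a minimal illustration of the traditional pipeline, not OpenAI's actual API: the `transcribe`, `respond`, and `synthesize` functions here are hypothetical stubs standing in for real speech-to-text, language-model, and text-to-speech services.

```python
def transcribe(audio: bytes) -> str:
    """Stage 1: convert audio into text (stubbed for illustration)."""
    return audio.decode("utf-8")  # pretend the audio bytes are already text


def respond(prompt: str) -> str:
    """Stage 2: process the request and generate a text response (stubbed)."""
    return f"You asked: {prompt}"


def synthesize(text: str) -> bytes:
    """Stage 3: convert the text response back into audio (stubbed)."""
    return text.encode("utf-8")


def voice_assistant(audio_in: bytes) -> bytes:
    """Chain the three stages a voice assistant traditionally requires."""
    text = transcribe(audio_in)
    reply = respond(text)
    return synthesize(reply)


print(voice_assistant(b"what time is it"))
```

The point of the new tooling is precisely that developers no longer have to wire these stages together themselves.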
OpenAI believes that fine-tuning models on images will give developers greater scope to improve how well AI algorithms understand what an image shows. Applications built this way could serve, for example, as advanced visual search. In addition, OpenAI introduced a tool that lets smaller AI models learn from larger ones, as well as "Prompt Caching", which should significantly reduce development costs by reusing text fragments the algorithm has already processed. All of these features are already being tested with a limited number of OpenAI customers.
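The cost saving from caching comes from not paying to reprocess fragments the model has already seen, such as a long system prompt repeated on every request. Below is a minimal sketch of that idea under simplified assumptions; the `process` function and its cache are illustrative, not OpenAI's implementation, which applies caching to prompt prefixes on the server side.

```python
# Illustrative cache: identical text fragments are looked up instead of
# being reprocessed by the "expensive" model.
cache: dict[str, str] = {}
calls = 0  # counts how often the costly model is actually invoked


def process(fragment: str) -> str:
    """Pretend model call: expensive on a miss, free on a cache hit."""
    global calls
    if fragment in cache:
        return cache[fragment]  # reuse previously processed output
    calls += 1                  # only cache misses incur model cost
    result = fragment.upper()   # stand-in for real model output
    cache[fragment] = result
    return result


system_prompt = "You are a helpful assistant."
process(system_prompt)  # first request pays the full processing cost
process(system_prompt)  # repeated fragment is served from the cache
print(calls)
```

Here the shared fragment is billed once even though it appears in two requests, which is the effect that reduces costs for applications that resend the same instructions on every call.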