OpenAI has announced that ChatGPT, its generative AI chatbot, can now process a live video stream and talk to users about what it is “seeing” through a smartphone or computer camera, or about what is shown on the device’s screen. The new capability is part of Advanced Voice Mode.

Image source: OpenAI

The computer vision function lets ChatGPT “see” through the user’s smartphone camera or via screen sharing. Subscribers to the paid ChatGPT Plus, Team and Pro plans now have access to Advanced Voice Mode with video recognition. The company says ChatGPT Enterprise and Edu subscribers won’t get the feature until January, and that there is no timetable for its launch in the EU, Switzerland, Iceland, Norway and Liechtenstein.

In a recent demo on CBS’s 60 Minutes, OpenAI President Greg Brockman tested Advanced Voice Mode with vision alongside host Anderson Cooper, quizzing the chatbot on anatomy. When Cooper drew body parts on a whiteboard, ChatGPT “understood” what he was drawing. In the same mode, however, ChatGPT made a mistake on a geometry problem, illustrating its tendency to hallucinate.

OpenAI has delayed the launch of Advanced Voice Mode with vision several times since announcing it in May. In April, the company promised that the mode would reach users “within a few weeks,” but admitted months later that it would take longer than planned. And when Advanced Voice Mode launched for some users in September, it lacked the computer vision functionality.

Google and Meta are also working on similar capabilities for their chatbots. This week, Google made Project Astra, its real-time video analysis AI feature, available to a group of “trusted testers” on Android.
