GPT-4o (“o” for “omni”) represents a major step forward in creating more natural human-computer interactions. Unlike previous models, GPT-4o accepts any combination of text, audio, image, and video as input and generates any combination of text, audio, and image as output. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is comparable to human response time in conversation. It matches GPT-4 Turbo’s performance on English text and code, while significantly improving on text in non-English languages. It is also 2x faster and 50% cheaper in the API, with notable advancements in vision and audio understanding.
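To make the multimodal input concrete, here is a minimal sketch of calling GPT-4o through the openai Python SDK, mixing text and an image in a single request. The image URL is a placeholder, and the client setup is an assumption for illustration rather than part of this announcement.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One request mixing modalities: a text question plus an image to reason about.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this photo."},
                # Placeholder URL; any publicly reachable image works here.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```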
The Evolution from Previous Models
Before GPT-4o, interacting with ChatGPT in Voice Mode meant dealing with latencies averaging 2.8 seconds for GPT-3.5 and 5.4 seconds for GPT-4. Voice Mode was a pipeline of three separate models: one for transcribing audio to text, GPT-3.5 or GPT-4 for processing text input and generating text output, and a final model to convert text back to audio. This setup lost information at each hand-off: the central model couldn’t directly perceive tone, distinguish multiple speakers, or account for background noise, and it couldn’t generate nuanced outputs like laughter, singing, or emotional expression.
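For illustration, here is a rough sketch of what such a three-stage pipeline looks like when rebuilt from public API pieces (Whisper for transcription, a text-only chat model, and a text-to-speech model). The file names and model choices are illustrative assumptions, not the actual internals of the original Voice Mode.

```python
from openai import OpenAI

client = OpenAI()

# Stage 1: transcribe speech to plain text. Tone, speaker identity, and
# background sounds are discarded at this point.
with open("question.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# Stage 2: a text-only model reasons over the transcript alone.
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
reply = completion.choices[0].message.content

# Stage 3: synthesize the text reply back into audio. The voice cannot
# laugh, sing, or carry emotion the model never "heard".
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
with open("answer.mp3", "wb") as f:
    f.write(speech.content)
```

Each hand-off between these stages is exactly where the information loss described above occurs.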
A Unified, Multi-Modal Model
GPT-4o changes this by integrating text, vision, and audio into a single, end-to-end model. All inputs and outputs are processed by the same neural network, allowing for richer and more cohesive interactions. As our first model combining all these modalities, GPT-4o is just beginning to show its potential. We are still exploring the full range of its capabilities and understanding its limitations.
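By contrast, a unified model handles the round trip in one call. As a hedged sketch: in the public Chat Completions API, audio in and audio out is exposed under the separate model name gpt-4o-audio-preview, which is an assumption about the current API surface rather than something stated in this announcement.

```python
import base64
from openai import OpenAI

client = OpenAI()

with open("question.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode()

# A single end-to-end call: audio goes in, audio (plus a text transcript)
# comes out, with no intermediate transcription or synthesis step.
completion = client.chat.completions.create(
    model="gpt-4o-audio-preview",  # assumed audio-capable model name
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "input_audio",
                    "input_audio": {"data": audio_b64, "format": "wav"},
                }
            ],
        }
    ],
)

with open("answer.wav", "wb") as f:
    f.write(base64.b64decode(completion.choices[0].message.audio.data))
```

Because the same network sees the raw audio, cues like tone and background context are available to the model rather than being stripped out by a transcription stage.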
Experience the Future Today!
Explore GPT-4o and experience a more advanced and seamless interaction with AI. Dive into the future of multi-modal AI and see what this groundbreaking technology can do for you!