
OpenAI has added GPT-5.4 mini and GPT-5.4 nano to its AI lineup. The two smaller models are built for faster, high-volume workloads across coding, automation, and multimodal applications. Both bring many of GPT-5.4’s capabilities to more efficient models suited to real-world use cases where speed and cost matter most.
GPT-5.4 mini offers clear improvements over GPT-5 mini across coding, reasoning, multimodal understanding, and tool use, while running more than twice as fast. It also comes close to GPT-5.4 in several benchmarks, including SWE-Bench Pro, where it reaches 54.4 percent compared to 57.7 percent for the larger model, and OSWorld-Verified, where it scores 72.1 percent.
GPT-5.4 nano sits at the other end of the scale as the smallest and cheapest option. It’s built for tasks where speed and cost are the most critical, including classification, data extraction, ranking, and lightweight coding. Although smaller, it further improves on GPT-5 nano and delivers solid performance across a range of workloads.
The two new models fit into environments where responsiveness is important. Coding assistants, for example, benefit from faster iteration cycles, while subagents handling background tasks can complete work quickly without slowing down the system. Applications that rely on screenshots or image input also benefit from the faster processing, particularly when real-time interaction is needed.
OpenAI benchmarks show that GPT-5.4 mini offers a strong balance between performance and latency. It consistently outperforms GPT-5 mini at similar speeds and approaches GPT-5.4-level results in several coding and reasoning tests. In Terminal-Bench 2.0, it reaches 60.0 percent compared to 38.2 percent for GPT-5 mini, while Toolathlon results show 42.9 percent versus 26.9 percent.
The models also support workflows that combine different model sizes. Larger systems can handle planning and coordination, while smaller models such as GPT-5.4 mini process narrower tasks in parallel, including codebase searches or document analysis, allowing developers to scale performance without relying on a single model for every task.
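This fan-out pattern can be sketched as follows. The `run_subtask` function below is a hypothetical stand-in for a call to a small model such as GPT-5.4 mini; in a real system it would make an API request, while a larger model would produce the task list.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subtask(task: str) -> str:
    # Hypothetical stub: a real implementation would send `task`
    # to a fast, cheap model (e.g. GPT-5.4 mini) via the API.
    return f"result for {task}"

def fan_out(tasks: list[str]) -> list[str]:
    # Tasks produced by a planning model are processed in parallel
    # by smaller models, so no single model handles everything.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(run_subtask, tasks))

results = fan_out(["search codebase", "summarize design doc"])
```

Because the subtasks are independent, latency is bounded by the slowest single call rather than the sum of all calls.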
GPT-5.4 mini also shows gains in multimodal performance. It can interpret dense user interface screenshots and complete computer-based tasks quickly. On OSWorld-Verified, it performs close to GPT-5.4 and clearly ahead of GPT-5 mini, which reaches 42.0 percent.
GPT-5.4 mini and nano availability
GPT-5.4 mini is available across the API, Codex, and ChatGPT, with support for text and image inputs, tool use, and a 400k context window. Pricing is set at $0.75 per 1M input tokens and $4.50 per 1M output tokens. GPT-5.4 nano is available through the API at $0.20 per 1M input tokens and $1.25 per 1M output tokens.
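A quick way to compare the two price points is to compute per-request cost from the published per-1M-token rates; the model names and prices below come from the figures above, and the token counts are illustrative:

```python
# Per-token prices derived from the published per-1M-token rates.
PRICES = {
    "gpt-5.4-mini": {"input": 0.75 / 1_000_000, "output": 4.50 / 1_000_000},
    "gpt-5.4-nano": {"input": 0.20 / 1_000_000, "output": 1.25 / 1_000_000},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request for the given model."""
    p = PRICES[model]
    return input_tokens * p["input"] + output_tokens * p["output"]

# e.g. a request with 10k input tokens and 1k output tokens:
print(request_cost("gpt-5.4-mini", 10_000, 1_000))  # 0.012
print(request_cost("gpt-5.4-nano", 10_000, 1_000))  # 0.00325
```

At these rates, the same request costs roughly 3.7x more on mini than on nano, which is the trade-off the two tiers are built around.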
You can find out more about the two new models and how they perform in OpenAI’s announcement.
What do you think about GPT-5.4 mini and nano? Let us know in the comments.
