OpenAI taps Cerebras to speed up ChatGPT’s replies

If you’ve ever felt frustrated waiting for OpenAI tools like ChatGPT to respond to a query, there’s some good news on the horizon. The AI firm is adding Cerebras systems to its compute stack to speed up how quickly replies are generated, especially during long or complex outputs.

The integration focuses on reducing inference delays, which affects how quickly users see results when generating text, code, images, or running AI agents.

Cerebras builds AI hardware designed around a single, extremely large chip that combines compute, memory, and bandwidth in one place. That design avoids the handoffs that typically slow down inference on conventional hardware, particularly when models produce long responses rather than short replies.

The goal of bringing Cerebras into the mix is simple: faster responses change how people use AI. Every request follows a loop: a prompt goes in, the model processes it, and an output comes back. When that loop tightens, interactions feel more natural and less interrupted.
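
That loop can be put in numbers. Two metrics commonly used for interactive AI are time-to-first-token (how long before anything appears on screen) and sustained tokens per second afterward. Here is a minimal illustrative sketch that measures both; it simulates a streaming model with made-up delays rather than calling any real API, so the function names and timings are hypothetical:

```python
import time

def fake_model_stream(tokens, first_token_delay, per_token_delay):
    """Simulate a streaming model: a pause before the first token,
    then a steady pause between each subsequent token.
    (Illustrative stand-in; not a real inference backend.)"""
    time.sleep(first_token_delay)
    for i, tok in enumerate(tokens):
        if i > 0:
            time.sleep(per_token_delay)
        yield tok

def measure_latency(stream):
    """Consume a token stream and return
    (time_to_first_token_seconds, tokens_per_second)."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in stream:
        if ttft is None:
            ttft = time.perf_counter() - start
        count += 1
    total = time.perf_counter() - start
    return ttft, count / total

ttft, tps = measure_latency(fake_model_stream(["word"] * 20, 0.05, 0.01))
print(f"time to first token: {ttft:.3f}s, throughput: {tps:.0f} tokens/s")
```

Shrinking either number tightens the loop: a lower time-to-first-token makes the model feel responsive, and a higher token rate matters most on the long outputs the article says Cerebras hardware targets.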

Low latency doesn’t just make things feel nicer; it changes behavior. When AI responds in real time, users tend to push it harder: they ask follow-up questions, generate longer code blocks, and rely on it for tasks that feel closer to collaboration than lookup. That makes response speed less of a technical detail and more of a product feature.

Adding Cerebras to OpenAI

The Cerebras systems will be integrated into OpenAI’s inference stack in phases. The plan is to expand across different workloads rather than shifting everything at once. This will allow faster systems to be matched to tasks where response times matter the most. Not all inference will be treated the same.

“OpenAI’s compute strategy is to build a resilient portfolio that matches the right systems to the right workloads. Cerebras adds a dedicated low-latency inference solution to our platform. That means faster responses, more natural interactions, and a stronger foundation to scale real-time AI to many more people,” explained Sachin Katti, a senior OpenAI executive who oversees networking and systems.

Andrew Feldman, co-founder and CEO of Cerebras, added, “We are delighted to partner with OpenAI, bringing the world’s leading AI models to the world’s fastest AI processor. Just as broadband transformed the internet, real-time inference will transform AI, enabling entirely new ways to build and interact with AI models.”

That comparison hints at what’s really being targeted. Faster inference doesn’t just shave seconds off answers. It supports new patterns, such as AI agents that act continuously, coding tools that feel responsive instead of laggy, and creative tools where waiting breaks concentration.

The rollout will happen in multiple tranches, with capacity coming online gradually through 2028. That long horizon suggests this isn’t a short-term performance tweak, but part of a larger shift in how OpenAI sees future AI infrastructure being assembled and scaled.

Rather than betting on a single type of hardware, OpenAI is leaning into a mixed approach. Some systems are better for training, others for massive parallel workloads, and some for fast, interactive inference. Cerebras fits squarely into that last category.

For users, the change is unlikely to come with a big announcement banner. It will show up as fewer pauses, smoother back-and-forth, and models that feel more willing to think out loud without stalling mid-sentence.

What do you think about faster, more real-time AI responses becoming a priority? Let us know in the comments.