
Still, 1,000 tokens per second is modest by Cerebras standards. The company has measured 2,100 tokens per second on Llama 3.1 70B and reported 3,000 tokens per second on OpenAI's own open-weight gpt-oss-120B model, suggesting that Codex-Spark's comparatively lower speed reflects the overhead of a larger or more complex model.
AI coding agents have had a breakout year, with tools like OpenAI's Codex and Anthropic's Claude Code reaching a new level of usefulness for rapidly building prototypes, interfaces, and boilerplate code. OpenAI, Google, and Anthropic have all been racing to ship more capable coding agents, and latency has become what separates the winners: a model that codes faster lets a developer iterate faster.
Facing fierce competition from Anthropic, OpenAI has been iterating on its Codex line at a rapid clip, releasing GPT-5.2 in December after CEO Sam Altman issued an internal "code red" memo about competitive pressure from Google, then shipping GPT-5.3-Codex just days ago.
Diversifying away from Nvidia
Spark's deeper hardware story may be more consequential than its benchmark scores. The model runs on Cerebras' Wafer Scale Engine 3, a chip the size of a dinner plate that Cerebras has built its business around since at least 2022. OpenAI and Cerebras announced their partnership in January, and Codex-Spark is the first product to come out of it.
OpenAI has spent the past year systematically reducing its dependence on Nvidia. The company signed a massive multi-year deal with AMD in October 2025, struck a $38 billion cloud computing agreement with Amazon in November, and has been designing its own custom AI chip for eventual fabrication by TSMC.
Meanwhile, a planned $100 billion infrastructure deal with Nvidia has fizzled so far, though Nvidia has since committed to a $20 billion investment. Reuters reported that OpenAI grew unhappy with the speed of some Nvidia chips for inference tasks, which is exactly the kind of workload OpenAI designed Codex-Spark for.
Whichever chip is under the hood, speed matters, though it may come at the cost of accuracy. For developers who spend their days inside a code editor waiting for AI suggestions, 1,000 tokens per second may feel less like carefully guiding a jigsaw and more like running a rip saw. Just watch what you're cutting.