# OpenAI Releases Lightweight Version of “AI Programmer” Codex
OpenAI has released a lightweight version of its programming tool, Codex.
GPT-5.3-Codex-Spark is now in research preview.
> You can just build things—faster. pic.twitter.com/85LzDOgcQj
> — OpenAI (@OpenAI), February 12, 2026
GPT-5.3-Codex-Spark is positioned as a scaled-down version of the GPT-5.3 Codex model introduced in February and is built for faster inference. To achieve this, the startup turned to a specialized chip from its hardware partner Cerebras.
In January, the company announced the collaboration.
“Integrating Cerebras into our computing solutions aims to make AI response times significantly faster,” OpenAI stated at the time.
The company calls Spark a “milestone” in the partnership.
The Cerebras processor is intended for fast, real-time operation. It is based on the Wafer Scale Engine 3, the third generation of Cerebras’ wafer-scale mega-chips, equipped with four trillion transistors.
OpenAI describes the new tool as a “daily productivity driver,” helping with rapid prototyping. The original version of GPT-5.3 Codex is meant for longer, more labor-intensive tasks.
In an official statement, OpenAI emphasized that Spark is designed for minimal latency in Codex.
“Codex-Spark is the first step toward a Codex that operates in two complementary modes: real-time collaboration for quick iterations, and long-running tasks that require deep reasoning,” the company noted.
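For developers, the distinction between the two modes effectively comes down to which model a request is routed to. The minimal sketch below shows one way that choice could look through the OpenAI Python SDK; the model identifiers (`gpt-5.3-codex-spark`, `gpt-5.3-codex`) and the routing rule are assumptions for illustration, not confirmed details from OpenAI.

```python
# Hypothetical sketch only: the model identifiers and the routing rule are
# assumptions for illustration, not documented OpenAI behavior.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FAST_MODEL = "gpt-5.3-codex-spark"  # assumed ID for the low-latency Spark model
DEEP_MODEL = "gpt-5.3-codex"        # assumed ID for the full GPT-5.3 Codex model

def ask_codex(prompt: str, quick: bool = True) -> str:
    """Send a coding prompt, picking the model by how heavy the task is."""
    response = client.responses.create(
        model=FAST_MODEL if quick else DEEP_MODEL,
        input=prompt,
    )
    return response.output_text

# Quick iteration: a small, latency-sensitive edit goes to Spark.
print(ask_codex("Rename the variable `cnt` to `request_count` in this function."))

# Long-running task: a deeper refactor goes to the full model.
print(ask_codex("Refactor this module to remove its global state.", quick=False))
```

In such a setup, Spark would handle the interactive loop where latency matters most, while the original model would be reserved for tasks where depth matters more than speed.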
Earlier in February 2026, OpenAI released a standalone application for its coding assistant, Codex.