TLDRs
- OpenAI is partnering with Broadcom to build its first AI chip, set to launch in 2026 for internal use.
- Broadcom’s AI revenue is booming, with analysts projecting as much as $90B by 2027, fueled by custom chip demand.
- The chip, fabricated using TSMC’s 3nm process, will reduce OpenAI’s reliance on Nvidia and AMD.
- OpenAI’s move aligns with broader industry trends as AI leaders push toward hardware-software vertical integration.
OpenAI is preparing to enter the semiconductor space with its first custom-designed artificial intelligence chip, developed in partnership with Broadcom.
The project is expected to be completed by 2026, positioning the San Francisco-based company alongside tech giants like Google, Amazon, and Meta, which have also invested in proprietary chipmaking to support their AI ambitions.
The custom chip, according to sources familiar with the matter, will be used internally for OpenAI’s vast computational needs rather than offered to external customers. The move underscores the scale of resources required to power large-scale AI systems like ChatGPT, which depend on thousands of high-performance processors to train and run.
Broadcom becomes strategic AI chip partner
Broadcom, a global leader in semiconductor design, has emerged as a key beneficiary of the AI hardware boom. Its AI revenue surged by 63% year-over-year in the third quarter of 2025, driven in part by custom chip contracts with companies such as OpenAI.
🇺🇸 OPENAI TO MASS PRODUCE CUSTOM AI CHIPS WITH BROADCOM
OpenAI is set to begin mass production of its own artificial intelligence chips next year, co-designed with U.S. semiconductor giant Broadcom.
The deal, valued at $10 billion in orders, positions OpenAI alongside Google,…
— Mario Nawfal (@MarioNawfal) September 5, 2025
The firm recently announced over $10 billion in new AI infrastructure orders, with CEO Hock Tan forecasting revenue growth to “improve significantly” in fiscal 2026. Industry analysts expect Broadcom’s AI-related revenues to reach between $60 billion and $90 billion by 2027 as demand for specialized chips accelerates.
By working with Broadcom, OpenAI gains access to advanced chip design capabilities without building its own costly fabrication facilities, following a strategy many AI companies now favor.
OpenAI diversifies beyond Nvidia and AMD
OpenAI currently relies heavily on Nvidia’s GPUs and AMD’s accelerators to power its AI operations. However, surging demand across the industry has left those processors scarce and expensive. By designing custom chips with Broadcom and fabricating them with Taiwan Semiconductor Manufacturing Company (TSMC), OpenAI aims to cut costs and reduce dependence on external suppliers.
The new chip is expected to leverage TSMC’s cutting-edge 3nm process, enhancing performance and energy efficiency. This vertical integration strategy aligns OpenAI with other tech leaders that design their own silicon, such as Apple with its M-series chips and Google with its Tensor Processing Units (TPUs).
A new era of AI hardware competition
The race toward in-house chip design highlights a broader industry trend: AI companies are increasingly focused on controlling both software and hardware to optimize performance and manage costs. Google, for instance, has already used machine learning techniques to speed up TPU design, completing in under six hours layout work that traditionally takes human engineers months.
OpenAI’s chip ambitions arrive as the company scales global infrastructure investments. In parallel with its Broadcom partnership, OpenAI is planning a massive 1-gigawatt data center in India under its Stargate initiative, signaling its intent to build the backbone of next-generation AI infrastructure.
If successful, the Broadcom partnership will not only secure OpenAI’s hardware future but also intensify competition in the semiconductor sector. By moving beyond reliance on Nvidia, OpenAI joins a wave of tech companies reshaping the AI chip ecosystem for years to come.