Chipstrat

When Broadcom’s Customers Start Buying Chip Startups

Will these acquisitions disintermediate Broadcom?

Austin Lyons
Oct 29, 2025

While we’ve recently discussed Marvell XPUs, Broadcom is the undisputed XPU darling of the semiconductor industry.

After all, remember when Hock Tan dropped this bombshell last December in the Q4 24 earnings call?

Hock Tan: As you know, we currently have 3 hyperscale customers, who have developed their own multi-generational AI XPU road map to be deployed at varying rates over the next 3 years.

In 2027, we believe each of them plans to deploy 1 million XPU clusters across a single fabric.

We expect this to represent an AI revenue serviceable addressable market, or SAM, for XPUs and network in the range of $60 billion to $90 billion in fiscal 2027 alone.

Wall Street loved it.

And just a quarter later, on the Q1 25 earnings call in March:

Hock Tan: Beyond these 3 customers, we had also mentioned previously that we are deeply engaged with 2 other hyperscalers in enabling them to create their own customized AI accelerator. We are on track to tape out their XPUs this year.

In the process of working with the hyperscalers, it has become very clear that while they are excellent in the software, Broadcom is the best in hardware. Working together is what optimizes their large language models.

It is, therefore, no surprise to us.

I love Hock’s confidence. He continued,

Hock Tan: Since our last earnings call, 2 additional hyperscalers have selected Broadcom to develop custom accelerators to train their next-generation frontier models. So even as we have 3 hyperscale customers, we are shipping XPUs in volume today, there are now 4 more who are deeply engaged with us to create their own accelerators. And to be clear, of course, these 4 are not included in our estimated SAM of $60 billion to $90 billion in 2027.

Who are these customers? Rumor has it they are Google, Meta, and ByteDance.


And back in September, there were rumors that OpenAI would become an official customer. From CNBC:

While Broadcom doesn’t name its large web-scale customers, analysts have said dating back to last year that its first three clients were Google, Meta, and TikTok parent ByteDance.

“During the call, the company surprised us by noting that it had secured a $10B order from a fourth XPU customer (we believe this is OpenAI), adding significant upside to the company’s three current XPU customers (Google, Meta, and ByteDance),” analysts at Cantor wrote in a note late Thursday. “Shipments are expected to commence in 2026.”

Recently, it was confirmed very publicly that OpenAI is a customer:

And not just an XPU customer, but an end-to-end systems customer:

Sam Altman: We’ve been working together for about the last 18 months designing a new custom chip. More recently, we’ve also started working on a whole custom system. These things have gotten so complex, you need the whole thing…

Andrew Mayne: So this is going to entail both compute and chip design and scaling out?

Sam Altman: This is a full system. So we worked, we closely collaborated for a while on designing a chip that is specific for our workloads. When it became clear to us just how much capacity, inference capacity, the world was going to need, we began to think about, could we do a chip that was meant just for that kind of a very specific workload?

Broadcom is the best partner in the world for that, obviously. And then to our great surprise, this was not the way we started. But as we realized that we were going to really need the whole system together to support this, as this got more and more complex, it turns out Broadcom is also incredible at helping design systems. So we are working together on that entire package, and this will help us even further increase the amount of capacity we can offer for our services.

By the way, notice how Sam says they are talking about designing AI systems tailored to specific workloads as we’ve been discussing here and here?

Of course OpenAI, which started as a non-profit AI research lab, leans heavily on Broadcom to help OpenAI optimize for these workloads.

Charlie Kawwas, Broadcom: So for us, it’s been absolutely exciting and refreshing because the beauty of the work we do together is that we focus on a certain workload. We started first looking at the IP and AI accelerator, which is what we call the XPU. And then we realized very quickly that we now can actually go to the workload all the way down to the transistor. And as Greg was just explaining, how we can both work together to go customize that platform for your workload, resulting in the best platform in the world. Then we realized, as Sam was saying earlier on, it’s not just that XPU or accelerator. Actually, it’s the networking that needs to go to scale it up, scale it out, and scale it across.

This is clearly a great XPU systems win for Broadcom.

Now interestingly, sell-side analyst consensus is that OpenAI is not Broadcom’s fourth customer, but rather the fifth.

The fourth is still unknown. Google, Meta, ByteDance, ?, OpenAI.

Nice. It seems Broadcom’s XPU business is firing on all cylinders.

But lately we’re seeing reports of Broadcom’s customers buying their own AI ASIC startups. One month ago, Reuters reported that Meta plans to buy chip startup Rivos to boost its semiconductor efforts. And just a few days ago, The Information reported that AI chip startup SambaNova is exploring a sale after stalled fundraising. Rumor has it the buyer is OCI (Oracle Cloud Infrastructure).


At a minimum, OCI is kicking the tires on SambaNova chips, per this Senior Principal Software Engineer - AI GPU Innovation job listing:

Kick the tires on SambaNova and Groq

Regardless of who buys SambaNova, the big question we’re building toward is:

Will the acquisition of AI ASIC startups by AI labs/hyperscalers ultimately disintermediate XPU makers like Broadcom and Marvell?

I mean, it sure looks like these customers are bringing talent in-house so they can just do it themselves, right? Maybe OpenAI wouldn’t need to lean so heavily on Broadcom?

First, let’s quickly remember how the XPU design process works to illustrate who does what (Broadcom vs customer). I’ll draw some pictures to help.

Then we’ll explore the disintermediation question.

XPU Design Process

Here’s a quick refresher on the chip design process; we’ll break down which parts the hyperscaler leads, which are collaborative, and which Broadcom owns outright.

At a very high, simplistic level, think of this as the XPU design process:

Very simplistic, and of course it’s not linear but has feedback loops, iterations, etc.

The front end defines what the chip should do. It turns a written system spec into a working circuit described in software.

The back end takes that circuit description and turns it into real silicon.

The details of each front-end step:

Requirements: The team defines what the chip must achieve: throughput, latency, precision formats (FP4, FP8, etc.), HBM capacity and bandwidth, power, die size, and so on.

Architecture: Engineers translate those goals into a block-level blueprint: how many compute cores, how much on-chip memory, how wide the interconnects are, and how the XPUs will link together in a rack.

Logic design: The blueprint becomes actual hardware code written in a language like Verilog or VHDL. This code describes how every block behaves, allowing designers to run simulations and check that the logic functions correctly.

Circuit design: The verified logic is turned into gate-level circuitry. Engineers integrate key IP blocks like SRAMs and PLLs, plus analog IP such as SerDes and HBM interfaces.
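
To make the logic-design step a bit more concrete, here’s a toy Verilog sketch (my own illustration, not from any actual XPU design): a multiply-accumulate unit, the kind of block an accelerator’s compute core instantiates many thousands of times.

```verilog
// Toy illustration of the "logic design" step: a multiply-accumulate
// (MAC) unit, a basic building block of an AI accelerator's compute
// core. Real XPU RTL is vastly larger, but it is written in exactly
// this kind of hardware description language.
module mac8 (
    input  wire               clk,
    input  wire               rst,
    input  wire signed [7:0]  a,    // e.g. an INT8 activation
    input  wire signed [7:0]  b,    // e.g. an INT8 weight
    output reg  signed [31:0] acc   // running sum of products
);
    always @(posedge clk) begin
        if (rst)
            acc <= 32'sd0;          // clear the accumulator
        else
            acc <= acc + a * b;     // multiply-accumulate each cycle
    end
endmodule
```

Designers simulate RTL like this against test vectors to confirm the logic behaves as intended, long before anything touches physical design.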

And on the back end:

Physical design: The circuit is laid out on silicon. Teams floorplan where each block sits, route the connections between them, and close timing, power, and thermal goals. The result is a detailed geometric layout ready for manufacturing.

Verification: The layout undergoes exhaustive checks to ensure it meets all design rules and preserves the original design intent. This step is critical: you don’t want to “respin” the silicon. Once test silicon comes back, engineers validate that the real chip works across voltage, temperature, and workload conditions.

Fabrication: The verified layout is sent to the foundry for production. Hundreds of die per wafer!

Packaging: Finished dies are assembled with HBM stacks and I/O chiplets using advanced packaging such as CoWoS, tested for performance, and binned for datacenter deployment.

Responsibilities

So where does Broadcom fit in? And how will that change with these acquisitions? And this is just the XPU… what about the end-to-end system? IP, attach silicon, etc?
