When investors begin to doubt themselves, a certain silence descends on the market. You can almost feel it in the hesitant trading volume, the cautious language of analyst notes, and the way financial media starts hedging every headline with a “but.” AI stocks have gone quiet. And this is probably not the moment to leave.
The AI trade that defined 2024 and most of 2025 was fairly simple: Nvidia provided the chips, and everyone else rushed to buy them. Hyperscalers unveiled enticing capital expenditure plans, data centers multiplied, and early investors made real money. But the raw hardware gold rush is winding down. It is being replaced by something more intricate and, if you know where to look, possibly more profitable.
| Category | Details |
|---|---|
| Companies Covered | Broadcom (AVGO), Nvidia (NVDA), Marvell Technology (MRVL) |
| Stock Exchanges | NASDAQ |
| Broadcom Market Cap | ~$1.5 Trillion |
| Nvidia Market Cap | ~$4.3 Trillion |
| Broadcom Q1 FY2026 Revenue Target | $19.1 Billion (28% YoY growth) |
| Broadcom AI Backlog | $73 Billion |
| Nvidia Forward P/E | ~20.2x |
| Marvell FY2026 Revenue | $8.2 Billion (42% YoY growth) |
| Marvell Data Center Revenue Share | 74% of total sales |
| Key Themes | Custom Silicon, AI Infrastructure, Inference Chips, Connectivity |
| Reference Links | Broadcom Investor Relations / Nvidia Investor Relations |
In serious discussions about the next leg of this rally, Broadcom's name keeps coming up. Rather than selling general-purpose silicon off a shelf, the company has spent years quietly building the infrastructure to create custom AI accelerators: chips designed specifically for customers like Alphabet and Meta. That distinction may not sound important, but it is crucial.
Custom chips bring tighter customer relationships, stickier revenue, and margins that generic hardware companies struggle to match. With its AI-related backlog now at $73 billion, Broadcom recently guided to $19.1 billion in revenue for the first quarter of fiscal 2026, a 28% increase year over year. That is not conjecture. That is contracted business stretching out almost two years.
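A quick back-of-envelope check on the growth math: the article's $19.1 billion target and 28% year-over-year growth together imply a prior-year quarter of roughly $14.9 billion. The implied figure below is my own derivation from those two stated numbers, not a company disclosure.

```python
# Sanity-check the cited growth figures (article numbers; the implied
# prior-year revenue is a back-of-envelope derivation, not guidance).
q1_fy26_target_b = 19.1   # Broadcom Q1 FY2026 revenue target, in $B
yoy_growth = 0.28         # stated 28% year-over-year growth

# Implied Q1 FY2025 revenue = target / (1 + growth rate)
implied_prior_b = q1_fy26_target_b / (1 + yoy_growth)
print(f"Implied Q1 FY2025 revenue: ${implied_prior_b:.1f}B")  # ≈ $14.9B
```

The same arithmetic applied to Marvell's $8.2 billion fiscal 2026 revenue at 42% growth would imply a prior-year base of roughly $5.8 billion.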
The disparity between that backlog figure and the uncertainty that still permeates so many aspects of the tech industry is difficult to ignore. There’s a feeling that Broadcom has subtly emerged as one of the most structurally significant AI firms without ever receiving the same breathless attention that Nvidia frequently receives. For the type of high-density AI clusters that hyperscalers are currently constructing, its Tomahawk 6 networking switches, operating at 102 terabits, are being booked at record levels. Broadcom’s orbit appears to be growing, as evidenced by the recent announcement of a significant partnership with Anthropic.
Nvidia, of course, is not going anywhere. Whether any competitor can truly challenge its dominance in AI computation within a timeframe that matters to today's investors remains an open question. The valuation picture, though, has changed. At roughly 20 times forward earnings, Nvidia trades at a multiple that would have seemed unattainably cheap two years ago. The company keeps introducing new chip architectures, Blackwell being the most notable recent example, that make upgrading from older hardware genuinely beneficial rather than a grudging necessity.
It still cannot meet demand for its GPUs. For a company already perceived as holding a monopoly on a product the world seems obsessed with, that is a remarkable position.
Investors would be wise to consider what Nvidia's predicament reveals about the larger AI trade. Chip availability is no longer the bottleneck; everything around the chips is. Cooling infrastructure. High-speed networking. Memory bandwidth. Optical interconnects. As data centers surpass 100,000 GPUs per cluster, these physical limits begin to bind more tightly than chip supply itself ever did. That backdrop is what makes Marvell Technology an increasingly interesting conversation.
Three years ago, Marvell had essentially no position here at meaningful scale. In a single year, its custom silicon business, ASICs designed for particular hyperscaler workloads, grew from almost nothing to roughly $1.5 billion in revenue.
Data center products now account for nearly three-quarters of total sales, with fiscal 2026 revenue reaching $8.2 billion, up 42% year over year. It is hard not to see the parallel with Nvidia's own early trajectory: a company sitting on a structural chokepoint in an industry spending hundreds of billions of dollars.
Marvell's $3.25 billion acquisition of Celestial AI looks bold in the short term, but its logic is clear. As cluster density rises and copper's limitations become more obvious, photonic interconnect technology, which transmits data as light instead of electrical signals, is likely to become essential. Marvell now owns that capability in-house rather than relying on a third party. It is too early to say, but this one transaction could prove to be the pivotal strategic move in the company's history.
The broader point is that the companies making the most headlines are not always the ones best positioned for the future of artificial intelligence. The ones hyperscalers cannot build without are the suppliers of custom silicon, the designers of networking fabric, and the companies whose products sit between Nvidia's chips and the inference workloads AI customers are actually paying for.
Broadcom, Nvidia, and Marvell each represent a distinct layer of that stack, and each has proven extremely difficult to replace in its own way. That rare combination of structural importance, accelerating revenue, and visible backlogs is worth investors' attention before the next leg up makes today's entry price a distant memory.