Key Highlights
- Shares of Marvell climbed as much as 6.3% during premarket hours following news of potential collaboration with Google on two specialized AI processors.
- The first processor is a memory processing unit engineered to complement Google’s tensor processing unit (TPU); the second represents a next-generation TPU optimized for AI inference tasks.
- Google is targeting completion of the memory processor’s design by next year, with plans to proceed to test manufacturing afterward.
- This collaboration reflects Google’s strategic effort to establish TPUs as a viable alternative to Nvidia’s dominant GPU offerings.
- Alphabet’s Q1 2025 financial results, scheduled for April 29, could provide additional insight into its semiconductor investment strategy.
Marvell Technology (MRVL) experienced a significant premarket surge this Sunday after The Information published details of Alphabet’s Google entering negotiations with the semiconductor company to jointly engineer two cutting-edge AI chips.
The shares advanced 6.3% by approximately 4:38 AM ET, giving investors a lively start to the trading week.
Based on the publication’s sources, two individuals with direct knowledge of the discussions, the partnership centers on developing a memory processing unit (MPU) engineered to operate in tandem with Google’s current tensor processing unit (TPU) architecture. The second processor represents an entirely new TPU variant specifically tailored for AI inference applications.
The companies aim to finalize the memory processor’s architecture within the next year, subsequently transitioning to test manufacturing phases.
Google Expands Its Semiconductor Ecosystem
This partnership represents part of a broader strategy rather than an isolated initiative. Google has been systematically developing a semiconductor partner network, collaborating with industry players such as Intel and Broadcom in addition to Marvell.
Throughout most of its existence, Google maintained TPUs exclusively for internal operations. This approach transformed in 2022 when the company’s cloud business assumed responsibility for external chip distribution and began marketing TPUs to enterprise clients.
Following that pivot, Google has accelerated both manufacturing capacity and commercial sales. In the previous year, the tech giant introduced direct TPU sales into customers’ private data centers, extending beyond its traditional cloud platform delivery model. This represents a substantial evolution in distribution strategy.
Earlier this month, Google formally unveiled TorchTPU, an initiative designed to establish native compatibility between its processors and PyTorch, the predominant AI development framework. This advancement reduces friction for engineers who have established their operations around PyTorch and are evaluating alternatives to Nvidia’s ecosystem.
TPU revenue has emerged as an increasingly significant component of Google Cloud’s financial performance as the organization seeks to demonstrate to stakeholders that its artificial intelligence expenditures are yielding tangible returns.
Nvidia’s Competitive Position
Nvidia maintains market leadership in AI computing infrastructure, but Google’s strategic maneuvers are intensifying competitive dynamics.
The Marvell collaboration strengthens Google’s capabilities in the inference chip segmentâa market where Nvidia has also been pursuing aggressive expansion. Reports indicate Nvidia is engineering new AI inference processors incorporating technology licensed from Groq.
With Google, Marvell, Intel, and Broadcom all pursuing similar objectives, the inference processor marketplace is becoming increasingly crowded and competitive.
Google’s first-quarter financial disclosure is scheduled for April 29. Market observers will be scrutinizing management commentary regarding TPU production scaling plans, cloud services revenue trajectories, and how the Marvell negotiations fit into the company’s longer-term semiconductor development timeline.