Key Takeaways
- Amazon Web Services has committed to acquiring 1 million GPUs from Nvidia through the end of 2027.
- Deliveries commence in 2025 and continue over a three-year period.
- The comprehensive agreement encompasses networking equipment, Groq inference processors, and next-generation Blackwell and Rubin architectures.
- AWS plans to deploy seven distinct Nvidia chip varieties for AI inference operations.
- Shares of both NVDA and AMZN experienced modest gains in extended trading after the disclosure.
The Amazon Web Services partnership represents one of Nvidia’s most substantial single-client semiconductor commitments to date. The arrangement becomes increasingly compelling upon closer examination of its scope and structure.
Nvidia Vice President Ian Buck confirmed to Reuters that deliveries of the 1 million GPUs will begin in 2025 and run through 2027. That timeframe aligns with CEO Jensen Huang’s forecast of a $1 trillion addressable market for Nvidia’s Blackwell and Rubin processor lines over the same window.
The partnership extends far beyond simple GPU procurement. AWS is acquiring an extensive portfolio of Nvidia infrastructure, including Spectrum-X and ConnectX networking solutions. This development carries particular significance since AWS has traditionally relied on proprietary networking infrastructure. Integrating Nvidia’s networking products represents a strategic departure from previous practices.
Amazon Embraces Nvidia’s Complete Inference Solution
AI inference—the computational process enabling AI systems to produce outputs and execute tasks—forms the foundation of this partnership’s technical framework. AWS intends to utilize seven distinct Nvidia chips for managing inference operations.
Buck articulated the strategy directly: “Inference is hard. It’s wickedly hard. To be the best at inference, it is not a one chip pony. We actually use all seven chips.”
The Groq processors, unveiled by Nvidia this week following a $17 billion licensing arrangement with an AI semiconductor company, constitute one component of the inference architecture. These chips operate alongside six additional Nvidia processors to deliver what the company characterizes as superior inference capabilities.
AWS will also implement Nvidia’s Blackwell processors and anticipates integrating the forthcoming Rubin platform upon availability. Neither Nvidia nor Amazon has revealed the monetary value of this arrangement.
Both companies’ shares registered moderate increases during after-hours trading Thursday following the announcement. NVDA closed regular trading down approximately 1%, while AMZN declined roughly 0.5%.
AWS Maintains Parallel Custom Chip Development
Amazon continues developing proprietary AI processors, including its Trainium2 chip. Nevertheless, the cloud giant is simultaneously leveraging Nvidia hardware for the most intensive computational requirements. The dual strategy appears complementary rather than conflicting.
This agreement underscores ongoing substantial capital allocation toward AI infrastructure among leading cloud computing providers. AWS isn’t abandoning its custom silicon initiatives—instead, it’s augmenting them with Nvidia hardware for specialized high-performance applications.
The Nvidia-AWS collaboration was initially revealed this week without precise timeline details. Buck’s Thursday statements to Reuters provided the most definitive information to date: deliveries beginning in 2025, continuing through the end of 2027, and spanning a diverse selection of Nvidia offerings across computational processing, networking infrastructure, and inference technology.