Key Highlights
- At GTC 2026, Nvidia introduced the Groq 3 LPU, a processor optimized exclusively for AI inference operations
- The company’s LPX server configuration integrates 128 Groq 3 chips and works alongside Vera Rubin NVL72 systems to deliver throughput improvements of 35x per megawatt
- A standalone Vera CPU rack marks Nvidia’s entry into direct rivalry with Intel and AMD’s data center CPU business
- Designed for agentic AI applications, the Vera CPU excels at autonomous tasks like web navigation and file data retrieval
- Fiscal 2026 saw Nvidia’s data center segment generate $193.5 billion in revenue, a significant jump from the previous year’s $116.2 billion
At this year’s GTC conference in San Jose on Monday, Nvidia unveiled a comprehensive portfolio of processors and server infrastructure that signals its ambitions far beyond traditional GPU manufacturing.
The star of the show was the Groq 3 language processing unit, commonly abbreviated as LPU. Following a $20 billion acquisition completed last December, Nvidia obtained licensing rights to Groq’s technology and welcomed key personnel including founder Jonathan Ross and president Sunny Madra to the team.
Groq 3 specializes in inference workloads — the operational phase that follows model training. Whenever users interact with AI chatbots by submitting queries and receiving responses, they’re engaging with inference processes. This segment of artificial intelligence is experiencing rapid expansion, creating opportunities for purpose-built silicon to outperform multipurpose GPUs.
According to Ian Buck, Nvidia’s VP of hyperscale and HPC, the Groq 3’s memory architecture delivers superior speed compared to GPU memory systems, though GPUs maintain advantages in total capacity. The strategy involves leveraging both technologies’ respective strengths.
This philosophy drives the LPX server rack design — an infrastructure solution containing 128 Groq 3 LPUs. When deployed alongside the Vera Rubin NVL72 rack configuration, Nvidia claims clients can achieve throughput gains of 35x per megawatt and unlock 10x greater revenue potential. The system targets trillion-parameter language models and context windows spanning millions of tokens.
Vera CPU Enters the Intel and AMD Arena
Equally significant was the announcement of the standalone Vera CPU rack. While Vera processors were initially introduced as components within the Vera Rubin superchip — featuring one Vera CPU paired with dual Rubin GPUs — Nvidia now offers Vera as an independent product line.
The rack configuration bundles 256 liquid-cooled Vera processors into an integrated system. Nvidia positions it as the optimal CPU for agentic AI workloads — autonomous artificial intelligence capable of web browsing, document parsing, or executing complex multi-stage operations without human intervention.
“We’ve designed a new kind of CPU, the Olympus core, engineered by NVIDIA for AI execution,” Buck explained. Vera additionally supports data mining operations, personalization engines, and contextual analysis that enhances AI model performance.
This move positions Nvidia as a direct competitor to Intel and AMD in the data center CPU marketplace — territory these chipmakers have controlled for decades.
Nvidia recently secured a partnership with Meta for large-scale deployment of its previous-generation Grace CPUs — representing the biggest installation of its kind. The Vera introduction amplifies this strategic direction.
Additional GTC Hardware Announcements
The company also presented the Bluefield-4 STX storage rack and Spectrum-6 SPX networking rack, completing a comprehensive data center hardware ecosystem.
Major cloud providers including Amazon, Google, Meta, and Microsoft are projected to invest a combined $650 billion in AI infrastructure throughout this year.
Nvidia’s data center division generated $193.5 billion in fiscal 2026 revenue, compared to $116.2 billion during fiscal 2025.
Wall Street analysts maintain a consensus Strong Buy rating on NVDA, with 38 Buy ratings and one Hold rating issued in the last three months, establishing an average price target of $273.61.