Framework | Framework Computer | Modular Laptops & PCs You Can Repair
Cool laptops and desktops for Linux.
Public notes from activescott tagged with #hardware
“CPUs are becoming the bottleneck in terms of growing out this AI and agentic workflow,” Dion Harris, Nvidia’s head of AI infrastructure, told CNBC this week, calling it an “exciting opportunity.”
The chip giant announced its first data center CPU, Grace, in 2021, and the next generation, Vera, is now in production. The CPUs are typically deployed alongside Nvidia’s famous Hopper, Blackwell or Rubin GPUs in full rack-scale systems.
Exploding demand for GPUs has turned Nvidia into a household name and the most valuable publicly traded company in the world, with a $4.4 trillion market cap. Its broader chip strategy took a major turn in February, when Nvidia struck a multiyear deal with Meta that included the first large-scale standalone deployment of Grace CPUs, without accompanying GPUs, with plans to deploy Vera in 2027.
Thousands of standalone Nvidia CPUs are also helping power supercomputers at the Texas Advanced Computing Center and Los Alamos National Lab, Nvidia told CNBC.
Bank of America predicts the CPU market could more than double, from $27 billion in 2025 to $60 billion by 2030. In the latest quarter alone, Nvidia generated data center revenue of over $62 billion, up 75% from a year earlier.
Harris said Nvidia took a fundamentally different approach in design that makes its CPUs “best suited” for data processing and agentic AI workflows, compared to the more general-purpose CPUs made by industry leaders Intel and AMD.
A big difference is in the number of cores in each CPU.
AMD’s EPYC line and Intel’s Xeon high-performance server CPUs typically have 128 cores, compared to 72 cores in Nvidia’s Grace CPU.
“If you’re a hyperscaler, you want to maximize the number of cores per CPU, and that essentially drives down the cost, the dollars per core. So that’s one business model,” Harris explained.
Instead, Nvidia designed its CPU specifically to help its star GPUs run AI workloads.
“Your single-threaded performance becomes much more important than your dollars per core because you’re trying to make sure that that very expensive resource, being the GPU, isn’t sitting there waiting,” Harris said.
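The trade-off Harris describes can be made concrete with a little arithmetic. The sketch below uses entirely hypothetical prices and utilization figures (none come from Nvidia, Intel, or AMD) to show why a hyperscaler optimizes dollars per core while an AI-system designer optimizes to keep an expensive GPU busy:

```python
def dollars_per_core(cpu_price: float, cores: int) -> float:
    """Hyperscaler metric: amortized cost of each CPU core."""
    return cpu_price / cores

def gpu_idle_cost(gpu_price: float, utilization: float) -> float:
    """Share of a GPU's price effectively wasted while it waits on the CPU."""
    return gpu_price * (1.0 - utilization)

# Hypothetical prices: a many-core server CPU vs. a lower-core-count one.
print(dollars_per_core(10_000, 128))  # 78.125 dollars/core
print(dollars_per_core(8_000, 72))    # ≈111.11 dollars/core

# If a $30,000 GPU sits idle 20% of the time waiting on the CPU,
# roughly $6,000 of its price is wasted -- dwarfing any savings from
# cheaper CPU cores, which is the case for faster single-threaded CPUs.
print(gpu_idle_cost(30_000, 0.80))    # ≈6000
```

Under these made-up numbers, the lower-core-count CPU loses on dollars per core, but if its stronger per-core performance lifts GPU utilization even a few points, it pays for itself many times over.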
Nvidia also bases its CPUs on Arm architecture, more typically used for chips in lower-power devices like smartphones, while Intel and AMD base theirs on the traditional x86 architecture. Introduced by Intel nearly 50 years ago, x86 has dominated PC and server processor designs ever since.
AMD’s Norrod said Nvidia has “optimized their chips very well, I think, for feeding their GPUs. They’re not well optimized for general-purpose applications.”
MicroZig is a toolbox for building embedded applications in Zig.
FurMark 2 is the successor to the venerable FurMark 1 and is a very intensive GPU stress test for Windows (32-bit and 64-bit) and Linux (32-bit and 64-bit) platforms. It's also a quick OpenGL and Vulkan graphics benchmark with online scores. FurMark 2 has improved command-line support and is built with GeeXLab.
Work at Internal Drive Speed: Edit 8K video, run AI models, and access files with up to 6,302 MB/s real-world performance that matches your Mac Studio's internal SSD