The GPU Party Rages On; Where are the Opportunities for Tier 2?
The tech world got a glimpse of the sheer scale of the AI training market over the past month as the big three hyperscalers revealed their third-quarter growth numbers: Microsoft’s cloud revenue grew 22% year over year; AWS reported a 19% increase in sales; Google Cloud posted a 35% revenue increase, led by AI-focused infrastructure.
As the market expands, everyone with GPUs to rent is getting a bigger piece of the action. But for the rest of the digital infrastructure industry, building out GPU clusters requires more money and gigawatts than most CSPs can muster. That doesn’t mean there are no opportunities to be had.
“IDC expects the CSPs—excluding hyperscale—to have the greatest percentage growth in power capacity,” says Sean Graham, research director for IT data centers at IDC. “We expect their power capacity to grow at 19 percent a year from 2024-2028.”
William Bell, EVP of Products at PhoenixNAP, agrees that most of the tier 2 cloud industry hasn’t had the resources to join the GPU party.
“Inference is the opportunity,” Bell says. “Right now, the tier 2s are trying to understand how they can augment their current deployments to capitalize as efficiently as possible on the inference play.”
Tier 2 CSPs may also be able to get into the game by building smaller clusters for sovereign or private AI. For those deployments, as for inference workloads, that may mean evaluating data center partners to determine whether they can meet the power and cooling requirements of tomorrow’s racks.
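To make that power question concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (per-GPU draw, server overhead, rack density, PUE) is an illustrative assumption rather than a vendor specification, but it shows why dense GPU racks quickly outgrow facilities built for traditional enterprise workloads.

```python
# Rough back-of-the-envelope estimate of per-rack power for dense GPU servers.
# All figures below are illustrative assumptions, not vendor specifications.

GPU_TDP_W = 700          # assumed per-GPU draw, roughly in line with current high-end accelerators
GPUS_PER_SERVER = 8      # typical dense GPU server configuration
OTHER_SERVER_W = 2_000   # assumed CPUs, memory, NICs, and fans per server
SERVERS_PER_RACK = 4     # assumed rack density before power limits bite
PUE = 1.3                # assumed facility overhead (cooling, power distribution losses)

server_w = GPUS_PER_SERVER * GPU_TDP_W + OTHER_SERVER_W
rack_it_kw = SERVERS_PER_RACK * server_w / 1_000
facility_kw = rack_it_kw * PUE

print(f"IT load per rack:       {rack_it_kw:.1f} kW")
print(f"Facility load per rack: {facility_kw:.1f} kW")
# Roughly 30 kW of IT load per rack under these assumptions (about 40 kW with
# facility overhead) -- far beyond the 5-10 kW racks many existing data halls
# were originally designed to support.
```

Under these assumptions, even a modest inference or private-AI cluster pushes a rack well past the power and cooling envelope of many legacy colocation halls, which is why partner evaluation matters.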
In the end though, the alternative cloud’s competitive advantage will likely remain the same.
“It’s really cost for performance; it always has been cost/performance,” Bell says. “If [customers are] not getting all these specialized, sexy services they can get from the hyperscalers, then it has to come at a significant savings.”
Just for fun, here’s a snapshot of what’s going on at some of the tier 2 CSPs that are reveling in the frenzy for GPUs.
CoreWeave
It seems barely a week goes by without big news from CoreWeave. Dubbing itself “the AI hyperscaler,” CoreWeave is projecting $2B in revenue this year and $8B next year. Supported by investments from Nvidia, Cisco and Microsoft (its biggest customer), CoreWeave is said to be planning a $35B IPO in 2025.
DigitalOcean
DigitalOcean recently announced the availability of bare metal GPUs to complement the GPU Droplets it introduced over the summer. DigitalOcean CEO Paddy Srinivasan has been open about his plans to woo smaller customers away from AWS and Microsoft. Its recent earnings showed a 12% increase in revenue and accelerating growth among AI and machine learning customers.
Vultr
Vultr is also in the mix on democratizing AI infrastructure. At its core, Vultr offers businesses access to GPU compute ranging from a fraction of a GPU on a virtual machine to multi-GPU bare metal servers. Earlier this year, Vultr announced a strategic partnership with Nvidia-owned AI optimization leader Run:ai, just one of Vultr’s Cloud Alliance members. Vultr also claims bragging rights as the first provider to virtualize the NVIDIA A100, the most popular GPU for machine learning.
Lambda
Lambda shook things up over the summer with the launch of its one-click GPU clusters designed for shorter-term training workloads. Earlier this year, Lambda raised $320 million in Series C funding to scale its GPU Cloud for AI, giving its customers access to thousands of GPUs connected with NVIDIA Quantum-2 InfiniBand networking.
Want to learn more about innovation and growth in the alternative cloud universe? Get your ticket now to CloudFest USA (Nov. 5-6, 2025, Miami). ~ Rebecca Sausner, CEO, CloudFest USA