
CoreWeave: Can NVDA's 'godson' really ride its coattails?

As GPUs take a larger share of AI infrastructure spending, the already oligopolistic U.S. cloud market is seeing a crop of new entrants in 2025. In this piece, Dolphin Research uses new-cloud player $CoreWeave(CRWV.US) as a case study to examine how AI reshapes infrastructure-cloud business models and key inputs, and whether CoreWeave is a high-quality business with long-term value.
The next piece will shift to a quantitative view. We will assess revenue runway, cost structure, and capital returns to judge valuation and risk-reward at the current juncture.
Detailed analysis below:
I. A lens on the cloud computing business model
CoreWeave operates in IaaS, whose core value lies in aggregating upstream supply and matching downstream demand at scale. The model uses large, pooled demand to amortize data center capex and R&D.
1.1 Demand aggregation
Simply put, it is a shared data center. Historically, each enterprise had to build its own server room, which was costly and underutilized, with peaks and troughs varying by vertical and time of day or year.
A public cloud facility smooths these usage cycles and lifts utilization. For cloud providers, the more customers and the more diverse their demand curves (across industries and time zones), the better. (MSFT CEO Nadella made a similar point on a recent call.)

1.2 Supply aggregation
A production-ready IaaS data center stands on three infrastructure layers: first, civil works and energy; second, IT and non-IT hardware; third, software and engineering. The third layer is talent-heavy with limited capex, so we focus on the first two tangible layers.
1) Layer 1 — Civil & energy: secure land, build facilities, and create a server-ready site. Depending on footprint and scale, constructing a new shell data center takes roughly 1–3 years.
Providers can build and own these assets or outsource to specialists such as Equinix or Digital Realty and lease capacity. Beyond land and buildings, stable access to power, water, and network backhaul is critical and is a key bottleneck to cloud capacity expansion. Energy supply is largely an operating expense post go-live, with limited upfront capex.
Research suggests land and shell construction, the first layer, typically accounts for ~5–10% of total data center build cost excluding software.
2) Layer 2 — IT equipment sits atop land and buildings and includes all hardware directly tied to compute. This primarily covers servers, networking, and storage.
a. Servers are the heart of the data center—essentially large-format computers built from high-performance chips (GPU/CPU), motherboards, and memory. Today, advanced chips and memory are major supply bottlenecks for AI data centers.
Clouds can do in-house design with ODM manufacturing or buy turnkey from vendors like Dell. Tier-1 providers such as Azure and AWS skew to custom-designed servers, while CoreWeave mainly procures or leases fully assembled systems from upstream vendors.
b. Networking: traditional setups include switches to move data among servers, routers to connect the DC to external networks, and optics/cabling. In the AI era, interconnect speeds must rise sharply, requiring AI-grade fabrics such as Nvidia Quantum.
c. Storage: clusters of HDDs or SSDs store the massive data generated in operations. By industry practice, IT gear is the dominant capex block at ~60–70% of data center hardware spend, with servers alone at ~40–50%.
Given higher unit prices for AI chips, memory, and high-performance networking, the IT share is even higher in AI-focused builds. This mix shift is structural.
3) Layer 2 — Non-IT equipment: beyond compute, DCs require extensive non-IT systems to operate. These include power, cooling, racks, and monitoring/DR.
a. Power systems: transformers to step down grid electricity, backup generators to ride through outages, UPS for short-term continuity, and energy storage to smooth load peaks and grid interruptions.
b. Cooling: thermal management for server clusters and the entire DC via air or liquid cooling. With AI pushing per-card power density and heat, liquid cooling adoption is rising.
c. Racks provide the physical framework. While the metal itself is simple, optimal layout and integration with other systems to improve cooling and data flow require domain know-how.
d. Monitoring & DR: systems to track equipment status, power, thermal, and network conditions, enabling fast response to crashes, power loss, network failure, or fire. This reduces downtime and risk.
Surveys suggest non-IT systems such as power and cooling account for roughly 20–30% of total build. The share is somewhat lower in AI DCs.
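Putting the three tangible layers together, the shares above can be sanity-checked with a back-of-envelope split. The $1.0bn total build cost and the use of range midpoints are illustrative assumptions, not disclosed figures:

```python
# Illustrative data center capex split using the share ranges cited above.
# The $1.0bn total and the midpoint treatment are assumptions for
# illustration only, not CoreWeave disclosures.
TOTAL_BUILD_COST_BN = 1.0

# Share ranges discussed in the text (software/engineering excluded).
shares = {
    "land_and_shell": (0.05, 0.10),  # Layer 1: civil works and building
    "it_equipment":   (0.60, 0.70),  # servers, networking, storage
    "non_it_systems": (0.20, 0.30),  # power, cooling, racks, monitoring/DR
}

def midpoint(lo, hi):
    return (lo + hi) / 2

for name, (lo, hi) in shares.items():
    mid = midpoint(lo, hi)
    print(f"{name:>14}: ~{mid:.1%} of total, ${TOTAL_BUILD_COST_BN * mid:.3f}bn")

# Sanity check: the three layers' midpoints should roughly cover the build.
total_mid = sum(midpoint(lo, hi) for lo, hi in shares.values())
print(f"layer midpoints sum to {total_mid:.1%}")
```

The midpoints sum to roughly 97.5%, consistent with the three tangible layers accounting for essentially the entire build cost once software and engineering are excluded.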

II. Low long-term visibility
As above, IaaS value creation hinges on aggregating compute demand and production inputs to match at scale and drive lower unit costs. Scale and mix are the levers.
For any cloud provider, the ability to aggregate demand downstream and consolidate the supply chain upstream is a core moat. Beyond these hard assets and contracts, software and engineering capabilities are key soft moats, which we discuss next.
2.1 Highly concentrated customer mix
Legacy cloud leaders historically served highly fragmented demand across many verticals, so any single customer contributed a small share. Diversification smoothed cyclicality.
CoreWeave is the opposite: its customer base is highly concentrated, anchored by AI model leaders and large tech firms capable of in-house model development and optimization. This skews exposure.
In FY2024, CoreWeave generated about $1.9bn of revenue, with nearly 80% from just two customers — Microsoft and Nvidia. Microsoft alone contributed 62%.
Its current backlog tells the same story: as of 3Q25, remaining performance obligations stood at $55.6bn, of which nearly $47.0bn came from OpenAI, Microsoft, and Meta combined.
Because a sizable portion of the MSFT contract serves OpenAI, there may be double counting. But it also implies that ultimate end-demand is even more concentrated in OpenAI.
CoreWeave reportedly also works with Google (again, largely for OpenAI workloads), as well as Databricks, IBM, Cohere, and Poolside. These are all tech-forward customers.
Overall, most of CoreWeave’s business depends on a handful of AI leaders, primarily OpenAI. The long tail is still mainly AI/tech firms, offering limited diversification across industries.
This contrasts with legacy cloud models where breadth across industries dampens volatility and lowers sector-downturn risk. With such concentration in a few clients and a narrow set of industries, CoreWeave faces higher demand risk, and has limited pricing power with customers that represent the bulk of revenue.
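The concentration can be made concrete with a Herfindahl-Hirschman Index (HHI) on the FY2024 figures. The 62% Microsoft and implied ~18% Nvidia shares follow from the text; splitting the remaining ~20% into twenty equal 1% accounts is a simplifying assumption, not a disclosed breakdown:

```python
# Hedged sketch: FY2024 customer concentration from the figures cited above.
# Microsoft 62%; Microsoft + Nvidia ~80% combined, implying Nvidia ~18%.
# Treating the residual ~20% as twenty equal 1% accounts is an assumption.
msft_share = 0.62
nvda_share = 0.18
other_shares = [0.01] * 20  # hypothetical fragmented long tail

customer_shares = [msft_share, nvda_share] + other_shares

# HHI on a 0-1 scale; above ~0.25 is conventionally "highly concentrated".
hhi = sum(s ** 2 for s in customer_shares)
print(f"HHI = {hhi:.4f}")  # lands well above the 0.25 threshold
print(f"top-2 customer share = {msft_share + nvda_share:.0%}")
```

Even under this generous long-tail assumption, the index sits deep in "highly concentrated" territory; a legacy cloud with thousands of small accounts would score far lower.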



2.2 Beyond hard assets, what about soft power?
Clouds also differentiate through software and engineering. Research indicates CoreWeave’s key soft advantage is rapid DC rollout — taking a site from zero to live in roughly 3–5 months — but it is relatively weaker in software/programming.
1) CoreWeave’s technical services
Per the prospectus, offerings fall into three service tiers plus one support layer. a. Infrastructure Service: bare hardware rental, such as high-performance chips and storage without a software layer, which is reportedly a small slice of revenue.
b. Managed Software Service: basic software tooling atop hardware. Notably, the Bare Metal model gives customers 100% dedicated access to physical servers rather than traditional VMs, and is the default for top clients (at least Microsoft).
This is a key AI-cloud vs. traditional-cloud difference. In legacy workloads, compute demand is elastic and bursty, allowing providers to multiplex the same physical fleet across users to raise utilization and margins.
In AI, especially training, compute demand is large-scale, continuous, and sustained. Training new frontier models can require 10k+ GPUs nearly fully utilized for weeks to months, while inference is less concentrated but still more so than traditional workloads.
As a result, AI compute tends to be exclusive and non-fungible over the job duration. Providers have limited room to dynamically reassign physical resources across customers.
Beyond Bare Metal, this tier also includes CKS and VPC VM-like services, which resemble traditional rental models. These complement the core offering.
c. Application Software Service is roughly analogous to PaaS. Beyond VMs and basic tooling, it adds higher-level capabilities such as resource/job optimization and preloaded AI models or functions for immediate use.
d. Mission Control & Observability monitors hardware usage and job progress, provides visibility to clients, and ensures stable operations through runtime issue resolution. This reduces operational risk.
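The multiplexing contrast behind the Bare Metal model can be sketched with hypothetical utilization figures (the 25% bursty average and 95% sustained training utilization are illustrative assumptions, not measured data):

```python
# Hedged sketch: why statistical multiplexing works for traditional clouds
# but not for dedicated AI training fleets. All figures are assumptions.
fleet_gpus = 1_000  # hypothetical shared fleet size

# Traditional workloads: bursty tenants averaging ~25% utilization can be
# overcommitted onto shared hardware, since peaks rarely coincide.
trad_avg_util = 0.25
trad_overcommit = 1 / trad_avg_util
print(f"traditional: up to ~{trad_overcommit:.0f}x overcommit on {fleet_gpus} GPUs")

# AI training: a job pins ~95% of its allocation continuously for weeks,
# so the hardware is effectively exclusive for the job's duration.
ai_sustained_util = 0.95
ai_overcommit = 1 / ai_sustained_util
print(f"AI training: only ~{ai_overcommit:.2f}x overcommit, i.e. near-dedicated")
```

The less room there is to overcommit, the closer the economics get to plain hardware rental, which helps explain why AI-cloud pricing tracks the underlying GPUs rather than software value-add.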

2) How strong is CoreWeave’s software?
Market commentary suggests CoreWeave is not strong in programming/software, while it excels in rapid, effective DC deployment and engineering execution. This shapes its competitive posture.
By contrast, rival new-cloud player Nebius reportedly has stronger software/programming capabilities. Management pedigrees also hint at differing orientations.
Nebius’s CEO founded Yandex, Russia’s largest internet company, and its chairman co-founded CompTex and InfiNet, leaders in telecom and wireless broadband. These backgrounds are software- and network-heavy.
CoreWeave’s three co-founders come from energy investing, and the company started as a Bitcoin mining operator, with little prior tech or cloud experience. Among senior hires, only the CTO and COO bring tech backgrounds, with the COO previously an SVP at Oracle Cloud.
From the core team’s background, CoreWeave’s tech depth appears limited, while it has some edge in securing power — a key bottleneck. This fits its engineering-first profile.

Today, pricing largely remains hardware-based, with services like Slurm and monitoring bundled as free add-ons. Monetization is thus skewed to infra rental.

3) Why soft power matters
For a firm like CoreWeave with strong engineering but weaker software, what are the medium- and long-term implications? First, today’s growth is enabled by a cyclical AI supply shortfall and rapid iteration.
CoreWeave’s ability to assemble and ramp a DC in roughly three months (assuming kit and power are ready) is among the fastest. This matches the pace of top tech and AI leaders’ compute needs.
Also, its key customers are Microsoft, OpenAI, Meta, and Google. These giants have deep in-house technical strength and often prefer Bare Metal to maximize control of underlying hardware, reducing the need for CoreWeave’s higher-layer software.
But longer term, to sustain unique edge and diversify revenue away from a few anchors, CoreWeave must win traditional and mid-sized enterprises. These customers typically need stronger platform and software services.
The crux is whether CoreWeave can scale now during the supply crunch and then shore up its technology stack before the current window closes. Execution on this pivot will define durability.
2.3 What is CoreWeave’s bargaining power upstream?
Partly due to AI-era dynamics and partly due to its own customer mix and technology limitations, CoreWeave’s demand aggregation is high in volume but middling in quality. How does it fare in consolidating supply?
Disclosures show supplier concentration is also high. In 2023–2024, the top three suppliers accounted for roughly 80–90% of total purchasing.
a. Largest supplier — Nvidia: likely 50–60% of total purchases in 2023–2024, supplying GPUs and high-performance interconnect systems (Spectrum-X and Quantum-X). This is the critical chokepoint.
b. Second/third — Dell and Super Micro: based on filings and reporting, each likely represents 10–20% of purchases, supplying assembled servers.
c. Other important suppliers: for land, buildings, and power, CoreWeave has disclosed Core Scientific and Applied Digital. Core Scientific, a Bitcoin mining and DC hosting firm, initially leased facilities to CoreWeave.
In 2025, CoreWeave acquired Core Scientific for $9.0bn, shifting from a pure-lease, asset-light model to partial owner-operator. This marks a strategic move upstream in hard assets.

Even so, concentration means the loss of any key supplier could disrupt the chain. The classic industry ‘smile curve’ applies: design-heavy upstream and end-customer-facing downstream tend to command stronger bargaining power.
CoreWeave sits mid-chain in a lower-value position, dependent on layered suppliers upstream and, downstream, often supplying the hyperscalers rather than engaging broad end customers directly. This limits take-rate.
Against its largest supplier, Nvidia, CoreWeave has no proprietary chips and lacks the balance-sheet heft of the hyperscalers, leaving it little pricing leverage. Nvidia’s support is one-way patronage extended at the supplier’s discretion, not a product of CoreWeave’s bargaining power.
With Dell and other server vendors, there is mutual dependence, as CoreWeave is a notable AI server customer. But CoreWeave buys end-to-end solutions from Dell — racks, cooling, full systems, and some basic software — highlighting weaker in-house server design capability versus hyperscalers or leaders like Alibaba Cloud.
The more a provider relies on suppliers, the more margin it cedes. Market consensus accordingly pegs CoreWeave’s true gross margin at roughly 25–30%, well below the 60%+ of Microsoft’s Intelligent Cloud segment, reflecting weaker bargaining power.


2.4 Takeaways — Limited value capture in the stack
In sum, as the final aggregator of compute demand and supply, CoreWeave does not look like a franchise with high visibility to sustain unique competitiveness and share in a market dominated by giants over the next 3–5 years. Once supply normalizes, it is hard to see it competing head-on with the Big Three.
Key issues include: a) an extremely concentrated customer base in a few firms and sectors, with severe impact if a major client churns (Microsoft has stated a preference to insource DCs over time). b) Offerings are still largely basic hardware rental with limited value-add, which neither deepens lock-in nor eases expansion into SME customers or higher-margin PaaS.
c) Similar over-reliance on a few large suppliers, limited customization, and insufficient scale to be indispensable to top vendors, capping value capture. That said, low long-term visibility does not preclude strong medium-term growth, which we will analyze next from a nearer-term perspective.
Risk disclosure & statement: Dolphin Research Disclaimer & General Disclosure
The copyright of this article belongs to the original author/organization.
The views expressed herein are solely those of the author and do not reflect the stance of the platform. The content is intended for investment reference purposes only and shall not be considered as investment advice. Please contact us if you have any questions or suggestions regarding the content services provided by the platform.
