CoreWeave Details Expansion Financing, Power Constraints and NVIDIA Growth Plans at Conference
by Mitch Edgeman · The Markets Daily

CoreWeave (NASDAQ:CRWV) executives and a peer AI infrastructure provider outlined how they are financing rapid expansion, managing power constraints, and broadening product offerings as demand for AI compute accelerates, according to remarks from a recent conference discussion.
CoreWeave leadership role and growth context
Nick Robbins, a vice president in corporate development at CoreWeave, said he leads the company’s equity and equity-linked financing efforts and also oversees investor relations. Robbins said he has been with the company as an employee for about seven months, after previously working with the founders as an advisor while at Morgan Stanley and as a pre-IPO investor.
In the discussion, the host cited a sharp revenue increase “from $200 million to $5 billion” over a couple of years, expectations to double again this year and in 2027, and a backlog described as roughly $66 billion to $67 billion. Robbins argued that while the company has addressed prior questions about differentiation, the market still appears to wrestle with how a “hyperscale cloud” can be built rapidly without a legacy cash-flow engine like traditional hyperscalers.
Contract model and margin framework
Robbins said CoreWeave’s model is built around longer-dated “take-or-pay” contracts that provide visibility into future cash flows, even as the company incurs costs ahead of revenue during hypergrowth. He said that at the contract level, stabilized margins are “in the mid-20s,” and that the company is willing to accept earlier-period cost pressure because it expects contracts—described as having a weighted average term of about five years—to generate enough cash flow to service debt, cover operating expenses, and deliver free cash flow to shareholders over the contract life.
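The contract economics Robbins described can be sketched with simple arithmetic. The mid-20s stabilized margin and roughly five-year weighted average term come from the remarks above; the revenue and CapEx figures below are hypothetical, chosen only to show how a debt-financed, cost-heavy early period can still turn cash-positive over the contract life.

```python
# Hypothetical take-or-pay contract sketch. Only the ~25% stabilized
# margin and five-year term are from the remarks; all dollar figures
# are illustrative assumptions, not company disclosures.

def contract_cash_flows(annual_revenue, margin, term_years, upfront_capex):
    """Return (year, cumulative net cash flow) pairs for a fixed-price contract."""
    cumulative = -upfront_capex  # costs are incurred ahead of revenue
    flows = []
    for year in range(1, term_years + 1):
        cumulative += annual_revenue * margin
        flows.append((year, round(cumulative, 1)))
    return flows

# Example: a $100M/yr contract at a 25% margin over five years,
# against $80M of upfront, debt-financed CapEx (hypothetical numbers).
for year, cum in contract_cash_flows(100.0, 0.25, 5, 80.0):
    print(f"Year {year}: cumulative net cash flow ${cum}M")
```

Under these assumed inputs the contract runs cash-negative for its first three years and recovers over the full term, which is the shape of the trade-off Robbins described.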
Credit-market concerns and delayed draw term loans
Addressing investor unease about credit markets, Robbins said the market may misunderstand CoreWeave’s primary approach to borrowing, which he described as asset-level financing through delayed draw term loans (DDTLs). He said these facilities are not accessed “speculatively,” but rather against specific contracted customer revenues with fixed pricing per GPU hour.
Robbins said the three key factors for DDTL financing are:
- Contract quality, which he said the company “industrialized” from 2022 to 2023 to ensure financeability;
- Customer creditworthiness, with Robbins saying most backlog is tied to investment-grade customers and the remainder largely to highly creditworthy AI leaders;
- Execution, which he said drives lender confidence and pricing over time.
Robbins provided an example of cost-of-capital improvement, saying the firm’s first DDTL was priced at “SOFR + 962,” while a later DDTL (DDTL 3), which he said was for OpenAI, reflected roughly a 600 basis-point reduction versus earlier pricing. He attributed the change to capital markets gaining confidence in CoreWeave’s execution rather than simply differences in customer credit profiles.
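The pricing improvement Robbins cited can be checked back-of-envelope. The 962 basis-point initial spread and the roughly 600 basis-point reduction are from the remarks; the implied DDTL 3 spread and the $1 billion drawn balance below are derived or assumed for illustration, not stated figures.

```python
# Back-of-envelope check of the DDTL pricing improvement described above.
# Inputs from the remarks: first DDTL at SOFR + 962 bp, later pricing
# "roughly" 600 bp tighter. The implied spread is derived, not disclosed.

FIRST_DDTL_SPREAD_BP = 962
REDUCTION_BP = 600  # "roughly" per the remarks

implied_spread_bp = FIRST_DDTL_SPREAD_BP - REDUCTION_BP
print(f"Implied later DDTL pricing: SOFR + {implied_spread_bp} bp")

# On a hypothetical $1B drawn balance, the annual interest saving is:
drawn_usd = 1_000_000_000
annual_saving_usd = drawn_usd * REDUCTION_BP / 10_000  # 1 bp = 0.01%
print(f"Annual interest saved on $1B drawn: ${annual_saving_usd:,.0f}")
```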
On expected capital needs, Robbins said CoreWeave’s guidance included “$30 billion” of CapEx, and he emphasized that “substantially all” of it is tied to contracts already in the revenue backlog. He said funding sources include DDTLs as the primary mechanism, as well as customer prepayments and opportunistic top-level financings such as high-yield or convertibles. He added that, to date, the company has financed up to about 90% of contract-level CapEx through DDTLs, with the remainder coming from prepayments and operating free cash flow.
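Taking the funding mix at face value, the $30 billion CapEx guidance and the "up to about 90%" DDTL share from the remarks imply the split below. The exact division of the remainder between prepayments and operating free cash flow was not disclosed, so it is lumped together here.

```python
# Illustrative funding-mix split for contract-level CapEx, using figures
# from the remarks: $30B CapEx guidance and "up to about 90%" via DDTLs.
# The remainder (prepayments + operating free cash flow) is not broken out.

capex_guidance_bn = 30.0  # $B, per guidance cited in the remarks
ddtl_share = 0.90         # upper bound described as "up to about 90%"

ddtl_funded_bn = capex_guidance_bn * ddtl_share
remainder_bn = capex_guidance_bn - ddtl_funded_bn
print(f"DDTL-funded: ${ddtl_funded_bn:.1f}B; "
      f"prepayments and free cash flow: ${remainder_bn:.1f}B")
```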
Power strategy, site selection, and data center build approach
Robbins said time to power and the source of power are key factors in selecting new sites, and he described CoreWeave’s emphasis on nearer-term capacity. He said the company typically contracts power 12 to 24 months before it comes online, and typically has that power in place six to 12 months ahead of customer delivery, rather than pursuing sites several years out.
He also said CoreWeave prefers grid power over behind-the-meter power, arguing that grid connectivity reduces the redundancy required to achieve uptime. Robbins said he believes the recent “power shortage” is less about a lack of electrons and more about physical constraints—transformers, batteries, backup generation, transmission lines, and labor availability. He said CoreWeave works with roughly 40 data center development partners and shares best practices across that network.
On build strategy, Robbins said most of the company’s 3.1 gigawatts of contracted power as of Dec. 31—substantially all expected online by the end of 2027—is “very largely leased.” He said CoreWeave expects self-build to become a bigger portion over time, pointing to a first self-build project at the Kenilworth, New Jersey campus (referred to as the “old Merck campus”) that he said should come online in the next one to two years, though he noted the company has not provided specific guidance. He described CoreWeave’s self-build approach as joint ventures in which the company is a minority equity partner and long-term tenant, aiming for greater control and efficiency while limiting incremental CapEx burden.
NVIDIA partnership, product expansion, and technology roadmap
Robbins discussed a January announcement expanding CoreWeave’s relationship with NVIDIA. He said the announcement included an expectation to add 5 gigawatts of AI cloud capacity by 2030 and described an “asset-light” monetization opportunity where NVIDIA would validate CoreWeave’s reference architecture and software with the intent to make them available to other enterprise and sovereign customers in their own data centers. He also cited collaboration around procuring land, power, and shell capacity as a way to accelerate deployment amid what he characterized as demand that is “overwhelming,” including turning potential customers away due to capacity limits.
Robbins also said the company is focused on growing higher-margin add-on services beyond GPUs, comparing the strategy to AWS’s evolution from infrastructure toward cross-selling. He said CoreWeave’s additional services—such as storage, networking, and CPU—had reached a $100 million run rate, but noted that even rapid growth in those offerings may take time to become material given the pace of expansion in the core GPU business.
On GPU technology, Robbins said Blackwell is rapidly displacing Hopper in prevalence and suggested that the market is only beginning to see the impact of models trained on Grace Blackwell systems. He added that broader adoption could still drive strong demand for older generations such as Ampere and Hopper, especially among enterprises starting deployments with more familiar platforms. He also said the shift from air-cooled to liquid-cooled data centers is a major design inflection, with higher-density systems requiring liquid cooling and influencing how flexible a site can be for future GPU generations.
Looking to 2026, Robbins said he is most focused on execution—scaling what he described as some of the largest compute builds in the world while deploying newer generations of technology—and suggested the market may underestimate the sophistication required to deliver AI cloud at scale.
The event also included a separate discussion with Michael Gordon of Crusoe, who described Crusoe’s energy-first origins in using stranded energy for Bitcoin mining and said the company now operates a diversified AI infrastructure platform spanning hyperscaler data centers and a cloud offering for AI-native customers. Gordon highlighted Crusoe’s vertically integrated manufacturing capabilities, citing efforts to address supply-chain bottlenecks such as switchgear lead times and pointing to its Abilene, Texas “Stargate” facility, which he described as a 1.2 gigawatt campus designed to operate as a single cluster.
About CoreWeave (NASDAQ:CRWV)
CoreWeave is a U.S.-based provider of GPU-accelerated cloud infrastructure designed to support compute-intensive workloads such as artificial intelligence, machine learning, visual effects rendering and other high-performance computing applications. The company supplies access to large fleets of modern GPUs and complementary infrastructure that enable customers to train and deploy large models, run inference at scale, and process graphics-heavy workloads with low latency and high throughput.
CoreWeave’s product offering includes on-demand and dedicated GPU instances, bare-metal servers, private clusters and managed services tailored for enterprise and developer use.