Summary

Vertiv Holdings (NYSE: VRT) is the dominant independent provider of critical power and cooling infrastructure for datacenters, with roots going back to Emerson Electric’s Network Power division. Sold to Platinum Equity in 2016 for ~$4B and rebranded as Vertiv, it went public via SPAC merger with Goldman Sachs' GS Acquisition Holdings on February 10, 2020. The company sits at a structurally advantaged position in the AI buildout: it does not own or operate datacenters, but virtually every large-scale datacenter — hyperscaler, colocation, and AI factory — requires Vertiv's core product categories (uninterruptible power supplies, power distribution units, busway, and cooling equipment). FY2025 revenue reached $10.23B, up ~28% year-over-year, with organic order growth of +252% in Q4 2025 and a backlog that surged to $15.0B, up 109% from the prior year.

The AI wave has transformed Vertiv’s growth trajectory. High-density GPU clusters (20–100+ kW/rack) require fundamentally different power and cooling infrastructure than traditional 5–10 kW/rack IT workloads. Air cooling alone cannot remove the heat generated by modern AI accelerators at scale, forcing operators to adopt direct-to-chip liquid cooling, immersion cooling, or hybrid architectures — all of which Vertiv supplies. The company has deepened its alignment with NVIDIA through a 2024 Solution Advisor partnership in the NVIDIA Partner Network, co-development of reference architectures for current AI factory deployments, and a forward-looking 800 VDC power platform targeting NVIDIA’s Rubin Ultra platforms in 2027. Vertiv’s MegaMod HDX prefabricated modular system — supporting up to 10 MW and 100+ kW/rack in factory-assembled modules — directly compresses datacenter deployment timelines, a critical differentiator as operators race to add capacity.

Vertiv competes against Schneider Electric (APC/Galaxy brands) and Eaton in legacy power distribution, but has moved faster than either into AI-specific liquid cooling configurations and next-generation power architectures. Its global manufacturing footprint — spanning the Americas, Europe, Middle East, and Asia — provides supply chain redundancy, though tariff exposure and component lead times (particularly large transformers and LFP battery cells) remain operational constraints. With 2026 guidance of $13.25–13.75B revenue (+27–29% organic growth) and a book-to-bill of ~2.9x exiting 2025, Vertiv’s forward order visibility is exceptional for a capital-goods company.

Key Facts

  • Ticker: NYSE: VRT
  • Founded: 2016 (as Vertiv, spun from Emerson Electric’s Network Power; operational history traces to 1946)
  • HQ: Columbus, OH (corporate); Delaware, OH (primary manufacturing)
  • Type: Public (NYSE: VRT); went public via SPAC merger February 10, 2020
  • CEO: Giordano Albertazzi (since January 2023)
  • FY2025 Revenue: $10.23B (+27.69% YoY)
  • FY2025 Adjusted Operating Profit: ~$2.06B (midpoint of guidance); adjusted operating margin ~20%
  • FY2025 Adjusted Diluted EPS: ~$4.10
  • FY2025 Adjusted Free Cash Flow: ~$1.5B
  • Q4 2025 Backlog: $15.0B (up 109% YoY); Q4 2025 book-to-bill ~2.9x
  • Market Cap: ~$89–92B (March 2026)
  • 2026 Guidance: Net sales $13.25–13.75B; adjusted EPS $5.97–6.07; organic growth 27–29%
  • Geographic Revenue Mix (2025): Americas 62%, APAC 20%, EMEA 18%
  • Employees: ~30,000 globally
  • Key Product Lines: UPS (Liebert brand), power distribution units (PDUs), busway (PowerBar), cooling distribution units (CDUs), direct-to-chip liquid cooling, immersion cooling, prefabricated modular datacenters (MegaMod, SmartRun), IT infrastructure management (Vertiv Unify)
  • Global Presence: Manufacturing in 20+ countries; service operations in 130+ countries; 4,000+ field service engineers
  • Strategic Partnerships: NVIDIA (Solution Advisor NPN, 800 VDC co-development); Intel (two-phase direct-to-chip for Gaudi3); Compass Datacenters (hybrid liquid/air deployment)
  • Prior owner: Emerson Electric (to 2016) → Platinum Equity (2016–2020) → Public (2020–present)

What It Is / How It Works

Vertiv is a critical infrastructure supplier to the datacenter industry — it does not own, operate, or develop datacenters, but it supplies the power and cooling systems that make datacenters functional. Nearly every large-scale facility globally runs Vertiv (or competitor) products at multiple points in the power and thermal chain: from the utility service entrance through switchgear, through UPS systems providing battery-backed ride-through, through power distribution to the rack level, and through cooling systems that remove the heat generated by compute.

The power chain Vertiv owns: In a modern datacenter, utility AC power enters at medium voltage (typically 13.8–34.5 kV), steps down through transformers to low-voltage AC, flows through switchgear and transfer switches, then enters UPS systems (Vertiv Liebert brand) that provide seconds-to-minutes of battery-backed ride-through during utility interruptions. From the UPS, power flows through power distribution units (PDUs) and busway (PowerBar) to individual racks, where rack-level PDUs distribute to servers. Vertiv competes at each of these stages, with particular strength in the UPS and PDU/busway layers.
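The stacked conversion stages above can be sanity-checked with a back-of-envelope efficiency product. The stage efficiencies below are assumed round numbers for illustration, not Vertiv specifications:

```python
# Illustrative sketch of the AC power chain described above.
# Stage efficiencies are assumed round numbers, not Vertiv figures.
STAGES = {
    "MV transformer (13.8 kV -> 480 V)": 0.99,
    "Switchgear / transfer switch":      0.995,
    "Double-conversion UPS":             0.96,
    "PDU / busway distribution":         0.99,
    "Rack PSU (AC -> 54 VDC)":           0.96,
}

def chain_efficiency(stages):
    """Multiply per-stage efficiencies into an end-to-end figure."""
    eff = 1.0
    for e in stages.values():
        eff *= e
    return eff

it_load_kw = 1000.0  # 1 MW of IT load at the rack
eff = chain_efficiency(STAGES)
print(f"end-to-end efficiency: {eff:.1%}")
print(f"utility draw for {it_load_kw:.0f} kW IT load: {it_load_kw / eff:.0f} kW")
```

Even with optimistic per-stage numbers, roughly a tenth of utility power is lost before it reaches the silicon — which is the inefficiency the 800 VDC architecture discussed later in this section targets.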

The cooling chain: Traditional air-cooled datacenters use Computer Room Air Conditioners (CRACs) or Computer Room Air Handlers (CRAHs) — also Vertiv products (CoolRow, Liebert brand). As rack densities escalate beyond ~15–20 kW/rack, air cooling becomes thermally insufficient or economically inefficient. Vertiv has built out a full liquid cooling portfolio: CoolChip Cooling Distribution Units (CDUs) for direct-to-chip applications (chilled water pumped through cold plates mounted directly on GPU heat spreaders), CoolCenter Immersion for single-phase immersion cooling (servers submerged in dielectric fluid), and the CoolPhase Flex hybrid system for environments transitioning between air and liquid. CDU capacities range from small-cluster deployments to 600 kW per unit; immersion systems support 25–240 kW per system in current commercial configurations.
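A rough check on what a 600 kW CDU implies hydraulically, from Q = ṁ·cp·ΔT. The 10 K coolant temperature rise is an assumed illustrative figure, not a CoolChip specification:

```python
# Back-of-envelope direct-to-chip CDU sizing (illustrative, not a Vertiv spec):
# coolant flow needed to carry a heat load at a given temperature rise,
# from Q = m_dot * cp * dT.
CP_WATER = 4186.0    # J/(kg*K), specific heat of water
RHO_WATER = 1000.0   # kg/m^3, density of water

def coolant_flow_lpm(heat_w, delta_t_k, cp=CP_WATER, rho=RHO_WATER):
    """Volumetric flow in litres/minute to remove heat_w watts at delta_t_k rise."""
    m_dot = heat_w / (cp * delta_t_k)        # mass flow, kg/s
    return m_dot / rho * 1000.0 * 60.0       # convert m^3/s to L/min

# A 600 kW CDU at an assumed 10 K coolant temperature rise:
print(f"{coolant_flow_lpm(600_000, 10):.0f} L/min")
```

The result (on the order of 860 L/min) shows why large CDUs are plumbing-intensive plant equipment, not rack accessories.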

Modular and prefabricated systems: The MegaMod HDX (launched January 2026) is Vertiv’s most comprehensive AI-oriented product: a factory-assembled modular datacenter enclosure that integrates direct-to-chip liquid cooling, air cooling, UPS (Liebert APM2), busway (PowerBar), and infrastructure monitoring (Vertiv Unify) into a prefabricated module. The compact variant supports up to 13 racks and 1.25 MW; the combo variant scales to 144 racks and 10 MW at 50–100+ kW/rack. Claimed deployment speed is 1+ MW per day with a single crew — approximately 85% faster than traditional stick-built methods. SmartRun (a precursor/companion product) packages high-density power busbar, liquid cooling piping, network infrastructure, and hot-aisle containment into a single prefabricated overhead module for faster aisle-level deployment in existing facilities.

800 VDC next-generation architecture: The most forward-looking Vertiv initiative is the 800 VDC power distribution platform, co-developed with NVIDIA and targeted at the 2027 rollout of NVIDIA Rubin Ultra platforms. Traditional server power architectures convert medium-voltage AC → low-voltage AC → 54 VDC inside the rack, with conversion losses at each step. At megawatt-scale rack densities (multi-MW AI accelerator pods), this chain becomes inefficient. 800 VDC distribution converts MV AC directly to 800 VDC at the facility level, distributes at high voltage (lower current = lower I²R losses), and steps down at the rack once. Vertiv’s 800 VDC portfolio — centralized rectifiers, DC busways, rack-level DC-DC converters — is planned for H2 2026 release, staged to be in customer hands before Rubin Ultra shipping dates. Vertiv is already engaged in early design phases of “several large-scale AI factory projects” using this architecture.
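The I²R argument can be made concrete with a short sketch. The 1 mΩ conductor resistance and the voltage set are illustrative assumptions (54 VDC is normally confined to in-rack distribution, which the numbers make obvious):

```python
# Why higher distribution voltage cuts resistive losses (illustrative numbers,
# not Vertiv figures): for the same power over the same conductor, I = P / V,
# so I^2 * R conductor loss scales as 1 / V^2.
def i2r_loss_w(power_w, voltage_v, conductor_ohms):
    """Resistive loss in a conductor carrying power_w at voltage_v."""
    current = power_w / voltage_v
    return current ** 2 * conductor_ohms

P = 1_000_000   # 1 MW delivered to a rack row
R = 0.001       # assumed 1 milliohm of distribution conductor resistance

for v in (54, 415, 800):
    loss = i2r_loss_w(P, v, R)
    print(f"{v:>4} V: current {P / v:>8.0f} A, conductor loss {loss / 1000:.1f} kW")
```

Doubling distribution voltage quarters the conductor loss; moving from ~415 VAC-class distribution to 800 VDC cuts it by roughly (800/415)² ≈ 3.7x for the same copper.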

Services layer: Liquid cooling is a departure from air-cooled infrastructure for most datacenter operators — different chemistry, different maintenance procedures, different failure modes. Vertiv launched a global Liquid Cooling Services portfolio in February 2025, covering installation, commissioning, maintenance, and emergency response. This is strategically significant: services revenue tends to be higher-margin and recurring, and the institutional knowledge from operating liquid cooling systems across hundreds of facilities is a moat competitors cannot easily replicate.

Notable Developments

  • 2026-02: Q4 2025 results — organic orders +252%, backlog $15.0B (+109% YoY), book-to-bill ~2.9x; 2026 guidance $13.25–13.75B revenue (27–29% organic growth), adjusted EPS $5.97–6.07. (Vertiv IR)
  • 2026-01: MegaMod HDX new configurations launched — compact (13 racks, 1.25 MW) and combo (144 racks, 10 MW) variants supporting 50–100+ kW/rack; commercial availability in North America and EMEA. (PR Newswire)
  • 2025-10: Q3 2025 results — organic orders +60%, backlog $9.5B; 2025 guidance raised; adjusted EPS guidance increased to $3.80–$4.10. (Vertiv IR)
  • 2025 (late): 800 VDC platform engineering readiness announcement — Vertiv and NVIDIA collaborating on centralized rectifiers, DC busways, rack-level DC-DC converters; portfolio targeted for H2 2026 release, timed to NVIDIA Rubin Ultra 2027 deployment. (Vertiv / PR Newswire)
  • 2025-02: Global Liquid Cooling Services portfolio launched — installation, commissioning, maintenance, and emergency response for direct-to-chip and immersion systems; first dedicated liquid cooling service offering from Vertiv. (Vertiv)
  • 2025: Compass Datacenters partnership announced — Vertiv named as key partner for hybrid liquid/air cooling deployments; Compass operating Vertiv liquid cooling at commercial scale. (Data Center Frontier)
  • 2025: Colosseum NVIDIA DGX supercomputer deployment in Italy — Vertiv, NVIDIA, and iGenius collaborate to power one of the world’s largest DGX Grace Blackwell installations; Vertiv providing power and cooling infrastructure. (Data Centre Magazine)
  • 2025: Americas manufacturing expansion — four new or expanding manufacturing facilities announced in the Americas, reinforcing domestic supply chain as AI datacenter demand accelerates.
  • 2024-03: Vertiv joins NVIDIA Partner Network as Solution Advisor: Consultant — the only large physical infrastructure vendor in the NPN; selected to provide expert consultation on NVIDIA-based datacenter implementations. (Vertiv EMEA)
  • 2024 (late): Scott Armul appointed EVP Global Portfolio and Business Units (effective Jan 1, 2025) — signals organizational alignment to product-led AI infrastructure strategy; Stephen Liang retains CTO role focused on forward technology vision. (Vertiv)
  • 2024: FY2024 revenue ~$8.0B (+17% YoY); adjusted operating margin ~19%; adjusted free cash flow ~$1.135B; shareholder return >136% for calendar year.
  • 2023-2024: Intel collaboration on pumped two-phase (P2P) direct-to-chip cooling for Intel Gaudi3 accelerator — pilot handling up to 160 kW/rack; validation of next-generation cooling for non-NVIDIA AI hardware.
  • 2023-01: Giordano Albertazzi became CEO; accelerated AI infrastructure pivot and NVIDIA partnership strategy.
  • 2020-02-10: Went public (NYSE: VRT) via SPAC merger with GS Acquisition Holdings Corp (Goldman Sachs / David Cote vehicle); raised $690M in the SPAC IPO.
  • 2016-12: Platinum Equity acquired Emerson Electric’s Network Power division for ~$4B; business rebranded as Vertiv.

Key People

Giordano Albertazzi — CEO

  • Profile: vertiv.com/executives
  • LinkedIn: linkedin.com/in/giordanoalbertazzi
  • Role: Chief Executive Officer since January 2023; Director
  • Background: Long-tenured Vertiv executive who rose through the company’s European and global operations before becoming CEO; Italian-born, based in Columbus, OH. Led the company through its AI-driven revenue acceleration: FY2024 saw shareholder returns >136% under his leadership.
  • Notes: Primary architect of Vertiv’s NVIDIA partnership strategy and the 800 VDC platform co-development initiative; has repositioned Vertiv as an AI-native infrastructure company rather than a legacy power/cooling vendor

Scott Armul — EVP Global Portfolio and Business Units

  • Role: Executive Vice President, Global Portfolio and Business Units (effective January 1, 2025); reports to Albertazzi
  • Background: Oversees engineering R&D, product business units (thermal management, power management, IT systems, infrastructure solutions, global services); responsible for the product roadmap that includes MegaMod HDX, 800 VDC, and liquid cooling portfolio expansions
  • Notes: Appointment in January 2025 signals Vertiv reorganizing for product-led growth; separating product roadmap ownership (Armul) from technology strategy (Liang CTO)

Stephen Liang — CTO and EVP Products & Solutions

  • Role: Chief Technology Officer and Executive Vice President; focus shifted to CTO technology vision and strategy as of January 2025
  • Background: Led Vertiv’s engineering and product organization; as CTO is now focused on defining future technology direction — including 800 VDC architecture roadmap, next-generation liquid cooling, and AI factory system design
  • Notes: The Armul/Liang organizational split is Vertiv’s way of separating near-term product execution (Armul) from 2+ year technology bets (Liang)

Key People — Last Reviewed: 2026-04-02

Supply Chain Position

Vertiv sits at the critical infrastructure layer between utility power and computing hardware — essentially mandatory infrastructure for any large-scale datacenter:

Layer by layer, Vertiv’s role:

  • UPS / Power Conditioning: Liebert-brand UPS systems (from network/server UPS to large-format 3-phase systems); LFP battery option replacing legacy VRLA; primary ride-through protection between utility and compute
  • Power Distribution: Switchgear, PDUs, busway (PowerBar); distributes conditioned power from UPS to rack level; PowerBar is a key component of the MegaMod HDX and SmartRun prefab modules
  • Cooling — Air: CoolRow CRAC/CRAH units; hot/cold aisle containment; legacy air-cooled portfolio; still dominant in facilities below ~15 kW/rack
  • Cooling — Direct-to-Chip: CoolChip CDUs (Cooling Distribution Units) up to 600 kW; pumped coolant to GPU cold plates; primary cooling path for NVIDIA H100/GB200/Blackwell and AMD MI300X/MI355X clusters
  • Cooling — Immersion: CoolCenter Immersion — single-phase immersion, 25–240 kW per system; single- and multi-tank configurations; primarily deployed in HPC and specialized AI inference
  • Cooling — Hybrid: CoolPhase Flex — hybrid air/liquid for facilities transitioning from air-only to liquid; allows partial liquid retrofit without full facility redesign
  • Prefab Modular: MegaMod HDX (up to 10 MW, 144 racks, 50–100+ kW/rack); SmartRun overhead prefab modules; compress deployment timelines for AI factory buildout
  • Next-Gen Power (2026–): 800 VDC platform — centralized rectifiers, DC busway, rack-level DC-DC converters; co-developed with NVIDIA for Rubin Ultra 2027; 2–3% efficiency gain vs. traditional AC distribution at megawatt scale
  • Services: 4,000+ field service engineers globally; Liquid Cooling Services (Feb 2025); commissioning, maintenance, emergency response for liquid systems; highest-margin recurring revenue stream
  • Software / Monitoring: Vertiv Unify DCIM platform; real-time power, thermal, and capacity monitoring; integrated into MegaMod HDX modules

Key customer relationships (by type):

  • Hyperscalers (unnamed publicly): Microsoft, Google, Meta, Amazon are all inferred customers given Vertiv’s market share; specific deployments not disclosed
  • NVIDIA: Direct co-development and NPN partnership; Vertiv products deployed in NVIDIA DGX reference architectures
  • Compass Datacenters: Named commercial deployment partner for hybrid liquid/air
  • iGenius (Italy): Colosseum NVIDIA DGX deployment

⚑ Schneider Electric competitive pressure: Schneider Electric (APC/Galaxy brands, EcoStruxure DCIM) is Vertiv’s closest peer across power and cooling categories. Schneider has deeper enterprise IT integration (partnerships with Cisco, HPE) and a strong channel in rack-level UPS. Vertiv’s differentiation is NVIDIA alignment depth, liquid cooling portfolio maturity, and the 800 VDC architectural lead. Schneider has its own liquid cooling products but lags Vertiv in the NVIDIA co-development partnership.

⚑ Eaton overlap in UPS and switchgear: Eaton (ETN) competes heavily in 3-phase UPS, switchgear, and busway for datacenter applications. Eaton has strong relationships in industrial and utility markets that overlap with large-campus power distribution. The two companies are essentially tied as #1 and #2 in US datacenter UPS market share; Vertiv has the NVIDIA alignment advantage in AI-specific deployments.

⚑ Huawei Digital Power (non-US): In markets outside the US (particularly APAC and Middle East), Huawei Digital Power is a significant competitor offering full-stack power and cooling solutions, often at lower price points. US export controls limit Huawei’s ability to win contracts in US-based datacenters but not in global deployments. Vertiv’s APAC manufacturing (Mianyang/Jiangmen, China; Ambernath/Chakan/Pune, India) positions it to compete locally.

⚑ LFP battery supply dependency: Vertiv’s transition to LFP-based UPS systems depends on LFP cell supply from CATL, BYD, and Samsung SDI — the same manufacturers supplying grid storage and EV markets. A supply crunch in LFP cells (historically price-volatile) would affect Vertiv’s UPS margins and delivery timelines.

⚑ Large transformer lead times: Large power transformers (>100 MVA) required for campus-scale deployments currently carry 2–3 year lead times from US-based manufacturers (GE Vernova, ABB, Siemens Energy). Vertiv does not manufacture transformers, but transformer availability constrains the facilities it supplies — a binding constraint on overall AI campus buildout pace.

Claim Verification

Claim: MegaMod HDX enables “85% faster deployment” than stick-built methods, “1+ MW per day with a single crew”

Status: Company-stated; directionally credible for factory-built vs. field-built comparison; specific figures are marketing claims without published third-party validation

Supporting:

  • Prefabricated modular construction is a well-documented time-compression strategy in datacenter construction; factory assembly parallelizes work that would be sequential in field construction
  • “1 MW per day” is a specific enough claim to be falsifiable; Compass Datacenters and other deployment partners could validate (none has published verification)
  • MegaMod HDX is a mature product category — Vertiv has been building modular datacenter products since the pre-AI era; the latest generation extends proven architecture to higher densities

Refuting / questioning:

  • “85% faster” likely compares factory-assembled modules to full stick-built construction, not to other prefab options; the comparison baseline significantly affects the claim’s meaning
  • Site preparation, utility interconnect, and civil work are not compressed by modular construction — the module deployment speed may reflect only the above-floor module installation, not total time-to-power
  • “Single crew” performance will vary significantly by site conditions, crane access, and module size

Summary: The directional claim is credible and consistent with industry experience of prefab construction. The specific 85% figure is a marketing metric without published methodology; treat as indicative rather than precise.

Claim: 800 VDC architecture reduces distribution losses vs. traditional 54 VDC / AC architecture

Status: Technically sound in principle; specific efficiency gain figures (Vertiv cites “2–3%” improvement) are plausible but not independently verified at commercial scale

Supporting:

  • Power = V × I; at constant power, higher voltage means lower current, which reduces resistive (I²R) losses in conductors — this is basic electrical physics, not a marketing claim
  • Eliminating one AC-to-DC conversion step (medium-voltage AC → low-voltage AC → 54 VDC becomes MV AC → 800 VDC directly) removes a conversion stage with ~1–2% loss per stage
  • NVIDIA’s own announcement of 800 VDC support for Rubin Ultra validates the technical direction; Vertiv is not unilaterally proposing a proprietary standard

Refuting / questioning:

  • 800 VDC creates new safety requirements (arc flash energy is higher at 800 VDC than at 54 VDC or 480 VAC); compliance with NFPA 70 (NEC) and the applicable IEC safety standards at these voltages adds cost and specialized installation requirements
  • Efficiency gains at rack level depend heavily on the DC-DC converter efficiency at rack input; if the rack-level converter is less efficient than a standard AC power supply, the distribution gain may be partially offset
  • Commercial deployments at gigawatt scale using 800 VDC have not yet been completed; the “several large-scale AI factory projects” Vertiv references as early adopters have not been publicly identified

Summary: Technically sound and directionally correct. The efficiency claim is credible in principle; quantified gains should be treated as engineering estimates until first commercial deployments publish measured PUE and power conversion metrics.

Claim: CoolCenter Immersion supports “25–240 kW per system”

Status: Specification-stated; upper range (240 kW) is a product specification, not a measured customer deployment figure; technically plausible for large-format single-phase immersion systems

Supporting:

  • Single-phase immersion cooling (servers submerged in dielectric fluid; fluid absorbs heat and is circulated to a heat exchanger) scales with tank volume and fluid flow rate; 240 kW per tank is physically achievable
  • Vertiv has commercial immersion deployments in EMEA (announced separately from North America availability)
  • Competing products (Green Revolution Cooling, LiquidStack) support comparable densities, providing market validation of the range
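The “scales with tank volume and fluid flow rate” point above can be sanity-checked numerically. The fluid properties and the 8 K temperature rise are assumed typical values for hydrocarbon dielectric coolants, not CoolCenter specifications:

```python
# Sanity check on single-phase immersion scaling (assumed fluid properties,
# not a CoolCenter spec): dielectric-fluid flow needed to carry 240 kW,
# from Q = m_dot * cp * dT.
CP_FLUID = 2100.0   # J/(kg*K), assumed typical for hydrocarbon dielectric fluids
RHO_FLUID = 850.0   # kg/m^3, assumed fluid density

def fluid_flow_lps(heat_w, delta_t_k):
    """Volumetric dielectric-fluid flow in L/s to carry heat_w at delta_t_k rise."""
    m_dot = heat_w / (CP_FLUID * delta_t_k)  # mass flow, kg/s
    return m_dot / RHO_FLUID * 1000.0        # convert m^3/s to L/s

# 240 kW at an assumed 8 K fluid temperature rise across the tank loop:
print(f"{fluid_flow_lps(240_000, 8):.1f} L/s")
```

The resulting flow (~17 L/s) is modest plumbing by plant standards, which supports the view that the 240 kW ceiling is physically achievable and that tank capacity, not fluid transport, is the binding design variable.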

Refuting / questioning:

  • System capacity at the high end (240 kW) may require specific fluid chemistry, tank configuration, and secondary cooling plant that are not standard; real-world deployments may operate conservatively below rated maximum
  • Immersion cooling at 240 kW/system is still a small fraction of a full AI training cluster pod (which may be multi-MW); immersion remains niche compared to direct-to-chip CDUs for most hyperscale deployments

Summary: Specification is credible and consistent with the physics of single-phase immersion cooling. The 240 kW ceiling is a maximum-configuration figure; typical deployments should be verified against operator-specific configurations.

Claim: Q4 2025 backlog of $15.0B represents “109% growth YoY”

Status: Company-reported in Q4 2025 earnings release (February 2026); not independently audited pre-publication, but consistent with publicly filed financial data

Supporting:

  • Q4 2025 earnings release was publicly filed with the SEC; backlog figures are disclosed in quarterly reports and subject to auditor review annually
  • The growth trajectory is consistent with prior quarters: Q3 2025 backlog was $9.5B; the Q4 jump to $15B aligns with the 252% organic order growth reported for Q4 2025
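A quick arithmetic check ties the disclosed figures together. The Q4 revenue input is an assumption (FY2025 total less an assumed nine-month figure); it is not separately stated in this document:

```python
# Internal-consistency check on the reported backlog and book-to-bill figures.
backlog_q3 = 9.5    # $B, reported Q3 2025 backlog
backlog_q4 = 15.0   # $B, reported Q4 2025 backlog
q4_revenue = 2.9    # $B, ASSUMED Q4 revenue (FY2025 $10.23B less assumed ~$7.3B 9M)

# backlog_end = backlog_start + orders - revenue, so implied Q4 orders are:
implied_orders = (backlog_q4 - backlog_q3) + q4_revenue
book_to_bill = implied_orders / q4_revenue

print(f"implied Q4 orders: ${implied_orders:.1f}B, book-to-bill ~{book_to_bill:.1f}x")
```

Under that revenue assumption the implied book-to-bill lands at ~2.9x, matching the company-reported figure — i.e., the backlog jump, the order growth, and the ratio are mutually consistent.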

Refuting / questioning:

  • “Backlog” definitions vary; Vertiv’s backlog includes orders received but not yet shipped. Cancellation risk exists if AI buildout pace slows — a portion of the backlog is contingent on customer projects proceeding
  • Order growth of 252% in a single quarter is unusually high; may reflect a small number of very large hyperscaler orders or pull-forward of orders in advance of price changes

Summary: Figure is company-reported and internally consistent with disclosed quarterly data. The key risk is order cancellation or deferral if the AI infrastructure buildout pace changes; backlog is a lead indicator, not guaranteed revenue.

Sources