India's GPU Build-Out and the Cooling Technology Shift
Indian data centre capacity has moved past the 1,400 MW mark, with committed pipeline additions of 3,500 to 4,000 MW targeted for delivery between 2026 and 2028. The IndiaAI Mission, the Ministry of Electronics and Information Technology's GPU procurement programme (initial tranche of 18,693 GPUs under the common compute facility), and private hyperscaler commitments from AWS, Microsoft Azure, Google Cloud, Oracle, and CtrlS have reshaped the thermal profile of the sector. A standard 19-inch rack provisioned for Nvidia H100 or H200 GPUs can draw 80 to 120 kW, and racks configured for the B200 and GB200 NVL72 platforms push past 130 kW, with the NVL72 reference design landing at approximately 120 kW per rack. These power densities far exceed what traditional air-cooled Indian data centres were designed to dissipate.
Mumbai, Chennai, Hyderabad, and Navi Mumbai are the four primary clusters absorbing this load. Mumbai's Chandivali, Powai, and Mahape micro-markets host the largest interconnection density. Chennai's Ambattur and Siruseri corridors have become the south's preferred low-latency landing points given the cable landing stations at Versova and Tiruvanmiyur. Hyderabad is attracting hyperscale campuses at Shamshabad and Chandanvelly on the back of Telangana's Data Centre Policy 2024 incentives. Navi Mumbai is seeing campus builds at Airoli, Rabale, and Mahape on plots of 15 to 30 acres. Each of these geographies now features operators deploying cooling technologies for which Indian insurers have almost no underwriting history.
For IRDAI-licensed insurers, this shift matters because the Electronic Equipment Insurance (EEI) policy wording, the Machinery Breakdown (M&BD) policy, and the Standard Fire and Special Perils (SFSP) policy were all drafted with assumptions about air-cooled data rooms: raised floors, CRAC and CRAH units, hot and cold aisle containment, and water kept well away from IT equipment. Liquid coming into physical contact with electronics was traditionally a loss event. In a 2026 hyperscale facility, liquid deliberately flowing through, around, and inside servers is the operating state, and the insurance product architecture has not caught up.
Single-Phase and Two-Phase Immersion Cooling Risks
Immersion cooling submerges servers in a tank of dielectric fluid. In single-phase systems, the fluid (typically a synthetic hydrocarbon such as ExxonMobil PAO, Shell S5 X, or a GTL-based fluid like Submer SmartCoolant) remains in liquid form and is circulated through a heat exchanger. In two-phase systems, the fluid (historically 3M Novec 649 and Novec 7100, with fluorine-free alternatives like Castrol ON and Chemours Opteon SF10 now entering the Indian market) boils at around 49 to 62 degrees Celsius, the vapour rises to a condenser coil, and condensate returns to the tank.
The loss exposures are distinct. Single-phase fluid degradation is the first underwriting concern. Dielectric fluids oxidise, absorb contaminants from server plastics and adhesives, and accumulate copper and zinc ions from cable shields over 24 to 36 months. Degraded fluid loses its dielectric strength and thermal conductivity. When resistivity drops below the manufacturer threshold (typically 10^12 ohm-centimetres), arcing inside the tank becomes possible, and heat transfer coefficient degradation causes GPU junction temperatures to rise above the 85 degrees Celsius design point, triggering clock throttling or outright shutdown. A single tank of premium single-phase fluid for a 100 kW rack costs INR 22 to 35 lakh, and full replacement across a 5 MW immersion hall can reach INR 12 to 18 crore.
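The thresholds and replacement economics above can be captured in a simple screen of the kind an underwriter might request from an operator's fluid analysis programme. This is an illustrative sketch using only the figures quoted in the text; the function names and record structure are assumptions, not any vendor's or lab's API.

```python
# Illustrative single-phase immersion fluid health screen. Thresholds are the
# figures cited in the text; names are hypothetical.

RESISTIVITY_FLOOR_OHM_CM = 1e12   # typical manufacturer threshold
DESIGN_JUNCTION_C = 85.0          # GPU junction design point

def fluid_flags(resistivity_ohm_cm: float, gpu_junction_c: float) -> list[str]:
    """Return underwriting red flags for one single-phase immersion tank."""
    flags = []
    if resistivity_ohm_cm < RESISTIVITY_FLOOR_OHM_CM:
        flags.append("resistivity below floor: arcing risk")
    if gpu_junction_c > DESIGN_JUNCTION_C:
        flags.append("junction above design point: throttling risk")
    return flags

def hall_replacement_cost_inr_crore(tanks: int, cost_per_tank_lakh: float) -> float:
    """Full-fluid replacement exposure for an immersion hall, in INR crore.
    1 crore = 100 lakh."""
    return tanks * cost_per_tank_lakh / 100.0

# A 5 MW hall of 100 kW racks is ~50 tanks; at INR 22-35 lakh per tank the
# exposure is roughly consistent with the INR 12-18 crore range in the text.
print(hall_replacement_cost_inr_crore(50, 30.0))   # → 15.0 (INR crore)
```

The same structure extends naturally to resistivity trending across sampling dates, which is the evidence an independent lab programme should produce.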
Containment failure is the second exposure. Tanks holding 1,000 to 2,500 litres of fluid sit on floors that were poured for conventional IT loads. A tank weld failure, a gasket breach at a quick-disconnect fitting, or a seismic event that displaces a tank can release fluid onto raised flooring, into cable trenches, and through floor penetrations into electrical rooms on lower levels. Cleanup is expensive and slow. Hydrocarbon fluids require trained hazmat response, absorbent booms, and disposal through CPCB-authorised hazardous waste handlers under the Hazardous and Other Wastes (Management and Transboundary Movement) Rules, 2016.
Two-phase systems introduce environmental liability that single-phase does not. The legacy 3M Novec fluids are fluorinated compounds classified among the PFAS family, and 3M has committed to exiting fluorochemical production by the end of 2025. Operators running Novec 649 tanks now face a fluid supply cliff, rising per-kilogram costs (Novec 649 pricing has moved from USD 120 per kg to USD 180 to 220 per kg on spot availability), and the prospect of future regulatory restrictions from the CPCB or under a global PFAS treaty. Insurers are asking whether existing EEI and liability wordings respond to pollution cleanup and to the obsolescence risk if a tank must be drained and the fluid cannot be replaced at original cost.
Direct-to-Chip Liquid Cooling: Coolant Leaks, Corrosion, and Manifold Failures
Direct-to-chip (DLC) cooling runs a water-glycol mixture (typically 25 percent propylene glycol or a proprietary coolant such as Nvidia's reference blend or CoolIT CHx coolant) through cold plates mounted directly on GPU, CPU, and HBM packages. The coolant loops through a Coolant Distribution Unit (CDU), which separates the facility water loop from the technology coolant loop via a brazed plate heat exchanger. Each rack has a manifold feeding 8 to 18 tapped server circuits. Rear-door heat exchangers (RDHx) represent a middle path, using chilled water inside a finned coil on the rear of the rack to remove heat from exhaust air without wetting the servers directly.
DLC failure modes break into three categories. First, leaks at quick-disconnect couplings. Each rack features dozens of couplings (CPC Everis, Staubli SPT, Parker FEM series), and installation torque errors, repeated service disconnections, or coupling fatigue after thousands of cycles can produce weeping joints. A coolant drip of 50 ml onto a live GPU baseboard typically results in baseboard replacement cost of USD 35,000 to 50,000 per unit and, in a dense configuration, can short adjacent boards. Second, corrosion and biofouling. Water chemistry matters. If facility water is introduced into the technology loop during a maintenance error, dissolved oxygen, chlorides above 50 ppm, or sulphates trigger pitting corrosion on copper cold plates. Biological growth in stagnant sections of the loop can clog cold plate microchannels, causing localised overheating and thermal runaway. Third, CDU and manifold failures. A CDU pump loss, heat exchanger fouling, or manifold flange rupture can cascade across an entire pod of 32 to 72 racks.
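The water-chemistry limits in the second failure mode lend themselves to a simple screen on loop sample data. This sketch uses the 50 ppm chloride figure from the text; the dissolved-oxygen target, field names, and flag wording are assumptions for illustration, not an OEM coolant specification.

```python
# Illustrative DLC technology-loop chemistry screen. Only the chloride limit
# comes from the text; other limits and field names are assumed.

CHLORIDE_LIMIT_PPM = 50.0       # pitting-corrosion threshold cited above
DISSOLVED_O2_LIMIT_PPM = 0.5    # assumed low-oxygen target for a closed loop

def loop_chemistry_flags(sample: dict) -> list[str]:
    """Flag a coolant sample against basic loop-chemistry limits."""
    flags = []
    if sample.get("chloride_ppm", 0.0) > CHLORIDE_LIMIT_PPM:
        flags.append("chlorides high: pitting risk on copper cold plates")
    if sample.get("dissolved_o2_ppm", 0.0) > DISSOLVED_O2_LIMIT_PPM:
        flags.append("dissolved oxygen high: facility water ingress suspected")
    if sample.get("biocide_ok") is False:
        flags.append("biocide lapsed: biofouling / microchannel clogging risk")
    return flags

# A sample showing chloride contamination but a healthy biocide programme:
print(loop_chemistry_flags({"chloride_ppm": 72.0,
                            "dissolved_o2_ppm": 0.1,
                            "biocide_ok": True}))   # only the chloride flag fires
```

In practice the flags would feed the operator's maintenance ticketing, and the sampling cadence itself becomes an underwriting condition.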
Indian operators are finding that conventional M&BD policy wording does not neatly respond to these events. A cold plate pitting event over 18 months of operation is a degradation, not a sudden and fortuitous event, and M&BD exclusions for gradual deterioration may bite. A coupling weep that damages electronics sits in the grey zone between EEI (electronic equipment) and M&BD (mechanical cooling plant), and claims managers have had disputes over which policy responds. Loss adjusters with semiconductor or data centre backgrounds are scarce in India, and the few available (with firms like Crawford, Charles Taylor, and Cunningham Lindsey India) charge premium rates that some domestic insurers resist paying.
Fire Suppression Compatibility: Sprinklers, Novec 1230, FM-200, and Inert Gas
Fire suppression design for advanced cooling data halls has become one of the most contentious underwriting questions of 2026. The options are narrower than operators often assume. Pre-action water-based sprinklers remain the default fallback in Indian commercial buildings under the National Building Code 2016 and relevant state fire safety rules, but they present obvious compatibility concerns in liquid-cooled halls where any additional water discharge can compound an existing leak event and where electrical systems may remain energised during sprinkler activation.
Clean agent systems are the preferred primary protection. FM-200 (HFC-227ea) remains widely installed in Indian data centres built between 2010 and 2022, but its global warming potential of 3,220 and its phase-down trajectory under the Kigali Amendment to the Montreal Protocol, which India ratified, have pushed new builds toward alternatives. Novec 1230 (dodecafluoro-2-methylpentan-3-one) is the current favourite in hyperscale designs because of its low GWP of approximately 1 and its compatibility with occupied spaces at the design concentration of 4.2 to 5.9 percent. Inert gas systems using IG-55, IG-541 (Inergen), or pure nitrogen are gaining share for large halls because they avoid fluorinated compounds entirely, though they require larger cylinder banks (typical design concentration of 37 to 43 percent inert gas displaces oxygen to 12 to 14 percent) and introduce over-pressurisation risk that must be managed through engineered vent dampers.
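The oxygen-displacement figures for inert gas systems follow from simple dilution: flooding a hall to an inert-gas concentration c leaves roughly 20.9 percent times (1 - c) oxygen by volume. A quick check against the design range quoted above (helper name is illustrative):

```python
# Residual oxygen after inert-gas flooding, by simple volumetric dilution.
AMBIENT_O2_PCT = 20.9   # oxygen fraction of ambient air, percent by volume

def residual_o2_pct(agent_concentration_pct: float) -> float:
    """Oxygen left in the hall after flooding to the given agent concentration."""
    return AMBIENT_O2_PCT * (1.0 - agent_concentration_pct / 100.0)

for c in (37.0, 43.0):          # design concentration range from the text
    print(c, round(residual_o2_pct(c), 1))
# 37 % agent → ~13.2 % oxygen; 43 % agent → ~11.9 % oxygen,
# consistent with the roughly 12 to 14 percent band cited above.
```

The same arithmetic is why over-pressurisation relief matters: the flooding volume that displaces a third or more of the hall's atmosphere has to go somewhere.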
Three specific underwriting issues arise with the shift to advanced cooling. First, false discharge risk has climbed. VESDA and optical flame detectors installed near liquid-cooled tanks can trigger on fluid vapour plumes, condensation mist from RDHx coils during humidity swings, or steam from an accidental facility water leak. A false Novec 1230 discharge, at Indian refill rates of INR 1,400 to 1,900 per kilogram across a 600 kg bank, costs INR 8 to 11 lakh for the agent alone, plus testing and recommissioning. Operators are demanding false-discharge endorsements that insurers have historically resisted. Second, clean agent effectiveness inside a submerged tank is unproven. A fire originating inside an immersion tank, for instance from a battery swelling event in a BMC module with lithium cells, cannot be reliably extinguished by Novec 1230 flooding the room above the fluid surface. Third, interaction between fire suppression discharge and an active liquid cooling leak creates layered liability questions that policy wordings do not currently address.
Cyber-Physical Risk: When the OT Network Is the Cooling System
Advanced cooling systems are heavily instrumented. A hyperscale DLC deployment might have 4,000 to 8,000 sensors feeding a Building Management System (BMS) from Siemens Desigo, Honeywell Niagara, or Schneider EcoStruxure, with a parallel Data Center Infrastructure Management (DCIM) layer from Vertiv Trellis, Nlyte, or Sunbird. CDU pump speed, tank fluid levels, cold plate inlet temperatures, coolant pH, and leak detection cable continuity are all digital signals on an OT network that, in most Indian facilities, remains imperfectly segmented from the corporate IT network.
The loss scenarios insurers are starting to model follow the pattern established by the Ukraine power grid attacks and more recent OT incursions at Asian semiconductor and chemical facilities. An attacker with BMS access can manipulate setpoints to disable pumps, close isolation valves, mask alarms, or force CDU bypass modes. The physical consequence is thermal runaway. GPUs in a properly instrumented DLC pod will throttle and shut down within seconds of coolant loss, but the capital exposure in a hyperscale GPU hall is extreme: a single Nvidia GB200 NVL72 rack with 72 GPUs represents USD 3.5 to 4.0 million of hardware, and 500 racks in a building represent USD 1.75 to 2.0 billion of replaceable equipment (approximately INR 14,500 to 16,500 crore at INR 83 to the dollar).
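The exposure arithmetic in the paragraph above can be reproduced directly; the figures below are the ranges quoted in the text, not vendor pricing, and the helper name is illustrative.

```python
# Capital exposure of a hyperscale GPU building, using the text's figures.
USD_INR = 83.0   # exchange rate assumed in the text

def building_exposure_inr_crore(racks: int, usd_m_per_rack: float) -> float:
    """Replaceable hardware exposure in INR crore (1 crore = 1e7 INR)."""
    return racks * usd_m_per_rack * 1e6 * USD_INR / 1e7

print(building_exposure_inr_crore(500, 3.5))   # low end:  14525.0 crore
print(building_exposure_inr_crore(500, 4.0))   # high end: 16600.0 crore
```

The high end works out to about INR 16,600 crore, in line with the rounded INR 16,500 crore figure in the text.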
Indian cyber insurance products do not reliably cover this cross-over risk. Most policies sold in India under the IRDAI (Cyber Insurance) framework respond to data breach, privacy liability, extortion, and business interruption triggered by IT network events. Physical property damage arising from an OT compromise, the CL380 Cyber Property Damage exclusion pattern adopted from the London market, and the Lloyd's 2023 war and cyber clarifications together leave a coverage gap that most hyperscale operators are not aware of until a loss occurs. Brokers advising Indian hyperscale clients are increasingly structuring manuscript endorsements, specific OT write-backs on property policies, and dedicated difference-in-conditions placements with Lloyd's syndicates to close this gap.
Insurer Appetite and Premium Impact: Hyperscale 100 MW+ vs Enterprise 10 to 50 MW
Indian insurer appetite for data centre risk has fragmented along capacity lines since early 2026. For enterprise and colocation facilities in the 10 to 50 MW range, primarily air-cooled with selective RDHx or rear-door cooling, the domestic market retains healthy appetite. ICICI Lombard, HDFC Ergo, Tata AIG, Bajaj Allianz, and SBI General are actively quoting EEI, M&BD, SFSP, and cyber packages. Net retentions on these accounts typically sit at INR 50 to 150 crore with treaty and facultative reinsurance placing the balance in London, Singapore, and Munich.
At the hyperscale end (100 MW and above, with significant DLC or immersion cooling penetration), the picture has shifted. GIC Re has revised its treaty terms for data centre cessions effective April 2026 to require facultative reinsurance above defined retention lines for any facility with more than 20 percent immersion-cooled capacity. Several Indian insurers have internally reduced their gross line capacity on EEI for immersion-cooled GPU clusters by 30 to 50 percent from 2025 levels, citing the lack of loss experience and the concentration risk in a single cooling hall. M&BD underwriters have tightened exclusions around coolant contamination, fluid degradation, and gradual wear on cold plates, and several are requiring specific engineer survey sign-offs before binding cover.
Premium impact has been material. Hyperscale GPU facilities with immersion cooling are currently pricing at EEI rates of 0.18 to 0.28 percent of sum insured, against 0.06 to 0.12 percent for comparable air-cooled facilities. M&BD rates for CDUs and cooling distribution systems are in the 0.22 to 0.35 percent range versus 0.09 to 0.15 percent for conventional CRAC infrastructure. Business interruption extensions with 24-month indemnity periods and 30-day deductibles are now typical on hyperscale placements, replacing the 12-month, 7-day structures that dominated the Indian market in 2022 to 2024.
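The rate differential translates into a premium gap that is easy to state concretely. The sketch below applies the mid-points of the EEI rate ranges above to a hypothetical sum insured; the sum insured and helper name are assumptions for illustration.

```python
# Annual EEI premium at the mid-point rates quoted above, for a hypothetical
# sum insured of INR 10,000 crore (illustrative, not a quoted account).
def annual_premium_inr_crore(sum_insured_crore: float, rate_pct: float) -> float:
    return sum_insured_crore * rate_pct / 100.0

SUM_INSURED = 10_000.0                               # INR crore, assumed

air = annual_premium_inr_crore(SUM_INSURED, 0.09)    # mid of 0.06-0.12 %
imm = annual_premium_inr_crore(SUM_INSURED, 0.23)    # mid of 0.18-0.28 %
print(round(air, 2), round(imm, 2), round(imm / air, 1))
# Immersion-cooled pricing is roughly 2.5x the air-cooled equivalent.
```

At the outer ends of the quoted ranges the multiple widens further, which is why cooling technology is now a rating variable in its own right rather than a survey footnote.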
A single rack cooling failure no longer means losing one rack. In a shared coolant loop configuration, a CDU failure can idle a pod of 32 to 72 racks, each hosting workloads under committed customer SLAs with liquidated damages. An enterprise AI training run worth INR 25 to 40 crore in compute cost and opportunity cost can be destroyed by a 48-hour cooling outage at the wrong moment in a model training cycle. Business interruption cover is now the most negotiated section of the placement, with sub-limits for SLA penalties, extra expense sub-limits for emergency mobile cooling trucks, and forensic accounting clauses that reference a specific methodology for indemnifying AI training workloads.
Underwriting and Risk Engineering for Indian Hyperscale Facilities
Underwriters and risk engineers entering the advanced cooling space in India should anchor on a short list of technical checkpoints that the older data centre survey playbook does not cover. First, verify fluid specification and supplier traceability. For immersion systems, confirm dielectric strength test history (ASTM D877 or D1816), resistivity trends across the life of the fluid, and the operator's sampling and analysis cadence with an independent lab. For DLC systems, confirm coolant chemistry against the OEM specification, with particular attention to inhibitor package condition and biocide dosing records.
Second, assess containment and leak detection. Engineered containment should capture the entire contents of any single tank or loop section without allowing ingress into electrical rooms, cable trenches, or floor penetrations. Leak detection should combine cable-based detection (TraceTek, Raychem) with floor-mounted spot sensors and, ideally, acoustic emission monitoring on manifold welds. Response procedures should include electrical isolation protocols that do not depend on the same OT network that may be compromised in a cyber event.
Third, review fire suppression interaction matrices. The operator should have a documented fire matrix that describes how VESDA pre-alarm, VESDA alarm, flame detector activation, and manual pull stations interact with clean agent release, mechanical ventilation dampers, CDU isolation, and UPS transfer logic. Many Indian hyperscale operators do not yet have this integrated testing completed at commissioning, and post-commissioning retrofits are expensive.
Fourth, demand OT network segmentation evidence. Proper ISA-95 level segregation, with a demilitarised zone between the corporate IT environment and the BMS/DCIM layer, is the baseline. Jump servers, multi-factor authentication on engineering workstations, and logged change control on setpoint modifications should be in place. CERT-In's April 2022 directions on cyber incident reporting apply, and DPDPA 2023 adds obligations where personal data is processed within the facility.
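The "logged change control on setpoint modifications" in the fourth checkpoint can be made concrete with a minimal sketch: each setpoint change is recorded with operator, point, old and new values, and chained by hash so that after-the-fact tampering with any entry breaks the chain. This is an illustrative structure, assuming nothing about any real BMS platform, which would expose such audit trails through its own facilities.

```python
# Minimal hash-chained setpoint change log, illustrating the audit evidence a
# risk engineer should ask to see. Record structure is an assumption.
import hashlib
import json
import time

def setpoint_change_record(operator: str, point: str, old: float, new: float,
                           prev_hash: str = "") -> dict:
    """Build one audit record, hash-chained to the previous record."""
    rec = {"ts": time.time(), "operator": operator, "point": point,
           "old": old, "new": new, "prev": prev_hash}
    # Hash the record contents (excluding the hash itself) deterministically.
    rec["hash"] = hashlib.sha256(
        json.dumps(rec, sort_keys=True).encode()).hexdigest()
    return rec

# A pump slowdown followed by a bypass command, chained together:
r1 = setpoint_change_record("eng-ws-04", "cdu3.pump_speed_pct", 100.0, 60.0)
r2 = setpoint_change_record("eng-ws-04", "cdu3.bypass", 0.0, 1.0, r1["hash"])
print(r2["prev"] == r1["hash"])   # True: chain intact
```

The underwriting point is not the specific mechanism but its properties: changes attributable to a named workstation, tamper-evident, and stored somewhere the BMS compromise itself cannot silently rewrite.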
Fifth, structure insurance to match the technology. EEI alone is insufficient. Operators should expect to layer EEI, M&BD, SFSP, public liability, environmental impairment, cyber, and contingent business interruption, with manuscript endorsements that bridge the gaps between these product lines for the specific cooling technology deployed. The broker's role has moved from placement to risk architecture, and the insurer's role from rating to engineering.