
Direct-to-chip liquid cooling is often misunderstood when it comes to risk, complexity, and what it actually solves in the rack.
Concerns about leaks, operational complexity, and “going all-liquid” continue to slow adoption, even as rack power densities climb beyond what air cooling alone can reasonably support. In many cases, the perceived downsides of direct-to-chip cooling reflect outdated assumptions or overgeneralizations, frequently influenced by comparisons with other liquid-cooling technologies that operate differently.
The reality is more practical than the myths suggest.
Below are five of the most common misconceptions—and what data center teams should understand when evaluating whether direct-to-chip cooling fits their environment.
1 — “Direct-to-Chip Cooling Puts Water on the Hardware”
The term direct-to-chip cooling can naturally raise questions about where liquid actually goes inside the server.
In practice, direct-to-chip cooling refers to how heat is removed from specific components—not to liquid coming into contact with electronics. The approach uses cold plates mounted on high-heat devices such as CPUs and GPUs. Coolant circulates through sealed channels inside those plates, absorbs heat, and returns through a closed-loop system. The fluid remains contained throughout the process and does not touch the chips, circuit boards, or other components.
This design applies to both single-phase and two-phase direct-to-chip systems.
- In single-phase systems, the coolant remains in liquid form as it carries heat away from the cold plate. These systems typically use treated or engineered fluids and rely on low fluid volumes, robust hoses and manifolds, and dripless quick-disconnects to support safe operation and maintenance.
- In two-phase systems, specialized dielectric fluids absorb heat by changing phase inside the cold plate. These fluids are engineered to vaporize and re-condense within the loop, improving heat transfer while further limiting the chance of liquid accumulation near hardware.
In both cases, the coolant stays fully contained within the cold plate and tubing. From an operational perspective, this makes the system behave much more like a closed-loop hydronic circuit than an open liquid environment.
Understanding this containment model helps clarify how direct-to-chip cooling is designed to deliver higher thermal performance while maintaining controlled, predictable operation—particularly as rack power densities continue to rise.
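To put rough numbers on what the cold plate loop is doing, the sketch below works through the basic heat-balance arithmetic for both approaches. Every value in it (the 700 W chip load, the coolant temperature rise, the latent heat of the dielectric fluid) is an illustrative assumption, not a vendor specification: single-phase loops carry heat away as a sensible temperature rise (Q = ṁ · cp · ΔT), while two-phase loops carry it away as latent heat of vaporization (Q = ṁ · h_fg).

```python
# Rough cold-plate sizing sketch (illustrative numbers only, not vendor specs):
# how much coolant flow is needed to remove a given heat load in a
# single-phase loop (sensible heat) vs. a two-phase loop (latent heat).

CHIP_HEAT_W = 700.0           # assumed heat load of one GPU package, in watts

# Single-phase: Q = m_dot * cp * dT
CP_WATER_J_PER_KG_K = 4180.0  # specific heat of a water-based coolant
DELTA_T_K = 10.0              # assumed coolant temperature rise across the plate
single_phase_kg_s = CHIP_HEAT_W / (CP_WATER_J_PER_KG_K * DELTA_T_K)

# Two-phase: Q = m_dot * h_fg (heat absorbed by vaporizing the dielectric fluid)
H_FG_J_PER_KG = 100_000.0     # assumed latent heat of an engineered dielectric fluid
two_phase_kg_s = CHIP_HEAT_W / H_FG_J_PER_KG

print(f"Single-phase flow: {single_phase_kg_s * 60:.2f} kg/min "
      f"(roughly {single_phase_kg_s * 60:.1f} L/min for a water-based coolant)")
print(f"Two-phase flow:    {two_phase_kg_s * 60:.2f} kg/min")
```

With these assumed numbers, a single cold plate needs on the order of a liter per minute of single-phase coolant, which is part of why the loops can rely on low fluid volumes and small-bore hoses and manifolds.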
These two cold plates from ZutaCore® demonstrate direct-to-chip cooling in practice, removing heat at the source while keeping liquid fully contained.
2 — “Direct-to-Chip Cooling Is All or Nothing”
For many teams, “liquid cooling” still conjures images of immersion tanks, wholesale facility redesigns, and an all-or-nothing commitment that is hard to reverse. That perception makes liquid cooling feel risky and inflexible, even as rising densities and AI workloads make additional cooling capacity increasingly necessary.
Direct-to-chip cooling works differently.
Unlike immersion, direct-to-chip is inherently incremental: it can be deployed one cabinet or row at a time, focused on the racks where densities or TDPs are highest, and scaled as workloads evolve. Teams can start with a small number of liquid-enabled racks supporting the most demanding processors, validate performance and operations, and expand without disturbing the rest of the data hall layout.
This incremental adoption is one of direct-to-chip cooling’s biggest strengths. It allows operators to relieve near-term thermal constraints without committing to a liquid-only architecture or rebuilding existing facilities.
In many cases, direct-to-chip solutions can be retrofitted into live environments by adding rear-door manifolds, distribution units, and appropriately designed cabinets, rather than re-plumbing the entire building.
Direct-to-chip cooling is increasingly a pragmatic way to bridge the gap between air-only designs and future thermal demands. It provides a practical on-ramp from today’s air-cooled deployments to the higher rack densities and component heat loads that AI and advanced compute will continue to drive, with scalability built into the rack and row.
3 — “Direct-to-Chip Cooling Replaces Air Cooling”
Once teams become comfortable with liquid cooling, a new assumption often emerges: that air cooling is no longer necessary.
Most real-world direct-to-chip deployments are intentionally hybrid, not liquid-only.
With direct-to-chip, liquid is used to capture the highest heat flux at the CPU and GPU packages—often 60–80% of the server thermal load—while air continues to cool the remaining 20–40% from memory, storage, power delivery, and other components in the chassis.
As a result, air cooling does not disappear; it becomes more effective, because it is no longer being asked to remove the most concentrated heat at the die. Airflow management at the cabinet and room level remains critical to overall performance and stability.
Liquid changes where and how peak heat is removed, not whether airflow matters.
This hybrid approach is what makes direct-to-chip cooling practical in existing data centers. By offloading the most challenging heat sources from air, operators can increase rack densities, reduce server fan speeds, and improve overall energy efficiency—while continuing to rely on familiar airflow strategies, containment methods, and room-level cooling infrastructure.
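As a quick back-of-the-envelope illustration of that split (the 40 kW rack load below is an assumption; the 60–80% capture range is the one cited above), a few lines of arithmetic show how much heat remains for the existing airflow path to absorb:

```python
# Minimal sketch of the hybrid liquid/air split, using assumed numbers:
# for a rack of a given IT load, how much heat the cold plates capture
# versus what remains for room air across the 60-80% capture range.

RACK_IT_LOAD_KW = 40.0  # assumed per-rack IT load

for liquid_capture in (0.60, 0.70, 0.80):
    to_liquid_kw = RACK_IT_LOAD_KW * liquid_capture
    to_air_kw = RACK_IT_LOAD_KW - to_liquid_kw
    print(f"{liquid_capture:.0%} capture: {to_liquid_kw:4.1f} kW to liquid, "
          f"{to_air_kw:4.1f} kW left for room air")
```

Even at the high end of the range, a meaningful share of the rack load still depends on conventional airflow, which is exactly why the hybrid model keeps containment and room-level cooling in the picture.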
4 — “Only Hyperscalers Can Justify Liquid Cooling”
There is a persistent belief that liquid cooling only makes sense for hyperscalers running massive AI campuses, but that is no longer true.
As accelerator-heavy workloads move into enterprise AI, research computing, and multi-tenant data center (MTDC) environments, many operators are finding that direct-to-chip liquid cooling is often easier to justify at smaller scales, not harder.
This is because direct-to-chip cooling offers a targeted alternative to room-level redesign.
Rather than overbuilding the entire room to accommodate worst-case thermal loads, operators can apply liquid cooling precisely where it’s needed—at the rack and chip level—while leaving the broader environment unchanged. A limited number of direct-to-chip-enabled cabinets can support AI or HPC workloads that would otherwise require additional CRAHs/CRACs, aggressive airflow retrofits, or expanded white space.
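A hypothetical sizing check makes the point concrete. All numbers below (cabinet count, per-cabinet load, spare CRAH capacity) are assumptions for illustration, not figures from any specific deployment; the question is simply whether the residual air-side heat from a few liquid-enabled cabinets fits within the cooling margin a room already has.

```python
# Hypothetical room-level check (all values are assumptions): does the
# residual air-side heat from a few direct-to-chip cabinets fit within
# the CRAH/CRAC capacity the room already has to spare?

SPARE_CRAH_CAPACITY_KW = 60.0  # assumed unused capacity in existing room cooling
AI_CABINETS = 4                # assumed number of liquid-enabled cabinets
KW_PER_CABINET = 40.0          # assumed IT load per cabinet
LIQUID_CAPTURE = 0.70          # mid-range of the 60-80% capture cited earlier

total_load_kw = AI_CABINETS * KW_PER_CABINET
air_side_kw = total_load_kw * (1 - LIQUID_CAPTURE)

print(f"Total AI cabinet load:       {total_load_kw:.0f} kW")
print(f"Left on room air with DTC:   {air_side_kw:.0f} kW")
print("Fits existing CRAH margin"
      if air_side_kw <= SPARE_CRAH_CAPACITY_KW
      else "Room cooling still needs expansion")
```

Under these assumptions, roughly 48 kW of a 160 kW accelerator load lands on room air, the kind of gap a modest existing cooling margin can absorb without adding CRAHs or expanding white space.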
This targeted approach has already been proven in real-world, non-hyperscale environments. In an independently validated deployment at the University of Chicago, researchers compared traditional rear-door heat exchangers against direct-to-chip liquid cooling integrated into existing cabinet infrastructure.
The results showed a reduction of more than 50°F in internal cabinet temperatures, a 37% decrease in power consumption, and sustained compute performance under GPU-intensive workloads—without redesigning the data hall or building a dedicated AI facility.
Read the full case study.
And for MTDC operators, this model enables high-density tenants to be supported without disrupting shared infrastructure or neighboring deployments. For enterprise and research teams, it provides a practical path to run accelerator-rich workloads in existing facilities—proving that direct-to-chip liquid cooling is no longer exclusive to hyperscale environments.
5 — “It's All About the Cooling Technology – Not Cabinet Design”
Direct-to-chip cooling is often evaluated purely at the server or component level. That narrow view is where many deployments run into complexity and performance gaps.
As rack densities rise, the cabinet itself becomes a critical part of the cooling system.
Liquid-cooled configurations introduce higher weight from servers, manifolds, and coolant; new routing requirements for liquid and power; and a greater need for disciplined airflow for components that remain air-cooled. Access, cable management, monitoring, and serviceability all become more important as thermal margins tighten.
Without a cabinet designed to support liquid integration—structurally, thermally, and operationally—even a well-engineered cold plate solution can underperform or be difficult to maintain. Direct-to-chip cooling is not just about what is attached to the processor; it is about how power, liquid, airflow, cable pathways, and instrumentation work together at the rack level.
As these misconceptions fade, one theme becomes clear: direct-to-chip cooling succeeds or fails at the cabinet level. Cabinets that are liquid-ready from the start—supporting manifolds, load, airflow management, and monitoring—turn direct-to-chip cooling into a repeatable, scalable part of the data center, rather than a one-off science project.
Example: Direct-to-Chip Cooling with the ZetaFrame® Cabinet System
For many teams, the challenge with direct-to-chip cooling isn’t understanding it conceptually—it’s visualizing how it actually fits into a real cabinet, alongside power, airflow, and service access.
Chatsworth Products’ ZetaFrame® Cabinet System integrates direct-to-chip cooling from ZutaCore within a single, unified rack architecture.
ZetaFrame brings together:
- A cabinet architecture engineered for hybrid cooling
- Structural support for high-density, liquid-cooled equipment
- Disciplined airflow management for remaining air-cooled components
- Integration with power, monitoring, and cable management
Rather than treating liquid cooling as an add-on, ZetaFrame enables teams to evaluate, deploy, and scale direct-to-chip cooling within a unified cabinet system.
Watch the video below to see how direct-to-chip cooling integrates into the ZetaFrame cabinet—and how hybrid cooling works in practice.
Get Guidance on What Makes Sense for Your Deployment
If you’re evaluating direct-to-chip cooling or planning for higher-density workloads, our team can help you think through how these approaches would apply in your facility—what to pilot, what to phase in, and what tradeoffs to consider.

