
AI will be disruptive to enterprises, but how will it be disruptive to the enterprise IT architectures that support them? Recently, I joined Gestalt IT to talk about why. Watch our discussion below, then read on for some post-pod reflections.

AI Is All about Data

New applications will always come along that make us rethink the systems that support them, but AI is an especially disruptive one.

“Everybody knew that something was going to be completely changing in terms of enterprise IT when we first saw ChatGPT last fall,” my co-panelist Allyson Klein, principal of the TechArena, noted. “But while we put a lot of focus on processing, it’s about data.”

AI may be a newer application, but its principles aren’t unfamiliar: the desire for faster decision-making based on the data a company has accumulated. However, what enterprises are building for AI is unlike anything they’ve built in the past. The closest thing may be an infrastructure for high-performance computing (HPC), Allyson noted, but that has rarely been in the domain of enterprise IT, usually staying within the confines of academia and research. 

“Most enterprises haven’t even dabbled in HPC,” she noted. Even for those that have, HPC doesn’t often mingle with other workloads; it’s treated as a silo and managed as a different beast. If AI’s key promise is to transform workflows throughout the enterprise by accessing all the data, we can learn from HPC solutions, but we can’t simply copy them.

Learn more: Toward a More Simple, Scalable HPC Storage Model

When You Can’t Repurpose, Re-architect

Most enterprise infrastructures aren’t inherently designed for AI, but that’s not the only challenge. My other co-panelist, Keith Townsend, principal of The CTO Advisor, pointed out that AI infrastructure isn’t just new; in many ways, it runs counter to most enterprise IT strategies. That’s in part because the AI application lifecycle is far more iterative than that of traditional enterprise applications.

The other challenge is that most data centers designed for traditional IT were built around physical and power constraints that AI, with its potentially massive footprint and power consumption, has blown a hole in. Allyson noted that many brownfield data centers simply weren’t designed to deliver power to these GPU clusters. I’ve seen this firsthand when a customer attempted to deploy AI workloads in a brownfield data center: power constraints limited them to two GPU servers per rack, leaving two-thirds of each rack unused.
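To make the power math concrete, here’s a back-of-the-envelope sketch in Python. Every figure here (the rack power budget, per-server draw, and form factors) is an illustrative assumption, not a number from that customer engagement:

```python
# Back-of-the-envelope rack utilization math (all figures are assumptions).
RACK_POWER_BUDGET_KW = 14.0   # assumed power budget of a brownfield rack
SERVER_POWER_KW = 6.5         # assumed draw of one GPU server under load
RACK_HEIGHT_U = 42            # standard full-height rack
SERVER_HEIGHT_U = 6           # assumed GPU server form factor

# In a brownfield facility, power (not space) is the binding constraint.
servers_by_power = int(RACK_POWER_BUDGET_KW // SERVER_POWER_KW)
servers_by_space = RACK_HEIGHT_U // SERVER_HEIGHT_U
servers_per_rack = min(servers_by_power, servers_by_space)

unused_fraction = 1 - (servers_per_rack * SERVER_HEIGHT_U) / RACK_HEIGHT_U
print(f"GPU servers per rack: {servers_per_rack}")   # -> 2
print(f"Rack space unused: {unused_fraction:.0%}")   # -> 71%
```

Under these assumptions, the power budget runs out after two servers while five more would still fit physically, which matches the stranded-capacity problem the customer hit.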

With data center square footage at a premium, and the ever-increasing cost of power, we have to think long and hard about how we address this problem sustainably.

AI and the “E” in ESG

“Our environments weren’t designed for these heavy workloads,” Keith noted, explaining that a typical colo rack gets 5-10kW of power, but that could be just one server in an AI implementation. You’re then looking at 45-100kW per rack for this kind of workload, which creates serious cooling challenges.
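To put those numbers in perspective, here’s a rough cooling sketch. The per-rack figures are assumed values within the ranges Keith cited; the kW-to-BTU conversion is the standard one, since nearly every watt delivered to a rack comes back out as heat:

```python
# Rough cooling math: nearly all IT power becomes heat to be removed.
BTU_HR_PER_KW = 3412          # 1kW of IT load ≈ 3,412 BTU/hr of heat
TONS_PER_BTU_HR = 1 / 12_000  # 1 ton of cooling = 12,000 BTU/hr

typical_rack_kw = 8.0         # assumed midpoint of the 5-10kW colo range
ai_rack_kw = 60.0             # assumed figure within the 45-100kW AI range

heat_btu_hr = ai_rack_kw * BTU_HR_PER_KW
print(f"One AI rack draws ~{ai_rack_kw / typical_rack_kw:.0f}x a colo rack")
print(f"Heat to remove: {heat_btu_hr:,.0f} BTU/hr "
      f"(~{heat_btu_hr * TONS_PER_BTU_HR:.0f} tons of cooling)")
```

Roughly 17 tons of cooling for a single rack is why so many brownfield facilities can’t simply be retrofitted for AI.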

Clearly, the “E” in ESG kept coming up, and with good reason: geopolitics, climate change, energy constraints, and sustainability goals all help make the case for all-flash data centers as the only logical path forward.

The Cloud Conundrum: Can You Outsource AI without Compromise?

If most data centers are built for general-purpose computing and legacy infrastructure can’t be retrofitted for AI, that leaves enterprises with few options: make general-purpose architectures scalable and efficient enough for AI, leverage the cloud, or both.

The cloud can be great for some AI use cases, but we know it’s not a panacea. Our discussion surfaced real cons around data governance, visibility, and ESG. The cloud can mask the power and cooling consumption enterprises need to report, and it creates yet another place for data to reside. Many organizations already struggle to know what data lives where, and the cloud can exacerbate that problem. While it remains to be seen how many enterprises will build out entire AI infrastructures versus leverage the cloud as their secret sauce, change is coming.

What Will New, AI-focused Infrastructures Look Like?

TL;DR: a next-gen general-purpose infrastructure.

No matter how new systems are designed, the one thing that will always tie all of them together is the data they consume and share. “AI is about data,” Allyson noted. “Storage innovation needs to come to the forefront to enable enterprises to take advantage of this technology.”

The consensus was that new architectures should be designed not for specialization but for flexibility and disaggregation. They’ll be less of a vertically integrated silo and more of a pool of resources optimized to solve enterprises’ biggest data challenges. That means deploying infrastructure that’s still general purpose but capable of serving a broader set of use cases, including demanding workloads and accelerators.

“[AI] is going to benefit from lots of data sharing internally. It’s going to benefit from the availability to feed it from all different data sets. That’s a big driver toward using more of a general-purpose architecture for AI processing.” -GestaltIT On-Prem Podcast

A small number of very scalable platforms can simplify the future for enterprise IT, across all workloads: analytics, file, object, and more. This will allow IT to spread out and extend general-purpose capabilities, making AI less disruptive to enterprise IT than we think it might be. 

Pure Storage: A Better Data Storage Platform for AI

With all the different data sets and the need for data sharing, consolidation is essential so that all of the resources in this new composable infrastructure can take advantage of the data. The alternative is either too wasteful from a capacity standpoint or a data compliance and governance nightmare.

This is where a data storage platform built for AI, like Pure Storage FlashBlade//S, comes in. It’s already clear that legacy storage has no place in the next-gen, AI-driven data center.

Pure Storage has innovated to make this daunting task easier for the enterprise by building systems that shed legacy complexity and rise to any challenge within an AI data pipeline: from high-throughput, high-performance file and object workloads to large archival object stores that demand enterprise flash features at disk economics.

Learn more about how Pure Storage is innovating for AI, and read the whitepaper Toward a More Simple, Scalable HPC Storage Model for a deeper dive.