Composable Infrastructure & the CPU’s New Groove

Technically Speaking with Chris Wright
00:01 — Chris Wright

With cloud native development, we now have the ability to design software without having to worry about the constraints of hardware. But hardware is still bound by the laws of physics and the limits of Moore's Law. So how do we take inspiration from cloud native software development, breaking things down into small pieces that do specific tasks, and apply that to the hardware architecture in our data centers?

00:21 — INTRO ANIMATION

00:30 — Chris Wright

In the past, you defined your application architecture by your hardware. We had a client-server model, which then moved to software-defined networking or application-centric infrastructure, and now we're taking it to the next level with composable infrastructure. Composable infrastructure is a framework that increases resource utilization by disaggregating devices: compute, storage, and networking resources are abstracted and provisioned, or composed, as needed by software. This provides better resource utilization, faster deployments, and more flexibility. The DPU is a great example of composable infrastructure. We started with a CPU plus a network card, then moved to a CPU and a SmartNIC, where we could offload and accelerate very specific tasks. Now DPUs are taking it a step further by adding a general-purpose CPU to an existing SmartNIC, enabling the offload and acceleration of arbitrary, software-defined tasks. And joining me today from NVIDIA to give us more insight into DPUs is Ami Badani. Hey Ami, thanks for joining me today.
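
To make that composition step concrete, here is a minimal sketch of disaggregated pools being claimed and stitched into a logical server by software. The `ResourcePool`, `ComposedSystem`, and `compose` names are hypothetical stand-ins for illustration, not a real composition API.

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """A disaggregated pool of one resource type (hypothetical model)."""
    kind: str          # "compute", "storage", or "network"
    capacity: int      # total units in the pool
    allocated: int = 0

    def claim(self, units: int) -> int:
        """Reserve units from the pool; returns how many were granted."""
        granted = min(units, self.capacity - self.allocated)
        self.allocated += granted
        return granted

@dataclass
class ComposedSystem:
    """A logical server stitched together from pool allocations."""
    name: str
    resources: dict = field(default_factory=dict)

def compose(name: str, requests: dict, pools: dict) -> ComposedSystem:
    """Software 'composes' a system by claiming from each pool as needed."""
    system = ComposedSystem(name)
    for kind, units in requests.items():
        system.resources[kind] = pools[kind].claim(units)
    return system

# Illustrative pools and a composed database node.
pools = {
    "compute": ResourcePool("compute", capacity=128),   # vCPUs
    "storage": ResourcePool("storage", capacity=2048),  # GB
    "network": ResourcePool("network", capacity=100),   # Gbps
}
db = compose("db-node", {"compute": 16, "storage": 512, "network": 25}, pools)
print(db.resources)  # -> {'compute': 16, 'storage': 512, 'network': 25}
```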

01:36 — Ami Badani

Thanks, Chris. Happy to be here.

01:38 — Chris Wright

So I know we've seen tons of effort over the years around making computing chips smaller, faster, and more efficient. I mean, Moore's Law and single-threaded computing speeds are really reaching their limits, and we can only build so many machines and data centers. How is the DPU gonna help us keep up with this growing demand for what I like to call data-centric computing?

02:03 — Ami Badani

Yeah, that's a good question. You touched on Moore's Law, and I think the other thing we're seeing is distributed computing as it relates to AI, and how AI is really moving to the forefront of the data center. As we have mounds and mounds of data, what's gonna happen is it's just gonna burden and overload the CPU. We're already seeing this: studies show that 30% of CPU load is consumed by infrastructure-heavy tasks. So the real issue is that we'll have to re-architect the data center to handle this problem. What I mean by that is, we now have specific processors available to handle these specific workloads. We like to think about it as a trio effect. You have the CPU for single-threaded applications, the GPU for accelerating parallel-processing applications like artificial intelligence, and then you have the DPU, which is great for offloading, accelerating, and isolating those infrastructure-heavy tasks. So think about data-intensive, packet-processing-heavy tasks.
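
As a rough illustration of why that 30% figure matters, here is a back-of-the-envelope calculation; the fleet size is an assumed, illustrative number, not one from the episode.

```python
# Back-of-the-envelope: CPU capacity reclaimed by offloading
# infrastructure tasks to DPUs. The 30% share is the study figure
# cited above; the fleet size is an illustrative assumption.
fleet_size = 1000    # servers in a hypothetical data center
infra_share = 0.30   # fraction of CPU cycles spent on infrastructure

reclaimed = fleet_size * infra_share
print(f"Offloading frees the equivalent of {reclaimed:.0f} servers' "
      f"worth of CPU capacity for application work.")
```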

03:11 — Chris Wright

I feel like we're really changing how we connect the system, making this sort of programmable data path, and thinking about what opportunities that brings us. I mean, clearly there's something in the packet processing where we can look for things like PII leakage. I know you're working on that: looking at the general network payloads and taking action in real time based on some intelligence that we've trained in the system. Am I thinking about that correctly?

03:42 — Ami Badani

Yeah, that's right. And the interesting thing about the DPU that you alluded to is that the DPU will really be in the data center for security and to make the data center more efficient. With the DPU, you have the ability to do faster packet processing. You can not only understand the sender and the destination, but also what's in the packet itself. You can understand how the traffic deviates from normal behavior. You can sort through traffic flows, recognize PII data in the payload, and then send that data to a federated learning system, like an AI model, to figure out if it's malicious. We have an AI framework called Morpheus, which is a cybersecurity framework. It's open source software, and it has several pre-trained AI models specifically for security. We look at fingerprints, we look at anomalous behavior, we look at sensitive information, and we're able to detect that sensitive information in the packet itself. Then, with the DPU, we're able to detect anomalous behavior and drop specific packets. And because the firewall is running on the DPU itself, you can take action immediately, in real time. So instead of only analyzing a portion of the data, as you did in the past, now, with all of these different AI models and with the DPU, you can have a much more extensive view of and visibility into the data, and take real-time action on these packets, because of the way the DPU is architected.
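
Here is a minimal sketch of the kind of inline decision loop Ami describes: inspect each payload, flag PII or anomalous traffic, and drop the packet in real time. It's illustrative Python, not the actual Morpheus API; the regex and the `looks_anomalous` heuristic are toy stand-ins for pre-trained AI models.

```python
import re

# Toy stand-ins for pre-trained models. In a real deployment these
# would be AI models (e.g. Morpheus pipelines), not a regex and a
# size threshold; this is purely illustrative.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US Social Security number

def looks_anomalous(packet: dict, baseline_bytes: int = 1500) -> bool:
    """Flag traffic that deviates from normal behavior (toy heuristic)."""
    return len(packet["payload"]) > baseline_bytes

def inspect(packet: dict) -> str:
    """Decide inline, per packet, as a firewall running on the DPU would."""
    if SSN_PATTERN.search(packet["payload"]):
        return "drop: PII detected in payload"
    if looks_anomalous(packet):
        return "drop: anomalous behavior"
    return "forward"

packets = [
    {"src": "10.0.0.5", "dst": "10.0.1.9", "payload": "order #8812 confirmed"},
    {"src": "10.0.0.7", "dst": "203.0.113.4", "payload": "ssn 123-45-6789"},
]
for p in packets:
    print(p["src"], "->", p["dst"], ":", inspect(p))
```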

05:18 — Chris Wright

I love this. We're making data a first-class citizen in the data center, aptly named. And we're processing more and more of it, faster, even moving it out to the edge. You could almost consider the DPU the edge of the server: offloading these cycles onto the DPU so that we can spend server cycles doing real work. And it feels like this application processing that we think of as the computer is connected through the network. I mean, it's a little cheesy, but I feel like I gotta say it: the network is the computer, right?

05:56 — Ami Badani

Yeah, the way I like to think about it is that the data center is the new computer, really the new unit of compute, and an intelligent network brings it all together. It forms that core piece of infrastructure that interconnects everything in the data center, whether it's your applications, your databases, or your users to the rest of the world. And ultimately you need it to be secure. So to your point, we are gonna see the network become the computer in the future.

06:27 — Chris Wright

It's hard for me not to take this and stretch it forward. We're talking about connecting different devices, going from a world where we have this large-scale, homogeneous computing infrastructure to a large-scale, heterogeneous computing infrastructure, which raises some interesting questions around workload placement. And I can start to imagine that as we enter quantum computing, it takes this whole paradigm and moves it forward: understanding what runs on the classical CPU, and where you orchestrate and place tasks on other accelerators in this big, interesting, network-connected, heterogeneous data center. I love this vision. Thank you so much, Ami, for joining me and having this conversation. I learned a lot today.
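
As a sketch of that placement question, here is a toy policy that routes tasks across a heterogeneous pool by workload type; the mapping follows the CPU/GPU/DPU trio from earlier and is an illustrative assumption, not a real orchestrator.

```python
# Toy placement policy for a heterogeneous data center: route each task
# to the processor class suited to it. The mapping mirrors the trio
# described earlier (CPU / GPU / DPU) and is illustrative only.
PLACEMENT = {
    "single-threaded": "CPU",   # general-purpose, latency-sensitive logic
    "parallel":        "GPU",   # data-parallel work like AI training
    "infrastructure":  "DPU",   # packet processing, storage, security
}

def place(task: dict) -> str:
    """Pick a target device for a task based on its workload type."""
    return PLACEMENT.get(task["type"], "CPU")  # default to the CPU

tasks = [
    {"name": "web-frontend",   "type": "single-threaded"},
    {"name": "model-training", "type": "parallel"},
    {"name": "firewall",       "type": "infrastructure"},
]
for t in tasks:
    print(f"{t['name']:14s} -> {place(t)}")
```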

07:12 — Ami Badani

Yeah. Thanks Chris. Thanks for having me.

07:16 — Chris Wright

When we consider our future infrastructure, we're reimagining what a computer is. The data center is a flexible composition of heterogeneous computing devices and accelerators connected with an intelligent network. This new way of computing will enable us to process more data faster than ever before and truly make data a first-class citizen.

07:38 — OUTRO ANIMATION

Keywords: DevOps

Meet the guests

Ami Badani

VP Marketing and Developer Products
NVIDIA

Keep exploring

How Red Hat and NVIDIA accelerate AI projects

AI/ML workloads are a top priority for many organizations, but deploying and managing AI-powered cloud-native applications can be complex. See how Red Hat OpenShift and NVIDIA are helping to enable and streamline MLOps.

Read the blog post

Simplifying GPU computing

Discover how NVIDIA helps customers run GPU-accelerated computing on Red Hat OpenShift.

Read the case study

More like this

Technically Speaking with Chris Wright

Get into GitOps

Is there more to GitOps than meets the eye? We ponder the future of continuous delivery and automation beyond Kubernetes.

Command Line Heroes

DevOps: Tear Down That Wall

As the race to deliver applications ramps up, the wall between development and operations comes crashing down. But what is DevOps, really?

Compiler

How Bad Is Betting Wrong On The Future?

We speak to experts in the DevOps space about betting wrong on the future, how development projects go awry, and what teams can do to get things back on track.
