Micron Collaboration With Magnum IO GPUDirect Storage Brings Industry-Disruptive Innovation to AI & ML
An individual’s career is marked by many milestones. Some may be longevity milestones like celebrating 30 years with a company. Some may be changes in job roles to take on a new responsibility or a project. And some may even be changing companies for a new opportunity.
Occasionally, one’s career is defined by being part of a technology that fundamentally changes the industry. And even more rarely, one’s career is defined by being involved with that technology from the very beginning. Recently, my career has been marked by both. I had the pleasure of supporting NVIDIA’s industry-disruptive innovation, Magnum IO GPUDirect Storage, from its earliest days, and it reaches version 1.0 today. This technology enables a direct path between the GPU and storage, providing a faster data path and a lower CPU load.
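For readers curious what that direct path looks like in practice, GPUDirect Storage is programmed through NVIDIA’s cuFile API. Below is a minimal sketch of reading a file straight into GPU memory; the file name data.bin, the 1 MiB transfer size, and the abbreviated error handling are illustrative assumptions, not production code.

```cpp
// Minimal GPUDirect Storage read sketch using the cuFile API.
// Illustrative only: "data.bin" and the 1 MiB size are placeholder values.
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const size_t size = 1 << 20;  // 1 MiB transfer, illustrative
    // O_DIRECT bypasses the OS page cache, as required for the direct DMA path.
    int fd = open("data.bin", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    cuFileDriverOpen();  // initialize the cuFile driver

    // Wrap the POSIX file descriptor in a cuFile handle.
    CUfileDescr_t descr = {};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    // Allocate GPU memory and register it so storage can DMA into it directly.
    void* devPtr = nullptr;
    cudaMalloc(&devPtr, size);
    cuFileBufRegister(devPtr, size, 0);

    // One call moves data from storage straight into GPU memory;
    // no intermediate CPU ("bounce") buffer is involved.
    ssize_t nread = cuFileRead(handle, devPtr, size,
                               /*file_offset=*/0, /*devPtr_offset=*/0);
    printf("read %zd bytes directly into GPU memory\n", nread);

    cuFileBufDeregister(devPtr);
    cudaFree(devPtr);
    cuFileHandleDeregister(handle);
    cuFileDriverClose();
    close(fd);
    return 0;
}
```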
What’s interesting about industry-disruptive innovation is that it takes tremendous effort and significant time to bring it to market. As I look back at my notes, I realize our first discussion with NVIDIA regarding GPUDirect Storage was on November 7, 2018, approximately two and a half years ago. I sat at NVIDIA’s headquarters with some of the smartest people in the industry related to these technologies – David Reed, Sandeep Joshi and CJ Newburn from NVIDIA and Currie Munce from Micron. NVIDIA shared their vision for this technology and asked if we would be interested in participating with them. As you can infer from this blog, we immediately embraced the vision and accepted the challenge of supporting this emerging new technology.
Why? Because it was obvious from the beginning that this technology would yield higher-performance storage for GPU-hungry workloads in artificial intelligence (AI), machine learning (ML), deep learning and high-performance computing (HPC). These workloads often operate on huge datasets, so direct memory access (DMA) between GPU memory and storage lowers I/O latency while decreasing load on the CPU (see the sketch below). The result is significantly faster time to insight, a major initiative for Micron, so that innovators can, for example, create vaccines faster, discover more fuel-efficient transportation, and develop more efficient food delivery to remote areas of the world.
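To make that contrast concrete, here is a hedged sketch of the conventional two-hop path that the direct DMA route replaces: data first lands in a CPU bounce buffer, and the CPU then copies it to the GPU. Again, the file name and transfer size are illustrative placeholders.

```cpp
// The conventional two-hop path that GPUDirect Storage eliminates:
// storage -> CPU bounce buffer -> GPU memory. Illustrative sketch;
// "data.bin" and the 1 MiB size are placeholders.
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main() {
    const size_t size = 1 << 20;  // 1 MiB transfer, illustrative
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    // Hop 1: storage delivers data into a host (CPU) buffer.
    void* hostBuf = malloc(size);
    ssize_t nread = read(fd, hostBuf, size);
    if (nread < 0) { perror("read"); return 1; }

    // Hop 2: the CPU issues a second copy from host memory to GPU memory.
    void* devPtr = nullptr;
    cudaMalloc(&devPtr, size);
    cudaMemcpy(devPtr, hostBuf, (size_t)nread, cudaMemcpyHostToDevice);

    // Every read costs two transfers plus CPU cycles; GPUDirect Storage
    // collapses this into a single DMA from storage into devPtr.
    cudaFree(devPtr);
    free(hostBuf);
    close(fd);
    return 0;
}
```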
Along the way, we documented our collaboration and demonstrated the effectiveness of this technology. Here are a few highlights you might be interested in:
- March 2019: Making GPU I/O Scream on Platforms of Today and Tomorrow
- August 2019: GPUDirect Storage: A Direct Path Between Storage and GPU Memory
- July 2020: Maximize Your Investment in Micron SSDs for AI/ML Workload With NVIDIA GPUDirect Storage
- November 2020: Analyzing the Effects of Storage on AI Workloads
- February 2021: Architecting to Overcome AI Storage Challenges
- March 2021: Barton Fiske & Wes Vaske on overcoming AI data bottlenecks with NVIDIA® GPUDirect Storage
Today, we’re happy to celebrate NVIDIA’s announcement of Magnum IO GPUDirect Storage version 1.0! Looking back at my notes from a meeting before the March 2019 NVIDIA blog, I see that NVIDIA asked if we would support that blog with a simple phrase like, “Micron is excited to be a part of this.” Our answer couldn’t be more emphatic: yes! Congratulations, NVIDIA, on creating this industry-disruptive technology and on collaborating with Micron and the broader ecosystem to bring it to market. Micron is excited to help you decrease the time it takes to extract valuable insights from data.