HBM3E
Industry’s fastest, highest-capacity HBM to advance generative AI innovation
Generative AI opens a world of new forms of creativity and expression, like the image above, by using large language models (LLMs) for training and inference. Efficient utilization of compute and memory resources makes the difference in time to deploy and response time. Micron HBM3E provides higher memory capacity that improves performance and reduces CPU offload, enabling faster training and more responsive queries when inferencing LLMs such as ChatGPT™.
AI unlocks new possibilities for business, IT, engineering, science, medicine and more. As larger AI models are deployed to accelerate deep learning, maintaining compute and memory efficiency is essential to managing performance, cost and power so that the benefits reach everyone. Micron HBM3E improves memory performance while its focus on energy efficiency increases performance per watt, lowering the time to train LLMs such as GPT-4 and beyond.
Scientists, researchers and engineers are challenged to discover solutions for climate modeling, curing cancer, and renewable and sustainable energy. High-performance computing (HPC) accelerates time to discovery by executing highly complex algorithms and advanced simulations on large datasets. Micron HBM3E provides higher memory capacity and improves performance by reducing the need to distribute data across multiple nodes, accelerating the pace of innovation.
Micron extends industry-leading performance across our data center product portfolio with HBM3E, delivering faster data rates, improved thermal response, and 50% higher monolithic die density within the same package footprint as the previous generation.
With advanced CMOS innovations and industry-leading 1β process technology, Micron HBM3E provides higher memory bandwidth that exceeds 1.2TB/s.1
With 50% more memory capacity2 per 8-high 24GB cube, HBM3E enables training at higher precision and accuracy.
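The headline bandwidth and capacity figures can be sanity-checked with back-of-envelope arithmetic. This is a sketch under stated assumptions: the 1024-bit interface width per cube and the ~9.6 Gb/s per-pin data rate are not given in the text and are assumed here based on the HBM generation's standard interface.

```python
# Back-of-envelope check of the per-cube HBM3E figures.
# Assumptions (not stated in the text above): a 1024-bit-wide HBM
# interface per cube and a per-pin data rate of ~9.6 Gb/s.

PIN_RATE_GBPS = 9.6      # assumed per-pin data rate, Gb/s
BUS_WIDTH_BITS = 1024    # assumed HBM interface width per cube, bits

# Bandwidth: pin rate x bus width, converted from bits to bytes.
bandwidth_gbyte_s = PIN_RATE_GBPS * BUS_WIDTH_BITS / 8
print(f"Per-cube bandwidth: {bandwidth_gbyte_s / 1000:.2f} TB/s")  # ~1.23 TB/s

# Capacity: an 8-high stack of 24Gb monolithic dies.
DIES_PER_STACK = 8
DIE_DENSITY_GBIT = 24    # 50% denser than a 16Gb previous-generation die
capacity_gbyte = DIES_PER_STACK * DIE_DENSITY_GBIT / 8
print(f"Per-cube capacity: {capacity_gbyte:.0f} GB")  # 24 GB
```

Both results line up with the claims in the text: more than 1.2TB/s of bandwidth and 24GB per 8-high cube.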
Micron designed an energy-efficient data path that reduces thermal impedance and enables a greater than 2.5x improvement in performance per watt3 compared to the previous generation.
With increased memory bandwidth that improves system-level performance, HBM3E reduces training time by more than 30%4 and allows >50% more queries per day.5,6
Micron HBM3E is the fastest, highest-capacity high bandwidth memory to advance AI innovation: an 8-high 24GB cube that delivers over 1.2TB/s of bandwidth with superior power efficiency. Micron is your trusted partner for memory and storage innovation.
Micron's 8-high 24GB HBM3E delivers 1.2 TB/s of bandwidth and superior power efficiency, enabled by the advanced 1β process node.
Micron's Girish Cherussery, Sr. Director, High-Performance Memory, sits down with Patrick Moorhead and Daniel Newman of The Six Five to discuss high bandwidth memory (HBM) and Micron's newest HBM3E product.
Micron is shipping the industry’s first DRAM manufactured on next-generation 1-beta process technology. It represents state-of-the-art innovation from Micron’s continued investment in R&D and process technology advancement. Micron’s 1-beta process technology allows development of memory products with increased performance, greater capacity, higher density, and lower relative power consumption than prior generations.