Ceph BlueStore vs. FileStore: Block performance comparison when leveraging Micron NVMe SSDs
BlueStore is the new storage engine for Ceph and the default configuration in the community edition. BlueStore performance numbers are not included in our current Micron Accelerated Ceph Storage Solution reference architecture because BlueStore is not yet supported in Red Hat Ceph Storage 3.0. I ran performance tests against the community edition of Ceph Luminous (12.2.4) on our Ceph reference architecture hardware; in this blog I compare the results to the FileStore performance we achieved with RHCS 3.0.
With BlueStore, 4KB random write IOPS increase by 18%, average latency decreases by 15%, and 99.99% tail latency decreases by up to 80%. 4KB random read performance is also better at higher queue depths.
| Block Workload | RHCS 3.0 FileStore IOPS | Ceph 12.2.4 BlueStore IOPS | RHCS 3.0 FileStore Average Latency | Ceph 12.2.4 BlueStore Average Latency |
|---|---|---|---|---|
| 4KB Random Read | 2 million | 2.1 million | 1.6ms | 1.4ms |
| 4KB Random Write | 363K | 424K | 5.3ms | 4.5ms |
This solution is optimized for block performance. Random small-block testing using the RADOS Block Device (RBD) driver in Linux saturates the Intel Xeon Platinum 8168 (Purley) processors in a 2-socket storage node.
With 10 drives per storage node, this architecture has a usable storage capacity of 232TB that can be scaled out by adding additional 1U storage nodes.
Reference Design – Hardware
Test Results and Analysis
Ceph Test Methodology
Red Hat Ceph Storage 3.0 (12.2.1) is configured with FileStore, with 2 OSDs per Micron 9200 MAX NVMe SSD. A 20GB journal was used for each OSD.
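For reference, a FileStore OSD section along these lines captures the journal sizing described above. This is a minimal sketch, not the full tuning from the reference architecture:

```ini
# ceph.conf excerpt (FileStore) -- illustrative sketch only
[osd]
osd objectstore = filestore
# 20 GB journal per OSD (osd journal size is specified in MB)
osd journal size = 20480
```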
Ceph Luminous Community (12.2.4) is configured with BlueStore, with 2 OSDs per Micron 9200 MAX NVMe SSD. RocksDB and the write-ahead log (WAL) are stored on the same partition as the data.
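The equivalent BlueStore setting is sketched below. Because no separate block.db or block.wal device is specified, RocksDB and the WAL are colocated with the data, matching the configuration described above (again, an illustrative sketch rather than the full tuning):

```ini
# ceph.conf excerpt (BlueStore) -- illustrative sketch only
[osd]
osd objectstore = bluestore
# No separate block.db / block.wal devices are configured, so RocksDB and
# the WAL share the data partition, as described above.
```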
In both configurations there are 10 drives per storage node and 2 OSDs per drive, for 80 total OSDs across four storage nodes, with 232TB of usable capacity.
The Ceph storage pool tested was created with 8192 placement groups and 2x replication. Performance was tested with 100 RBD images at 75GB each, providing 7.5TB of data on a 2x replicated pool (15TB of total data).
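The pool and images described above can be created with commands along these lines. The pool and image names are hypothetical; the PG count, replication factor, image count, and image size match the test description:

```bash
# Create a 2x-replicated pool with 8192 placement groups (names are hypothetical)
ceph osd pool create rbdtest 8192 8192 replicated
ceph osd pool set rbdtest size 2
ceph osd pool application enable rbdtest rbd   # required in Luminous

# Create 100 RBD images of 75GB each
for i in $(seq 1 100); do
    rbd create rbdtest/image-${i} --size 75G
done
```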
4KB random block performance was measured using FIO against the RADOS Block Device (RBD) driver. We are CPU-limited in all tests, even with 2x Intel Xeon Platinum 8168 CPUs per storage node.
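A FIO job along the following lines reproduces the 4KB random write workload against one of the images using the librbd engine. The client name, pool name, image name, queue depth, and runtime shown here are assumptions for illustration, not the exact job file used in our testing:

```ini
# fio job sketch for the rbd ioengine -- one image shown; the test ran against 100
[global]
ioengine=rbd
clientname=admin      # cephx user (assumed)
pool=rbdtest          # pool name from the example above
rw=randwrite          # use rw=randread for the read test
bs=4k
iodepth=32
direct=1
time_based=1
runtime=600

[image-1]
rbdname=image-1
```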
RBD FIO 4KB Random Write Performance: FileStore vs. BlueStore
BlueStore provides a ~18% increase in IOPS and a ~15% decrease in average latency.
There is also a large decrease in Ceph's tail latency at higher FIO client counts with BlueStore. At 100 clients, tail latency is reduced by 4.3x. At lower client counts, BlueStore's tail latency is higher than FileStore's because BlueStore is actively pushing higher IOPS.
RBD FIO 4KB Random Read Performance: FileStore vs. BlueStore
4KB random read performance is similar between FileStore and BlueStore. There's a 5% increase in IOPS at a queue depth of 32.
Tail latency is also similar at lower queue depths; at a queue depth of 32, BlueStore performs better.
Would You Like to Know More?
RHCS 3.0 + the Micron 9200 MAX NVMe SSD on the Intel Purley platform is super fast. The latest reference architecture for Micron Accelerated Ceph Storage Solutions is available now. My next blog post will compare FileStore and BlueStore object performance. I presented details about the reference architecture and other Ceph tuning and performance topics during my session at OpenStack Summit 2018. A recording of my session is available here.
Have additional questions about our testing or methodology? Email us at ssd@micron.com.