Introduction
Enterprise storage infrastructure and related technologies continue to evolve year
after year. In particular, as IoT, 5G, AI, and ML technologies are gaining attention,
the demand for Software-Defined Storage (SDS) solutions based on clustered storage servers is also increasing. Red Hat® Ceph
Storage (Ceph) is a leading SDS solution that enables high-performing workloads to run efficiently. The high throughput and low
latency of modern storage devices are important factors that improve the overall performance of a Ceph cluster.
Adopting a Ceph cluster built on NVMe solid-state drives (SSDs) maximizes overall application performance. Supermicro
designed Ceph clusters based on the Supermicro AS-2124US-TNRP storage server with 3rd Gen AMD EPYC CPUs and all-flash
NVMe SSDs, then conducted various tests to deliver optimized Ceph configurations to Ceph users.
Red Hat Ceph Storage Description
Red Hat Ceph Storage is a production-ready implementation of Ceph. This open-source storage platform manages data on a
distributed computer cluster and provides interfaces for object-, block-, and file-level storage. Proven at web-scale, Red Hat
Ceph Storage offers the data protection, reliability, and availability required by demanding object storage workloads. This
solution is designed for modern workloads, such as AI, cloud infrastructures, and data analytics. Industry-standard
application programming interfaces (APIs) allow migration of and integration with your applications. Unlike traditional
storage, Red Hat Ceph Storage is optimized for large installations, typically a petabyte (PB) or larger.
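As a brief illustration of those interfaces, the sketch below uses the Python bindings that ship with Ceph (the rados and rbd modules) to connect to a cluster and create a block image. The pool name rbdpool, the image name, and the image size are illustrative assumptions, not values taken from this reference architecture.

import rados
import rbd

# Minimal sketch: create an RBD (block) image through Ceph's Python bindings.
# Assumes a reachable cluster, a client.admin keyring, and an existing pool
# named "rbdpool"; all of these are placeholders for illustration.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbdpool')                 # I/O context for the pool
    try:
        rbd.RBD().create(ioctx, 'demo-image', 10 * 1024**3)   # 10 GiB image
        print('images in pool:', rbd.RBD().list(ioctx))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()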
TABLE OF CONTENTS
Introduction
Red Hat Ceph Storage Description
Supermicro Setup
Supermicro Hardware and Red Hat Software Specifics
Baseline Test Results
Benchmark Configurations and Results
Conclusion
Image 1 - A+ Server 2124US-TNRP
Supermicro Setup
Supermicro has run several performance tests with the following setup. Figure 1 shows the Supermicro architecture with
three monitor nodes, four Object Storage Daemon (OSD) nodes, and 10 RADOS Block Device (RBD) loadgen client nodes.
[Figure 1 shows one ADMIN node (AS-1114S-WN10RT), three MON nodes (AS-1114S-WN10RT), four OSD nodes (AS-2124US-TNRP), and ten client nodes CL1-10 (AS-2014TP-HTR, nodes A-D), connected through a 1G IPMI network, a 10G management network (Intel X710 / BCM57416 SIOM ports), a 100G public network, and a 100G replication network (MCX516A-CDAT ports).]
Figure 1 - Supermicro Configuration Test Setup
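The RBD loadgen clients in this setup drive I/O against the cluster with fio. As a hedged sketch of what such a client-side job can look like, the Python snippet below assembles an fio command that uses fio's rbd ioengine; the pool name, image name, CephX user, and job parameters are assumptions for illustration, not the exact job definitions used in these tests.

import subprocess

def run_rbd_fio(pool, image, rw='randwrite', bs='4k', iodepth=32, runtime=300):
    # Build a single fio job that reaches the cluster through librbd.
    cmd = [
        'fio',
        '--name=rbd-bench',
        '--ioengine=rbd',            # fio's librbd engine
        '--clientname=admin',        # CephX user (client.admin assumed)
        f'--pool={pool}',
        f'--rbdname={image}',
        f'--rw={rw}',
        f'--bs={bs}',
        f'--iodepth={iodepth}',
        '--direct=1',
        f'--runtime={runtime}',
        '--time_based',
        '--output-format=json',      # easier to aggregate across 10 clients
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == '__main__':
    print(run_rbd_fio('rbdpool', 'demo-image'))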
Supermicro Hardware and Red Hat Software Specifics
The Red Hat Ceph Storage cluster is deployed on the Supermicro A+ servers containing 3rd Gen AMD EPYC processors. The
software versions used were Red Hat Ceph Storage 4.2, Red Hat Enterprise Linux® 8.2, and Flexible I/O Tester (fio) 3.25.
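One simple way to confirm that each node runs this intended stack is to query the versions in place. The sketch below does so locally with standard commands (ceph --version, fio --version) and /etc/redhat-release; it is an assumed helper, not tooling from this reference architecture.

import subprocess
from pathlib import Path

def version(cmd):
    # Return the output of a version command, or a note if it is unavailable.
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()
    except (OSError, subprocess.CalledProcessError) as err:
        return f'unavailable ({err})'

print('ceph:', version(['ceph', '--version']))
print('fio :', version(['fio', '--version']))
release = Path('/etc/redhat-release')
print('os  :', release.read_text().strip() if release.exists() else 'unavailable')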
OSD
System/Node: AS-2124US-TNRP
CPU: 2x AMD EPYC 7713 64-Core Processors
Memory: 16x 32GB DDR4-3200 2Rx8 (16Gb) LP ECC RDIMM (512GB)
HDD/SSD (OS): 2x KIOXIA CM6 3.84TB NVMe PCIe 4.0 x4 2.5" 15mm SIE 1DWPD
HDD (Data): 22x KIOXIA CM6 3.84TB NVMe PCIe 4.0 x4 2.5" 15mm SIE 1DWPD
AOC: 1x Dual-port 200Gb AOC-653106A-HDAT
Table 1 - Specifics of the OSD Nodes
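Before deploying OSDs on nodes populated like this, it is common to verify that every NVMe device is visible to the operating system. The snippet below is one assumed way to do that check with lsblk; it is not part of the deployment procedure used in this paper.

import json
import subprocess

# Sketch: list NVMe block devices on an OSD node and compare the count with
# the 24 drives (2 for OS, 22 for data) expected from Table 1.
out = subprocess.run(['lsblk', '--json', '-d', '-o', 'NAME,SIZE,MODEL'],
                     capture_output=True, text=True, check=True).stdout
devices = json.loads(out)['blockdevices']
nvme = [d for d in devices if d['name'].startswith('nvme')]

for dev in nvme:
    print(dev['name'], dev['size'], dev.get('model'))
print(f'NVMe devices found: {len(nvme)} (expected 24 per OSD node)')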
ADMIN/MON
System/Node: AS-1114S-WN10RT
CPU: 1x AMD EPYC 7713 64-Core Processor
Memory: 8x 32GB DDR4-3200 2Rx8 (16Gb) LP ECC RDIMM (256GB)
HDD/SSD (OS): KIOXIA CM6 3.84TB NVMe PCIe 4.0 x4 2.5" 15mm SIE 1DWPD
HDD (Data): -
AOC: 1x Dual-port 100Gb AOC-MCX516A-CDAT
Table 2 - Specifics of the ADMIN Nodes