AI WORKLOADS AT SCALE: Kubernetes Cluster with Supermicro Systems with AMD EPYC™ 7002 Series Processors White Paper

2022-02-23
●Executive Summary
■The Deep Learning (DL) benchmark results in the previous white paper clearly show that a DL workload in Docker containers performs the same as on bare metal. Building an on-prem Kubernetes cluster with GPU workers and AI-framework-specific Docker containers can help an organization run projects or production workloads on a highly reliable and scalable platform. In this white paper, Supermicro AMD-based WIO systems, AS-1114S-WTRT, are introduced as the Kubernetes admin and master nodes. Along with AS-2023US-TR4 systems, we build an NVIDIA GPU-capable Kubernetes cluster that uses cloud-native Ceph storage for persistent volumes and demonstrate how a DL workload can scale on the Kubernetes cluster. A minimal sketch of how such a DL workload is submitted to the cluster is shown below.
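As an illustration of the pattern described above (a GPU worker running a containerized DL job with a Ceph-backed persistent volume), the following is a minimal sketch using the Kubernetes Python client. It assumes the NVIDIA device plugin is installed so that "nvidia.com/gpu" is a schedulable resource, and that a PersistentVolumeClaim named "ceph-pvc" already exists and is bound to the cluster's Ceph storage class; the pod name, namespace, container image, and training script path are hypothetical placeholders, not values from the white paper.

```python
# Sketch: submit a single-GPU DL training pod that mounts a Ceph-backed PVC.
# Assumptions: NVIDIA device plugin installed; PVC "ceph-pvc" already exists.
from kubernetes import client, config

config.load_kube_config()  # load credentials for the target cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="dl-train", namespace="default"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/tensorflow:21.07-tf2-py3",  # example DL framework image
                command=["python", "/workspace/train.py"],        # hypothetical training script
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # schedule onto a GPU worker node
                ),
                volume_mounts=[
                    client.V1VolumeMount(name="data", mount_path="/workspace/data")
                ],
            )
        ],
        volumes=[
            client.V1Volume(
                name="data",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="ceph-pvc"  # Ceph-backed persistent volume claim
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Scaling the workload then amounts to submitting more such pods (or wrapping them in a Job or higher-level operator), letting the Kubernetes scheduler place them across the available GPU worker nodes.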

●Document Details
■Part#: AS-1114S-WTRT (WIO server)
■Document Type: White Paper
■Available Languages: English, Chinese, Chinese and English, Japanese
■Date: March 2021
■File Size: 741 KB
