Solutions
• Artificial intelligence and machine learning applications take advantage of
persistent memory to eliminate bottlenecks and accelerate performance
• Persistent Memory adds a fast access tier for storage applications.
NVDIMMs are at the same tier as DRAM
• NVDIMMs are used for write cache acceleration in All Flash Arrays. Many
AI and ML applications use All Flash Arrays with NVDIMMs
• NVDIMMs provide instant, byte-level access to the data sets used to
develop and train machine learning models
• NVDIMMs provide very low latency tiering, caching, write buffering and
metadata storage capabilities for AI application acceleration
• NVDIMMs are also used by cloud data centers to reduce server OS
crash-recovery time
There’s been an explosion of data creation for use by Artificial
Intelligence (AI) and Machine Learning (ML) applications. Unfortunately,
traditional systems are not designed to address the challenge of
accessing both large and small data sets. The key hurdles are reducing
the overall time to discovery and insight, data-intensive ETL (Extract,
Transform, Load) workloads, and checkpoint workloads. NVDIMMs, or
Persistent Memory, are an ideal solution to dramatically accelerate
system performance for AI and ML applications.
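To illustrate why persistent memory helps checkpoint workloads: training state can be written through a byte-addressable memory mapping instead of serialized through the block storage stack. The sketch below is illustrative only and is not from this document; it uses Python's standard mmap module, and an ordinary temporary file stands in for what, on a real system, would be a file on a DAX-mounted NVDIMM filesystem (the path and helper names are hypothetical).

```python
# Minimal sketch of checkpointing model weights through a memory-mapped
# file. Assumption: on NVDIMM hardware the file would live on a
# DAX-mounted persistent-memory filesystem; here a temp file is used so
# the example runs anywhere.
import mmap
import os
import struct
import tempfile

CKPT_SIZE = 4096  # fixed-size checkpoint region

def open_checkpoint(path, size=CKPT_SIZE):
    """Create (or reopen) a checkpoint file and map it into memory."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, size)
    buf = mmap.mmap(fd, size)
    return fd, buf

def save_weights(buf, weights):
    """Store a count followed by float64 values, then flush to media."""
    buf.seek(0)
    buf.write(struct.pack("<I", len(weights)))
    buf.write(struct.pack(f"<{len(weights)}d", *weights))
    buf.flush()  # stand-in for a persist barrier (e.g. CLWB + fence)

def load_weights(buf):
    """Read the weights back from the mapped region."""
    buf.seek(0)
    (n,) = struct.unpack("<I", buf.read(4))
    return list(struct.unpack(f"<{n}d", buf.read(8 * n)))

path = os.path.join(tempfile.mkdtemp(), "ckpt.bin")
fd, buf = open_checkpoint(path)
save_weights(buf, [0.1, 0.2, 0.3])
restored = load_weights(buf)
```

Because the mapping is byte-addressable, a checkpoint of this form avoids the serialize-to-disk round trip that dominates checkpoint time on traditional storage; production code targeting NVDIMMs would typically use a persistent-memory library rather than raw mmap.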
SMART NVDIMMs for AI and ML
NVDIMM Features
• DIMM form factor; 8GB, 16GB, and 32GB densities;
DDR4-2666, 2933, and 3200 speeds
• Throughput of 25.6GB/s
• Latency ~20ns (DRAM)
• AES 256-bit Encryption
• Autonomous Self Refresh
• Digitally Signed Firmware