IBM has introduced the new IBM Storage Scale System 6000, a cloud-scale global data platform designed to meet today's data-intensive and AI workload demands, and the latest offering in the IBM Storage for Data and AI portfolio.
The new IBM Storage Scale System 6000 seeks to build on IBM's leadership position with an enhanced high-performance parallel file system designed for data-intensive use cases. It provides up to 7 million IOPS and up to 256 GB/s of throughput for read-only workloads per system in a 4U (four rack units) footprint.
To capture the economic value of both foundation and traditional AI models, businesses must focus on their data: current capacity and growth forecasts, where the data resides, how it is secured and accessed, and how to optimize future data storage investments.
"The potential of today's new era of AI can only be fully realized, in my opinion, if organizations have a strategy to unify data from multiple sources in near real-time without creating numerous copies of data and going through constant iterations of data ingest," said Denis Kennelly, general manager, IBM Storage. "IBM Storage Scale System 6000 gives clients the ability to do just that, bringing together data from core, edge, and cloud into a single platform with optimized performance for GPU workloads."
The IBM Storage Scale System 6000 is optimized for storing the semi-structured and unstructured data, including video, imagery, text, and instrumentation data, that organizations generate daily, and it accelerates data access across hybrid environments. With the IBM Storage Scale System, clients can:
Expect greater data efficiencies and economies of scale with the addition of IBM FlashCore Modules (FCM), to be incorporated in 1H 2024:
- New maximum-capacity NVMe FCMs will provide 70% lower cost and 53% less energy per TB than IBM's previous maximum-capacity flash drives for the IBM Storage Scale System. This can help clients realize the full performance of NVMe with the cost advantages of quad-level cell (QLC) flash.
- Powerful inline hardware-accelerated data compression and encryption to help keep client data secure even in multi-user, multi-tenant environments.
- Storage Scale System 6000 with FCMs will support 2.5x the amount of data in the same floor space as the previous-generation system.
Accelerate the adoption and operationalization of AI workloads with IBM watsonx:
- Engineered with a new NVMe-oF turbo tier, new parallel multi-tenant data isolation, and IBM-patented computational storage drives, the system is designed to provide greater performance, security, and efficiency for AI workloads.
- Storage Scale software, the global data platform for unstructured data that powers the Scale System 6000, connects data with an open ecosystem of multi-vendor storage options including AWS, Azure, IBM Cloud, and other public clouds, as well as IBM Storage Tape.
Gain faster access to data with over 2.5x the GB/s throughput and 2x the IOPS performance of market-leading competitors:
- High throughput and fast data access for multiple concurrent AI and data-intensive workloads, supporting a range of use cases.
Accelerating AI with the IBM Storage Scale System and NVIDIA Technology
The Storage Scale System 6000 can create an information supply chain from an NVIDIA AI solution to other AI workloads, wherever they are located. IBM's new NVMe-oF turbo tier has been engineered for small files, such as those collected from remote devices, and for smaller transactions, such as data lake or lakehouse analytics, so they can be integrated into an NVIDIA solution.
The Storage Scale System 6000 supports NVIDIA Magnum IO™ GPUDirect® Storage (GDS), which provides a direct path between GPU memory and storage, and it is designed to deliver higher data-movement I/O performance when GDS is enabled. Using NVIDIA ConnectX-7™ NICs, the Scale System 6000 supports up to 16 ports of 100 Gb RDMA over Converged Ethernet (RoCE), 200 Gb/s or 400 Gb/s InfiniBand, or a combination of the two to increase performance between nodes or directly to NVIDIA GPUs.
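For readers curious what the GDS data path looks like from an application, below is a minimal sketch using NVIDIA's cuFile API, the programmatic interface to GPUDirect Storage. It reads a file directly into GPU memory with no host bounce buffer. The file path /gpfs/fs1/dataset.bin is a hypothetical Storage Scale mount point, and error handling is abbreviated for brevity.

```c
/* Minimal GPUDirect Storage read sketch. Build: nvcc -o gds_read gds_read.c -lcufile */
#define _GNU_SOURCE /* for O_DIRECT */
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const size_t size = 1 << 20; /* read 1 MiB */

    /* Open the GDS driver; fails if GDS is not available on this system. */
    CUfileError_t status = cuFileDriverOpen();
    if (status.err != CU_FILE_SUCCESS) {
        fprintf(stderr, "cuFileDriverOpen failed: %d\n", status.err);
        return 1;
    }

    /* O_DIRECT bypasses the page cache, as GDS requires. Path is hypothetical. */
    int fd = open("/gpfs/fs1/dataset.bin", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* Register the POSIX file descriptor with cuFile. */
    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    status = cuFileHandleRegister(&handle, &descr);
    if (status.err != CU_FILE_SUCCESS) {
        fprintf(stderr, "cuFileHandleRegister failed\n");
        return 1;
    }

    /* Allocate GPU memory and register it for DMA. */
    void *devPtr = NULL;
    cudaMalloc(&devPtr, size);
    cuFileBufRegister(devPtr, size, 0);

    /* DMA the file contents straight from storage into GPU memory. */
    ssize_t nread = cuFileRead(handle, devPtr, size, /*file_offset=*/0, /*dev_offset=*/0);
    printf("read %zd bytes directly into GPU memory\n", nread);

    /* Teardown. */
    cuFileBufDeregister(devPtr);
    cudaFree(devPtr);
    cuFileHandleDeregister(handle);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```

The same code runs over RoCE or InfiniBand transports; the fabric choice is handled below the cuFile layer, which is what lets a parallel file system such as Storage Scale feed GPUs without application changes.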