Mellanox Enables The New Storage Network

Data storage is undergoing multiple simultaneous revolutions. Faster storage media is overwhelming traditional 8 and 16Gb Fibre Channel and 10GbE networks, while new cloud, mobile, social media, analytics, and high-performance computing (HPC) applications require new storage solutions. These solutions are increasingly adopting consolidated Ethernet and InfiniBand networks instead of Fibre Channel SANs. Mellanox offers the ideal storage interconnect portfolio at 10, 25, 40, 50, and 100Gb/s speeds, delivering the best efficiency, highest performance, and lowest cost.

Faster Storage Requires Faster Networks

Flash storage has created a revolution in IT and is disrupting traditional designs and vendors. Local flash makes servers and distributed storage solutions faster, requiring high-speed connections between servers. All-flash arrays require fast front-end connections to the servers to deliver the full performance of flash storage, and many benefit from RDMA (Remote Direct Memory Access) to minimize latency.

The New Storage is Software-Defined and Scale-Out

Scale-out architectures—often software-defined—deliver more cost effective storage solutions and are displacing traditional Fibre Channel SANs. Scale-out storage leverages converged Ethernet or InfiniBand networks to deliver improved simplicity, cost, and efficiency; higher throughput; and lower latency. Mellanox networking provides the highest performance, efficiency, and value for these increasingly popular storage solutions.

Mellanox Storage Solutions

Big Data Storage

Big Data Storage applications such as Hadoop, NoSQL databases, and analytics engines use distributed storage across many nodes and require a fast, reliable network for data distribution, analysis, and sharing. Modern servers require 25GbE performance, or 40/50GbE networking if using all-flash storage. The large and growing number of nodes in a Big Data cluster mandates the use of fast, reliable, and non-blocking Ethernet switches.

Ceph

Ceph is the most popular storage for OpenStack deployments because it is scale-out, software-defined, open source, and supports block, object, and file storage. It uses a cluster network for data replication and reconstruction, and testing shows that Ceph servers with more than 20 hard drives or more than 3 SSDs need 25, 40, 50, or 100Gb/s networks to deliver full performance. Mellanox works closely with Red Hat and the Ceph community to offer the highest-performing and most reliable Ceph networking at any speed, whether using spinning disk or flash (or both), and using TCP or RDMA communications.
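The drive counts above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is illustrative only: the per-device throughput figures (~150 MB/s per hard drive, ~500 MB/s per SATA SSD) are typical assumptions, not numbers from this page.

```python
# Rough estimate of the network bandwidth a single Ceph OSD node can
# demand, assuming typical per-device throughputs (assumptions):
HDD_MBPS = 150   # sustained MB/s per spinning disk (assumption)
SSD_MBPS = 500   # sustained MB/s per SATA SSD (assumption)

def node_demand_gbps(hdds=0, ssds=0):
    """Aggregate drive throughput for one node, converted to Gb/s."""
    total_mb_s = hdds * HDD_MBPS + ssds * SSD_MBPS
    return total_mb_s * 8 / 1000  # MB/s -> Gb/s

# A 20-HDD node already exceeds a 10GbE link:
print(node_demand_gbps(hdds=20))  # 24.0 Gb/s
# Three SATA SSDs alone also outrun 10GbE:
print(node_demand_gbps(ssds=3))   # 12.0 Gb/s
```

Under these assumptions, both configurations exceed 10GbE, which is why 25Gb/s or faster links are recommended for such nodes.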

Cloud Storage

Cloud Storage, both public and private, requires scalability, automated management, and efficiency: the Efficient Virtual Network (EVN). Mellanox leads the market with the best networking integration for VMware, Windows, and OpenStack environments, as well as the best price/performance. Mellanox solutions for the EVN include NVGRE/VXLAN/Geneve overlay offloads, leading support for 25, 40, 50, 56, and 100Gb/s speeds on Ethernet and InfiniBand, and RDMA for lower latencies and more efficient CPU utilization. Mellanox supports accelerated storage access using iSER for VMware and OpenStack and SMB Direct for Windows Storage Spaces Direct. Learn more about our cloud solutions and VMware/Windows storage.

Flash Storage

Flash Storage supports higher performance than hard drives and requires a faster network. Individual SSDs can support up to 1GB/s (8Gb/s) of throughput, and the next generation of SSDs will support 2-4GB/s (16-32Gb/s) with latencies as low as 15us for NAND flash and 1us for next-generation non-volatile memory (NVM) technologies. Servers and storage with multiple SSDs benefit from faster networks running at 25, 40, 50, and 100Gb/s. Mellanox networking unlocks the full performance and value of your flash storage and fully supports the emerging NVMe over Fabrics (NVMf) protocol, which requires a low-latency fabric with RDMA.
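The GB/s and Gb/s figures quoted above are related by a factor of eight, which makes it easy to work out how quickly a few SSDs fill a network link. A short sketch, using only the conversion and the next-generation 4GB/s figure from the paragraph:

```python
# GB/s -> Gb/s conversion (the article's 1 GB/s = 8 Gb/s), plus the
# minimum number of SSDs needed to saturate a given link speed.

def gbytes_to_gbits(gb_per_s):
    return gb_per_s * 8

def ssds_to_saturate(link_gbps, ssd_gb_per_s):
    """Smallest whole number of SSDs whose combined throughput fills the link."""
    per_ssd_gbps = gbytes_to_gbits(ssd_gb_per_s)
    return -(-link_gbps // per_ssd_gbps)  # ceiling division

print(gbytes_to_gbits(4))        # 32 Gb/s: one next-gen SSD outruns a 25GbE link
print(ssds_to_saturate(100, 4))  # 4 such SSDs saturate a 100Gb/s link
```

A single next-generation SSD at 4GB/s (32Gb/s) already exceeds 25GbE, and only four of them saturate a 100Gb/s link, which is why multi-SSD servers push toward the fastest available networks.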

Hyper-converged Infrastructure (HCI)

Hyper-converged Infrastructure (HCI) combines compute and storage into the same layer to allow rapid deployment and simplify infrastructure management by eliminating a separate storage layer. HCI clusters several servers together to share storage and virtual machines, requiring 25GbE or faster networking for cluster management traffic and storage replication, especially when deployed with flash SSDs. Mellanox adapters, switches, and cables enable a reliable, fast, and cost-effective hyper-converged deployment at any speed, and work with top HCI solutions including EMC ScaleIO, Maxta, Microsoft Windows Storage Spaces Direct, Nutanix, and VMware VSAN.

High Performance Computing (HPC) Storage

High Performance Computing (HPC) innovation, research, and product development are driving larger and faster HPC clusters. The use of more CPU cores and GPUs requires faster storage links, whether for standalone storage arrays or clustered file systems. Mellanox offers the fastest and most efficient interconnect solutions for both InfiniBand and Ethernet at speeds up to 100Gb/s. Mellanox adapters and switches also fully support RDMA for both HPC compute and HPC storage using SRP, iSER, and clustered file systems such as BeeGFS, Gluster, Lustre, and IBM Spectrum Scale (GPFS). Learn more about our HPC storage and more about our HPC solutions.

iSCSI and iSER

iSCSI is increasing in popularity for block-based storage on all-flash arrays, hybrid storage, cloud storage, and software-defined storage. Large enterprises and cloud service providers enjoy the maturity and management of iSCSI with the high performance and low cost of Ethernet connections running at 10, 25, 40, or 50Gb/s speeds. In addition, iSER (iSCSI Extensions for RDMA) delivers higher IOPS, lower latency, and more efficient CPU utilization than standard iSCSI by leveraging RDMA (Remote Direct Memory Access) while still supporting all applications that can use standard iSCSI. Mellanox networking accelerates both iSCSI and iSER performance in all supported environments, and Mellanox is a leader in open-source iSCSI and iSER innovation and contributions.
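On Linux, iSER is typically enabled per portal on an existing LIO iSCSI target. The session below is a hedged sketch, not taken from this page: the IQN and IP address are hypothetical, and it assumes a system with targetcli and an RDMA-capable NIC already configured.

```shell
# Sketch: enable iSER on a LIO iSCSI target portal with targetcli.
# The target name and portal address are illustrative placeholders.
targetcli /iscsi create iqn.2017-01.com.example:target1
targetcli /iscsi/iqn.2017-01.com.example:target1/tpg1/portals/192.168.1.10:3260 enable_iser true
```

Initiators that support iSER can then log in over the same IQN, gaining RDMA transport while keeping standard iSCSI management and tooling.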

Media and Entertainment Storage

The broadcast and media market is moving to 4K and even 8K video, overwhelming older 8, 10, and 16Gb Fibre Channel and Ethernet networks. Mellanox Ethernet and InfiniBand networks enable faster access to uncompressed 4K/8K video for real-time ingest, editing, rendering, and playback at speeds of 25, 40, 50, and 100Gb/s, and support RDMA and clustered file systems (such as IBM Spectrum Scale/GPFS).

NAS and Object Storage

NAS and Object Storage are growing rapidly because of the rapid deployment of video, social, and mobile content and applications, which create huge amounts of file and object data that need to be shared across applications, servers, or users. Medical imaging, geospatial mapping, and other technical computing applications also create huge repositories of file or object data. Mellanox supports and accelerates file and object storage using fast connections (25, 40, 50, or 100Gb/s) that work for NFS, CIFS/SMB, and object protocols such as S3 and Swift. Mellanox supports RDMA as well in protocols such as SMB Direct and NFS over RDMA, and erasure coding offloads for object storage.

Software-Defined Storage (SDS)

Software-Defined Storage (SDS) combines commodity servers with innovative software to create powerful, scalable, and flexible storage solutions. Cloud service providers, large enterprises, system integrators, and server vendors are also choosing SDS to enable greater flexibility and faster innovation with costs up to 50% lower than traditional enterprise storage. The typical SDS solution is scale-out and runs on an Ethernet or InfiniBand network. Mellanox fully supports popular SDS options such as Ceph, EMC ScaleIO, Gluster, Hadoop, IBM Spectrum Scale (GPFS), Lustre, Nexenta EDGE, Swift, and VMware VSAN.