Mellanox HPC Storage Solutions

As new, faster storage technologies have evolved, the storage bottleneck has moved from the storage media to the I/O interconnect. While interconnect technologies have sped up considerably, they commonly remain the limiting factor in increasing the performance of many data centers. Critical to removing this bottleneck is a new way of looking at storage interconnects. Speed counts, but the path data takes through the interconnect can drastically change the performance of a data center.

Mellanox Virtual Protocol Interconnect (VPI) and Storage Acceleration solutions eliminate the storage bottleneck and provide unprecedented storage infrastructure performance with lower costs and complexity compared to traditional storage networks. This translates to real-world customer advantages including optimized server utilization, increased application performance, reduced backup times, increased data center simplicity and consolidation, lower power consumption and lower total cost of ownership (TCO).

Increased Storage Performance: More Throughput, Less Latency

A typical data read/write operation can pass data through the CPU several times and copy it into multiple memory buffers before the data reaches its final destination. Each stop through a CPU or memory buffer adds latency to the operation, holding up other operations and burning CPU cycles that could be used to run applications and process other data and traffic. The end result is slower data transfer rates and reduced data center efficiency.
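The cost of those intermediate buffer hops can be illustrated with a small sketch. This is not a model of any driver or NIC; the buffer counts and payload size are arbitrary, and only the trend matters: every extra copy burns CPU time.

```python
# Illustrative sketch: each intermediate buffer copy in a traditional I/O path
# costs CPU time that a zero-copy path avoids. Numbers are synthetic.
import time

def transfer(data: bytes, copies: int) -> float:
    """Simulate a transfer that passes through `copies` buffers; return seconds spent."""
    start = time.perf_counter()
    buf = data
    for _ in range(copies):
        buf = bytes(buf)          # each hop copies the whole payload again
    return time.perf_counter() - start

payload = bytes(64 * 1024 * 1024)            # 64 MiB of data
t_multi = transfer(payload, copies=4)        # traditional path: several buffer hops
t_single = transfer(payload, copies=1)       # zero-copy-style path: one placement
print(f"4-copy path: {t_multi:.4f}s, 1-copy path: {t_single:.4f}s")
```

On most machines the four-copy path takes several times longer, which is exactly the CPU time RDMA hands back to applications.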

The Remote Direct Memory Access (RDMA) protocol utilized in InfiniBand and RDMA over Converged Ethernet (RoCE) networks eliminates this networking bottleneck, bypassing the CPU of the target system and implementing zero-copy data transfers. Storage protocols such as iSER, SRP and NFSoRDMA move data directly between the memories of servers and storage devices. As a result, data transfer latencies can be reduced by over 90% and CPU efficiencies can be elevated up to 96%, leading to lower data center management, maintenance and operational costs than with traditional Fibre Channel solutions.

Mellanox VPI adapters, switches and gateways feature data rates of up to 56Gb/s, far greater than 1Gb/s Ethernet and 8Gb/s Fibre Channel.
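To put those line rates in perspective, a back-of-the-envelope calculation shows how long it takes to move one terabyte of backup data at each quoted rate. This is raw line rate only; protocol overhead and media speed are ignored.

```python
# Wall-clock time to move 1 TB at the raw link rates quoted in the text.
def transfer_seconds(terabytes: float, gbit_per_s: float) -> float:
    bits = terabytes * 8e12                  # 1 TB = 8e12 bits (decimal units)
    return bits / (gbit_per_s * 1e9)

for name, rate in [("1GbE", 1), ("8Gb/s FC", 8), ("56Gb/s VPI", 56)]:
    print(f"{name:>10}: {transfer_seconds(1, rate) / 60:.1f} min per TB")
# 1GbE  -> ~133.3 min, 8Gb/s FC -> ~16.7 min, 56Gb/s VPI -> ~2.4 min
```

The 56Gb/s link moves the same terabyte in roughly 1/56th the time of Gigabit Ethernet, which is where the shorter backup windows come from.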

Network Protocol                      Latency (µs)
InfiniBand (RDMA)                      0.7
RDMA over Converged Ethernet (RoCE)    1.3
Ethernet TCP                           6
Fibre Channel                         20
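The "over 90%" latency-reduction claim follows directly from the table. A quick calculation using the tabulated values:

```python
# Latency reductions implied by the table above (values in microseconds).
latency_us = {
    "InfiniBand (RDMA)": 0.7,
    "RoCE": 1.3,
    "Ethernet TCP": 6.0,
    "Fibre Channel": 20.0,
}

def reduction(fast: str, slow: str) -> float:
    """Percent latency reduction of `fast` relative to `slow`."""
    return 100 * (1 - latency_us[fast] / latency_us[slow])

print(f"IB vs Fibre Channel: {reduction('InfiniBand (RDMA)', 'Fibre Channel'):.1f}%")  # ~96.5%
print(f"IB vs Ethernet TCP:  {reduction('InfiniBand (RDMA)', 'Ethernet TCP'):.1f}%")   # ~88.3%
```

Against Fibre Channel, the InfiniBand figure works out to a 96.5% reduction, consistent with the "over 90%" claim in the text.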

Interconnect Simplification and Consolidation

Mellanox interconnect adapters, switches and gateways feature the flexibility of Virtual Protocol Interconnect (VPI) technology. VPI enables any standard networking, clustering, storage or management protocol to seamlessly operate over high-speed InfiniBand or Ethernet ports on the same piece of equipment. VPI simplifies network system design, making it easier for IT managers to dynamically deploy storage and compute infrastructure to meet their evolving data center needs.

The Quality-of-Service and channel-based I/O features of Mellanox VPI make it an ideal technology for data center network consolidation. Each type of network traffic has its own service property requirements for bandwidth, latency and reliability. VPI can provide the right priority and level of services to each traffic type through configuration of several virtual fabric channels on a single unified wire. Because there is no hypervisor overhead, network traffic travels at nearly the speed of the physical hardware, offering 96% IOPS efficiency.
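The per-traffic-type partitioning described above can be sketched as a simple weighted-share model. The channel names and weights below are purely illustrative assumptions, not a Mellanox configuration API; real VPI QoS is configured through the switch and adapter management tools.

```python
# Simplified sketch of virtual-channel bandwidth partitioning on one wire.
# Channel names and weights are hypothetical, chosen only for illustration.
LINK_GBPS = 56.0                  # single unified wire at FDR line rate

channels = {                      # traffic type -> relative priority weight
    "storage": 5,
    "clustering": 3,
    "management": 1,
}

total_weight = sum(channels.values())
for name, weight in channels.items():
    share = LINK_GBPS * weight / total_weight
    print(f"{name:>11}: {share:.1f} Gb/s guaranteed share")
```

The point of the model: each traffic class gets a guaranteed minimum slice of the one physical link, so storage, clustering and management traffic can be consolidated without starving one another.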

Non-Blocking Switching

To ensure consistent, reliable bandwidth, each of Mellanox’s VPI edge switches provides non-blocking bandwidth, eliminating traffic bottlenecks and maximizing throughput. Bandwidth into the switch is equal to bandwidth out of the switch, providing efficient scalability and up to 4Tb/s of aggregate throughput.
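The 4Tb/s figure can be sanity-checked with simple arithmetic. The 36-port count below is an assumption (a typical 1U edge-switch configuration); the 56Gb/s per-port rate comes from the text.

```python
# Sanity check of the "up to 4 Tb/s" aggregate throughput figure.
# Port count (36) is an assumed edge-switch configuration, not from the text.
ports = 36
port_rate_gbps = 56
aggregate_tbps = ports * port_rate_gbps * 2 / 1000   # x2 for full-duplex traffic
print(f"{aggregate_tbps:.3f} Tb/s aggregate")        # -> 4.032 Tb/s
```

36 ports at 56Gb/s in each direction gives 4.032Tb/s, matching the quoted "up to 4Tb/s" once rounded.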