Applications and Markets

A single integrated fabric is the optimal solution not only for high-performance computing environments, but also for mainstream enterprise and data center server, storage, and virtualized environments. Mellanox InfiniBand and Ethernet solutions deliver the low latency, high performance, and network offloads that are critical to building an efficient networking fabric. This intelligent networking fabric includes hardware acceleration engines that improve application efficiency by offloading data flow tasks from the CPU to the network. As a result, the CPU becomes available to run more applications, meaning that with a Mellanox intelligent network fabric, fewer servers are needed to run a given workload. This improved server and storage efficiency, coupled with the economic benefits of consolidation, performance boosts, manageability, and network virtualization, has helped end customers build out their applications in the most cost-effective manner. Mellanox Ethernet and InfiniBand networking solutions are uniquely positioned to satisfy the demand for an intelligent networking fabric and deliver total infrastructure efficiency to the data center.

Learn how Mellanox's Ethernet and InfiniBand technology can take your solution to the next level of performance, power, and cost.

Data Center

Mellanox data center networking solutions based on Virtual Protocol Interconnect (VPI) technology enable seamless connectivity to 56/100Gb/s InfiniBand or 10/25/40/50/100Gb/s Ethernet connections, or a mix of both, depending on your networking requirements. Mellanox enables I/O infrastructure flexibility and future-proofing for data center computing environments. VPI technology enables all standard networking, clustering, storage, and management protocols to operate seamlessly over any converged network using the same software infrastructure. Mellanox solutions provide improved cost, power, latency, and CPU utilization for Ethernet-based solutions in blade, standard rack, and tower environments. By utilizing InfiniBand or high-performance Ethernet connections to consolidate I/O onto a single wire, cloud providers and IT managers can deliver significantly higher application service levels while achieving their business goals of increased productivity and reduced CAPEX and OPEX related to technology I/O spending.

Federal Government

Mellanox has supported the government's IT networking needs for more than 10 years and has established itself as a trusted leader in delivering high-performance connectivity solutions. Many federal agencies have high-performance networking requirements for complex projects that involve processing large amounts of data over distributed systems. Mellanox products are specified for secure, robust, high-speed storage networks and for the clustering of processors, parallel file processing, GPUs, and heterogeneous storage platforms.

Storage

As new, faster storage technologies have evolved, the storage bottleneck has moved from the storage media to the I/O interconnect. With new Flash SSD drives, the networking fabric's throughput, latency, and storage acceleration have become the limiting factors in increasing the performance of many data centers. Critical to removing this bottleneck is a new way of looking at storage interconnects. Raw bandwidth counts, but it is the ability to minimize CPU processing and streamline the path data takes through the interconnect to the storage that most drastically improves the performance of a data center.

Mellanox Ethernet interconnects can be leveraged to build an Ethernet Storage Fabric (ESF) to increase speed and flexibility, and to introduce the cost efficiencies of Ethernet into a storage environment. An ESF can provide the foundation for the fastest and most efficient way of networking storage while eliminating storage bottlenecks and lowering costs and complexity compared to traditional storage networks. This translates to real-world customer advantages, including optimized server utilization, increased application performance, reduced backup times, increased data center simplicity and consolidation, lower power consumption, and lower total cost of ownership (TCO).

High-Performance Computing

With the proven scalability and efficiency of Mellanox InfiniBand and Ethernet interconnects, small and large clusters easily scale up to thousands of nodes. By providing low latency, high bandwidth, a high message rate, transport offload for extremely low CPU overhead, Remote Direct Memory Access (RDMA), and advanced communication offloads, Mellanox interconnect solutions are the most widely deployed high-speed interconnect for large-scale simulations, replacing proprietary or low-performance solutions. Mellanox's Scalable HPC interconnect solutions are paving the road to exascale computing by delivering the highest scalability, efficiency, and performance for HPC systems today and in the future.


High-performance compute clusters require a high-performance interconnect technology that provides high bandwidth, low latency, and low CPU overhead, resulting in high CPU utilization for the application’s compute operations.