Mellanox completed the acquisition of EZchip, a leader in high-performance processing solutions for carrier and data center networks. The Company unveiled ConnectX®-4 Lx adapters, the world’s first 25 and 50Gb/s Ethernet single- and multi-host adapters. The BlueField™ family of programmable processors for networking, security, and storage applications was introduced, addressing the industry need for higher levels of SoC (System-on-Chip) integration to simplify system design, lower total power, and reduce overall system cost. ConnectX-5 was also introduced as the most advanced 10, 25, 40, 50, and 100Gb/s InfiniBand and Ethernet intelligent adapter on the market. Mellanox interconnect solutions accelerated the world's fastest supercomputer, at the supercomputing center in Wuxi, China. InfiniBand continued to garner market share; InfiniBand solutions were chosen in nearly four times more end-user projects in 2016 than Omni-Path, and five times more than other proprietary offerings, as shown in the November 2016 release of the TOP500 list. Mellanox continues to be the leading high-performance Ethernet NIC provider, garnering nearly 90 percent market share of the 25Gb/s and greater adapter market.
Mellanox announced Multi-Host™, an innovative technology that provides high flexibility and major savings in building next generation, scalable Cloud, Web 2.0 and high-performance data centers. The Company introduced the industry’s first 100 Gigabit Ethernet, Open Ethernet-based, non-blocking switch, Spectrum, the next generation of its Open Ethernet-based switch IC. With Spectrum, Mellanox was the first to offer end-to-end 10/25/40/50 and 100 Gigabit Ethernet connectivity. InfiniBand continued to garner market share from Ethernet and proprietary interconnects, and surpassed a milestone, connecting the majority of the TOP500 supercomputing list with 51.4 percent of supercomputers. InfiniBand connected systems grew 15.8 percent year-over-year, from June 2014 to July 2015. The shipment of Spectrum, combined with Mellanox’s ConnectX®-4 NICs, and LinkX™ fiber and copper cables, also ensured that Mellanox was the first to deliver comprehensive end-to-end 10, 25, 40, 50 and 100 Gigabit Ethernet data center connectivity solutions.
Mellanox released the world’s first 40 Gigabit Ethernet NIC based on Open Compute Project (OCP) designs. ConnectX-3 Pro 40GbE OCP-based NICs are built to OCP specifications and optimize the performance of scalable and virtualized environments by providing virtualization and overlay network offloads. The company introduced CloudX, a reference architecture for building efficient cloud platforms. CloudX is based on the Mellanox OpenCloud architecture which leverages off-the-shelf components of servers, storage, interconnect and software to form flexible and cost-effective public, private and hybrid clouds. Mellanox introduced LinkX, a comprehensive product portfolio of cables and transceivers supporting interconnect speeds up to 100Gb/s for both Ethernet and InfiniBand data center networks. The company announced that its Switch-IB family of EDR 100Gb/s InfiniBand switches achieved world-record port-to-port latency of less than 90ns. In addition, Mellanox announced the ConnectX-4 single/dual-port 100Gb/s InfiniBand and Ethernet adapter, the final piece to the industry’s first complete end-to-end EDR 100Gb/s InfiniBand interconnect solution. The future is very bright for Mellanox, and it’s because of all of our hard work, execution and passion for the company. Here’s to another 15 years and more!
Mellanox introduced the “Generation of Open Ethernet”—the first Open Switch initiative. With this new approach, Mellanox took Software Defined Networking (SDN) to the next level, opening the source code on top of its existing Ethernet switch hardware.
During the year, Mellanox’s Ethernet market share reached 19 percent of the total 10GbE NIC, LOM, and controller market, propelling the company into the top three Ethernet NIC providers.
Mellanox acquired two companies: Kotura and IPtronics. These acquisitions enhanced Mellanox’s ability to deliver complete end-to-end optical interconnect solutions at 100Gb/s and beyond.
By the end of the year, Mellanox had grown to more than 1,400 employees worldwide.
Mellanox expanded the line of end-to-end FDR 56Gb/s InfiniBand interconnect solutions with new 18-port, 108-port, 216-port, and 324-port non-blocking fixed and modular switches. The Connect-IB adapter was announced in June, delivering the industry’s highest throughput of 100Gb/s on a single adapter card utilizing PCI Express 3.0 x16.
More key announcements followed in November, including the Unified Fabric Manager (UFM-SDN) Data Center Appliance and UFM software version 4.0, a comprehensive management solution for SDN and scalable compute and storage infrastructures, and the MetroX series, extending native InfiniBand and RDMA reach to distances of up to 80km.
From June 2012 to November 2012, the number of FDR 56Gb/s InfiniBand systems increased by nearly 2.3X, including the top two ranked InfiniBand systems on the TOP500 list. Mellanox InfiniBand solutions connected 43 percent (10 systems) of all PetaScale based systems (23 systems).
This year marked the first time Mellanox’s annual revenues exceeded the $500 million mark, highlighting the increased demand for its interconnect solutions.
By the end of the year, Mellanox had grown to more than 1,260 employees worldwide.
Mellanox acquired Voltaire to expand its software and switch product offerings and strengthen its leadership position in providing end-to-end connectivity systems in the growing worldwide data center server and storage markets.
Key product announcements included the introduction of SwitchX, the industry’s first FDR 56Gb/s InfiniBand and 10/40 Gigabit Ethernet multi-protocol switches, and ConnectX-3, the industry’s first FDR 56Gb/s InfiniBand and 10/40 Gigabit Ethernet multi-protocol adapter. With this move, Mellanox began selling an end-to-end 10/40GbE solution for the first time. Today, Mellanox is still the only provider of an end-to-end 40GbE solution.
Mellanox also began supporting Open Networking initiatives and joined the Open Networking Foundation, Open Virtualization Alliance and OpenStack as part of a continued commitment to next-generation data center technologies. The company also joined the Open Compute Project (OCP) and introduced the first 10GbE Mezzanine adapters for OCP servers.
By the end of 2011, there were more than 840 employees of Mellanox worldwide.
Mellanox would begin selling its own switch systems under the IS5000 brand, as well as its own branded copper and fiber cables, making it the first company to provide a complete end-to-end InfiniBand solution.
InfiniBand momentum continued on the TOP500: InfiniBand-connected systems grew 18 percent year-over-year to represent more than 43 percent of the TOP500, or 215 systems, with more than 98 percent of all InfiniBand clusters leveraging Mellanox InfiniBand solutions.
Eyal Waldman named ‘CEO of the Year’ by the Israeli Center for Management. HPC in the Cloud, in its first annual Reader & Editor’s Choice Awards, named Mellanox as a ‘Cloud Network Innovator.’ During the Annual SuperComputing Conference, Mellanox was awarded ‘Best HPC Interconnect Product or Technology’ by HPCwire.
Oracle Corporation would announce a strategic investment in Mellanox Technologies, acquiring 10.2% of Mellanox’s ordinary shares in the open market for investment purposes, solidifying the companies’ common interest in the future of InfiniBand.
Mellanox was the first to deliver Microsoft Logo qualified InfiniBand adapters for Windows HPC Server 2008. Later that year, the company introduced ConnectX-2, a high-performance, low-power connectivity solution, along with the ConnectX-2 VPI adapter card to deliver flexibility to next-generation virtualized and cloud data centers.
The industry and press took notice. Now with more than 370 employees, Mellanox was ranked the Number 20 Fastest Growing Company in Israel on Deloitte’s 2009 Israel Technology Fast 50 Program. HPCwire honored the company with two Editor’s Choice Awards, for Best Product and Best Government & Industry Collaboration.
Now representing nearly 37 percent of the TOP500, Mellanox InfiniBand was shown to enable the highest system efficiency and utilization (96 percent).
Mellanox won the “Best of Interop 2008” award in the Data Center & Storage category for its ConnectX® EN 10GbE server and storage I/O adapter.
The company announced the availability of the first QDR 40Gb/s InfiniBand adapters and switch silicon devices, leapfrogging the competition, and in record time (one month after release to production) would power one of the world’s fastest supercomputers. By the end of the year, nearly all top-tier OEMs would be reselling 40Gb/s InfiniBand in their server platforms.
Mellanox completed its initial public offering on NASDAQ in the US, trading under the symbol “MLNX”. Later, the company would be listed on the Tel Aviv Stock Exchange (TASE) and added to the TASE TA-75, TA-100, Tel-Tech and Tel-Tech 1.
Mellanox would surpass the 2 Million InfiniBand port milestone. By June, the number of supercomputers using InfiniBand interconnects increased on the TOP500 list, with 132 supercomputers (26% of the list) connected with InfiniBand, 230% more than the 40 supercomputers on the June 2006 list, and 61% more than the 82 supercomputers reported on the November 2006 list. Mellanox was ranked the Number 146 Fastest Growing Company in North America on Deloitte’s 2007 Technology Fast 500, the Number 16 Fastest Growing Company in Israel on the 2007 Deloitte Israel Technology Fast 50 list, and Number 12 in Deloitte’s Technology Fast 50 Program for Silicon Valley Software and Information Technology Companies.
Key Mellanox product announcements during this year included the ConnectX EN Dual-Port 10 Gigabit Ethernet adapter chips and NICs, PCI Express® 2.0 20Gb/s InfiniBand and 10 Gigabit Ethernet Adapters and the Dual-Port 10GBase-T NIC.
Mellanox InfiniBand solutions would begin selling through HP for its c-class BladeSystem. This would begin a long tradition of development and joint offerings with HP, which would soon secure HP as one of Mellanox’s top customers.
Mellanox launched the ConnectX adapter architecture which would enable QDR 40Gb/s InfiniBand and 10GbE connectivity on a single adapter.
InfiniBand-based supercomputers continued to grow on the TOP500, increasing 105% since June 2006 and 173% over the previous year.
Eyal Waldman was named "CEO of the Year" by IMC. Mellanox was listed as #6 in Byte&Switch’s "Top 10 Private Companies: Spring 2006". The company prepared for an initial public offering and filed a Registration Statement with the Securities and Exchange Commission. By the end of the year, Mellanox had nearly 170 employees.
Mellanox surpassed the 500K InfiniBand port milestone. More technology advances were announced as Mellanox became the first to ship DDR 20Gb/s InfiniBand adapters and switch silicon, making it the industry bandwidth leader in high-performance server-to-server and server-to-storage connectivity.
Industry and press accolades included selection to the Red Herring Top 100 Europe annual list of the most promising private technology companies, selection by AlwaysOn as a Top 100 Private Company award winner, and recognition by Globes as Most Promising Start-up of 2005. Mellanox received the Editor’s Choice Award for Most Innovative HPC Networking Solution from HPCwire. EDN named Mellanox’s InfiniHost III Lx to its “Hot 100 in 2005,” and Electronic Design magazine named InfiniScale III “Best of Embedded 2005: Hardware.”
Mellanox crossed the 200K port sales milestone and introduced the InfiniHost III Ex InfiniBand Host Channel Adapter and the third-generation 144-port InfiniBand switch platform. Later that year, Mellanox showcased the world’s first single-chip 10Gb/s adapter with new “MemFree” InfiniBand technology that enabled industry-leading price/performance, low power and a small footprint.
In November, Mellanox announced InfiniHost III Lx, the world’s smallest 10Gb/s adapter. InfiniBand interconnects were now used on more than a dozen systems on the prestigious TOP500 list of the world's fastest computers, including two systems that achieved a top ten ranking.
At the beginning of the year, Mellanox announced that the company had shipped over 50K InfiniBand ports. Several key announcements were made including the architecture of a PCI Express enabled dual port 10Gb/s InfiniBand HCA device, a 96-port switch design for the High Performance Computing (HPC) market, and a 480Gb/s third generation InfiniBand switch device.
By June, Mellanox announced that the company had shipped over 100K InfiniBand ports, and had been selected by Virginia Tech University to create the world’s largest InfiniBand cluster. The company enabled Virginia Tech to build the world’s 3rd fastest supercomputer in a record time of less than four months.
Despite the events surrounding the Dot-Com collapse and the 9/11 World Trade Center attacks in 2001, Mellanox was able to secure $56M in new funding, showcasing confidence in the company’s leadership and product direction.
Mellanox announced the immediate availability of its Nitro InfiniBand technology based blade server and I/O chassis reference platform. The Nitro platform marked the first ever InfiniBand Architecture based server blade design and provided a framework that helped deliver the full benefits of server blades. Mellanox also demonstrated first ever InfiniBand Server Blade MPI Cluster.
During that year, Mellanox announced the availability of the Nitro II 4X 10Gb/s InfiniBand server blade reference platform and partnerships to advance industry standard, hardware independent, Open Source InfiniBand software interfaces.
Mellanox continued to receive acknowledgements from the industry and press including selection as Emerging Company of the Year at the Server I/O Conference and Tradeshow, recognition as one of the five "most promising" companies at the first annual, Semiconductor Venture Fair, and named to the Red Herring Magazine 100 for the second year in a row.
Mellanox shipped InfiniBridge 10Gb/s InfiniBand devices to customers for revenue, marking the industry’s first commercially available InfiniBand semiconductors supporting both 1X and 4X switches and channel adapters. Four new InfiniBridge reference platforms were released, the first platforms to support 10Gb/s copper connections and small form factor pluggable (SFP) connectors. Mellanox then introduced “InfiniPCI” technology, enabling transparent PCI-to-PCI bridging over InfiniBand switch fabrics.
Mellanox introduced InfiniScale switching devices that marked the first ever commercially available InfiniBand devices with integrated physical layer SerDes. Mellanox shipped over ten-thousand InfiniBand ports, and now had more than 200 employees worldwide.
The first draft of the InfiniBand specification and the InfiniBand Architecture 1.0 specification were released. During the year, Mellanox raised $25.7M in second-round financing, with Raza Venture Management and Intel Capital joining Sequoia Capital and US Venture Partners in funding Mellanox. That same year, Agilent, Brocade, and EMC joined the InfiniBand Trade Association, signaling the storage industry’s support for the InfiniBand Architecture.
At this time, Mellanox had 39 employees, with business operations, sales, marketing, and customer service headquartered in Santa Clara, CA, with design, engineering, and quality and reliability operations based in Israel.
Mellanox was founded in 1999 by Eyal Waldman along with an experienced team of engineers in Yokneam, Israel. The company was founded with the purpose of developing semiconductors for data center infrastructure based on the Next Generation I/O (NGIO) standard. Mellanox received first-round funding of $7.6M from Sequoia Capital and US Venture Partners.
Mellanox began development of an NGIO-to-Future I/O bridge architecture. During that year, the NGIO and Future I/O standards groups announced a merger. The new standard was temporarily called “ServerIO” and was envisioned to combine best-in-class features from both technologies. The first developers’ conference for the newly merged groups was held in San Francisco, where the name “InfiniBand” was introduced and the InfiniBand Trade Association was formed.