High-performance computing (HPC) solutions require high-bandwidth, low-latency components with CPU offloads to achieve the highest server efficiency and application productivity. The Mellanox ConnectX-3 and ConnectX-3 Pro network adapters for System x® servers deliver the I/O performance that meets these requirements.
Mellanox's ConnectX-3 and ConnectX-3 Pro ASICs deliver low latency, high bandwidth, and computing efficiency for performance-driven server applications. Efficient computing is achieved by offloading routine activities from the CPU, which leaves more processor power available for the application. Network protocol processing and data movement tasks, such as InfiniBand RDMA and Send/Receive semantics, are completed in the adapter without CPU intervention. RDMA support extends to virtual servers when SR-IOV is enabled. Mellanox's ConnectX-3 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
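As a hedged illustration of how the RDMA offload described above surfaces to an administrator, the sketch below uses the standard `ibv_devinfo` (rdma-core) and `ib_send_lat` (perftest) utilities to inspect the adapter and measure send latency; the tools are assumed to be installed, and `SERVER_HOSTNAME` is a placeholder:

```shell
# List RDMA-capable devices, firmware version, and port state
# (ibv_devinfo ships with the rdma-core package)
ibv_devinfo

# Measure RDMA send latency between two hosts.
# On the server node, start the listener:
ib_send_lat
# On the client node (SERVER_HOSTNAME is a placeholder):
ib_send_lat SERVER_HOSTNAME
```

Because the transfer is completed by the adapter's RDMA engine, the reported latency reflects the hardware path rather than kernel TCP/IP processing.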
Two 10 Gigabit Ethernet ports
Low-profile form factor adapter with 2U bracket (3U bracket available for CTO orders)
PCI Express 3.0 x8 host interface (PCIe 2.0 and 1.1 compatible)
SR-IOV support: 16 virtual functions supported by KVM and Hyper-V (OS dependent), up to a maximum of 127 virtual functions supported by the adapter
Enables low-latency RDMA over Ethernet (supported with both non-virtualized and SR-IOV-enabled virtualized servers) -- latency as low as 1 µs
TCP/UDP/IP stateless offload in hardware
Traffic steering across multiple cores
Intelligent interrupt coalescence
Industry-leading throughput and latency performance
Software compatible with standard TCP/UDP/IP stacks
Microsoft VMQ / VMware NetQueue support
Legacy and UEFI PXE network boot support
Supports iSCSI through a software iSCSI initiator in NIC mode with the NIC driver
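The SR-IOV virtual functions listed above are typically enabled on Linux through the kernel's standard sysfs interface once SR-IOV is enabled in the adapter firmware and system UEFI. A minimal configuration sketch; the interface name `eth0` and the VF count of 16 are illustrative assumptions:

```shell
# Check how many virtual functions the adapter advertises
cat /sys/class/net/eth0/device/sriov_totalvfs

# Enable 16 virtual functions (run as root)
echo 16 > /sys/class/net/eth0/device/sriov_numvfs

# The VFs now appear as separate PCI functions that can be
# assigned to guests via KVM or Hyper-V
lspci | grep -i mellanox
```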
The Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter has the following features:
Two QSFP ports supporting FDR-14 InfiniBand or 40 Gb Ethernet
Support for InfiniBand FDR speeds of up to 56 Gbps (auto-negotiation to FDR-10, DDR, and SDR)
Support for Virtual Protocol Interconnect (VPI), which enables one adapter for both InfiniBand and 10/40 Gb Ethernet. Supports three configurations:
2 ports InfiniBand
2 ports Ethernet
1 port InfiniBand and 1 port Ethernet
Enables low-latency RDMA over 40 Gb Ethernet (supported with both non-virtualized and SR-IOV-enabled virtualized servers) -- latency as low as 1 µs
Sub 1 µs InfiniBand MPI ping latency
Support for QSFP-to-SFP+ adapters to enable 10 GbE connectivity
Legacy and UEFI PXE network boot support (Ethernet mode only)
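The three VPI port configurations listed above are selected per port in the adapter firmware. A hedged sketch using Mellanox's `mlxconfig` utility from the Mellanox Firmware Tools (MFT) package; the tool is assumed to be installed, and the MST device path is a placeholder for the actual device on your system:

```shell
# Set port 1 to InfiniBand and port 2 to Ethernet
# (LINK_TYPE values: 1 = InfiniBand, 2 = Ethernet)
# /dev/mst/mt4099_pci_cr0 is a placeholder device path
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=1 LINK_TYPE_P2=2

# Review the pending configuration; a reboot applies it
mlxconfig -d /dev/mst/mt4099_pci_cr0 query
```

Setting both ports to the same link type yields the two-port InfiniBand or two-port Ethernet configurations; mixing the values yields the one-port-each configuration.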
Specifications
Design
Certification
BSMI (EMC), CSA C22.2 60950-1-07, EN 61000-3-2, EN 61000-3-3, EN 55022, EN 55024, EN 60950-1, FCC Part 15 Class A, IEC 60950-1, ICES-003:2004 Issue 4, NZ AS3548 / C-tick, RRL for MIC (KCC), UL 60950-1, VCCI