David Siegel

Jun 10, 2020 · 7 min

ixgbe failed to initialize because an unsupported SFP+ module type was detected (fix for CentOS 8.1)

Now that I have a switch with 4x10GE ports in it (Aruba S2500), I figure I may as well get some 10GE interfaces up and running to the server I hand-built. I was able to find an Intel X520-DA2 on the cheap on eBay, so I am off and running! Or am I?

Unlike a lot of other posts on the Internet about this issue, I'm going to cover a lot more information about what didn't work, and you might be surprised about what the issue turned out to be!

I already have several of these that I purchased to get a minimal 10G network up and running:

SFP+10GBASE-T Transceiver Copper RJ45 Module Compatible for Cisco SFP-10G-T-S, QNAP, D-Link, TP-Link, Unifi, Linksys, Supermicro, Reach 30m, for Data Center, Switch, Router

They worked just fine to bring up the Cisco UCS 2104XP Fabric Extenders to the 6100 Fabric Interconnect, and to connect the Fabric Interconnect into the Aruba S2500, but bringing up a 10G interface on the new Intel NIC is turning out to be a headache.

Upon booting, we see the error in the title of this blog: "ixgbe failed to initialize because an unsupported SFP+ module type was detected". The Internets tell us that there is a fix for this error, so let's see if it works.

# echo "options ixgbe allow_unsupported_sfp=1" >/etc/modprobe.d/ixgbe.conf
 
# rmmod ixgbe; modprobe ixgbe
 
# ip a | grep ens4
 
3: ens4f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
 
4: ens4f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN
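
If the interfaces don't show up at all after reloading the driver, it's worth a quick sanity check that your kernel's ixgbe build actually supports this parameter; modinfo lists the module's parameters, so this should print a matching parm line (if it prints nothing, the override isn't available in your driver):

# modinfo ixgbe | grep allow_unsupported_sfp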

This is certainly an improvement, so let's add this as a permanent feature:

# vi /etc/default/grub
 
Append ixgbe.allow_unsupported_sfp=1 to the existing GRUB_CMDLINE_LINUX line (don't overwrite the options that are already there), so it ends up looking something like:
 
GRUB_CMDLINE_LINUX="... rhgb quiet ixgbe.allow_unsupported_sfp=1"
 
# grub2-mkconfig -o /boot/grub2/grub.cfg
 
(On an EFI system, CentOS 8 keeps its grub.cfg at /boot/efi/EFI/centos/grub.cfg instead.)
 
# rmmod ixgbe && modprobe ixgbe
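
One more belt-and-suspenders step: if the ixgbe module is loaded from the initramfs during early boot, the modprobe.d option won't apply there until the initramfs is regenerated, so I'd also rebuild it for the running kernel before rebooting:

# dracut -f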
 

Then reboot, and make sure we're getting things loaded again by checking the boot log.

# dmesg | grep ixgbe
 
[ 0.000000] Command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-4.18.0-147.8.1.el8_1.x86_64 root=/dev/mapper/cl-root ro ixgbe.allow_unsupported_sfp=1 crashkernel=auto resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet intel_iommu=pt igb.max_vfs=7
 
[ 0.000000] Kernel command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-4.18.0-147.8.1.el8_1.x86_64 root=/dev/mapper/cl-root ro ixgbe.allow_unsupported_sfp=1 crashkernel=auto resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet intel_iommu=pt igb.max_vfs=7
 
[ 8.646335] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k-rh8.1.0
 
[ 8.646337] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
 
[ 8.811137] ixgbe 0000:02:00.0: Multiqueue Enabled: Rx Queue count = 20, Tx Queue count = 20 XDP Queue count = 0
 
[ 8.811429] ixgbe 0000:02:00.0: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)
 
[ 8.811514] ixgbe 0000:02:00.0: MAC: 2, PHY: 14, SFP+: 3, PBA No: E68793-007
 
[ 8.811515] ixgbe 0000:02:00.0: 90:e2:ba:c5:e8:08
 
[ 8.812722] ixgbe 0000:02:00.0: Intel(R) 10 Gigabit Network Connection
 
[ 8.812792] libphy: ixgbe-mdio: probed
 
[ 8.976185] ixgbe 0000:02:00.1: Multiqueue Enabled: Rx Queue count = 20, Tx Queue count = 20 XDP Queue count = 0
 
[ 8.976472] ixgbe 0000:02:00.1: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)
 
[ 8.976553] ixgbe 0000:02:00.1: MAC: 2, PHY: 14, SFP+: 4, PBA No: E68793-007
 
[ 8.976554] ixgbe 0000:02:00.1: 90:e2:ba:c5:e8:09
 
[ 8.977535] ixgbe 0000:02:00.1: Intel(R) 10 Gigabit Network Connection
 
[ 8.977554] libphy: ixgbe-mdio: probed
 
[ 9.773065] ixgbe 0000:02:00.0 ens4f0: renamed from eth0
 
[ 9.784270] ixgbe 0000:02:00.1 ens4f1: renamed from eth1
 
[ 17.463508] ixgbe 0000:02:00.0: registered PHC device on ens4f0
 
[ 17.629277] ixgbe 0000:02:00.0 ens4f0: detected SFP+: 3
 
[ 17.656053] ixgbe 0000:02:00.1: registered PHC device on ens4f1
 
[ 17.821339] ixgbe 0000:02:00.1 ens4f1: detected SFP+: 4
 
[ 1912.046794] ixgbe 0000:02:00.0 ens4f0: detected SFP+: 3
 

This allows the system to boot up and load the ixgbe driver, recognizing the SFP+'s, but is our problem solved?

I cannot get a link to come up on the SFP+'s: although we've forced the driver to load with a foreign SFP+ inserted, we are still not getting link.

Let's run some additional commands and see if we can get some more information:

# lspci | grep Ethernet
 
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection (2) I218-V (rev 05)
 
02:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
 
02:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
 

# ethtool ens4f0
 
Settings for ens4f0:
 
Supported ports: [ FIBRE ]
 
Supported link modes: 10000baseT/Full
 
Supported pause frame use: Symmetric
 
Supports auto-negotiation: No
 
Supported FEC modes: Not reported
 
Advertised link modes: 10000baseT/Full
 
Advertised pause frame use: Symmetric
 
Advertised auto-negotiation: No
 
Advertised FEC modes: Not reported
 
Speed: Unknown!
 
Duplex: Unknown! (255)
 
Port: Direct Attach Copper
 
PHYAD: 0
 
Transceiver: internal
 
Auto-negotiation: off
 
Supports Wake-on: d
 
Wake-on: d
 
Current message level: 0x00000007 (7)
 
drv probe link
 
Link detected: no
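
While we've got ethtool out, the -m flag dumps the transceiver's EEPROM, which is a handy way to see what the NIC thinks is plugged in (vendor, part number, cable type). Support for this varies by driver and module, so don't be surprised if some combinations just return an error:

# ethtool -m ens4f0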

Next we try a Direct Attach Cable, so we get this 10Gtek SFP+ 10G SFP+ DAC Cable - 10GBASE-CU Passive Direct Attach Copper Twinax SFP Cable for Cisco SFP-H10GB-CU1M, Ubiquiti, D-link, Supermicro, Netgear, Mikrotik, 1-Meter(3.3ft)

The interesting thing about Direct Attach Copper (DAC) cables is that they are less than one quarter the price of a pair of 10Gbase-T SFP+'s, and you don't have to fuss with making a cat6a cable to connect them. For $18 we should be up and running.

Except it doesn't work!

With some more googling, I found some notes over at Intel about this card:
 

82599-Based Adapters

NOTES: If your 82599-based Intel(R) Network Adapter came with Intel SFP+ optics, or is an Intel(R) Ethernet Server Adapter X520 type of adapter, then it only supports Intel optics and/or the direct attach cables listed below.

Supplier  Type                                      Part Numbers

SR Modules
Intel     DUAL RATE 1G/10G SFP+ SR (bailed)         AFBR-703SDZ-IN2
Intel     DUAL RATE 1G/10G SFP+ SR (bailed)         FTLX8571D3BCV-IT
Intel     DUAL RATE 1G/10G SFP+ SR (bailed)         AFBR-703SDDZ-IN1

LR Modules
Intel     DUAL RATE 1G/10G SFP+ LR (bailed)         FTLX1471D3BCV-IT
Intel     DUAL RATE 1G/10G SFP+ LR (bailed)         AFCT-701SDZ-IN2
Intel     DUAL RATE 1G/10G SFP+ LR (bailed)         AFCT-701SDDZ-IN1

QSFP Modules
Intel     TRIPLE RATE 1G/10G/40G QSFP+ SR (bailed)  E40GQSFPSR
QSFP+ 40G speed is not supported on 82599 based devices.

The following is a list of 3rd party SFP+ modules and direct attach cables that have received some testing. Not all modules are applicable to all devices.

Supplier  Type                                      Part Numbers
Finisar   SFP+ SR bailed, 10g single rate           FTLX8571D3BCL
Avago     SFP+ SR bailed, 10g single rate           AFBR-700SDZ
Finisar   SFP+ LR bailed, 10g single rate           FTLX1471D3BCL
Finisar   DUAL RATE 1G/10G SFP+ SR (No Bail)        FTLX8571D3QCV-IT
Avago     DUAL RATE 1G/10G SFP+ SR (No Bail)        AFBR-703SDZ-IN1
Finisar   DUAL RATE 1G/10G SFP+ LR (No Bail)        FTLX1471D3QCV-IT
Avago     DUAL RATE 1G/10G SFP+ LR (No Bail)        AFCT-701SDZ-IN1
Finisar   1000BASE-T SFP                            FCLF8522P2BTL
Avago     1000BASE-T SFP                            ABCU-5710RZ
HP        1000BASE-SX SFP                           453153-001

82599-Based SFP+ adapters support all passive and active limiting direct attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
 

Okay, I get it now: these cards are picky. So we'll have to try something else.

With more hunting on Amazon, we find a similar product from 10Gtek that is supposedly made just for Intel.

10GTek 10G SFP+ DAC Cable – for Intel XDACBL1M 10GBASE-CU Passive Direct Attach Copper (DAC) SFP Twinax Cable, 1-Meter(3.3ft)
 

The Intel part number doesn't seem to match, but the Intel compatibility in the part description is the best indicator we have that this thing might work. Of course, it may not work on the switch side, so that's still an unknown. The Cisco-compatible DACs and 10GbaseT SFP+'s worked just fine on the Aruba S2500, but will this Intel-compatible SFP+ work?

Nope!

Now to try a troubleshooting step that I probably should have tested before: loopbacks! My NIC has a pair of ports in it, so I can test each of these cables by plugging both ends into the same adapter, and of course I've got plenty of ports on the S2500 to do the same. In total, I have the following gear so far:

1) Aruba DAC that came with the S2500

2) Cisco-compatible SFP+ 10GbaseT by HiFiber

3) Cisco-compatible SFP+ DAC by 10GTek

4) Intel-compatible SFP+ DAC by 10GTek

Testing these cables in a loopback configuration on each device finally told the real story.
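
For the NIC-side loopbacks, the quick way to run each test is to plug both ends of a cable into the two ports, watch the kernel log, and then check link state (a sketch; substitute your own interface names):

# dmesg -w | grep ixgbe
 
# ethtool ens4f0 | grep 'Link detected'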

So the problem all along has been on the Aruba side. Based on this testing, I feel pretty confident that an Aruba-compatible DAC will work in the Intel NIC (with allow_unsupported_sfp set, the Intel side will take just about anything), so I pick up a couple of these SFP+'s from Macroreer.

https://www.amazon.com/gp/product/B078YRLXZQ/ref=ppx_yo_dt_b_asin_title_o03_s01?ie=UTF8&psc=1

I've got the two HPE-compatible Macroreers in the first two slots, an Intel-compatible 10Gtek DAC in the third slot, and a 10Gtek Cisco DAC in the fourth slot.

If we look at the switch side, we see the following:

(sw1) #show interface transceivers
 

 
GE0/1/0
 
-------
 
Vendor Name : MACROREER
 
Vendor Serial Number : 200304230
 
Vendor Part Number : 10G-SFP+-CU1M
 
Aruba Certified : NO
 
Cable Type : 10GBASE-SR
 
Connector Type : LC
 
Wave Length : 850 nm
 

 
GE0/1/1
 
-------
 
Vendor Name : OEM
 
Vendor Serial Number : 191206025
 
Vendor Part Number : J9281B
 
Aruba Certified : NO
 
Cable Type : 10GBASE-SR
 
Connector Type : LC
 
Wave Length : 850 nm
 

 
GE0/1/2
 
-------
 
Vendor Name : Intel Corp
 
Vendor Serial Number : INS11J70065
 
Vendor Part Number : 821-24-011-02
 
Aruba Certified : NO
 
Cable Type : unknown
 
Connector Type : Copper Pigtail
 
Wave Length : 256 nm
 
Cable Length : 1m
 

 
GE0/1/3
 
-------
 
Vendor Name : OEM
 
Vendor Serial Number : S190101003386
 
Vendor Part Number : SFP-H10GB-CU1M
 
Aruba Certified : NO
 
Cable Type : unknown
 
Connector Type : Copper Pigtail
 
Wave Length : 256 nm
 
Cable Length : 1m

They are still showing as unsupported, but the new Macroreer SFP+'s are, in fact, working!! The server-side dmesg proves it:

[10629.170022] ixgbe 0000:02:00.1 ens4f1: WARNING: Intel (R) Network Connections are quality tested using Intel (R) Ethernet Optics. Using untested modules is not supported and may cause unstable operation or damage to the module or the adapter. Intel Corporation is not responsible for any harm caused by using untested modules.
 
[10629.233487] ixgbe 0000:02:00.1 ens4f1: detected SFP+: 6
 
[10630.541517] ixgbe 0000:02:00.1 ens4f1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
 
[10630.541616] IPv6: ADDRCONF(NETDEV_CHANGE): ens4f1: link becomes ready
 
[10711.676809] ixgbe 0000:02:00.0 ens4f0: WARNING: Intel (R) Network Connections are quality tested using Intel (R) Ethernet Optics. Using untested modules is not supported and may cause unstable operation or damage to the module or the adapter. Intel Corporation is not responsible for any harm caused by using untested modules.
 
[10711.742430] ixgbe 0000:02:00.0 ens4f0: detected SFP+: 5
 
[10713.684408] ixgbe 0000:02:00.0 ens4f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
 
[10713.684511] IPv6: ADDRCONF(NETDEV_CHANGE): ens4f0: link becomes ready
 
[10713.703887] ixgbe 0000:02:00.0 ens4f0: NIC Link is Down
 
[10713.821324] ixgbe 0000:02:00.0 ens4f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
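
With link up, the last bit of housekeeping on the CentOS side is addressing the new interfaces. Here's a minimal sketch using NetworkManager's nmcli, assuming a hypothetical static address of 10.0.10.2/24 (substitute your own subnet and connection name):

# nmcli con add type ethernet ifname ens4f0 con-name 10g-ens4f0 ipv4.method manual ipv4.addresses 10.0.10.2/24
 
# nmcli con up 10g-ens4f0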

As an aside, if you have a newer Aruba switch (e.g. 3840), there is an undocumented command you can run to support more SFP+'s, but we are out of luck on the S2500 as it's running an older ArubaOS version. Just put the following command at the base level of your config, and this might solve your problem.

(config)# allow-unsupported-transceiver

(config)# write mem

And that's it, mystery solved! We now have a fully operational 10G network between the homemade server and our Cisco UCS platform. We also now have a bunch of cheap Cisco DACs that will be used to add more interfaces between our Cisco UCS 5108 Chassis and the 6100 Fabric Interconnect, and we can relegate our more expensive 10GbaseT transceivers to connect between the office and the garage over the Cat6a cables I ran, so we have 20G from each server platform into the Aruba S2500.
