OpenBSD, PF, bridging and 10gE
I'm currently using an older Sun Fire X4100 with its integrated 1G network ports in a bridge, and I use PF to filter traffic to my 50 or so machines in a server room. The 1G uplink to the Internet is connected directly to the public side of the PF firewall.
I have used this setup for years and it works well.
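For readers following along, here is a minimal sketch of that kind of filtering bridge on OpenBSD (the interface names em0/em1 are assumptions; since 4.7 the bridge is driven through ifconfig(8) rather than brconfig):

```shell
# Bring both NICs up with no addresses -- the bridge forwards at layer 2.
ifconfig em0 up                      # public / uplink side
ifconfig em1 up                      # protected server-room side

# Create the bridge, add both members, and enable it.
ifconfig bridge0 create
ifconfig bridge0 add em0 add em1 up

# To make it persistent, put "up" in /etc/hostname.em0 and
# /etc/hostname.em1, and the add/up lines in /etc/hostname.bridge0.
```

PF rules then apply to the member interfaces as usual, so the box can filter traffic without holding an IP address on either side.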
But now we are considering upgrading the server room uplink from a 1G Cat5 cable to 10GbE multimode fiber. If I do this and keep my same OpenBSD firewall, I'm thinking I will need to put a dual-port PCI-X 10GbE network card in it in order to bridge the 10GbE from the public Internet uplink into the protected server room.
My question is this: will the PCI-X backplane be a bottleneck for achieving line-rate 10GbE? My local network folks have suggested that my firewall may be able to handle bridging 1G traffic, but might not be able to handle 10GbE traffic... I'm not sure how to gauge this.
I suppose I could, in my ignorance, throw a newer 1U system with PCIe 2.0 and the newest dual-port 10GbE card I can find at it... but it would be nice to understand what the constraints are.
Any pointers appreciated.
You might find a discussion on 10Gb firewall performance from the misc@ mailing list helpful. It took place in January and February of last year, and begins here:
You don't mention which architecture you run on your Sun Fire X4100 (AMD Opteron). For 4.8, the 10Gb NIC support list is the same for both i386 and amd64 (from the Project website), though capabilities differ; the difference is in the footnotes. Review the hardware support documents for the details.
10Gb Ethernet Adapters
* Intel 82597 PRO/10GbE based PCI adapters (ixgb)
* Intel 82598 PRO/10GbE based PCI adapters (ix)
* Neterion Xframe/Xframe-II based PCI adapters (xge)
* Tehuti Networks 10Gb based PCI adapters (tht) (G)

10Gb Ethernet Adapters
* Intel 82597 PRO/10GbE based PCI adapters (ixgb) (A) (B) (C)
* Intel 82598 PRO/10GbE based PCI adapters (ix) (A) (B) (C)
* Neterion Xframe/Xframe-II based PCI adapters (xge) (A) (B) (C)
* Tehuti Networks 10Gb based PCI adapters (tht) (A) (B) (C)
Last edited by jggimi; 20th January 2011 at 08:16 PM. Reason: clarification
There are a few other drivers that show up in an apropos search as well; I'm not sure if they're well supported yet, or if OpenBSD's network stack is well optimized for 10G speeds.
che(4) - Chelsio Communications 10Gb Ethernet device
myx(4) - Myricom Myri-10G PCI Express 10Gb Ethernet device
You should see if you can replicate your setup and artificially generate the kind of traffic you'll be seeing, see how it fares, and then gauge whether a dedicated intelligent switch/bridge might perform better.
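On OpenBSD, one way to generate that kind of load is tcpbench(1) from the base system (the host name and the 30-second runtime here are assumptions):

```shell
# On a machine behind the bridge, run the sink:
tcpbench -s

# From a machine on the public side, push TCP through the bridge:
tcpbench -t 30 testhost.example.net

# Meanwhile, watch live interface throughput on the firewall:
systat ifstat
```

A single TCP stream rarely fills a 10Gb pipe, so running several clients in parallel gives a more realistic worst case.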
Thank you for the thread link and the prototyping suggestion...
Is there any way to tell if a system running PF is maxed out? Would the load average spike, or would PF or my NIC drop packets quietly?
Is there a PF metric I can look at to see whether my bridge is performing OK?
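There is no single "maxed out" flag, but a few standard base tools expose the symptoms (nothing here is bridge-specific):

```shell
pfctl -si         # PF counters: state table size, searches/sec, and the
                  # "congestion" counter, which increments when the stack
                  # is backing up
netstat -i        # per-interface Ierrs/Oerrs -- quiet NIC drops land here
netstat -m        # mbuf pool usage; exhaustion means dropped packets
systat ifstat     # live per-interface packet and byte rates
vmstat -i         # interrupt rates; a NIC pinned at high rates shows here
```

Load average alone is a poor signal, since much of the forwarding work happens at interrupt level.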
If you stay with PCI-X... seriously crack into your system's technical specifications. PCI-X is a shared, linear bus; that is, cards on the same bus COMPETE with each other. If they sit on the same bus, in bridge mode, each 10Gb card will CRUSH the other.
In all but first-generation servers with PCI-X, the PCI-X chipset supports four (4) independent PCI-X buses. One bus necessarily connects to the north/south bridge for CPU and memory access. The remaining three buses are typically spread across (a) on-board chips (e.g. an embedded SCSI controller, if one exists), and (b) the physical slots in the system.
If your PCI-X system includes legacy PCI-only slots, then one of these three buses is typically dedicated to those legacy slots. Consequently, your PCI-X slots are typically spread over the remaining buses.
You need to crack open your motherboard's technical specification/documentation and find two independent, non-contended slots for your 10Gb cards. If you can't isolate them to separate buses, then don't even try.
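On OpenBSD you can get a first look at the bus topology without opening the case, via pcidump(8) and the boot messages (the grep pattern is just illustrative):

```shell
pcidump -v | less        # devices listed as bus:dev:func; devices sharing
                         # the same bus number contend with each other
dmesg | grep ppb         # ppb(4) lines show each PCI-PCI bridge and the
                         # bus behind it
```

If both 10Gb cards come up with the same bus number, move one to a slot behind a different bridge before benchmarking.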
IMO, though, if you need something approaching wire (or glass) speed, then go, go, go PCIe.
The speed through all the rest of the system will be faster too. You can get a Supermicro motherboard with the needed x8 (or x4) PCIe slots for under CAD$200, or a genuine Intel motherboard for just over CAD$200. Drop in a low-cost E3-1200 series Xeon (~$200) and you're smoking.
I have secured a Sun Fire X4170 PCIe-based machine for my next firewall. I have installed a dual-port 10GbE Intel X520-DA2 adapter, which OpenBSD 4.9 detects fine in the amd64 build.
I was thinking about trying the SolarFlare card, but it doesn't look like they have a stable driver yet... there's really only a beta FreeBSD driver so far.
So I'll stick with the Intel card for now.
I've plugged it into the switch and am seeing traffic with tcpdump on the 10GbE port. I haven't set up the bridge yet, but I'm planning to put this box between my Extreme Summit X650 24-port 10GbE switch and one of my file servers as a test.
Thanks for the responses here... there is much deep tech knowledge on this board and I am grateful for the help. I'd still enjoy hearing anyone else's stories about setting up 10GbE bridges with OpenBSD. I'll share more as I progress on this project.
I am up and running in production with a bridging firewall on OpenBSD 5.0 on a Sun Fire X4170 PCIe-based machine. I'm using the dual-port 10GbE Intel X520-DA2 adapter.
I think it is worth sharing in this post a note I got from the folks at Calomel:
---begin calomel comment
"Using ALTQ: packet queueing apparently can't work with 10Gb yet; either there isn't enough bandwidth, or there is a bug that doesn't let you set the max bandwidth high enough.
If you want to support 10G you cannot use ALTQ. The reason is that ALTQ's bandwidth value is limited to a 32-bit value, meaning you can only go up to 4294Mb/sec.
Here is a link to the post we made on the OpenBSD and FreeBSD mailing lists about this issue:
pf ALTQ bandwidth limited to a 32bit value (4294Mb)
No solutions were proposed by the group. The only idea the pf guys had was to wait until "prio" queueing is done in pf. That will take up to a year to finish, though.
Secondly, ALTQ is a huge performance hit. When using anything more than 4Gbit/sec we notice heavy CPU usage."
--end Calomel comment
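The 4294Mb figure quoted above is just the 32-bit limit expressed in megabits; a quick shell check confirms it:

```shell
# ALTQ stores queue bandwidth as a 32-bit unsigned integer, in bits/sec.
# The largest expressible rate is 2^32 bits/sec, which in Mb/sec is:
echo $((2 ** 32 / 1000 / 1000))
```

This prints 4294, so a 10Gb interface's full rate simply cannot be represented in an ALTQ bandwidth value.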