In this howto I will describe how to add multiple bonding interfaces (teaming multiple Ethernet interfaces) in Linux. You can learn more about bonding from the kernel documentation:
vi /usr/share/doc/kernel-doc-2.6.18/Documentation/networking/bonding.txt
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical “bonded” interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.
There are many bonding writeups on the internet that use two or three Ethernet cards, but almost all of them configure a single bonding interface. Guides for multiple bonding interfaces are rare, and each one uses a different method, because the approach in the documentation no longer works: it says to load the bonding module once per bonding interface using the -o option within options in /etc/modprobe.conf, but that option is not supported by RHEL 5 and later releases. Therefore we will use the BONDING_OPTS option in the network scripts of the bonding interfaces instead.
As we are implementing one of the biggest Oracle RAC deployments, we need network load balancing with fault tolerance, so that the storage can be accessed without any single point of failure. Therefore we are using bonding mode 2 (balance-xor), which provides both load balancing and fault tolerance.
All of our servers have 4 Ethernet cards. We connect two interfaces (eth0 and eth1) to switch1 and two (eth2 and eth3) to switch2 to have multiple paths. We will create two bonding interfaces: bond0 (eth0 and eth2) and bond1 (eth1 and eth3). So if eth0 (on switch1) fails, bond0 will continue to use eth2 (on switch2), and traffic is balanced across the two switches.
1) Load Bonding Driver/Module
We need to edit /etc/modprobe.conf to load the bonding module into Linux.
vi /etc/modprobe.conf
and add the following lines:
alias bond0 bonding
alias bond1 bonding
2) Configure Ethernet interfaces
As discussed, we have 4 Ethernet interfaces, so each one needs the right configuration to act as a slave in its bond. Edit (or create) the following files so they read exactly as shown:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
vi /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
USERCTL=no
ONBOOT=yes
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
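The four slave files above differ only in DEVICE and MASTER, so they can also be generated in one loop. This is only a sketch: CFGDIR defaults to a demo directory, and on a real system it would be /etc/sysconfig/network-scripts.

```shell
# Sketch: generate the four slave configs in one loop instead of editing
# each file by hand. CFGDIR is a stand-in for /etc/sysconfig/network-scripts.
CFGDIR=${CFGDIR:-/tmp/network-scripts-demo}
mkdir -p "$CFGDIR"
for eth in 0 1 2 3; do
    # eth0/eth2 -> bond0, eth1/eth3 -> bond1 (same pairing as above)
    bond=$(( eth % 2 ))
    cat > "$CFGDIR/ifcfg-eth$eth" <<EOF
DEVICE=eth$eth
USERCTL=no
ONBOOT=yes
MASTER=bond$bond
SLAVE=yes
BOOTPROTO=none
EOF
done
```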
3) Configure bonding
Create the bond0 and bond1 scripts under /etc/sysconfig/network-scripts, just like the Ethernet interface scripts.
touch /etc/sysconfig/network-scripts/ifcfg-bond0
touch /etc/sysconfig/network-scripts/ifcfg-bond1
vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
BONDING_OPTS="max_bonds=2 miimon=100 mode=2 primary=eth0"
NETMASK=255.255.255.0
IPADDR=192.168.0.1
Change IPADDR and NETMASK as per your environment.
vi /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
BONDING_OPTS="max_bonds=2 miimon=100 mode=2 primary=eth1"
NETMASK=255.255.255.0
IPADDR=192.168.0.2
Change IPADDR and NETMASK as per your environment. The BONDING_OPTS parameters are:
max_bonds = specifies the number of bonding devices to create for this instance of the bonding driver.
miimon = specifies the MII link monitoring frequency in milliseconds, i.e. how often the link state of each slave is inspected for link failures.
mode = specifies the bonding policy; we use 2 (balance-xor) as explained above.
primary = a string (eth0, eth2, etc.) naming the primary slave. The specified device will always be the active slave while it is available; alternate devices are used only when the primary is offline.
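Similarly, the two bond master files can be sketched as a loop. The directory and IP addresses are placeholders; adjust them for your environment (and write to /etc/sysconfig/network-scripts on a real system).

```shell
# Sketch: generate both bond master configs in one loop.
# CFGDIR and the 192.168.0.x addresses are placeholders.
CFGDIR=${CFGDIR:-/tmp/network-scripts-demo}
mkdir -p "$CFGDIR"
for bond in 0 1; do
    cat > "$CFGDIR/ifcfg-bond$bond" <<EOF
DEVICE=bond$bond
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
BONDING_OPTS="max_bonds=2 miimon=100 mode=2 primary=eth$bond"
NETMASK=255.255.255.0
IPADDR=192.168.0.$(( bond + 1 ))
EOF
done
```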
4) Test
Restart the network init script; it will first load the bonding module and then bring up the bond0 and bond1 interfaces.
/etc/init.d/network restart
Shutting down interface bond0: [ OK ]
Shutting down interface bond1: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface bond0: [ OK ]
Bringing up interface bond1: [ OK ]
Check interfaces
ifconfig
bond0 Link encap:Ethernet HWaddr 00:24:E8:4A:F8:77
inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:27908 errors:0 dropped:0 overruns:0 frame:0
TX packets:18422 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:23044634 (21.9 MiB) TX bytes:2218142 (2.1 MiB)

bond1 Link encap:Ethernet HWaddr 00:24:E8:4A:F8:79
inet addr:192.168.0.2 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:70 errors:0 dropped:0 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:15611 (15.2 KiB) TX bytes:3524 (3.4 KiB)

eth0 Link encap:Ethernet HWaddr 00:24:E8:4A:F8:77
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:27474 errors:0 dropped:0 overruns:0 frame:0
TX packets:18422 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:23013444 (21.9 MiB) TX bytes:2218142 (2.1 MiB)
Interrupt:106 Memory:d2000000-d2012800

eth1 Link encap:Ethernet HWaddr 00:24:E8:4A:F8:79
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:67 errors:0 dropped:0 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:15419 (15.0 KiB) TX bytes:3524 (3.4 KiB)
Interrupt:114 Memory:d4000000-d4012800

eth2 Link encap:Ethernet HWaddr 00:24:E8:4A:F8:77
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:434 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:31190 (30.4 KiB) TX bytes:0 (0.0 b)
Interrupt:122 Memory:d6000000-d6012800

eth3 Link encap:Ethernet HWaddr 00:24:E8:4A:F8:79
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:192 (192.0 b) TX bytes:0 (0.0 b)
Interrupt:130 Memory:d8000000-d8012800
Verify bonding actually works
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:24:e8:4a:f5:cc

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:24:e8:4a:f5:d0
cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth1
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:24:e8:4a:f5:ce

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:24:e8:4a:f5:d2
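To script this check, the relevant fields can be pulled out with awk. The snippet below feeds in a sample of the text from above; on a live box you would read /proc/net/bonding/bond0 instead.

```shell
# Extract the currently active slave from bonding status text.
# Here the sample output from the article stands in for the live file;
# on a real system use: awk -F': ' ... /proc/net/bonding/bond0
status='Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0
Currently Active Slave: eth0
MII Status: up'
active=$(printf '%s\n' "$status" | awk -F': ' '/Currently Active Slave/ {print $2}')
echo "$active"   # prints: eth0
```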
If you have any questions, please use the comments.
BONDING_OPTS="max_bonds=2 miimon=100 mode=1 primary=eth0"
The use of Ethernet bonding in Linux is very interesting and very simple to set up.
Hi, Assalam-o-Alaikum,
I have 3 NICs: eth0, eth1 and eth2. On eth0 there is a Supernet connection, on eth1 Fascom, and eth2 is our LAN.
I want to divide the load across these two links. Is bonding useful for me? If so, please guide me on the changes needed to the above procedure for my scenario.
Sohail, I have configured bond0 and bond1 as per your configuration, but I want to assign the same IP to both bonding interfaces. To achieve this I configured a bridge interface, added both bond interfaces to that bridge, and assigned IP 192.168.0.1 to the bridge interface.
After this configuration, Linux shows me this error:
bond0: received packet with own address as source
bond1: received packet with own address as source
and after a few seconds my Linux server goes down.
Kindly help me resolve this single-IP issue.
Regards
Very easy to understand.
Thanks a lot!
A question about the output of cat /proc/net/bonding/bond0 and bond1 showing Bonding Mode: fault-tolerance (active-backup).
Is this right when we set BONDING_OPTS="max_bonds=2 miimon=100 mode=2 primary=eth0" and BONDING_OPTS="max_bonds=2 miimon=100 mode=2 primary=eth1" in /etc/sysconfig/network-scripts/ifcfg-bond0 and /etc/sysconfig/network-scripts/ifcfg-bond1 respectively?
Mode 2 is balance-xor, so why does this example show Bonding Mode: fault-tolerance (active-backup)?
Should it be Bonding Mode: fault-tolerance (balance-xor) when mode=2?
Please give me some advice.
Thank you,
Steven
@Steven: Sorry for the late reply; I have been busy with work. Linux displays it as fault-tolerance (active-backup) mode, but in terms of the defined bonding modes it is balance-xor; the definition is written below.
Mode 2 (balance-xor)
Transmits based on an XOR formula: (source MAC address XOR destination MAC address) modulo slave count. This selects the same slave for each destination MAC address, and provides load balancing and fault tolerance.
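The formula can be seen in action with shell arithmetic, using the last octet of each MAC address (the destination MAC below is made up for illustration):

```shell
# Working through the balance-xor slave selection by hand.
src=$(( 0x77 ))   # last byte of source MAC 00:24:E8:4A:F8:77 (bond0 above)
dst=$(( 0x05 ))   # last byte of a hypothetical destination MAC
slaves=2          # bond0 has two slaves: eth0 and eth2
echo $(( (src ^ dst) % slaves ))   # prints 0: this peer always uses slave 0
```

Because the result depends only on the MAC pair, every frame to a given peer leaves on the same slave, which keeps frames in order while spreading different peers across the slaves.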
Can we combine bond0 and bond1 into a bond3, where bond3 is the combination of the two (bond3 = bond0 + bond1)?
Please reply; if yes, then how can we do it?
@NASIM: It would be better to create a single bond out of the 4 or more interfaces you are currently using to create two bonds. I have never encountered a setup where I needed to do this.
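For reference, the single-bond alternative suggested in this reply might look like the sketch below: one bond0 enslaving all four NICs. The directory and IP address are placeholders (a demo directory stands in for /etc/sysconfig/network-scripts).

```shell
# Sketch: one bond0 with all four NICs as slaves, instead of two bonds.
# Paths and the address are placeholders for illustration only.
CFGDIR=${CFGDIR:-/tmp/network-scripts-demo}
mkdir -p "$CFGDIR"
cat > "$CFGDIR/ifcfg-bond0" <<'EOF'
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
BONDING_OPTS="miimon=100 mode=2"
NETMASK=255.255.255.0
IPADDR=192.168.0.1
EOF
for eth in 0 1 2 3; do
    cat > "$CFGDIR/ifcfg-eth$eth" <<EOF
DEVICE=eth$eth
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
EOF
done
```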