
Bonding configuration (Red Hat teaming setup)

엔지니어-FIXER 2010. 11. 26. 08:23

If you are not using the SEA failover feature but still have two VIO servers for high availability, each of them providing a bridge to the outside network, you will need to assign the virtual adapters in the VIO servers to different VLANs.
The following picture illustrates this scenario (no matter whether Linux or IBM VIO is used).

Configuration

LPAR setup

Assign two virtual network adapters to every client LPAR; set the VLAN ID of the first adapter to match the VLAN ID of the bridge on VIO1, and the VLAN ID of the second adapter to match the VLAN ID of the bridge on VIO2.
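
On an HMC-managed system, the adapters can be created with the chhwres DLPAR command. A minimal sketch; the managed system name p720, the partition name client-lpar, the slot numbers, and the VLAN IDs 1 and 2 are placeholders for your environment:

    # First adapter in slot 2, VLAN 1 (matching the bridge on VIO1)
    chhwres -m p720 -r virtualio --rsubtype eth -o a -p client-lpar \
            -s 2 -a "ieee_virtual_eth=0,port_vlan_id=1"
    # Second adapter in slot 3, VLAN 2 (matching the bridge on VIO2)
    chhwres -m p720 -r virtualio --rsubtype eth -o a -p client-lpar \
            -s 3 -a "ieee_virtual_eth=0,port_vlan_id=2"

Remember to add the same adapters to the partition profile so they survive a reactivation.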

Operating system: network configuration

We are going to use network bonding in active-backup mode with ARP monitoring. Because our network switch is virtual, the link is always reported as up, so we cannot rely on the MII status. Instead, we need to send ARP requests to an address in the network to determine whether a slave device is functional.

On Red Hat we need the following configuration files:

All the bonding options are set during module load, so we add them to /etc/modprobe.conf:

  • /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=active-backup arp_interval=1000 arp_ip_target=9.156.175.1,9.156.175.8 primary=eth0
    alias eth0 ibmveth
    alias eth1 ibmveth
    alias scsi_hostadapter ibmvscsic
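
To pick up the new options without a reboot, you can reload the driver; a minimal sketch, assuming a RHEL 4-era init system like the one in this example:

    # Stop networking and unload the old bonding module; on start,
    # ifup loads bond0 again with the options from /etc/modprobe.conf
    service network stop
    rmmod bonding
    service network start
    # Confirm the mode and slaves
    cat /proc/net/bonding/bond0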

The network configuration is in /etc/sysconfig/network-scripts:

  • /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=static
    BROADCAST=9.156.175.255
    IPADDR=9.156.175.246
    NETMASK=255.255.255.0
    NETWORK=9.156.175.0
    ONBOOT=yes
    GATEWAY=9.156.175.1
    
  • /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=no
    
  • /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=no
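
With all three files in place, restart the network and check that the address and routes ended up on bond0 and not on the slaves; a quick sketch using the addresses from above:

    # Restart networking and verify the bond
    service network restart
    ip addr show bond0       # expect 9.156.175.246/24 here
    ip route                 # default via 9.156.175.1 dev bond0
    ping -c 3 9.156.175.1    # the gateway is also our first arp_ip_target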
    

Monitoring and troubleshooting

Monitoring

You can see and monitor the status in /proc/net/bonding/bond0:

[root@op720-1-client2 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v2.6.1 (October 29, 2004)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 1
Permanent HW addr: ba:d3:f0:00:40:02

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: ba:d3:f0:00:40:03

You will also see messages in dmesg and /var/log/messages if your active slave fails:

Mar  1 16:00:57 op720-1-client2 kernel: bonding: bond0: link status down for active interface eth0, disabling it
Mar  1 16:00:57 op720-1-client2 kernel: bonding: bond0: making interface eth1 the new active one.
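
To test the failover without touching the VIO servers, you can switch the active slave by hand; a sketch, assuming the ifenslave utility is installed (note that with primary=eth0 set, the driver will make eth0 active again after the next link event on it):

    # Make eth1 the active slave and watch the driver log the change
    ifenslave -c bond0 eth1
    grep "Currently Active" /proc/net/bonding/bond0
    tail /var/log/messages
    # Switch back to the primary
    ifenslave -c bond0 eth0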

Troubleshooting

  • Make sure your ibmveth driver is at version 1.05:
# modinfo ibmveth
filename:       /lib/modules/2.6.9-22.EL.root/kernel/drivers/net/ibmveth.ko
author:         Santiago Leon <santil@us.ibm.com>
description:    IBM i/pSeries Virtual Ethernet Driver
license:        GPL
version:        1.05 5BA78B6CA35178D5AFE35E8
vermagic:       2.6.9-22.EL SMP gcc-3.4
  • Check that only bond0 has an IP address; the slave devices should be up but without any IP configuration.
  • Check the routing; routes should be set only for bond0, not for the slave devices.
  • If you cannot ping, configure eth0 and eth1 manually and ping through each device to make sure your SEA setup works (the sketch after this list shows these checks as commands).
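
A few commands for the checks above; a sketch, where 9.156.175.247 is a hypothetical spare address in the subnet:

    # Only bond0 should carry an IP address and the routes
    ip addr show
    ip route                           # every route should say "dev bond0"
    # Test one path outside the bond, then put it back
    ifenslave -d bond0 eth1            # detach eth1 from the bond
    ifconfig eth1 9.156.175.247 netmask 255.255.255.0 up
    ping -I eth1 -c 3 9.156.175.1      # checks the path through VIO2
    ifconfig eth1 0.0.0.0 down
    ifenslave bond0 eth1               # re-attach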

See http://linux-net.osdl.org/index.php/Bonding for more information.
