2010. 11. 26. 08:23
Notice!!
IBM
All of the content on this blog comes from ibm.com, and nothing posted here may be used as official supporting documentation.
Note: every visitor should read the pinned notice first before reading any posts: *필독공지* (must-read notice)

If you are inclined to nitpick, please do not read this post at all and instead check ibm.com, where the information is 100% accurate.

If you are not using the SEA failover feature but still have two VIO servers for high availability, each of them providing a bridge to the outside network, you will need to assign the virtual adapters in the VIO servers to different VLANs.
The following picture illustrates this scenario (no matter whether Linux or IBM VIO is used):

Configuration

LPAR setup

Assign two virtual network adapters to every client LPAR. Set the VLAN ID of the first adapter to match the VLAN ID of the bridge on VIO1, and the VLAN ID of the second adapter to match the VLAN ID of the bridge on VIO2.

Operating system: network configuration

We are going to use network bonding in active-backup mode with ARP monitoring. Because our network switch is virtual, the link is always up, so we cannot rely on MII status. Instead we need to ARP-ping an address in the network to determine whether a slave device is functional or not.

On Red Hat we need the following configuration files:

All the bonding options are set during module load, so we add them to /etc/modprobe.conf:

  • /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=active-backup arp_interval=1000 arp_ip_target=9.156.175.1,9.156.175.8 primary=eth0
alias eth0 ibmveth
alias eth1 ibmveth
alias scsi_hostadapter ibmvscsic
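Once the module is loaded with these options, the live values can be verified through sysfs rather than trusting the config file. The sketch below is a hypothetical helper (not part of any Red Hat tooling) that reads the bonding parameters from a directory such as /sys/class/net/bond0/bonding; the directory is passed in so it can be pointed anywhere.

```python
import os

def read_bonding_params(sysfs_dir):
    # Read the bonding driver's runtime parameters from sysfs.
    # On a live system sysfs_dir is /sys/class/net/bond0/bonding.
    params = {}
    for name in ("mode", "arp_interval", "arp_ip_target", "primary"):
        path = os.path.join(sysfs_dir, name)
        if os.path.isfile(path):
            with open(path) as f:
                params[name] = f.read().strip()
    return params
```

The values read back should match the `options bond0` line above, e.g. `arp_interval` should be `1000`.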

The network configuration is in /etc/sysconfig/network-scripts:

  • /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=static
    BROADCAST=9.156.175.255
    IPADDR=9.156.175.246
    NETMASK=255.255.255.0
    NETWORK=9.156.175.0
    ONBOOT=yes
    GATEWAY=9.156.175.1
    
  • /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=no
    
  • /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=no
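The two slave files are identical except for DEVICE, which is an easy place to make a copy-paste mistake. As a purely illustrative sketch (not part of Red Hat's tooling), the files above could be generated from a small template so the per-slave difference lives in one place:

```python
import os

def ifcfg_text(options):
    # Render one ifcfg-* file as KEY=value lines.
    return "".join("%s=%s\n" % kv for kv in options.items())

def slave_opts(device, master="bond0"):
    # Every slave gets the same stanza, differing only in DEVICE.
    return {
        "DEVICE": device,
        "BOOTPROTO": "none",
        "ONBOOT": "yes",
        "MASTER": master,
        "SLAVE": "yes",
        "USERCTL": "no",
    }

def write_ifcfg(directory, options):
    # Write ifcfg-<DEVICE> into the given directory and return its path.
    path = os.path.join(directory, "ifcfg-" + options["DEVICE"])
    with open(path, "w") as f:
        f.write(ifcfg_text(options))
    return path
```

Calling `write_ifcfg("/etc/sysconfig/network-scripts", slave_opts("eth1"))` would produce the ifcfg-eth1 file shown above.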
    

Monitoring and troubleshooting

Monitoring

You can see and monitor the status in /proc/net/bonding/bond0

[root@op720-1-client2 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v2.6.1 (October 29, 2004)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 1
Permanent HW addr: ba:d3:f0:00:40:02

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: ba:d3:f0:00:40:03
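For scripted monitoring, the output above is easy to parse: top-level fields describe the bond itself, and each "Slave Interface" line starts a new per-slave section. The parser below is an illustrative sketch, not an official tool:

```python
def parse_bond_status(text):
    # Split /proc/net/bonding/bond0 output into bond-level fields
    # and a list of per-slave dicts.
    status = {"bond": {}, "slaves": []}
    current = status["bond"]
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "Slave Interface":
            current = {"name": value}
            status["slaves"].append(current)
        else:
            current[key] = value
    return status

# Abbreviated sample taken from the output shown above.
SAMPLE = """\
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0
Currently Active Slave: eth0
MII Status: up

Slave Interface: eth0
MII Status: up
Link Failure Count: 1

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
"""
```

A monitoring script could alert whenever "Currently Active Slave" is not the configured primary, or whenever a "Link Failure Count" increases.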

You will also see messages in dmesg and /var/log/messages if your active slave fails:

Mar  1 16:00:57 op720-1-client2 kernel: bonding: bond0: link status down for active interface eth0, disabling it
Mar  1 16:00:57 op720-1-client2 kernel: bonding: bond0: making interface eth1 the new active one.

Troubleshooting

  • Make sure your ibmveth driver has Version 1.05:
# modinfo ibmveth
filename:       /lib/modules/2.6.9-22.EL.root/kernel/drivers/net/ibmveth.ko
author:         Santiago Leon <santil@us.ibm.com>
description:    IBM i/pSeries Virtual Ethernet Driver
license:        GPL
version:        1.05 5BA78B6CA35178D5AFE35E8
vermagic:       2.6.9-22.EL SMP gcc-3.4
  • Check that only bond0 has an IP address; slave devices should be up but without any IP configuration
  • Check the routing; routes should be set only for bond0, not for the slave devices
  • If you cannot ping, try to configure eth0 and eth1 manually and ping through each device to make sure your SEA setup works.
    See http://linux-net.osdl.org/index.php/Bonding for more information
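The last troubleshooting step can be scripted once each slave has a temporary address: `ping -I` (Linux iputils) binds the probe to a specific device, so the arp_ip_target addresses can be tested via each slave in turn. A minimal sketch, assuming ping is in the PATH:

```python
import subprocess

def ping_cmd(interface, target, count=2):
    # Build the ping invocation; -I binds the probe to one slave device.
    return ["ping", "-c", str(count), "-I", interface, target]

def check_path(interface, target):
    # True when the target answers via the given device. The device
    # must be configured with an address first for this to work.
    return subprocess.call(ping_cmd(interface, target),
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0
```

For example, `check_path("eth0", "9.156.175.1")` failing while `check_path("eth1", "9.156.175.1")` succeeds would point at the SEA bridge behind VIO1.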

