let's say i had:
. 4 nodes with two network interfaces each,
. 1 network-addressable power switch,
. 1 addressable network switch.
If i channel bond the interfaces, i'll get the improved bandwidth,
sure. But now access to the "accessories" (power switch, network
switch) will be very erratic. With only one NIC each, the
accessories have to sit on one half of the split switch or the
other, and they won't see half the packets sent to them.
sure, packets will *eventually* get through, thanks to tcp's efforts
to be reliable, but it's nice to be able to use the web interface on
the switch, for example, without lengthy delays.
my first thought was to alias an interface and use that to talk to the
non-bonded entities on the network. The scheme was something like:

bond0 (eth0, eth1): 192.168.1.0/255.255.255.0
eth0:0            : 192.168.2.0/255.255.255.0
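for reference, a sketch of that setup using the usual ifenslave/ifconfig
tools (the host addresses here are made up, and this assumes the stock
linux bonding module):

```shell
# load the bonding driver and bring up the bonded pair
# (hypothetical addresses; eth0 and eth1 are the two NICs)
modprobe bonding
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1

# the alias i hoped would carry traffic to the non-bonded accessories
ifconfig eth0:0 192.168.2.10 netmask 255.255.255.0 up
```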
my theory was that if the power and network switches were on the same
partition of the switch as the eth0 nic, linux would send packets to
those devices over the correct card and everything would be happy.
shows what i know :>. it looks to me like the bonding driver holds on
pretty tight to its slave interfaces and won't let me trick it into
sending traffic over just one nic when i want to.
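(to see what i mean, the bonding driver reports its view of the slaves
under /proc — and as far as i can tell, both slaves end up carrying the
bond's MAC address, so a per-slave alias can't really pin traffic to
one card:)

```shell
# inspect the bonding driver's slave list and state
cat /proc/net/bonding/bond0

# compare the hardware addresses of the two slaves —
# the bonding driver rewrites them to match bond0
ifconfig eth0 | grep HWaddr
ifconfig eth1 | grep HWaddr
```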
since channel bonding is a pretty popular topic on this list (or used
to be...), how have you guys solved the problem of keeping access to
the other entities in the cluster while channel bonding the compute
nodes? Oh, and while i know it would work, i'd like to avoid adding
another nic just to talk to the power switch.
thanks
==rob
--
[ Rob Latham <rlatham at plogic.com> Developer, Admin, Alchemist ]
[ Paralogic Inc. - www.plogic.com ]
[ ]
[ EAE8 DE90 85BB 526F 3181 1FCF 51C4 B6CB 08CC 0897 ]