
TCP failover doesn't work as expected

Hi community.
I don't know if this mailing list is still alive, but I will try to find
answers here.

I'm trying to build a TCP failover cluster.
The idea is to save the active TCP sessions that belong to the master and
restore them on the slave side when the master fails, so that the TCP
sockets that were opened on the master end up open on the slave as well.

The main goal is to make this work for the kamailio (SER) daemon. I'm
aiming for a real-time HA cluster that preserves calls already in progress
when the master fails.

where 10.100.100.28 is the master and 10.100.100.29 is the
slave.
The same config file is stored on the slave side, but the addresses in the
UDP section are swapped.
I tried to use the Address Ignore block, adding the IP addresses that
belong to the node, but with it the setup didn't work at all - there was no
exchange of conntrackd traffic between the cluster nodes. So I left it
empty.
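For reference, the Address Ignore block I tried looked roughly like this (a sketch based on the example conntrackd.conf shipped with conntrack-tools; the addresses are this node's own, so treat the exact values as illustrative):

```
General {
    Filter From Kernelspace {
        # Do not replicate flows to/from the node's own addresses
        Address Ignore {
            IPv4_address 127.0.0.1      # loopback
            IPv4_address 10.100.100.28  # master
            IPv4_address 10.100.100.29  # slave
        }
    }
}
```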

where primary-backup.sh is the script that ships with the conntrackd
package.
You may ask why I don't use a dedicated link for conntrackd. I did use one
for a while, but it didn't change anything, so I dropped it to simplify
things for myself.

What the failover process looks like at the moment:

1. I use telnet/ssh/ftp to connect to the VIP address, which (at that
moment) is located on the master side;
1.1. The master side fails - I bring down the eth0 link;
2. The backup node sees the problem and executes:
/etc/conntrackd/primary-backup.sh primary
so that the following sequence of conntrackd commands is executed:
/usr/sbin/conntrackd -C /etc/conntrackd/conntrackd.conf -c
/usr/sbin/conntrackd -C /etc/conntrackd/conntrackd.conf -f
/usr/sbin/conntrackd -C /etc/conntrackd/conntrackd.conf -R
/usr/sbin/conntrackd -C /etc/conntrackd/conntrackd.conf -B
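For clarity, that promotion sequence boils down to the following; this is a minimal sketch of the "primary" branch of primary-backup.sh (the real script does more), with comments reflecting what each flag does according to the conntrackd man page:

```shell
#!/bin/sh
# Sketch of the "primary" case of primary-backup.sh; paths as in my setup.
CONNTRACKD_BIN=${CONNTRACKD_BIN:-/usr/sbin/conntrackd}
CONF=${CONF:-/etc/conntrackd/conntrackd.conf}

promote_to_primary() {
    # -c: commit the external cache (sessions replicated from the old
    #     master) into the local kernel conntrack table
    "$CONNTRACKD_BIN" -C "$CONF" -c
    # -f: flush the internal and external caches
    "$CONNTRACKD_BIN" -C "$CONF" -f
    # -R: resynchronize the internal cache with the kernel conntrack table
    "$CONNTRACKD_BIN" -C "$CONF" -R
    # -B: send a bulk update of the internal cache to the other node
    "$CONNTRACKD_BIN" -C "$CONF" -B
}

# promote_to_primary   # run on the backup when the master fails
```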

3. I can see the needed telnet/ssh/ftp session on the backup node with the
command: conntrackd -i
It is in the ESTABLISHED state (I'm confident that this is the
session I need, because I remember the client port that was used for the
connection on the master node).
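(For reference, this is how I pick the entry out of the cache dump; the sample line and the port 52814 below are made up for illustration, not my actual output:)

```shell
# A cache entry as printed by `conntrackd -i` looks roughly like this
# (addresses and ports here are illustrative):
sample='tcp      6 ESTABLISHED src=10.100.100.2 dst=10.100.100.30 sport=52814 dport=22 [ASSURED]'

# Filter on the client source port remembered from the master node:
echo "$sample" | grep 'sport=52814'
```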

4. But when I try to send packets (commands) from my client, the server
resets the TCP session with the [R] flag. The tcpdump output on the backup
node shows only 2 rows:

As you can see, the firewall accepts the traffic (in the INPUT and FORWARD
chains), which means the session exists in the backup's internal
cache/kernel table (otherwise iptables would drop the packet), but the
kernel then resets it - why?
I tested this with ssh, telnet and ftp. No success at all.
I also tried to remove the flush command, so the sequence changed to:
/usr/sbin/conntrackd -C /etc/conntrackd/conntrackd.conf -c
/usr/sbin/conntrackd -C /etc/conntrackd/conntrackd.conf -R

and that didn't work either.

So if someone has experience with this, please don't stay silent - help
a bit.
At the very least I need a hint on where to look for the problem.