Yes, and that's not where the problem is IMO. If you broadcast your translated address (say 1.2.3.4, a public IP), nodes outside your VPN'd network will have no problem connecting as long as they can route to this address (which they should), but
any other nodes on the local net (e.g. 10.0.1.2) won't be able to connect/route to their neighbor, who is telling them to open the return socket to 1.2.3.4.

Am I getting this right? At least this is what I experienced not so long ago:

DC1 nodes

a) 10.0.1.1 translated to 1.2.3.4 on NAT

b) 10.0.1.2 translated to 1.2.3.5 on NAT

DC2 nodes

a) 10.0.2.1 translated to 1.2.4.4 on NAT

b) 10.0.2.2 translated to 1.2.4.5 on NAT

Let's assume DC2 nodes' broadcast_addresses are their public addresses.

If DC1:a and DC1:b broadcast their public addresses, 1.2.3.4 and 1.2.3.5, they are advertising addresses that are not routable from within their own network (no NAT loopback/hairpinning on the router), but DC2:a and DC2:b can connect/route to them just fine. Nodetool ring on any DC1 node says the
others in DC1 are down and everything else is up. Nodetool ring on any DC2 node says everything is up.

If DC1:a and DC1:b broadcast their private addresses, they can connect to each other fine, but DC2:a and DC2:b have no way to route to them. Nodetool ring on any DC1 node says everything is up. Nodetool ring on any DC2 node says the DC1 nodes are down.
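For reference, the address a node advertises via gossip is controlled by broadcast_address in cassandra.yaml (listen_address is only what the node binds locally). A sketch of the two scenarios for DC1:a, using the addresses from the example above:

```yaml
# cassandra.yaml on DC1:a (addresses from the example above)
listen_address: 10.0.1.1      # interface the node binds to
broadcast_address: 10.0.1.1   # what gossip advertises: second scenario,
                              # DC1 neighbors fine, DC2 cannot route here
# broadcast_address: 1.2.3.4  # first scenario: DC2 can connect, but DC1
                              # neighbors try 1.2.3.4 and fail (no hairpin)
```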

regards,

Andras

On 27 Jun 2012, at 11:29, aaron morton wrote:

Setting up a Cassandra ring across NAT ( without a VPN ) is impossible in my experience.

I am not using a VPN. The system had been running successfully in this configuration for a couple of weeks until I noticed that repair is not working.

What happens is that I configure the iptables rules on the machine of each Cassandra node to forward packets that are sent to any of the IPs in the other DC (on ports 7000, 9160 and 7199) to the gateway IP. The gateway does the NAT, sending the packets
on the other side to the real destination IP, having replaced the source IP with the initial sender's IP (at least in my understanding of it).
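If I read that right, the node-side rule would be something like the following. This is only an illustrative sketch: 10.0.2.1 stands in for a remote DC2 node and 192.168.0.254 for a hypothetical gateway address, neither taken from the original mail.

```shell
# redirect Cassandra traffic bound for a remote node to the local gateway,
# which then NATs it on to the real destination
iptables -t nat -A OUTPUT -d 10.0.2.1 -p tcp \
    -m multiport --dports 7000,9160,7199 \
    -j DNAT --to-destination 192.168.0.254
```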

The DCs are communicating over a gateway where I do NAT for ports 7000, 9160 and 7199.

Ah, that sounds familiar. You don't mention whether you are VPN'd or not; I'll assume you are not.

So, your nodes are behind network address translation: do they advertise (broadcast) their internal or their translated/forwarded IP to each other? Setting up a Cassandra ring across NAT (without a VPN) is impossible in my experience. Either
the nodes on your local network won't be able to communicate with each other, because they broadcast their translated (public) address, which is normally (depending on router configuration) not routable from within the local network, or the nodes broadcast their internal
IP, in which case the "outside" nodes are helpless in trying to connect to the local net. On the DC2 nodes / the node you issue the repair on, check for any sockets being opened to the internal addresses of the nodes in DC1.
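To do that check, something like this on the repairing node will show whether any connections to DC1's internal range ever get established (common Linux commands; the 10.0.1. prefix is from the example above):

```shell
# established TCP connections on the storage port (7000);
# look for DC1's internal 10.0.1.x addresses in the output
netstat -tn | grep ':7000'

# or, with lsof, show which storage-port sockets Cassandra has open
lsof -iTCP:7000 -sTCP:ESTABLISHED
```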

regards,

Andras

On 25 Jun 2012, at 11:57, Alexandru Sicoe wrote:

Hello everyone,

I have a 2 DC (DC1:3 nodes and DC2:6 nodes) Cassandra 1.0.7 setup. I have about 300GB/node in DC2.

The DCs are communicating over a gateway where I do NAT for ports 7000, 9160 and 7199.

I did a "nodetool repair" on a node in DC2 without any external load on the system.

It took 5 hrs to finish the Merkle tree calculations (which is fine for me), but then in the streaming phase nothing happens (0% shown in "nodetool netstats") and it stays like that forever. Note: it has to stream to/from nodes in DC1!

Questions:
1) How can I make sure that the JIRA issue above is my real problem? (I see no errors or warns in the logs; no other activity)
2) What should I do to make the repairs work? (If the JIRA issue is the problem, then I see there is a fix for it in Version 1.0.11 which is not released yet)