{{i18n_entry|Español|High Performance Firewall/Nat with iptables and VLANs and iproute2 (Español)}}


{{i18n_entry|Italiano|High Performance Firewall/Nat with iptables and VLANs and iproute2 (Italiano)}}


{{i18n_links_end}}


Imagine this: you have more than two networks separated by the Virtual LAN protocol (IEEE 802.1q), carried to you by an intelligent/manageable switch over one trunk line at 10/100/1000 Mb HD/FD (naturally the best is 1000 Mb FD).


The second option is what I did. The story of how this began is related to an emergency, a crash/burn-out of a group of Cisco PIXes. I won't go too deeply into that.


==The work==

===VLAN support===

The first thing we have to do is give the kernel the ability to handle 802.1q VLAN-tagged frames. This is done by loading the 8021q module, and adding it to the kernel modules configuration files in {{ic|/etc/modules-load.d/}} so it is loaded at boot:

# modprobe 8021q
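To have the module loaded at every boot as well, a one-line file under {{ic|/etc/modules-load.d/}} is enough (the filename {{ic|8021q.conf}} below is only an assumed convention; any {{ic|.conf}} name works):

```
# /etc/modules-load.d/8021q.conf
8021q
```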


Next we have to create the virtual NICs. Let's suppose we have VLANs 20 and 30 in our core network.

# ip link add link ethX name ethX.20 type vlan id 20
# ip link add link ethX name ethX.30 type vlan id 30

where ''ethX'' is the trunk NIC.
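The new virtual interfaces are created in the ''down'' state; they have to be brought up before they carry traffic (same assumed interface names as above):

```
# ip link set ethX.20 up
# ip link set ethX.30 up
```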


Now, if we want to see the interfaces, just run '''ifconfig -a''' and we will get the list.


# ifconfig eth1.20 192.168.0.1 netmask 255.255.248.0
# ifconfig eth1.30 192.168.8.1 netmask 255.255.248.0
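For reference, the equivalent addressing with iproute2 (the /21 prefix length equals the 255.255.248.0 netmask used above):

```
# ip addr add 192.168.0.1/21 dev eth1.20
# ip addr add 192.168.8.1/21 dev eth1.30
```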


====The round robin NAT====

To the first issue...

I got some error messages in the logs related to this. I am really sorry, I lost those logs and do not remember exactly what they said, but the answer is this: increase the memory thresholds of the kernel's neighbour (ARP) table.

Type this and read:
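The exact commands were lost from this page; the relevant knobs are the kernel's neighbour-table garbage-collection thresholds, which can be read like this (the key names are the standard kernel ones, and the values they print are the ones to double):

```
# sysctl net.ipv4.neigh.default.gc_thresh1
# sysctl net.ipv4.neigh.default.gc_thresh2
# sysctl net.ipv4.neigh.default.gc_thresh3
```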


And run the ''sysctl -p'' command.

In my case it is the same number, which means I have 1 connection per bucket! I do not need more! By default Netfilter uses a ratio of 1:8, i.e. 8 connections per bucket (I think, I do not remember exactly).

In our case we get about 600,000 simultaneous connections on two 1-gigabit NICs. You can see this with the following command:

# cat /proc/sys/net/netfilter/nf_conntrack_count

And feed this into an snmpd agent to fetch it and graph it on an MRTG/Cacti server... homework!

It is recommended, but not necessary, to add the local networks to each table. If you do not add the next few lines you will get no answer to pings within the local network, although you will still be able to pass through.

# ip route add 192.168.0.0/21 via 192.168.0.1 table PRO_1
# ip route add 192.168.8.0/21 via 192.168.8.1 table PRO_1


For example, if we want only one class C to go out through PRO_3:

# ip rule add from 192.168.1.0/24 table PRO_3

Put this rule before the <NET>/21 ones.

And then TEST IT!

Pick a Windows PC in one of the private networks and run a tracert to somewhere!

Before that, you can browse to one of the "what is my IP" sites like www.whatismyip.com and note your current public address; test again later and you will see a different address, and so on.


You have to share internet access with a really BIG number of hosts and maintain good performance. The first choice is to separate the networks onto an equal number of ports, and maybe an even larger number of firewall machines. This is not really cost effective, but it works.



Let's suppose we have one IP, 200.aaa.bbb.6, and our gateway is 200.aaa.bbb.1. We can safely set these parameters as the defaults in our configuration; they will not participate in the firewall at all.

Say I have 3 groups of 10 IPs each to play with. We will define the following in our firewall script:
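The script itself did not survive on this page; a minimal sketch of round-robin NAT over a group of public IPs with iptables, assuming the private /21 from the VLAN section, ''eth0'' as the outgoing NIC, and one group being .10 through .19 of the placeholder 200.aaa.bbb range from the text, would be:

```
# iptables -t nat -A POSTROUTING -s 192.168.0.0/21 -o eth0 -j SNAT --to-source 200.aaa.bbb.10-200.aaa.bbb.19
```

When {{ic|--to-source}} is given an address range, Netfilter spreads the translated connections across all the addresses in the pool.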

===The High Performance===

In our rush to get a really big number of hosts running through our machine we missed some things:

We forgot that there is just one NIC for potentially more than 8000 MAC addresses. The card's memory is not prepared for this!

By default iptables is not prepared to handle this number of simultaneous connections!

So...


Then double those values in {{ic|/etc/sysctl.conf}} and run ''sysctl -p'' to apply them (no reboot needed). With this I get no errors!

The next part needs some understanding of buckets, conntrack entries and hashsize (the way iptables manages NAT connections). There is a very good document about this; read it! Some things have changed since iptables became known as Netfilter.

The last ones are just there to avoid some problems we had with FTP connections (I think this is not necessary anymore). The ''nf_conntrack hashsize=1048576'' option increases the hashsize, i.e. the kernel memory designated to NAT connections; it needs a reboot or a module reload (check with ''dmesg | grep conntrack'').
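A persistent way to pass this module option, assuming the usual modprobe configuration layout (the filename is hypothetical), is:

```
# /etc/modprobe.d/nf_conntrack.conf
options nf_conntrack hashsize=1048576
```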

And next, put something similar to this in the {{ic|/etc/sysctl.conf}} file:

...
net.netfilter.nf_conntrack_max = 1048576
...

And run the ''sysctl -p'' command.



===The iproute2===

We have 3 big links to the Internet! This is because we manage 3 class C groups of IPs (due to some BGP restrictions) in this firewall. So we have 3 incoming traffic flows that we can manage, but only one outgoing: our default gateway. This could easily fill our outgoing quota, so we have to spread the load.
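The table declarations themselves are not shown on this page; the usual pattern is to register the names in {{ic|/etc/iproute2/rt_tables}} and give each table its own default route (the PRO_1..PRO_3 names come from the text; the table numbers and gateways are placeholders):

```
# echo "101 PRO_1" >> /etc/iproute2/rt_tables
# echo "102 PRO_2" >> /etc/iproute2/rt_tables
# echo "103 PRO_3" >> /etc/iproute2/rt_tables
# ip route add default via <gateway_1> table PRO_1
# ip route add default via <gateway_2> table PRO_2
# ip route add default via <gateway_3> table PRO_3
```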
