Our network is currently isolated from the outside by a Linux box running iptables and Squid to control web access.

By default we deny all outgoing traffic from all IPs, forcing web traffic through the proxy and blocking everything else. However, we need to allow FTP for some users, but not all.
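For illustration, a default-deny setup like the one described might look like the following sketch. The interface name, proxy port, and subnet are placeholders, and it assumes Squid runs on the gateway itself:

```shell
# Sketch only -- interface, port, and subnet are assumptions, not from the question.
# Drop all forwarded traffic so clients cannot reach the Internet directly:
iptables -P FORWARD DROP
# Allow LAN clients to reach the Squid proxy listening locally on port 3128:
iptables -A INPUT -i eth0 -s 192.168.1.0/24 -p tcp --dport 3128 -j ACCEPT
```

With `FORWARD` defaulting to `DROP`, the only path out of the network is via the proxy.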

The previous admin manually added entries to the routing table for each host that was allowed to FTP out. However, as we get more and more requests for FTP access, I was wondering: wouldn't it be easier to force FTP traffic through the proxy and manage the users allowed to FTP out via the proxy's ACLs?
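Squid can match ftp:// requests with a `proto` ACL, so the rule set could be a sketch along these lines (the ACL names and source addresses below are hypothetical placeholders):

```
# Hypothetical squid.conf fragment: restrict FTP-over-proxy to named hosts.
acl ftp_protocol proto FTP
# Placeholder addresses for the users allowed to FTP out:
acl ftp_users src 192.168.1.10 192.168.1.11
http_access allow ftp_protocol ftp_users
http_access deny ftp_protocol
```

Adding or removing a user then becomes a one-line change followed by a reload, instead of editing routes.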

Would that be considered less secure than manually managing allowed IPs (which in my opinion is plain stupid, since the old admin regularly forgot to remove the corresponding routes when people left the company and their IPs were reassigned)?

1 Answer

If the proxy is good enough for your HTTP traffic, then it should be good enough for your FTP traffic.

In theory, making a system easier to administer shouldn't make it more secure, since you should already be doing the existing, horrible admin tasks properly. But as you point out, in the real world, the easier it is to stay in policy, the more likely you are to stay in policy.

(One minor caveat: while every web browser is easy to point at a proxy server, I've found some FTP clients to be a lot fiddlier to configure.)