Squid 3.1 Caching Proxy with SSL

Hello, hello! Recently I posted a two-part article on creating a Guest wireless network using OpenWRT, VLANs, and firewall rules. We left things kinda open from a security standpoint: we gave our Guest users full Internet access with no restrictions on sites, bandwidth usage, or ports! Yikes! In this article I am going to walk you through the steps to close those gaps. First we will configure a Web Proxy server that will proxy outbound Internet connections. This lets us check where and what our Guests are trying to get their hands on, good and bad. We will also force Guests to connect to this Web Proxy server transparently. What I mean by that is the Guests will not be required to do anything on their side to connect; our firewall will take care of that. And lastly, I want to allow only a limited amount of HTTP bandwidth. You will see later on how we can accomplish this. I’ve expanded upon an earlier article of mine that uses Squid proxy to filter ads.
Phew, let’s get started.

Installing Squid Proxy

Install dependencies

The easiest way to install dependencies on Ubuntu or another Debian-based Linux server is to use the apt-get build-dep and apt-get source commands. We are installing Squid from source because the default Squid package in the Ubuntu repositories isn’t built with the configuration options we need to make our project work.
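Assuming the stock Ubuntu source package name squid3, the commands look roughly like this (the deb-src lines must be enabled in /etc/apt/sources.list for build-dep and source to work):

```shell
# pull in the essential build tools, Squid's build dependencies, and the source files
sudo apt-get install build-essential devscripts
sudo apt-get build-dep squid3
sudo apt-get source squid3
```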

NOTICE: We just installed the essential build tools, along with any Squid dependencies, and the source files.

You should now have a squid3-3.1.19 folder.

Modify the build script

We need to modify the build script so that SSL support is included when the source is configured.

vi squid3-3.1.19/debian/rules

Add --enable-ssl under the DEB_CONFIGURE_EXTRA_FLAGS section.

...
DEB_CONFIGURE_EXTRA_FLAGS:=--datadir=/usr/share/squid3 \
		--sysconfdir=/etc/squid3 \
		--mandir=/usr/share/man \
		--with-cppunit-basedir=/usr \
		--enable-inline \
		--enable-ssl \
...

Don’t forget to save!

Configure, Make, Make Install

cd squid3-3.1.19/
debuild -us -uc -b

NOTICE: Here we use debuild, which will automatically configure, make, and create an installable DEB package.

After this has completed, DEB packages will appear in the parent directory. Mine were called squid3_3.1.19-1ubuntu3.12.04.2_amd64.deb, squid3-common_3.1.19-1ubuntu3.12.04.2_all.deb, and squid3-dbg_3.1.19-1ubuntu3.12.04.2_amd64.deb.
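Install the freshly built packages with dpkg (the -dbg package is optional; the filenames are the ones produced above and will differ on other releases or architectures):

```shell
# install Squid and its common files from the packages we just built
sudo dpkg -i squid3_3.1.19-1ubuntu3.12.04.2_amd64.deb \
            squid3-common_3.1.19-1ubuntu3.12.04.2_all.deb
```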

acl = tells Squid which IP addresses and/or hosts to assign to a certain Access List. For example, home_network is any IP sourced from the 192.168.0.0/24 network.

Safe_ports = tells Squid which ports are allowed through the proxy; we have defined only 80 and 443.

SSL_ports = tells Squid which ports are allowed when making an SSL connection.

http_access = defines which Access Lists (acl) are allowed to connect to the proxy.

http_port = binds an IP and port for the proxy server to listen for requests on. We have two because one will be used for the transparent proxy, the other for users who explicitly configure their browsers to connect to the proxy server.

intercept = is required for transparency to work.

cache_dir = defines where Squid should store cached static files and how much space it may consume. ufs is the type of storage system, /home/user/squidcache/ is the directory to use, 2048 defines 2048 MB of capacity, 16 defines the number of first-level subdirectories, and 128 the number of second-level subdirectories. For more info, see here.

cache_mem = defines how much memory should be allocated for Squid caching.
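Pieced together, the glossary above corresponds to a squid.conf along these lines. This is a sketch, not the exact original: the listening ports (3128/3129) and the cache_mem size are assumptions, while the ACL names, networks, intercept flag, and cache_dir values come from the descriptions above.

```
acl home_network src 192.168.0.0/24
acl guestNet src 192.168.1.0/24
acl Safe_ports port 80 443
acl SSL_ports port 443
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow home_network
http_access allow guestNet
http_access deny all

# 3128 for the transparent proxy, 3129 for explicitly configured browsers
http_port 3128 intercept
http_port 3129

cache_dir ufs /home/user/squidcache/ 2048 16 128
cache_mem 256 MB
```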

Testing

With the previous rules in place, any client on either the 192.168.0.0/24 or 192.168.1.0/24 networks should be transparently redirected to the proxy server. To test, remove any proxy settings you may have configured in your browser, then try to connect to an Internet site directly over http:// and then over https://.
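From a guest client, curl makes this easy to check (example.com stands in for any site you like):

```shell
# no proxy configured on the client; the firewall should redirect us anyway.
# over plain HTTP, a Via header naming the proxy shows Squid handled the request
curl -v http://example.com/
# HTTPS behaviour depends on whether SSL interception is configured
curl -v https://example.com/
```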

Throttling and Filtering Traffic

It is imperative that you verify the functionality of the previous steps before continuing. In this section we are going to limit the bandwidth consumed by only our Guest Network clients, and also do some basic filtering of requests, such as blocking known malicious sites. We will use the delay_pool feature of Squid to perform this throttling and SquidGuard to perform filtering.

Adding a Delay Pool

A delay pool is a feature of Squid that allows you to delay outbound requests from users based on conditions. For our purposes I wanted to limit Guest Network users to a flat rate of 100 KB/s. This ensures that no Guest Network user can completely saturate the bandwidth from my ISP.

Let’s edit our squid.conf file:

vi /etc/squid3/squid.conf

Add the following lines:

#delay pools
delay_pools 1 # how many delay pools will be defined
delay_class 1 2
delay_access 1 allow guestNet
delay_parameters 1 1048576/1045876 102400/102400
delay_access 1 deny all

Let’s walk through this…

delay_pools 1 — Denotes how many delay pools we will define in the squid.conf

delay_class 1 2 — This matches a delay pool class to a delay pool. The 1 is our delay pool number to match, and the number 2 is the type of delay class. See here for more info on delay classes.

delay_access 1 — This is a standard access list. It defines which ACL from the top of our squid.conf will be associated with this delay pool. 1 signifies the delay_pool to associate the ACL with.

delay_parameters 1 — Here is where we define the parameters for the delay class, specifically reducing bandwidth consumption to a flat 100 KB/s. The units are in bytes per second. The first part (1048576/1045876, roughly 1 MB/s) denotes the max bandwidth allocated to this delay pool as a whole. The second part (102400/102400, or 100 KB/s) is the max bandwidth for each client within the ACL. This helps prevent one user from hogging all the bandwidth from the rest of our users.

delay_access 1 — The last part here says to deny all other ACLs access to delay pool 1.
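Since delay_parameters is specified in bytes per second, it is worth sanity-checking the numbers used above (the restore value 1045876 in the listing looks like a transposition of 1048576, but it is reproduced as written):

```shell
# per-client cap: 100 KB/s expressed in bytes/second
echo $((100 * 1024))     # 102400
# aggregate pool cap: 1 MB/s expressed in bytes/second
echo $((1024 * 1024))    # 1048576
```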

NOTICE: Don’t forget to restart Squid!
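On an Ubuntu install of this vintage, restarting looks like this (assuming the packaged init script name squid3):

```shell
sudo service squid3 restart
```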

SquidGuard Filtering

apt-get install squidguard -y

Once squidGuard is installed we need to tell Squid to use it. Once again, edit the squid.conf file:

vi /etc/squid3/squid.conf

Add the following lines:

#rewrite program squidGuard
url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
url_rewrite_children 20 # threads
url_rewrite_concurrency 0 # jobs per thread

url_rewrite_program — Defines the rewrite program we will use, in this case squidGuard; the -c flag tells it to use this squidGuard.conf file.

url_rewrite_children 20 — Defines how many child processes, or threads, to open. This depends on how many users you have, as well as the resources of the proxy server itself.

url_rewrite_concurrency — This tells Squid how many squidGuard jobs can run per thread. Be careful with this, as the total job count increases by a factor of the previous parameter.

Adding blocklists:
Create a folder where you will keep the blocked information files.

mkdir ~/blocklists/
vi blocked-domains

...omitted...
{bad domain name}
...omitted...

vi blocked-url

...omitted...
{bad ip site}
{bad url}
...omitted...

chown proxy. *

NOTICE: We just created two block files, one containing domain names such as yahoo.com or facebook.com, the other containing URLs such as 1.2.3.4 or 5.6.7.8/badstuff. Then we changed the ownership to the proxy user so Squid and SquidGuard can read them.
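Concretely, using the examples from the notice, the two files might look like this. One entry per line, with no leading or trailing whitespace. blocked-domains:

```
yahoo.com
facebook.com
```

and blocked-url:

```
1.2.3.4
5.6.7.8/badstuff
```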

Next:
You may have noticed the /etc/squid/squidGuard.conf in the step above. Let’s create/edit that file with squidGuard-specific options.

vi /etc/squid/squidGuard.conf

dbhome /home/usr/blocklists/

#define src
src guests {
	ip 192.168.1.0/24
}

#define category 'deny'
dest badsites {
	domainlist blocked-domains
	urllist blocked-url
	expressionlist expressions
}

acl {
	guests {
		#allow all except badsites
		pass !badsites all
		#redirect
		redirect http://{webserver}/deny.html
	}
}

Lastly:
Initialize the block lists

squidguard -C all

NOTICE: You will have to run “squidguard -C all” each time you modify the files. This will update the .db files squidguard creates.

Further notes: The biggest issue with squidGuard is that it is very picky about the blocklist files. Each item should be on a new line without leading or trailing spaces. Also make sure both the blocklist and the blocklist.db files are readable by Squid and SquidGuard. Finally, I believe there is an issue with the current SquidGuard build when trying to filter based on source IP in transparent setups: each request handed to squidGuard from Squid seems to fall through to the default option in the squidGuard.conf file.

Optional: Adding SSL Interception/Inspection Support

This next section allows your Squid proxy server to intercept SSL connections made by your clients. Warning! Doing so will most likely look like a man-in-the-middle attack. Clients will be connecting to your proxy server when trying to reach SSL-protected sites, thus violating the SSL transaction. For example, a client opens a connection to https://mail.google.com. This connection is intercepted by the proxy server, which does not hold Google’s private SSL key, so an untrusted-certificate mismatch occurs. I would also note that you should consider the behaviour you are trying to achieve by having SSL connections proxied through Squid. The nature of SSL does not allow us to easily perform proxy features such as caching, content filtering, and content manipulation. Therefore, if you are setting up SSL pass-through with Squid, you are effectively doing the same thing a router would. In conclusion, the only reasons I can think of for enabling SSL interception are auditing and monitoring. For example, you are willing to allow your employees the use of 3rd-party web email (Gmail, Yahoo), but you require that the users are monitored to prevent data leakage, etc.

For my purposes of a guest network, this was okay behaviour. In an enterprise, you would need additional steps to install trust between you and your users.

NOTICE: Users will get a certificate mismatch on the SSL-enabled sites they try to visit. They will have to add exceptions to trust the self-signed cert from above for each site. Again, this may not be desired behavior. Consult with your PKI engineer for ways to do this in an enterprise setting where you may have an authoritative CA that can vouch for your clients.

Final Thoughts..

There are still some issues if attempting to deploy this in a production environment: transparent NAT security issues, issues with filtering by source IP, SSL requiring SSL-Bump, etc. I will post another article soon once I have fine-tuned these.

Sir, I want to try your tutorial, but I have some questions. You said there are still some issues, including transparent NAT. What if we used WPAD and a PAC file? Also, I think you forgot to enable IPv4 forwarding; I got this idea from other tutorials. And the iptables rules above, are we going to run those in a shell, or add them to /etc/iptables.up.rules? What’s the difference? Please enlighten us. Thanks

From the screenshot you are getting, what is the certificate being presented? Is it not trusted? You have to locally install the untrusted cert onto your clients, or, on a Windows domain, push it through the domain certificate authority.

Hi thejimmahknows,
my English is poor, but I hope you understand me. First, thank you for your good tutorial.

I installed Debian Wheezy along with packages like isc-dhcp, configured them, and set up NAT.
So far everything was OK, since my local network accessed the Internet easily and rapidly.
Then I installed Squid 3 and made it transparent. Now my local network can access some websites, but others like facebook.com, youtube.com, and nba.com are very slow, taking perhaps 2 or 3 minutes to display, and sometimes the page shows Connection time out: 100

according to you,

a) What is the cause of the problem and what is the solution

b) I assume you tested your tutorial. I would like to ask: is your local network very slow, or does it work well?

Thanks for the good tutorial. My squid3 proxy on Ubuntu works OK, but when my clients open Facebook games they are not cached by my proxy, because most of them are served over HTTPS. Does it work for those? Sorry, my English is bad. Thanks for the reply.

When I access Google or any https:// site from the client, the error message reads:
There is a problem with this website’s security certificate.
The security certificate presented by this website was not issued by a trusted certificate authority.
The security certificate presented by this website was issued for a different website address.
Security certificate problems may indicate an attempt to fool you or intercept data you send to the server.
We recommend that you close this webpage and do not continue to this website.
Recommended icon Click here to close this webpage.
Not recommended icon Continue to this website
More information

Hi, Great Post. I’m very new to all this stuff but now have a working Squid3 (v3.4.8) proxy with SSL support running on my Raspberry Pi, which I got for Christmas, and very good it is too. I added --enable-ssl-crtd to my build script and then configured my squid.conf for Dynamic SSL Certificate Generation as per the web reference at the bottom of your post, generated certificates, and imported the .der certificate into my browser. This gets explicit proxying running without any of the certificate warnings. I’m not interested in filtering and so disabled icap. I also picked some bones out of this: http://sichent.wordpress.com/2014/01/02/web-filtering-https-traffic-on-raspberry-pi/ including modifying the scripts to use “jessie” throughout.

I found my fix. With Squid 3.3, “ssl_bump server-first” was implemented and enables HTTPS to work transparently, i.e. with “intercept”. I’m running 3.4.8. I just changed the line in my squid.conf from “ssl_bump allow all” to “ssl_bump server-first”.
If you are running 3.3 or above, “ssl_bump server-first” is the recommended option. See http://www.squid-cache.org/Doc/config/ssl_bump/

I still don’t get the difference between DNAT and REDIRECT in this case however. 🙂