Turbocharge your web servers using Squid HTTP Intercept mode

Now, this is of course nothing new, but I thought of sharing a basic configuration of the Squid proxy caching server to cache web requests. There is of course a lot more you can do with Squid; load balancing web servers is one example. One thing to remember, though: if your web content is dynamic, i.e. contains things that change over a short time, you may not gain that much of a performance boost. Also, this does not work with Secure HTTP (HTTPS), since HTTPS was designed to prevent exactly this kind of man-in-the-middle interception.

So here it is:

eth0: 172.16.1.11/24 => client-facing interface
eth1: 10.0.1.11/24 => web-server-facing interface

1. Install Squid

yum install squid
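
If the box ever gets rebooted, it also helps to confirm the package really landed and to make the service come back up on its own. A small optional sketch, assuming a RHEL/CentOS-era system with SysV init (the same environment the yum and service commands here imply):

# confirm the package and version
rpm -q squid
squid -v

# start squid automatically at boot
chkconfig squid on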

2. Before we start the service, we have to make a few changes in the configuration to turn on intercept mode. Open the /etc/squid/squid.conf file and set:

http_port 3128 intercept

This makes Squid listen on TCP port 3128 and enables intercept mode, which is designed to transparently handle web traffic that has been redirected to the proxy (for example by the iptables rules in step 5).
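
Depending on your distribution's default squid.conf, you may also need http_access rules that allow your client network to be proxied. Here is a minimal sketch; the localnet ACL name and the 172.16.1.0/24 subnet are assumptions based on the client-facing interface above, so adjust them to your own addressing:

# allow the client subnet to use the intercepting proxy
acl localnet src 172.16.1.0/24
http_access allow localnet
http_access deny all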

3. Next, we have to set the cache directory and its size

cache_dir aufs /var/spool/squid 90 16 256

aufs is the asynchronous UFS storage scheme and performs better than plain ufs for file operations because it handles disk I/O in separate threads. Alternatively, you may specify the diskd scheme, which is similar but runs as a separate daemon and needs a little extra fine tuning. 90 is the size of the cache in MB, 16 is the number of first-level directories in the cache directory, and 256 is the number of subdirectories under each of those. You may double these numbers depending on the load on the web server.
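
As a rough sketch of those variations (the sizes below are illustrative assumptions, not tuned recommendations):

# aufs with doubled cache size and first-level directories for a busier server
cache_dir aufs /var/spool/squid 180 32 256

# or the diskd scheme, which moves disk I/O into a separate daemon
cache_dir diskd /var/spool/squid 90 16 256 Q1=64 Q2=72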

For maximum performance, I have mounted /var/spool/squid as tmpfs to keep all of its contents in memory rather than on the hard disk.

My /etc/fstab has an entry like this

tmpfs /var/spool/squid tmpfs size=100m,rw,rootcontext="system_u:object_r:squid_cache_t:s0" 0 0
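
Because tmpfs starts out empty on every boot, the cache directory tree has to be recreated before Squid is started. A small sketch of how I would mount and initialize it, assuming the fstab entry above is in place:

# mount the tmpfs entry from /etc/fstab and confirm it
mount /var/spool/squid
df -h /var/spool/squid

# make sure squid owns it, then create the cache (swap) directories
chown squid:squid /var/spool/squid
squid -z
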
4. Set the cache memory size to be used

cache_mem 50 MB

Squid is intelligent enough to decide which content should go into cache memory and which should be kept in the on-disk cache.
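
If you want finer control over what stays in memory versus what goes to disk, a couple of related directives can be tuned alongside cache_mem; the values below are illustrative assumptions:

# largest object kept in the in-memory cache
maximum_object_size_in_memory 512 KB

# largest object written to the disk cache at all
maximum_object_size 4 MB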

5. Next, you have to set up your router to route traffic destined for the web server through the Squid server, so that web queries can take advantage of Squid. I used my Squid server as the router as well (this is called getting the most out of it ;)).

So here are the iptables settings I had to apply:

A. Enable routing:
 
sysctl -w net.ipv4.ip_forward=1 >>/etc/sysctl.conf

B. To avoid looping, we have to tell iptables to accept (and not redirect) any port 80 traffic that comes from our Squid server's own IP:

iptables -t nat -A PREROUTING -s 10.0.1.11/32 -p tcp -m tcp --dport 80 -j ACCEPT

Then, redirect any incoming traffic for port 80 to TCP port 3128 on the local machine, where Squid is listening:

iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 127.0.0.1:3128

While sending traffic out through eth1 towards the web server, masquerade it so that it appears to come from the Squid box and the replies come back through it:

iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

Allow the forwarded traffic coming in on eth0 through the filter table:

iptables -A FORWARD -i eth0 -j ACCEPT

Finally, save the rules so they persist across reboots:

service iptables save
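
One note on the DNAT rule above: on newer kernels, forwarded traffic that is DNATed to 127.0.0.1 may be dropped unless net.ipv4.conf.eth0.route_localnet is enabled, so many intercept setups use the REDIRECT target instead. An alternative sketch, not what I used here:

# redirect intercepted port 80 traffic to Squid's port on this box
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128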

6. Start the squid service now

service squid start
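
To quickly confirm that Squid came up and is listening on the intercept port, something along these lines should do (netstat assumes the net-tools package is installed):

service squid status
netstat -tlnp | grep 3128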

7. Now, we have to point our clients and the web server at the Squid server as their default gateway and test network connectivity using the traceroute and ping commands.

Everything should be fine now, and all web queries should go through Squid.
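
To verify that requests are really being intercepted and cached, fetch the same URL twice from a client and watch Squid's access log on the proxy; the URL below is just a placeholder:

# on a client: request the same page twice
curl -s -o /dev/null http://example.com/
curl -s -o /dev/null http://example.com/

# on the squid server: expect TCP_MISS for the first request and,
# if the response is cacheable, TCP_HIT for the second
tail -f /var/log/squid/access.log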
