If you have a website that gets millions of page requests per day, each starting a session, and you want to squeeze the last ounce of performance from your server while still using file-based sessions, then by all means give this a try. The more dynamic content you serve, the more you'll benefit in terms of load average and site speed.

One of my websites gets about 8 million page views per day, almost all of which need a session to be started. Switching to memcached-based sessions reduced the load at least six-fold and sped up the site considerably.

Since I came to know about it rather late in my career, I'd say it's one of the least-known yet simplest and most effective ways to solve performance problems when you have lots of traffic. Remember, high traffic is a must to see any considerable effect.
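The switch itself is just two settings. A minimal sketch of the runtime version (assuming the memcache PECL extension is installed and a memcached instance is listening on the default port 11211):

```php
<?php
// Store PHP sessions in memcached instead of files.
// Assumes the memcache extension and a memcached daemon
// on localhost:11211 — adjust the address for your setup.
ini_set('session.save_handler', 'memcache');
ini_set('session.save_path', 'tcp://127.0.0.1:11211');

session_start(); // sessions now read/write to memcached
```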

You can also set the above variables in the php.ini file to activate it globally. You must have memcached and the memcache PHP extension installed for this to work.

You'll have to estimate the amount of memory the sessions will require and configure memcached with a higher value than that. You can check the /tmp directory for the session files and see how much space the sess_ files are taking. If you allot too little memory, session data may get evicted automatically (memcached discards old items when full) and users may get logged out, etc., depending on what you're using sessions for.
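For a rough estimate, you can sum the on-disk size of the current session files (this assumes the default session.save_path of /tmp; adjust the path if yours differs):

```shell
# Total disk space used by file-based session data right now.
# sess_* files live under session.save_path (/tmp by default).
du -ch /tmp/sess_* 2>/dev/null | tail -n 1
```

Then start memcached with a memory limit (the -m flag, in megabytes) comfortably above that figure, with headroom for growth.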

You can also use Redis in place of memcached if you absolutely don't want the session data to be evicted under memory pressure (it will still be cleared by the session GC).
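With the phpredis extension, the equivalent php.ini settings look like this (a sketch; the host and port are assumptions for a default local Redis):

```ini
; php.ini — store sessions in Redis via the phpredis extension
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
```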


For a long time I'd been seeing high load on the server where MyWapBlog.com is hosted. I thought the increasing traffic was the cause. I knew MySQL was causing the load, but I thought that was only natural for a dynamic site with NO caching.

One day, just when I was seriously thinking about implementing some kind of cache, I found out about mtop – a small, top-like tool that displays running MySQL queries in real time. Running it, I saw that every second there were many queries (the same query, issued by different requests) stuck in the “PREPARING” state, sometimes taking as long as 2 seconds to complete. The evil query was:

SELECT c.post_id FROM category_post_relationship c
WHERE (c.cat_id IN
(SELECT c2.id AS c2__id FROM categories c2
WHERE (c2.user_id = ?)))

Wondering what it does? Let me make it easier – it fetches the “post ids” of all of a user’s posts that are categorized.

The MySQL manual describes the relevant optimization: “After the conversion, MySQL can use the pushed-down equality to limit the number of rows that it must examine when evaluating the subquery.”

Though I didn't read the whole page thoroughly (I'm lazy, and I'd already got the job done by another technique), I'm still sure that this “optimized” query only “optimizes” the subquery, while our biggest problem is that the outer table (with about 50,000 rows) is being evaluated in full.

I felt this was one query you'd rather not “optimize” that way – better to be happy with two separate queries.

But this is not to say it can't be optimized. It can be, very easily – by NOT using subqueries at all. I used JOINs:
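A JOIN version of the query above would look something like this (a sketch; the DISTINCT is there because a post in several of the user's categories would otherwise appear once per category):

```sql
-- Same result as the IN subquery, expressed as a join.
SELECT DISTINCT c.post_id
FROM category_post_relationship c
JOIN categories c2 ON c2.id = c.cat_id
WHERE c2.user_id = ?;
```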

I created this data validation class a couple of days back for validating some forms, and thought it might be useful for others. It's very basic and lightweight, but still fully working, with many pre-defined rules.