update data in DB: If we simply delete the entry in memcached when we update the data in the DB, we can cause a "cache stampede": if many clients find the entry missing at the same time, all of them will query the DB at once, which causes a performance problem. So we need to create a lock in memcached; every client first checks whether the lock exists. If it does not, that client takes the lock, queries the DB, refills the cache, and deletes the lock when it finishes; the other clients wait a short period and then re-check the cache.
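A minimal sketch of that lock pattern, assuming a memcached-style client with an atomic `add` (which fails if the key already exists — that atomicity is what makes it usable as a lock). The `FakeMemcache` class here is just an in-memory stand-in so the example is self-contained; the key names and helper are hypothetical:

```python
import time

class FakeMemcache:
    """Minimal in-memory stand-in for a memcached client (illustration only)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value, ttl=0):
        self._store[key] = value

    def add(self, key, value, ttl=0):
        # memcached's add is atomic: it fails if the key already exists,
        # which is exactly what a lock needs.
        if key in self._store:
            return False
        self._store[key] = value
        return True

    def delete(self, key):
        self._store.pop(key, None)

def get_with_stampede_lock(mc, key, load_from_db, retries=5, wait=0.05):
    """Return the cached value; on a miss, only one client rebuilds it."""
    for _ in range(retries):
        value = mc.get(key)
        if value is not None:
            return value
        # Try to take the lock; add() succeeds for exactly one client.
        if mc.add(key + ":lock", "1", ttl=10):
            try:
                value = load_from_db()
                mc.set(key, value)
                return value
            finally:
                mc.delete(key + ":lock")  # release after refilling the cache
        time.sleep(wait)  # someone else holds the lock; wait and re-check
    return load_from_db()  # stop waiting and fall through to the DB
```

With a real client you would also give the lock key a short TTL (as the `ttl=10` hints) so a crashed client cannot hold the lock forever.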

data fragment vs. per-item caching: We need to experiment to choose which data to cache!

Now for this simple application, coding something like this is simply overkill, so let’s look at a more practical example: a photo sharing site that contains lists of photographs with links to detail pages. Many such sites display paged lists, so 100 pictures in a list might be displayed as thumbnails with basic information, ten per page. Based on all of the coding we’ve done, it might seem reasonable to go get a list of one page of data from the database, cache the resulting list and then get the second page and cache that list and so on. But there’s another way to handle that.

Start by getting the list of all 100 items that make up the list and cache that. Build the paged lists from that information, rather than going back to the database for every page. Now the same list serves someone who only wants to see 5 items per page as well as someone who wants to see 20, which is better than having to query the database 20 times for one person and another 5 times for the other, when all we’re doing is showing the same data paged differently.
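The approach above can be sketched like this: cache the full ID list once, then slice pages out of it for any page size. A plain dict stands in for memcached here, and the key format and `load_ids_from_db` callback are assumptions for illustration:

```python
cache = {}  # stand-in for memcached (illustration only)

def get_photo_ids(album_id, load_ids_from_db):
    """Fetch the full list of photo IDs for an album, caching the whole list."""
    key = "album:%s:photo_ids" % album_id
    ids = cache.get(key)
    if ids is None:
        ids = load_ids_from_db(album_id)  # one query for the whole list
        cache[key] = ids
    return ids

def get_page(album_id, load_ids_from_db, page, per_page):
    """Slice one page out of the cached list; page numbers start at 1."""
    ids = get_photo_ids(album_id, load_ids_from_db)
    start = (page - 1) * per_page
    return ids[start:start + per_page]
```

Whether the user asks for 5, 10, or 20 items per page, only the first call for an album ever touches the database; every page view after that is a slice of the cached list.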

The most important thing to remember is that this is not an all-or-nothing approach. You may have data that lends itself to storing nothing except lists of pointers, and then store the individual data items separately. But you might just as easily have data where it makes more sense to store lists that contain data. This is one of those things you need to experiment with and see which makes more sense for your application.

multiget hole: Adding a new memcached server doesn’t help when the servers are CPU bound. What happens when you add more servers is that the number of requests each server receives is not reduced; only the number of keys in each request is reduced. Example: you send 50 requests to 2 memcached servers, and each request uses multi_get to fetch 100 entries, so each server sees all 50 requests with about 50 keys each. After you add a third server, each server still sees 50 requests, but each request now carries only about 33 keys. The request count is unchanged, so we’ve done absolutely nothing to reduce the usage of our scarce resource, which is CPU! http://highscalability.com/blog/2009/10/26/facebooks-memcached-multiget-hole-more-machines-more-capacit.html

Now I just don’t have the mood for a birthday… I just don’t know what a birthday means for me now. So I don’t expect a birthday party or anything. But I will remember whatever you do for me, thanks forever.