Synchronizing a HashMap's records

Hi all,
I have a situation here that I think is pretty common, but that I've never run into in practice, so I'd like your opinion on it. Is it correct? Is it optimal?

I have the class below (I removed some methods, like remove, for the sake of clarity). It's important to know that the hash map will hold over 10,000 records, that get operations will run much more frequently than add or delete, and that the timer runs quite often. The get, add, and remove operations are made from multiple threads (driven by the HTTP calls from a web application).

So basically, my main concern is whether the whole thing is properly synchronized and whether my solution of locking only some rows of the HashMap is optimal.

For the record, I did look into Java's ConcurrentHashMap, but from what I understand it only guarantees correctness while a single thread writes to it, and I think I might have several (though I can give that up if necessary).

No, this isn't right. It's not the individual values that need locking, it's the map itself. These per-key locks aren't doing anything. Adding to or removing from a HashMap can cause the whole thing to be re-hashed, with all the Map.Entries moved around, so any time an add() or remove() may occur simultaneously with any other operation, the whole HashMap needs to be locked.

HashMap is awfully fast; why not just use Collections.synchronizedMap(new HashMap()) to protect the whole thing for now, and only worry about concurrent performance if it turns out to be an issue in practice? Most likely, nothing more will be necessary.
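Something like this sketch (class and field names are made up, not your actual class), with the one gotcha that iteration still needs a manual synchronized block on the map itself:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class SyncMapDemo {
    // One lock (the map's own monitor) guards every get/put/remove,
    // so a rehash can never race with a concurrent read.
    private final Map<Integer, String> records =
            Collections.synchronizedMap(new HashMap<Integer, String>());

    void add(Integer key, String value) { records.put(key, value); }

    String get(Integer key) { return records.get(key); }

    // The timer's sweep: iteration is NOT covered by the wrapper,
    // so the whole loop must hold the map's monitor explicitly.
    void expire(String staleValue) {
        synchronized (records) {
            for (Iterator<String> it = records.values().iterator(); it.hasNext();) {
                if (it.next().equals(staleValue)) {
                    it.remove();
                }
            }
        }
    }

    public static void main(String[] args) {
        SyncMapDemo demo = new SyncMapDemo();
        demo.add(1, "fresh");
        demo.add(2, "stale");
        demo.expire("stale");
        System.out.println(demo.get(1) + " / " + demo.get(2)); // prints "fresh / null"
    }
}
```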

Thanks Ernest! I completely forgot that the map might rehash. Unfortunately, I did some testing, and the results I get with the whole map synchronized don't meet the performance targets I'm required to hit.

However, after reading this article, I'm beginning to think that ConcurrentHashMap might do the job. Reads usually don't even require locking (perfect for my high read frequency), and iterators don't throw errors if you remove an item during iteration (I wasn't aware of this).
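Here's a quick sketch of what I mean by that second point (just a throwaway example, not my actual code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ChmIterationDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new ConcurrentHashMap<Integer, String>();
        for (int i = 0; i < 5; i++) {
            map.put(i, "record-" + i);
        }

        // Removing while iterating: a plain HashMap iterator would throw
        // ConcurrentModificationException here; CHM's weakly consistent
        // iterator just carries on.
        for (Integer key : map.keySet()) {
            if (key % 2 == 0) {
                map.remove(key);
            }
        }

        System.out.println(map.size()); // prints 2 (keys 1 and 3 remain)
    }
}
```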

Do you think it should work OK for my situation? Or maybe you have another idea?

Do you think it should work OK for my situation? Or maybe you have another idea?

In all the performance projects I have worked on so far, I have not found a single case where ConcurrentHashMap didn't work as well as, or better than, the Collections-wrapped HashMap. In many cases, ConcurrentHashMap scaled many times better than the Collections-wrapped one. IMO, just use ConcurrentHashMap instead of the Collections-wrapped HashMap -- in every case.


However, after reading this article, I'm beginning to think that ConcurrentHashMap might do the job. Reads usually don't even require locking (perfect for my high read frequency), and iterators don't throw errors if you remove an item during iteration (I wasn't aware of this).

Reads can require locking, but only in rare edge cases. The CHM doesn't use a reader-writer lock; most reads are lock-free (backed by volatile reads), which is why they can go in parallel. It also uses segmentation (lock striping), so that writes to different segments can go in parallel as well.
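A little sketch of that last point (illustrative only, with made-up numbers): several threads writing to disjoint key ranges mostly land in different segments, so the puts barely contend with each other:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmParallelWrites {

    // Fills the map from several writer threads at once; with lock
    // striping, puts that hash to different segments proceed in parallel.
    static ConcurrentHashMap<Integer, Integer> fill(final int threads,
                                                    final int perThread)
            throws InterruptedException {
        final ConcurrentHashMap<Integer, Integer> map =
                new ConcurrentHashMap<Integer, Integer>();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int id = t;
            workers[t] = new Thread(new Runnable() {
                public void run() {
                    // Each thread writes its own key range.
                    for (int i = 0; i < perThread; i++) {
                        map.put(id * perThread + i, i);
                    }
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join(); // wait for every writer to finish
        }
        return map;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(fill(4, 10000).size()); // prints 40000
    }
}
```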