The reality is much darker, because this issue hasn't been paid enough attention in VM implementations.
I just finished such a cache and I can tell you that every VM GC I've seen eats references like there's no tomorrow.

So the performance boost one would expect is only moderate.

Still, due to language limitations and other considerations, this is the best quick solution for the problem.

I am just imagining it as an academic exercise, but consider the following:

A cache manager object that maintains a map of available resources and associated reference counts. When a resource is requested, the reference count is incremented and a dynamic proxy object wrapping the actual resource is returned. The proxy object forwards all calls to the original resource, but in its finalize() method it decrements the reference count stored in the cache manager.
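A rough sketch of that design in Java, under some assumptions: the names (CacheManager, Resource, acquire, release) are illustrative, not from any real code. One wrinkle is that a dynamic proxy does not forward finalize() to its handler, so in this sketch the finalize() lives on the InvocationHandler itself, which becomes unreachable (and thus finalizable) together with the proxy that holds it:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

interface Resource {
    String read();
}

class CacheManager {
    private final Map<String, Resource> resources = new HashMap<>();
    private final Map<String, Integer> refCounts = new HashMap<>();

    synchronized Resource acquire(String key) {
        // Hypothetical loader: in real use this would build the expensive value.
        Resource real = resources.computeIfAbsent(key, k -> () -> "payload for " + k);
        refCounts.merge(key, 1, Integer::sum);
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                return m.invoke(real, args);   // forward every call to the real resource
            }

            // The proxy is the only holder of this handler, so when the proxy is
            // collected the handler's finalize() runs and drops the count.
            // WHEN that happens is entirely up to the GC.
            @Override
            protected void finalize() {
                release(key);
            }
        };
        return (Resource) Proxy.newProxyInstance(
                Resource.class.getClassLoader(),
                new Class<?>[] { Resource.class },
                handler);
    }

    synchronized void release(String key) {
        refCounts.merge(key, -1, Integer::sum);
    }

    synchronized int refCount(String key) {
        return refCounts.getOrDefault(key, 0);
    }
}
```

As the later replies point out, the release side here depends entirely on GC timing, which is the weak point of the whole scheme.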

The cache manager can then manually implement a garbage collector (perhaps storing values for a fixed time after their release...).

If performance is critical and generality is not so important, a proxy object specifically tailored to the payload could be used (to avoid the overhead of redispatching every call reflectively through Method.invoke).
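Such a tailored proxy might look like this sketch (Payload, PayloadProxy, and onRelease are made-up names): calls go through plain virtual dispatch instead of reflection.

```java
// A payload-specific interface the cache hands out (illustrative only).
interface Payload {
    byte[] data();
}

// Hand-written proxy: one line of direct forwarding per method,
// no reflective dispatch on the hot path.
class PayloadProxy implements Payload {
    private final Payload target;
    private final Runnable onRelease;   // hypothetical callback into the cache manager

    PayloadProxy(Payload target, Runnable onRelease) {
        this.target = target;
        this.onRelease = onRelease;
    }

    @Override
    public byte[] data() {
        return target.data();           // plain virtual dispatch
    }

    @Override
    protected void finalize() {
        onRelease.run();                // the timing is still up to the GC
    }
}
```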

This is interesting ... I've never benchmarked the difference between weak and soft. You should assume for future VMs, however, that soft is what you want for a cache, since it's defined to implement LRU-type strategies.

From the JavaDoc: "Otherwise no constraints are placed upon the time at which a soft reference will be cleared or the order in which a set of such references to different objects will be cleared. Virtual machine implementations are, however, encouraged to bias against clearing recently-created or recently-used soft references."
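For what that looks like in practice, here is a minimal soft-reference cache sketch (SoftCache is an invented name): entries stay around until the VM clears them under memory pressure, at which point get() just reloads.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    synchronized V get(K key, Function<K, V> loader) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (value == null) {                          // missing, or cleared by the GC
            value = loader.apply(key);
            map.put(key, new SoftReference<>(value));
        }
        return value;
    }
}
```

With WeakReference instead, entries could be cleared at the very next collection even with plenty of memory free, which is why soft references are the better fit for a cache.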

Nathan,
Upon reading your source code, now I understand your idea better.
One thing I don't feel comfortable with is: you depend on the VM's GC to call the proxy's finalize() method, which in turn notifies the CacheManager. Yes, finalize() will be called, but when? It is beyond your control.
Generally, a program should not depend on the VM's behavior to carry on its application logic: finalize() in Java is not equal to a destructor in C++.
In this case, you had better use an explicit request/return method and do your own reference counting.
Just my 2 cents.
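That explicit request/return style might look like the following sketch (CountedCache and its method names are illustrative, not from the thread) -- the client, not the GC, decides when a reference is given back:

```java
import java.util.HashMap;
import java.util.Map;

class CountedCache {
    private final Map<String, Integer> counts = new HashMap<>();

    synchronized String request(String key) {
        counts.merge(key, 1, Integer::sum);
        return "value for " + key;      // stand-in for the real expensive value
    }

    synchronized void release(String key) {
        counts.merge(key, -1, Integer::sum);
    }

    synchronized int count(String key) {
        return counts.getOrDefault(key, 0);
    }
}
```

A client would pair the calls in try/finally: `String v = cache.request(key); try { /* use v */ } finally { cache.release(key); }` -- so the count drops deterministically even on an exception.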

You are correct that you shouldn't rely on finalize() being called at any particular point if you are using it to release a valuable resource (like a DB connection or a file handle), but here the only resource is memory, which won't be released until finalize() is called anyway!

The WeakReference is only useful when you combine it with a strong (regular) reference. This way two parties can access the object but one party controls the life cycle.

A typical use is to allow the object to be reclaimed after the client has released its (strong) reference, while allowing both the client and the server (holding a WeakReference) to continue accessing the object in the meantime.
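A tiny sketch of that strong-plus-weak pattern (the names here are invented for illustration):

```java
import java.lang.ref.WeakReference;

class WeakHandoff {
    public static void main(String[] args) {
        StringBuilder value = new StringBuilder("expensive");  // client's strong reference
        WeakReference<StringBuilder> serverSide =
                new WeakReference<>(value);                    // server's weak reference
        // While the client still holds `value`, both parties reach the same object;
        // a weak reference is never cleared while its referent is strongly reachable.
        System.out.println(serverSide.get() == value);         // true
        value = null;  // client releases; the GC is now free to clear serverSide
    }
}
```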

I suggested the cache manager as a possible solution to the original question, not as an example of soft/weak references (although I chose to use a WeakReference internally).

The functionality attempted is to:
- take some sort of a key (perhaps a file name);
- construct an expensive value from it (perhaps reading and processing the file from disk);
- continue to reuse the object on successive requests for the same key;
- reclaim the value object _after everyone is done using it_ plus a timeout.

A simpler timeout policy would require less sophistication and work just as well for most situations. For example, if the timeout period started when the item was requested, then the "get" method could just record the time and schedule/reschedule its removal from the cache. The more complicated code is only needed if you want the cache timeout to start after garbage collection.
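That simpler get-restarts-the-clock policy could be sketched like this (TimeoutCache and all names are assumptions for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

class TimeoutCache<K, V> {
    private final Map<K, V> values = new ConcurrentHashMap<>();
    private final Map<K, ScheduledFuture<?>> evictions = new ConcurrentHashMap<>();
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);      // don't keep the JVM alive just for evictions
                return t;
            });
    private final long timeoutMillis;

    TimeoutCache(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    V get(K key, Function<K, V> loader) {
        V value = values.computeIfAbsent(key, loader);
        // Each get() restarts the clock: schedule a fresh eviction, cancel the old one.
        ScheduledFuture<?> previous = evictions.put(key,
                timer.schedule(() -> values.remove(key), timeoutMillis, TimeUnit.MILLISECONDS));
        if (previous != null) {
            previous.cancel(false);
        }
        return value;
    }

    boolean contains(K key) {
        return values.containsKey(key);
    }
}
```

Note there is no proxy and no finalize() here at all -- the tradeoff is that a value can be evicted while a client is still using it, which the proxy scheme was designed to prevent.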

At the other end of the scale, if cache expiration simply occurs after the last client reference is collected (the cache manager being the server here), then the cache manager could just store a Weak/SoftReference to the value object and let the normal garbage collector handle reclamation. This was the intent of the original poster, I believe.

It is only when you mix the two reclamation requirements that you need a proxy object. The proxy object is garbage collected, but it is cheap, and on its finalization it starts the cache-specific reclamation process. If the value is requested again before it is reclaimed, a new proxy can be created easily.

Is it worth the overhead of using a proxy object to lock values in the cache? It depends on your situation, but probably not.
