It is not done on the fly; there is a daemon that checks for new nodes and adds them to a database. This is the only way to do it without hammering perlmonks.org. It does mean that nodes can get out of date or end up in the wrong section. Hopefully I'll find a solution to this soon.

As for adding the code etc., I'm keen to keep it all quite simple and to send people back here for the actual nodes. I'm not aiming to replace perlmonks, just to add features that make it more useful.

In my ideal world, the RSS feeds and PM would be linked closely enough that the feeds were always up to date, or at worst a few minutes behind. And there'd be a link in the header of the HTML which Firefox could use to discover the RSS feed.

In my slightly less ideal world, we'd just get a link in the HTML to this RSS feed (the one for the current node only, whatever that current node is; some supernodes, such as Comment on, may not need it). This would still require a bit of work from the PM developers, although I would hope not much. Something like:
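The usual way browsers discover a feed is an "alternate" link element in the page's head; this is a sketch of what such a line could look like (the href is purely illustrative, not the feed service's real URL):

```html
<head>
  <!-- RSS autodiscovery: Firefox shows a live-bookmark icon for this -->
  <link rel="alternate" type="application/rss+xml"
        title="RSS feed for this node"
        href="http://example.com/rss/node/123" />
</head>
```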

Good stuff - well done. I have added this as a live bookmark straight into Firefox.

You mentioned that your process uses a daemon that checks for new nodes. This means that, firstly, you need to run a daemon and, secondly, you are interrogating PM on a regular basis.

You could simplify the model by caching the RSS for a particular page and interrogating the cache each time you wanted to serve a page. A cached page could time out after a short period of time (e.g. 10 mins). A cache miss (or timed out page) would initiate a request to the monastery. The result would be cached for next time. This means that when nobody was using the feed, PM wouldn't be hit.
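That cache-then-fetch model could be sketched like this in Perl. Everything here is illustrative: the cache directory, the 10-minute TTL, and `fetch_rss_from_pm()` (a stand-in for the real request to the monastery, e.g. via LWP) are assumptions, not the actual code behind the feed service.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $cache_dir = '/tmp/pm-rss-cache';   # assumed cache location
my $ttl       = 10 * 60;               # entries time out after 10 minutes
mkdir $cache_dir unless -d $cache_dir;

# Stand-in for the real request to perlmonks.org (e.g. via LWP).
sub fetch_rss_from_pm {
    my ($node_id) = @_;
    return "<rss><!-- feed for node $node_id --></rss>";
}

sub cached_rss {
    my ($node_id) = @_;
    my $file = "$cache_dir/$node_id.rss";

    # Cache hit: the file exists and is younger than the TTL.
    if (-e $file && time - (stat $file)[9] < $ttl) {
        open my $fh, '<', $file or die "read $file: $!";
        local $/;                      # slurp mode
        return <$fh>;
    }

    # Cache miss (or timed out): hit PM once, store the result for next time.
    my $rss = fetch_rss_from_pm($node_id);
    open my $fh, '>', $file or die "write $file: $!";
    print {$fh} $rss;
    close $fh;
    return $rss;
}
```

With this shape, PM is only contacted when somebody actually requests a feed and the cached copy has expired, so an unused feed costs the monastery nothing.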

Caching can be implemented using a simple file cache with timestamp checking, or something more involved using a database. Either way, you periodically need to clean the cache of expired documents. You would also want to guard against an attack where a malicious user tried to access every node as a feed and therefore used up lots of cache space.
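The periodic cleanup pass could be a short script run from cron, along these lines (a sketch assuming a simple file cache; the directory path and TTL are illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Spec;

# Delete cache files older than $max_age seconds; returns how many
# were removed. Capping the total number of files kept would be the
# natural extension to limit damage from a cache-filling attack.
sub clean_cache {
    my ($dir, $max_age) = @_;
    my $removed = 0;
    opendir my $dh, $dir or die "opendir $dir: $!";
    for my $entry (readdir $dh) {
        my $file = File::Spec->catfile($dir, $entry);
        next unless -f $file;                    # skips '.' and '..'
        if (time - (stat $file)[9] > $max_age) {
            unlink $file and $removed++;         # expired: drop it
        }
    }
    closedir $dh;
    return $removed;
}
```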

This article may be of interest with respect to the database solution.

I've added a page that briefly describes what is going on here. As you can see, I am caching stuff already. It turns out that it is not possible to get the latest nodes from perlmonks on demand, as it is not possible to know which nodes are needed for things like generating RSS feeds for threads.