I maintain a couple of small "planet" sites. If you are not familiar with planets,
they are sites that aggregate RSS/Atom feeds for a group of people related
somehow. It makes for a nice, single, thematic feed.

Recently, when moving them from one server to another, everything broke.
Old posts showed up as new, feeds that had not been updated in 2 years
kept appearing with all their posts on top... a disaster.

I could have gone to the old server and started debugging why rawdog was
doing that, or switched to planet, or looked for other software, or used an
online aggregator.

Instead, I started thinking... I had written a few RSS aggregators in the past...
Feedparser is again under active development... rawdog and planet seem to be
pretty much abandoned... how hard could it be to implement the minimal
planet software?

Well, not all that hard, that's how hard it was. It took me about 4 hours, and it was
not even difficult.

One reason this was easier than what planet and rawdog achieve is that I am
not writing a static site generator, because I already have one.
So all I need this program (I called it Smiljan) to do is:

Parse a list of feeds and store them in a database as needed.

Download those feeds (respecting etag and modified-since).

Parse those feeds looking for entries (feedparser does that).

Load those entries (or rather, a tiny subset of their data) in the database.

I implemented Smiljan as 3 doit tasks, which makes it very easy to integrate with Nikola
(if you know Nikola: add "from smiljan import *" to your dodo.py, plus a feeds file with the
feed list in rawdog format) and voilà, running this updates the planet:

doit load_feeds update_feeds generate_posts deploy
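For anyone who has not used doit: it discovers functions named `task_*` in your dodo.py and runs their actions, which is why a plain `from smiljan import *` is enough to wire the tasks in. A hedged sketch of what one such task could look like (the body here is an assumption, not Smiljan's implementation; a rawdog feed list has lines of the form `feed <period> <url>`):

```python
def load_feeds(path="feeds"):
    """Parse a rawdog-style feed list and return the feed URLs."""
    with open(path) as f:
        urls = [line.split()[2] for line in f
                if line.strip().startswith("feed ")]
    # ...here the real task would store the URLs in the database...
    return urls

def task_load_feeds():
    # doit picks up any task_* function; the dict describes the task.
    return {
        "actions": [load_feeds],
        "file_dep": ["feeds"],
    }
```

With the tasks importable from one module, the whole planet update is just the single `doit` invocation above, chained with Nikola's own `generate_posts` and `deploy`.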

Here is the code for smiljan.py, currently at the "gross hack that kinda works" stage. Enjoy!