I kind of surprised myself when I realized I hadn’t blogged about this yet. I talked about it with Max, I talked about it with folks in #fedora-infrastructure, and I’m giving a talk at SELF that circles around this very project.

There’s just one problem with that: a lot of the actual raw data isn’t publicly available.

Of course, we don’t want to go about publishing raw httpd access logs to public locations. We don’t want everybody to be able to see the IP addresses that visit fedoraproject.org. But we do want people to be able to come up with a number for themselves that answers questions like “how many distinct IP addresses visited fedoraproject.org between January 4 at 4:32 a.m. and February 2 at 6:28 p.m.?” without giving everybody access to our log servers.
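To make that question concrete, here’s a minimal sketch of how you could answer it straight from an httpd access log, assuming the standard Combined Log Format. The log path and time window are made up for illustration; this is the kind of computation datanommer would do for you.

```python
#!/usr/bin/env python
# Hypothetical sketch: count distinct client IPs in an httpd access log
# (Combined Log Format) between two timestamps. Paths and the time
# window are invented for illustration.
from datetime import datetime

STAMP_FORMAT = "%d/%b/%Y:%H:%M:%S"

def distinct_ips(log_path, start, end):
    """Return the number of distinct client IPs seen in [start, end]."""
    ips = set()
    with open(log_path) as log:
        for line in log:
            try:
                # The client IP is the first field on the line.
                ip = line.split(" ", 1)[0]
                # The timestamp sits between '[' and the timezone offset,
                # e.g. [04/Jan/2010:04:32:00 -0600]
                stamp = line.split("[", 1)[1].split(" ", 1)[0]
                when = datetime.strptime(stamp, STAMP_FORMAT)
            except (IndexError, ValueError):
                continue  # skip malformed lines
            if start <= when <= end:
                ips.add(ip)
    return len(ips)
```

A central API would run something like this on the log servers and hand you just the final number, so nobody ever sees the raw addresses.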

Or, even if the data is publicly available, it’s difficult to get at because the application doesn’t provide any sort of API (Mailman, for example). Writing a screen scraper for Mailman is non-trivial.

What if there was a central API that held raw data about the everyday activity of the Fedora community?

I plan to write that. And it shall be called “datanommer.” It’ll use the TG2 stack, at the request of Infrastructure, and although it will be designed around Fedora’s existing infrastructure, it will be agnostic enough that other free software projects can use it right out of the box.

Here’s a quick summary of how it’ll work.

Applications that already make log files will have those transferred to our log servers by normal means. Applications that don’t already make log files will either use an extension, module or the like to write one, or have an external script create one; those log files will then be transferred to the log servers as well.

A cron job will populate a database used for datanommer based on those log entries.
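Here’s a minimal sketch of what that hourly job might look like. The table schema and the one-event-per-line log format are assumptions made up for this example, not datanommer’s actual design.

```python
#!/usr/bin/env python
# Hypothetical sketch of the hourly cron job: read log entries and load
# them into a database table. The schema and the tab-separated log
# format are assumptions, not datanommer's actual design.
import sqlite3

def load_entries(log_path, db_path):
    """Load one-event-per-line log entries into the events table."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS events
                    (app TEXT, timestamp TEXT, message TEXT)""")
    with open(log_path) as log:
        for line in log:
            # Assume tab-separated fields: app, ISO timestamp, message.
            fields = line.rstrip("\n").split("\t")
            if len(fields) != 3:
                continue  # skip malformed lines
            conn.execute("INSERT INTO events VALUES (?, ?, ?)", fields)
    conn.commit()
    conn.close()
```

The real deployment would use Infrastructure’s database servers rather than SQLite, but the shape of the job is the same: parse, insert, commit.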

The TG2 front end of datanommer will provide a RESTful API for accessing the data in the database. Which applications feed data into datanommer, and what data each of them provides, will be automatically documented for maximum usability.

At first glance, this may seem like a lot of hoops just to get some data. But here are the specific reasons we’re doing it this way:

Less load on the app servers. If we programmed datanommer to poll each application for data about once per hour, the app servers and databases would come under fairly heavy load every time that data was generated. Parsing logs on the log servers avoids that entirely.

If datanommer is down for some reason, it doesn’t matter, because data entry is done directly to the database.

If the database is down for some reason, it doesn’t matter. The cron job will just wait another hour and populate the database then.

If the log servers are down for some reason, it doesn’t matter. Logs are generated locally on each app server, much like httpd. The log servers will go through and pick up the logs when they get around to it.

If the applications are down for some reason, they won’t be generating any data anyway, so it doesn’t matter. :)

For the end-user, accessing the data will be extremely easy. Since a REST API is driven by simple query parameters, you don’t have to be an expert to download data. It’ll be encoded in JSON, so it’s easy to use from any language (especially Python, the lingua franca of Fedora Infrastructure).
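A quick sketch of what a query might look like from Python. The endpoint URL, parameter names, and response shape are all invented for illustration; the real API doesn’t exist yet.

```python
# Hypothetical sketch of querying datanommer from Python. The endpoint,
# parameter names, and response fields are invented for illustration;
# the real API does not exist yet.
import json
from urllib.parse import urlencode

BASE = "https://admin.fedoraproject.org/datanommer/query"  # hypothetical

def build_query_url(**params):
    """Return the GET URL for a datanommer query."""
    return BASE + "?" + urlencode(sorted(params.items()))

url = build_query_url(metric="distinct_ips",
                      start="2010-01-04T04:32",
                      end="2010-02-02T18:28")

# The response would be plain JSON, trivial to load from any language:
response = '{"metric": "distinct_ips", "count": 4242}'
data = json.loads(response)
```

Since the whole request fits in a URL, you could just as easily fetch it with curl or a web browser; no client library required.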

Of course, your thoughts about this process are definitely wanted. You can comment on this blog post to leave your suggestions.

After talking with a few people recently and doing some self-analysis, I feel like it’s time to make a major shift in what I do within the Fedora Project. My Fedora résumé so far has consisted mostly of wiki czaring,1 package maintenance and other odds-and-ends jobs others kindly ask me to do.

I’m presently concerned with the second item in that list: between increased stress, less available time due to school, and the sheer speed of discussion around package maintenance and release engineering, it’s a losing game for me. In the next few weeks, I’ll be going through all of my packages and determining which ones have dead or slow upstreams, or bugs that I can’t resolve on my own. Those packages will likely be orphaned, and if nobody wants to care for them, so be it.

The two others? Wiki czaring is fine, but I need to improve on it a bit (see the footnote), and I always enjoy the random problems that I can quickly help people solve. That said, development on mw, supybot-fedora and other convenient software is (hopefully) Not Going Away™ any time soon.

Having pushed away my first Fedora love, package maintenance, I’ve found something new to focus on. Through my internship with Red Hat last year, I discovered that there is a large deficit of good statistics about our community. In fact, there’s a large deficit of good statistics about most free software communities, according to some random Google keywords I just tried, apart from “this is how many times our product has been downloaded.” I really loved the opportunity to combine my self-proclaimed mad Python skillz with answering other people’s questions, such as:

How many contributors does Fedora really have? And by what standards or filters do we count them?

How often is the wiki edited and when?

How many “things” has this random dude over here done? Do we consider that “active”?

How many vague statistically-related questions can we come up with on devel@l.fp.o or during a marketing meeting?

Some of these, obviously, have no answer. Yet.

When I finally graduate from high school, I’ll be pushing full swing into answering these sorts of things. Until then, you can help me make Fedora a better place by simply telling us what you want to see tallied up. I asked this about 9 months ago and I got a lot of responses — thank you. But with recent discussions about the future of Fedora and a lot of claims about our user and contributor bases not being backed up (not pointing fingers), I think there are even more questions that can be answered. Please add your statistically-inclined questions to [[Statistics 2.0]] and I’ll do my best in the near future to get them answered with statistics on our community.

Quick summary: Maintaining packages is a drag (for me) right now. I like taking questions and answering with numbers. I graduate soon. Ask questions.

1 While writing this I decided to Google for “fedora wiki czar”. What I found was a mysterious character who was appointed as such in a community touting full transparency. Mel brought this to my attention the other day: I really suck at providing transparency into the process of administering the wiki. It’s pretty much done on a whim. It shouldn’t be this way.

During an extremely long hackfest today at FUDCon Toronto 2009, I planned to work on resurrecting fuse-mediawiki from its 15-month slumber.

I failed.

After talking with Jesus M. Rodriguez for an hour or so, we both determined that FUSE is not the right way to go about what I want to accomplish. The only thing we were planning to use FUSE for was downloading the wiki pages; everything else would be done with helper scripts.

We discussed things like “pull” and “commit”. It started to sound like a bastardized VCS. So we wrote a bastardized VCS. :)

Introducing mw: a command-line program with subcommands like “fetch” and “commit” for working with MediaWiki installations. I spent all day building the framework for subcommands and all sorts of things, and ended up with the init and fetch commands to start a mw repo and fetch some pages.
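For the curious, a subcommand framework like that has a well-known shape in Python. This is just a rough sketch of that shape using the standard library’s argparse subparsers, not mw’s actual code; the command names are borrowed from the post, the behavior is stubbed out.

```python
#!/usr/bin/env python
# Rough sketch of a subcommand-style CLI in the spirit of mw's
# "init"/"fetch"/"commit". This is the general shape of such a
# framework, not mw's actual implementation.
import argparse

def cmd_init(args):
    # Stub: the real command would create the repo's metadata directory.
    return "initialized repo in %s" % args.path

def cmd_fetch(args):
    # Stub: the real command would download each page from the wiki.
    return "fetched %d page(s)" % len(args.pages)

def build_parser():
    parser = argparse.ArgumentParser(prog="mw")
    sub = parser.add_subparsers(dest="command")

    p_init = sub.add_parser("init", help="start a new mw repo")
    p_init.add_argument("path", nargs="?", default=".")
    p_init.set_defaults(func=cmd_init)

    p_fetch = sub.add_parser("fetch", help="download wiki pages")
    p_fetch.add_argument("pages", nargs="+")
    p_fetch.set_defaults(func=cmd_fetch)
    return parser

parser = build_parser()
args = parser.parse_args(["fetch", "Main_Page", "Statistics_2.0"])
print(args.func(args))  # prints "fetched 2 page(s)"
```

Each subparser dispatches to its own handler via `set_defaults(func=...)`, which is what makes adding a new subcommand (say, commit) a matter of a few lines.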

Currently: useless. Future: promising. I’m hoping that I can get the committing portion ready to roll within the week, and have fetch get all the pages of wikis and categories soonish.

Some key awesomeness: it attempts to merge instead of just giving up (haha, you suck, MediaWiki), plus unified diffs, logs, and anything else you really feel like doing.

Clone it now and read the README and HACKING:

git clone git://github.com/ianweller/mw.git

Edit: If you want to discuss this with me at FUDCon tomorrow, by all means do. Ping me on IRC to see where I’m at. :)

Needless to say, it didn’t get done. :) But it did get a healthy start, and even though I haven’t been extremely active in Fedora the last couple of months, it’s still alive and well.

This week, I started working on a research paper for my independent study at my high school. This independent study just happens to be continuing work on the project that I started a couple of months ago. The paper will draw mostly on primary sources: what people have said on Stats 2.0’s discussion page on the wiki. But I would love to talk with people on IRC about what they think is important to track, so we can analyze not only the growth of Fedora, but the growth of the community.

It doesn’t end with the one-semester independent study. I am presenting on this subject at UTOSC 2009, where I will discuss many of the variables of a free software community that can be tracked, and even provide example code and pointers on where to get started tracking them automatically.

So, there’s the state of Stats 2.0. Would you like to speak with me on IRC sometime about what you think is important to track?