For one of my classes, the final project is to work on a personal project for four weeks. Since I have never done mobile programming before, I decided I would use this project as a chance to teach myself the basics of Android development. Paul Tagliamonte recently blogged about a Google Summer of Code project idea for a Debian Android application. It was this post that motivated me to create a Debian Android application for my final project. We are only in the middle of week one, but my plan is to have at least a basic BTS and PTS interface done by the end of the project. Then, depending on a few things, I hope to turn it into a Google Summer of Code project or simply continue developing it on my own.

As I mentioned, I am only a few days into the project. One of the first hurdles I encountered was interacting with the SOAP interfaces available for the PTS and BTS. There are a few examples available on the wiki, but none for Java. A Google search also failed to turn up any clear and simple results. As a result, I decided to write a Python script that would accept requests from the Java Android application via a REST API and pass them on to the SOAP API before returning the result as JSON. Before I could finish writing the script, Paul mentioned that he had already written a similar program a few months ago. I immediately stopped working on my script and began preparing a patch for Paul's program to add support for some missing PTS SOAP functions as well as the entire BTS. After a few hours, I had the ability to query the PTS and BTS via a REST API from Java.
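The bridge itself is conceptually simple. Here is a minimal sketch using only the Python standard library, with the SOAP call stubbed out and the URL scheme invented for illustration (the real script's endpoints and SOAP plumbing differ):

```python
import json

def soap_get_status(bug_number):
    # Stand-in for the real SOAP query against bugs.debian.org;
    # the actual script would call a SOAP library here.
    return {"bug_num": bug_number, "severity": "normal", "done": ""}

def application(environ, start_response):
    # WSGI app mapping e.g. GET /bts/status/123456 to a JSON response.
    parts = environ.get("PATH_INFO", "").strip("/").split("/")
    if len(parts) == 3 and parts[:2] == ["bts", "status"] and parts[2].isdigit():
        body = json.dumps(soap_get_status(int(parts[2]))).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"unknown endpoint"]
```

With a shim like this in place, the Android side only needs an HTTP client and a JSON parser, rather than a full Java SOAP stack.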

While I normally prefer to use Vim for my programming, I decided to follow Google's recommendation and use Eclipse. I was able to follow their guide to get the SDK set up and a simple demo application running. One of the few hurdles I encountered was when I wanted to make web requests. Because such requests can stall, Android requires you to perform them in a separate thread. This only becomes apparent when you try to test your application on an actual device; the emulator will happily run the request in the main thread. Regardless, I was eventually able to modify the demo application to make requests to my Python REST server and then parse and display the result.

Here are some screenshots of the application running.

If you have any feature suggestions, questions, or comments, feel free to send me an email (nhandler@debian.org). The source code for the Android application is currently not available, but I plan to put it up in a git repository sometime in the near future. The source code for the Python script that provides a REST API to access the PTS and BTS SOAP interface is available on GitHub.

Today, I was officially approved by the Debian Account Managers as a Debian Developer (still waiting on keyring-maint and DSA). Over the years, I have seen many people complain about the New Member Process. The most common complaint concerns the (usually) long amount of time the process can take to complete. I am writing this blog post to provide one more perspective on the process. Hopefully, it will prove useful to people considering starting the New Member Process.

The most difficult part about the New Member Process for me had to do with getting my GPG key signed by Debian Developers. I have not been able to attend any large conferences, which are great places to get your key signed. I also have not been able to meet up with the few Debian Developers living in/around Chicago. As a result, I was forced to patiently wait to start the NM process. This waiting period lasted a couple of years. It wasn't until this October, at the Association for Computing Machinery at Urbana-Champaign's Reflections | Projections Conference, that this changed. Stefano Zacchiroli was present to give a talk about Debian. Asheesh Laroia was also present to lead an OpenHatch Workshop about contributing to open source projects. Both of these Developers were more than willing to sign my key when I asked. If you look at my key, you will see that these signatures were made on October 7 and October 9, 2012.

With the signatures out of the way, the next step in the process was to actually apply. Since I did not already have an account in the system, I had to send an email to the Front Desk and have them enter my information into the system. Details on this step, along with a sample email, are available here.

Once I was in the system, the next step was to get some Debian Developers to serve as my advocates. Advocates should be Debian Developers you have worked closely with, and usually include your sponsor(s). If these people believe you are ready to become a Debian Developer, they write a message describing the work you have been doing with them and why they feel you are ready. Paul Tagliamonte had helped review and sponsor a number of my uploads. I had been working with him for a number of years, and he encouraged and helped me to reach this milestone. He served as my first advocate. Gregor Herrmann is responsible for getting me started in contributing to Debian. When I first tried to get involved, I had a hard time finding sponsors for my uploads and bugs to work on. Eventually, I discovered the Debian Perl Group. This team collectively maintains most of the Perl modules that are included in the Debian repositories. Gregor and the other Debian Developers on the team were really good about reviewing and sponsoring uploads in a very timely manner. This allowed me to learn quickly and make a number of contributions to Debian. He served as my second advocate.

With my advocations taken care of, the next step in the process was for the Front Desk to assign me an Application Manager and for the Application Manager to accept the appointment. Thijs Kinkhorst was appointed as my Application Manager and agreed to take on the task. For those of you who might not know, the Application Manager is in charge of asking the applicant questions, collecting information, and ultimately making a recommendation to the Debian Account Managers about whether or not they should accept the applicant as a Developer. They can go about this in a variety of ways, but most choose to utilize a set of template questions that are adjusted slightly on a per-applicant basis.

Remember that period of waiting to get my GPG key signed? I had used that time to go through and prepare answers to most of the template questions. This served two purposes. First, it allowed me to prove to myself that I had the knowledge to become a Debian Developer. Second, it helped to greatly speed up the New Member process once I actually applied. There were some questions that were added/removed/modified, but by answering the template questions beforehand, I had become quite familiar with documents such as the Debian Policy and the Debian Developer's Reference. These documents are the basis for almost all questions that are asked. After several rounds of questions, testing my knowledge of Debian's philosophy and procedures as well as my technical knowledge and skills, some of my uploads were reviewed. This is a pretty standard step. Be prepared to explain any changes you made (or chose not to make) in your uploads. If you have any outstanding bugs or issues with your packages, you might also be asked to resolve them. Eventually, once your Application Manager has collected enough information to ensure you are capable of becoming a Debian Developer, they will send their recommendation and a brief biography about you to the debian-newmaint mailing list and forward all information and emails from you to the Debian Account Managers (after the Front Desk checks and confirms that all of the important information is present).

The Debian Account Managers have the actual authority to approve new Debian Developers. They will review all information sent to them and reach their own decision. If they approve your application, they will submit requests for your GPG key to be added to the Debian Keyring and for an account to be created for you in the Debian systems. At this point, the New Member process is complete. For me, it took exactly one month from the time I officially applied to become a Debian Developer until the time my application was approved by the Debian Account Managers. Hopefully, it will not be long until my GPG key is added to the keyring and my account is created. I feel the entire process went by very quickly and was pain-free. Hopefully, this blog post will help to encourage more people to apply to become Debian Developers.

While going through the NM Process, I spent a lot of time on https://nm.debian.org. At the bottom of the page, there is a TODO list of some new features they would like to implement. One item on the list jumped out at me as something I was capable of completing: IRC bot to update stats in the #debian-newmaint channel topic. I immediately reached out to Enrico Zini, who was very supportive of the idea. He also explained how he wanted to expand on that idea and have the bot send updates to the channel whenever the progress of an applicant changes. Thanks to Paul Tagliamonte, I was able to get my hands on a copy of the nm.debian.org database (with private information replaced with dummy data). This allowed me to write some code to parse the database and create counts of the number of people waiting in the various queues. I also created a list of events that occurred in the last n days. Enrico then took this code and modified it to integrate it into the website. You can specify the number of days to show events for, and even have the information produced in JSON format. This information is generated upon requesting the page, so it is always up-to-date. It took a couple of rounds of revisions to ensure that the website was exposing all of the necessary information in the JSON export. At this stage, I converted the code to be an IRC bot. Based on prior experience, I decided to implement this as an irssi script. The bot is currently running as nmbot in #debian-newmaint on OFTC. Every minute, it fetches the JSON statistics to see if any new events have occurred. If they have, it updates the topic and sends announcements as necessary.
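The bot's core loop is little more than diffing the latest JSON export against what it last announced. A rough sketch of that logic in Python (the field names in the export are illustrative, not the site's actual schema):

```python
def format_topic(counts):
    # counts: queue name -> number of applicants waiting,
    # e.g. {"AM": 4, "FD": 2, "DAM": 1}
    return "NM queue | " + " | ".join(
        f"{queue}: {n}" for queue, n in sorted(counts.items()))

def new_events(events, last_seen_ts):
    # events: list of {"ts": ..., "text": ...} entries from the JSON
    # export; return only those not yet announced, oldest first.
    fresh = [e for e in events if e["ts"] > last_seen_ts]
    return sorted(fresh, key=lambda e: e["ts"])
```

Each pass, the bot would rebuild the topic string from the fresh counts, compare it to the current channel topic, and only issue a TOPIC change (and per-event announcements) when something actually differs.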

While the bot is running, there are still a few more things on its TODO list. First, we need to move it to a stable and permanent home. Running it off of my personal laptop is fine for testing, but it is not a long term solution. Luckily, Joerg Jaspert (Ganneff) has graciously offered to host the bot. He also made the suggestion of converting the bot to a Supybot plugin so that it could be integrated into the existing ftpmaster bot (dak). The bot's code is currently pretty simple, so I do not expect too much difficulty in converting it to Python/Supybot. One last item on the list is something that Enrico is working on implementing. He is going to have the website generate static versions of the JSON output whenever an applicant's progress changes. The bot could then fetch this file, which would reduce the number of times the site needs to generate the JSON. The code for the bot is available in a public git repository, and feedback/suggestions are always appreciated.

Today in #debian-mentors, a brief discussion about screencasts came up. Some people felt that a series of short screencasts might be a nice addition to the existing Debian Developer documentation. After seeing the popularity of Ubuntu's various video series over the years, I tend to agree that video documentation can be a nice way to supplement traditional text-based documentation. No, this is not designed to replace the existing documentation; it is merely designed to provide an alternative method for people to learn the material.

From past experience, I know that if I were to start making screencasts by myself, I would end up making one or two before getting bored and moving on to a new project. That is why I am posing a challenge to the Debian community: I will make one screencast for every screencast made by someone in the Debian community. These screencasts can and should be short (only a couple of minutes in length) and focus on accomplishing one small task or explaining a single concept related to Debian Development. I discovered a Debian Screencasts page on the Debian wiki that links to Ubuntu's guide to creating screencasts. While I have done some traditional documentation work in the past, I have never really tried doing screencasts. As a result, this will be a bit of a learning experience for me at first.

If you are interested in taking me up on this challenge, simply send me an email (nhandler [at] ubuntu [dot] com) or poke me on IRC (nhandler@{freenode,oftc}). This will allow us to coordinate and plan the screencasts. As I mentioned earlier, I have no experience creating screencasts, so if you are familiar with the Debian Development processes and think this challenge sounds interesting, feel free to contact me.

Over the years, I have had many different blogs. I mainly use them to post updates regarding my work in the FOSS community, with most of these relating to Ubuntu/Debian. My most recent blog was a WordPress instance I ran on a VPS provided by a friend. This worked pretty well, but for various reasons, I no longer have access to that box.

Earlier today, I started thinking about launching a new blog. The first problem I had to overcome was deciding where to host it. Many of the groups I am involved with provide members with a ~/public_html directory that can be used to host simple HTML+Javascript+CSS websites. However, most of these sites do not have the ability to use PHP, CGI, or access databases. I realized rather quickly that if I could find a blogging platform that runs from a ~/public_html directory on one of those machines, I would never again be stuck without a host for my blog. There is a nearly infinite number of free web hosts that can handle simple HTML sites.

One key feature of any blog is the ability for readers to subscribe to it. Usually, the blogging platform handles automatically updating the feed when a new post is published. However, my simple HTML blog would not be able to do that. Since I knew I did not want to manually update the feeds, I started looking for a way to generate the feed locally on my personal machine and then push it to the remote site that is hosting my blog.
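Generating a feed locally is straightforward with the standard library. A minimal sketch (the channel details and post fields are placeholders, and a real feed would also include descriptions and GUIDs):

```python
import xml.etree.ElementTree as ET

def build_rss(title, link, posts):
    # posts: list of dicts with "title", "url", and "date" keys.
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["title"]
        ET.SubElement(item, "link").text = post["url"]
        ET.SubElement(item, "pubDate").text = post["date"]
    return ET.tostring(rss, encoding="unicode")
```

The resulting XML file can then be copied to the remote host alongside the generated HTML, so the static site still offers a subscribable feed.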

About this time, I started compiling a list of features I would need to make this blogging platform usable. In no particular order, they included:

Produce a blog that only uses HTML, CSS, and Javascript that can be run from a ~/public_html directory

Output an RSS feed for the site and per-tag RSS feeds that people can subscribe to

Display an index of the most recent blog posts and allow people to browse an archive of older posts

Allow formatting blog posts through HTML or Markdown

Be maintainable in a VCS

After some searching, I found an application developed by Steve Kemp called Chronicle. Chronicle supports all of the features I described plus many more. It is also very quick and easy to set up. If you are running Debian, Chronicle is available from the repositories.

Start by installing Chronicle. Note that these steps should all be done on your personal machine, not the machine that is going to host your blog.

sudo apt-get install chronicle

There are a few other packages that you can install to extend the functionality of Chronicle.

Next, create a few folders to hold our blog. We will store our raw blog posts in ~/blog/input and have the generated HTML versions go in ~/blog/output.

mkdir -p ~/blog/{input,output}

Now, we need to configure some preferences. We will base our preferences on the global configuration file shipped with the package.

cp /etc/chroniclerc ~/.chroniclerc

Open up the new ~/.chroniclerc file in your favorite editor. It is pretty well commented, but I will still explain the changes I made.

input is the path to where your blog posts can be found. These are the raw blog posts, not the generated HTML versions of the posts. In our example, this should be set to ~/blog/input.

pattern tells Chronicle which files in the input directory are blog posts. We will use a .txt extension to denote these files. Therefore, set pattern to *.txt.

output is the directory where Chronicle should store the generated HTML versions of the blog posts. This should be ~/blog/output.

theme-dir is the directory that holds all of the themes. Chronicle ships with a couple of sample themes that you can use. Set this to /usr/share/chronicle/themes.

theme is the name of the theme you want to use. We will just use the default theme, so set this to default.

entry-count specifies how many blog posts should be displayed on the HTML index page for the blog. I like to keep this fairly small, so I set mine to 10.

rss-count is similar to entry-count, but it specifies the number of blog posts to include in the RSS feed. I also set this to 10.

format specifies the format used in the raw blog posts. I write mine in markdown, but multimarkdown, textile, and html are also supported.

Since most ~/public_html directories do not support CGI, we will disable comments by setting no-comments to 1.

suffix gets appended to the filenames of all of the generated blog posts. Set this to .html.

url_prefix is the first part of the URL for the blog. If the index.html file for the blog is going to be available at http://example.com/~user/blog/, set this to http://example.com/~user/blog/.

blog_title is whatever you want your blog to be called. The title is displayed at the top of the site. Set it to something like Your Name's Blog.

post-build is a very useful setting. It allows us to have Chronicle perform some action after it finishes running. We will use this to copy our blog to the remote host's public_html directory. Set this field to something like scp -r output/* user@example.com:~/public_html/blog. You will probably want to set up SSH keys so that you do not need to enter a password each time.

Finally, date-archive-path will allow you to get the nice year/month directories to sort your posts. Set this to 1.
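Putting the settings above together, a ~/.chroniclerc along these lines would match this walkthrough (paths and URLs adjusted to your own setup; the exact syntax is documented by the comments in the shipped /etc/chroniclerc):

```
input = ~/blog/input
pattern = *.txt
output = ~/blog/output
theme-dir = /usr/share/chronicle/themes
theme = default
entry-count = 10
rss-count = 10
format = markdown
no-comments = 1
suffix = .html
url_prefix = http://example.com/~user/blog/
blog_title = Your Name's Blog
post-build = scp -r output/* user@example.com:~/public_html/blog
date-archive-path = 1
```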

Every post must begin with a short metadata block, which Chronicle uses to figure out the post title, tags, and date. It has some logic to try and guess this information if it is missing, but it is probably safest if you specify it. The empty line between the metadata and the post content is also required.
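For instance, a first post saved as ~/blog/input/hello.txt might look like this (the field names follow Chronicle's conventions; the content is illustrative):

```
Title: My First Post
Tags: debian, blogging
Date: 7th January 2013

This is the body of the post, written in **Markdown**.
```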

Remember, we configured Chronicle to treat these as Markdown files, so feel free to use any Markdown formatting you wish. Finally, run chronicle from a terminal. If all goes well, it will generate your blog and copy the files to the remote host. You can now go to http://example.com/~user/blog to view it.

One final suggestion would be to store the entire blog in a git repository. This will allow you to utilize all of the features of a VCS, such as reverting any accidental changes or allowing multiple people to collaborate on a blog post at the same time.