Archive for the ‘Project’ Category

FSCONS 2011 is now over, but fear not, FSCONS 2012 is only about a year away.

All of the participants (volunteers, speakers and visitors alike) with whom I’ve had the pleasure of speaking had only good things to say.

The overall feeling is that this was the best FSCONS yet. I am inclined to agree (though of course I am biased), given how few incidents there were at all.

There were some, which is to be expected, but nothing really major, and nothing showstopping.

There were some close calls, but (and this is one of the many GREAT things about FSCONS: the visitors) in most of them, visitors stepped up, graciously lending their own equipment and thereby saving the day.

And this is what I love about FSCONS. Everyone participating, no matter who they are or what they do, brings their very best.

That, and getting to meet people I’ve only otherwise known through emails.

Finally, rest assured that I have a list of all the small things I observed to be in need of improvement.

I found myself needing to synchronize a folder between two systems, i.e. new files added on the “source” system needed to be added to the “destination”. This is a simple:

$ rsync -av /path/to/source/directory/ user@remote:/path/to/destination/directory/
(please note the trailing slashes in BOTH paths, especially the one in the source path, which tells rsync to copy the files INSIDE the directory, not the directory itself).

However, for the first time I also needed that all removed files in source should be removed at the destination as well. I found this blogpost which gave me the information I needed.

This is… not trickier, but… not something you’d want to frakk up.

So, the flag which tells rsync to also delete files at the destination that have been removed from the source is simply --delete.

BEFORE you do this you REALLY should check that the operation will perform correctly by also attaching the flag --dry-run (which simulates the real deal, without making any changes on the remote end). Very nice.

Last week was rather eventful, the largest thing being the one thing I naturally forgot to write about (go figure…): my appointment as deputy coordinator of FSFE Sweden. This is nice!

That has however meant that this week hasn’t seemed as eventful. For some reason I got off to a really slow start of the week; the only worthwhile things to write about started happening this Thursday.

nginx and password-protected directories

My father asked me for help in getting a bunch of files in his possession over to some friends of his (who are, to the best of my knowledge, as computer illiterate as he is).

This meant that my first idea, to just set up an FTP-account on my server and have them log into that and download the files, wouldn’t work. I would need something simpler, but still with restricted access.

Preferably they’d just surf to some place, enter a password, and download a zip-archive (since all Windows versions since XP handle zip-archives like compressed folders, this should fall within the realm of what a computer user should be able to handle).

Something like Apache’s htpasswd stuff. And I wanted to do it with nginx, because I really want to get better at using and working with it.

The first task, obviously, was to check if nginx had that capability at all (it does), and if so, how it works.

A note here though: I first tried to set a password containing Swedish characters (åäö) and this didn’t work at all.
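For the record, the capability turned out to live in nginx’s auth_basic directives; a minimal sketch of such a location block (the path, realm text and password file location are placeholders) looks something like:

```nginx
location /files/ {
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;  # user:hash lines
}
```

The password file uses the same user:hash format as Apache’s htpasswd, so it can be generated with the htpasswd tool, or with openssl passwd if Apache isn’t around.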

ticket

I have been wrestling with the question of how I would manage to create a database which individual users can read from and write to, but which they shouldn’t be able to remove from the filesystem (I know, a DROP or DELETE command can be just as devastating, so I must continue thinking about this).

alamar at StackOverflow solved this for me. The solution is to let the file be read and writeable, but have the parent directory not be writable.

This however makes it impossible to add new files to the directory. But since I am working with the idea that there should be a “ticket” user with a corresponding “ticket” group, and that every individual who should have access to the tracker will be in that ticket-group, the directory could disallow writing for group and other, leaving the ticket-user free to create more databases…
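The scheme above translates into something like this (the directory and file names are purely illustrative, and ownership is only hinted at):

```shell
# Directory: only the owner (the "ticket" user) may create or remove
# entries; group and other may only enter and list it.
mkdir -p tickets
chmod 755 tickets

# Database file: owner and group may read and write its contents,
# but the non-writable directory prevents unlinking or renaming it.
touch tickets/project.db
chmod 664 tickets/project.db

# In the real setup, ownership would also be set, e.g.:
# chown ticket:ticket tickets tickets/project.db
```

The trick is that removing a file requires write permission on its directory, not on the file itself.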

Although I now realize that this would make it easy for anyone in the ticket-group to screw around with any ticket database (insert, update, delete).

This clearly needs more design thought put behind it.

ArchLinux and MySQL client binaries

I needed to interact with a MySQL database on another server, but MySQL (the server) wasn’t installed on my desktop, and I didn’t really want to have to install the entire server just to get hold of the mysql client binary so that I could interact with the remote server.

Turns out that in ArchLinux, the mysql binaries are split into a client and a server package: perfect for when you wish to interact with MySQL databases, but not have the entire frakking server installed on your machine.

Accessibility, HTML and myConf

Since FSCONS is striving to be accessible, and the little “myConf” technology demonstrator I wrote the other week was intended for FSCONS, I have been trying to figure out how to make that as accessible as it can be (first of all, I have no idea whatsoever if a screen reader even parses javascript, and as the myConf demonstrator is mostly implemented in jQuery, that might present itself as a showstopper).

But given the assumption that a screen reader can parse javascript, and will output that big ol’ table which is created, how do I make an html table accessible? Since a screen reader makes use of the html code, and even a sighted person could get tripped up trying to parse the markup of a table, this looks like a worthwhile venture.
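From what I’ve gathered so far, the basics for tables seem to be a caption plus header cells with an explicit scope, so that a screen reader can associate each data cell with its headers. Something along these lines (the data is made up):

```html
<table>
  <caption>Hours per project, October</caption>
  <thead>
    <tr>
      <th scope="col">Project</th>
      <th scope="col">Hours</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th scope="row">timetrack</th>
      <td>12</td>
    </tr>
  </tbody>
</table>
```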

Sadly, like all documents from w3.org, these just leave me more confused than when I began, with none of my questions answered. Luckily, there seem to be other resources that are more knowledgeable and use more understandable wording/examples, although I haven’t had the time to read through them all yet (I’m mostly just dumping them here so that I’ll be able to find the pages again once I have the time to look into it):

There are at least three features I feel are currently lacking in my timetrack suite, and two of them should be easier to add than the third.

Monthly breakdown and tagging

Soonish there should be another add-on, presenting hours but broken down on a per month basis.

This would however necessitate an update of the timetrack storage format (I am leaning towards using SQLite).

Tagging is the other simple feature I feel is missing, and again, it would be a much simpler feat to accomplish if stored using SQLite.

The downside to this, of course, would be the dependency on SQLite. I really don’t like to introduce more dependencies than is necessary.

I am, unfortunately, not smart enough to figure out a better (plaintext) format that would be able to accommodate tags, at least not without making the parsing a bloody mess.

Automatic session detection

In addition to that, my introductory post to timetrack yielded a comment from archie which got me thinking. It really would be nice if the sessions started on their own.

I am thinking that for actual coding sessions, this shouldn’t be all that impossible.

For planning and design work (I am thinking mental modelling and time spent just grasping the concepts) it would be harder (and if I do go down the SQLite route, I suspect I’d need to create another script just for simple addition into the database after the fact).

However, for file-bound operations one could try to see if an approach similar to what fsniper does couldn’t be used. The technical details of fsniper are described as:

Most suggested uses of fsniper have always revolved around doing something to the file that was just modified, but from what I can tell, there shouldn’t be any reason one couldn’t just execute any old script, doing just about anything, for any purpose, when a file in the correct directory has been modified.

This would all hinge on one small detail though: That there is some event in inotify similar to the one fsniper listens for, but for when a file is opened for writing. (This might however be indistinguishable from a file being opened for just reading, and then it would trigger on just about anything…)

Of course, this would also mean that we need some way of graphically asking the user if a session should be started (the script won’t be executed from a visible shell), and I am thinking about Zenity for that.

But the best thing about this is that this solution, with inotify, something fsniper-ish and Zenity, would represent only optional dependencies (if/when I manage to get some working code for it).
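As a very rough sketch of what such an optional helper could look like (assuming inotify-tools for inotifywait, plus Zenity; the watch directory, the function name and the choice of event are all my own placeholders):

```shell
#!/bin/bash
# Sketch: watch a project directory and offer to start a session when
# a file is written. inotifywait (inotify-tools) and zenity are both
# assumed to be installed; WATCH_DIR and the timetrack call are
# placeholders for the real setup.
WATCH_DIR="${WATCH_DIR:-$HOME/projects}"

watch_for_sessions() {
    command -v inotifywait >/dev/null || { echo "inotify-tools not installed" >&2; return 1; }
    inotifywait -m -e close_write --format '%w%f' "$WATCH_DIR" |
    while read -r file; do
        # Ask graphically, since this won't run from a visible shell
        if zenity --question --text "Start a timetrack session for $file?"; then
            timetrack
        fi
    done
}
```

close_write fires when a file that was opened for writing is closed; whether that is close enough to “opened for writing”, without too many false positives, is exactly the kind of thing that would need experimenting.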

Although I have already mentioned timetrack in at least two posts, I feel it is time it got the introduction it deserves.

Timetrack is a suite of two (shell-)scripts (timetrack and timesummer) I hacked together one evening to help me keep track of the time I spend on various projects.

There are a couple of pre-existing pieces of software I could have used instead, most notably Hamster, but two things got in the way of that:

I don’t like the idea of using a full blown GUI for something so simple, and

I don’t run Gnome

There are others, I am sure, and I did search the repositories for a CLI time-tracker, finding nothing. So I built a simple one myself.

The timetrack script does three things, and only these three things:

If there is no active (open) session, create a new session

If there is an active (open) session, close it down

If a session has just been closed, ask the user what was done, calculate the length of the session, and document the session length, along with the user input

In short, timetrack tracks time.
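The three things above can be sketched roughly like so (a toy reimplementation, not the actual script; the file format is invented, and the session summary is taken as an argument where the real script asks interactively):

```shell
#!/bin/bash
# Toy sketch of the timetrack idea: toggle a session open/closed in a
# per-user file. The OPEN/CLOSED line format is an invention for this
# sketch, not necessarily the real storage format.
TRACKFILE="${TRACKFILE:-$HOME/.timetrack.$USER}"

timetrack() {
    local now last
    now=$(date +%s)
    last=$(tail -n 1 "$TRACKFILE" 2>/dev/null)
    case $last in
        OPEN\ *)
            # An open session exists: close it, record length and summary
            local start=${last#OPEN }
            local mins=$(( (now - start) / 60 ))
            printf 'CLOSED %s (%d min): %s\n' "$now" "$mins" "$1" >> "$TRACKFILE"
            ;;
        *)
            # No open session: start one
            printf 'OPEN %s\n' "$now" >> "$TRACKFILE"
            ;;
    esac
}
```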

Once I had finished up timetrack, I realized that there was no way I was ever going to be energetic enough to sift through the timetrack file and calculate the combined number of hours and minutes, and that is how timesummer came to be. It sums up the times documented in the timetrack file.
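The summing itself is the easy part. Assuming each closed session line records its length as e.g. “(45 min)” (an assumption about the file format, not necessarily the real one), the core of timesummer could look like:

```shell
# Toy version of the summing step; the "(N min)" session-length
# format is an assumption, not necessarily the real timetrack format.
timesummer() {
    grep -oE '[0-9]+ min' "$1" |
    awk '{ total += $1 }
         END { printf "%d hours, %d minutes\n", total / 60, total % 60 }'
}
```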

Then I got to thinking, for some of these projects, I actually get paid, so for some of these projects, it would make sense to add in a calculation about how much I should charge people.

But as this isn’t something that would/should be done for every project, I got to thinking about creating an add-ons system, and I remembered an old blog post I’d read a couple of years back. It gave me some ideas, and after having consulted google about bash and introspection, I found out about compgen.

Now, to be fair, there is a whole host of limitations imposed on this solution (like how to name functions, both in order to find them and to avoid collisions), but if one instead considers it a rather barren API, it sort of makes sense.

With the first implementation of the add-on functionality, there really wasn’t a whole lot one could do to extend the script, as I didn’t use introspection, and just sourced every script in a given directory after having performed the bulk of the work in timesummer already. With the new approach however, there are five distinct phases, during each of which the add-on may choose to include a function for doing some work.

The phases, in order, are:

initialization,

pre-processing,

processing,

post-processing, and

presentation

Phase #3 assumes that there is a loop in the master script which does some type of processing upon a list of items, all of which one might also want to do some other processing on, via add-ons.
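A sketch of how compgen-based introspection can drive such phases (the add-on directory, the *_&lt;phase&gt; naming convention and the hook names are my own illustration, not necessarily timesummer’s actual API):

```shell
#!/bin/bash
# Source every add-on script (if any exist)
for addon in "$HOME/.timetrack.d"/*.sh; do
    [ -r "$addon" ] && . "$addon"
done

# Run all hook functions registered for a given phase, discovered by
# introspection: compgen -A function lists every defined function.
run_phase() {
    local phase=$1; shift
    local fn
    for fn in $(compgen -A function | grep "_${phase}\$"); do
        "$fn" "$@"
    done
}

run_phase initialization
# ...and inside the main processing loop, per item:
# run_phase processing "$item"
```

An add-on then only has to define, say, cost_presentation() to be picked up during the presentation phase; compgen is a bash builtin, so this ties the script to bash.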

One interesting add-on I might look into writing soon would differentiate between time spent programming, time spent designing, and perhaps time spent in meetings, etc.

Going down the statistics road gives me an idea for a better name than timesummer: timestats.

Of course, this would mean some type of change to timetrack, in order to confine a session down to a defined and quantifiable topic (i.e., it might be good to have each session tagged), and let the add-on work on tags instead of on human language (the user input).

This blog post has taken an interesting turn. Instead of me (just) announcing things, I have gotten ideas, as I write this, about what will come next in timetrack’s development. Pretty neat!

I wrote about timetrack / timesummer last week as well (and I still haven’t come up with a better name for timesummer), but as I added some new stuff to the script and wanted to give link love to the blog that gave me the means to add the features, I’m writing about it again. So basically, timetrack is all fine and dandy; it does its one thing and it does it rather well.

timesummer however works at a slightly higher level. While timetrack deals with one session at a time, and knows only of that one session while working with it, timesummer, which adds up time spent overall, could potentially do more.

However, because I was stupid enough to write that, the only example of potential expansion I can come up with is the one example add-on which I have implemented.

It is a simple script which adds a cost calculation to the output. In any case, the way to do this is with generic shell hooks and this has proven quite nifty, so I will be sure to work with these more in the future.

I have come to realize that this is too simplistic an approach, because it limits what an add-on could do. So I am working on a slightly more complex solution (which fortunately, it seems right now, won’t make the add-ons all that much more complex). Expect an update within the next week.

Links

The maker’s schedule
In theory, this seems like a pretty sound idea, and I am thinking about trying it out now while I am still unconstrained enough to do so.

html5 web workers explained
A simple four-step guide to understanding html5 web workers (i.e. wrap your head around multi-threaded Javascript in four steps)

I have no formal education within the field of IT security, and there may, unbeknownst to me, be millions of ways to circumvent the security this suite offers.

Naturally I have tried to make it as safe as I can since I am using it myself, that said, I offer no guarantees that a determined aggressor couldn’t make short work of the protection offered.

If you know that there are threats aimed at you, you should probably also know that this software is not for you.

This is meant to be used by ordinary people like myself, who’d just like to improve the security of their various accounts and services by using unique, and probably longer and stronger, passwords for each and every service they subscribe or otherwise have access to.

passtore has worked well for me over the last 6+ months I have been using it, but mind you, to the best of my knowledge there are no determined efforts by an aggressor to compromise my security.

Behind the scenes passtore uses GPG to store passwords in a file ~/.gnupg/passwords.gpg, and optionally depends on xclip (for copying a password to the clipboard) and pwgen (for generating strong (long and full of entropy) random (well, as random as a deterministic system can make them) passwords).

As it is a CLI-based suite, it is also rather easily scriptable (not to the point of allowing full automation, since the user will need to input the GPG privkey passphrase), and it has been successfully plugged into other applications such as mutt, msmtp and offlineimap.
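To give a feel for why it scripts so easily, the core of a getpass-style lookup might reduce to something like this (a hypothetical sketch: the one-pair-per-line storage format is my guess, not necessarily passtore’s real format):

```shell
# Hypothetical sketch of a getpass-like lookup: decrypt the store and
# pick out one host's password. The "host password" line format is an
# assumption made for this sketch.
getpass() {
    local host=$1
    gpg --quiet --decrypt ~/.gnupg/passwords.gpg 2>/dev/null |
        awk -v h="$host" '$1 == h { print $2 }'
}

# Optional xclip integration, in the spirit of "getpass -c <host>":
# getpass example.com | xclip -selection clipboard
```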

There are a couple of gotchas that one needs to be aware of for a moderately safe operation of these scripts:

The protection offered is not stronger than the strength of the passphrase securing your GPG private key

If the aggressor gets hold of ~/.gnupg/passwords.gpg and your GPG private key s/he could potentially brute-force it open offline in their own good time

If the aggressor can modify the scripts ({add,get,mod,del}pass) or the ~/.passtorerc s/he can compromise your security

If anyone else could modify your ~/.gnupg/passwords.gpg file, they could lock you out of all the places with passwords protected by passtore

If the aggressor could modify your ~/.passtorerc file, s/he could add another (unauthorized) recipient to the ~/.gnupg/passwords.gpg file

If the optional dependency xclip is used (getpass -c <host>) the password will be stored in the X clipboard until overwritten by something else

While unencrypted in the clipboard, there is a minute risk that swapping occurs, pushing the password onto the swap space; passtore does not perform any sort of harddisk or RAM scrubbing

If you forget the passphrase for your GPG private key, you won’t be able to unlock the ~/.gnupg/passwords.gpg file… ever

If either your GPG private key, or the ~/.gnupg/passwords.gpg file is corrupted, you are truly out of luck

Some services will seem to accept a long, special-charactered password, right up until you have actually changed it and try to log in, at which point you are locked out; moral of the story? MAKE SURE THAT THE EMAIL ADDRESS YOU PROVIDED IS A REAL ONE SO YOU CAN RESET THE PASSWORD!

Most of these issues can be handled with common sense and sane file permissions (0700 for the scripts, 0600 for the files), and by not allowing untrusted people onto your account.

Nevertheless, security is a hard topic to get right, so please do not use this software if your life could depend upon the correct and secure operation of it.

My previous way of handling passwords was to think up a “base password” which I then modified slightly for each and every service.

Think along these lines: if “pizza” was my base password, “hotpizza” would be my hotmail password, while “goopizza” would be my google password. (In reality I used a longer base password than that.)

The primary problem with this was that if someone ever were to learn of the base password, they’d have the keys to my kingdom.

Since I am not in the business of divulging that sort of thing to anyone, you might incorrectly think that this is a safe way of doing it. You’d be wrong.

What would happen if I had been lured into signing up for an account with a new service which seemed legit, but which in reality was nothing more than a honeypot for username, email addresses and passwords?

Do you use different usernames on different services? Most of us don’t, and there may even be some value in not doing it (recognition/reputation of sorts from other services).

So even with my previous password system (it would of course have been a total bust if I had used the same password everywhere), an aggressor could have reverse engineered the base password and reconstructed the passwords for other services.

Of course, given the amount of people who just use the same password everywhere, I don’t think they’d have bothered with my password at all, unless they were specifically targeting me, which is wholly unlikely as well.

But with passtore, I don’t even need to care or worry. If the site admin is a sleazebag, or incompetent/unlucky enough to have the database stolen by aggressors, or a “friend” tries to compromise an account, that’s as far as they’ll come.

Obtaining one password for one service gives them control over that service, nothing more (with the one obvious exception; if someone were to gain access to my email account password, they could reset the password on every service registered with that email address).

Be paranoid about your email passwords people! It is unfathomable to me how easily people hand over their usernames and passwords to their email accounts to sites like LinkedIn and Facebook.

Sure, they are “only” scanning your contacts for already present friends, and any service that went beyond that would very quickly be found out, get a bad rep, and in all probability have criminal charges brought against them.

With that said, who knows if Facebook or LinkedIn, or any of the other social media sites out there that want you to divulge your email password to them in the name of contact building, stores your password, and if so, for how long and for what purpose?

passtore will let me use different passwords for different services, without making it hard on my memory. In doing so, it mitigates the effects it will have on my life if a single service is compromised.

passtore will keep my passwords safe from nosy siblings, friends and partners, and, depending on the strength of my GPG privkey passphrase, it would keep them safe from most determined aggressors as well.

Could Google bruteforce their way in? Probably.
A government funded agency? Definitely.

As I am not facing that type of opposition, and the only threat to me is to inadvertently entrust a service with a password, which the service providers may try to abuse, passtore works well for me.

The usual disclaimers apply: I assume no responsibility for any damages you might incur. If you lock up a whole host of passwords and have either your passwords.gpg file or your GPG private key corrupted, that is truly unfortunate, but I designed passtore to be as secure as I could make it. It is not meant to be recoverable or decryptable without these files, so please make sure that you have backups of them somewhere safe.

Modernizr is a javascript library designed to detect what html5 capabilities a visiting browser has. This enables a kind of “progressive enhancement” which I find very appealing.

Using this one could first design a site which works with most browsers (I consider MSIE6.0 a lost cause) and then extend the capabilities of the site for those browsers that can handle it.

timetrack and timesummer

I recently started working on a small project aimed to help me keep track of the hours I put into various (other) projects, and the result is two scripts, timetrack and timesummer (I am desperately trying to find a better name for the last one, suggestions welcome). I promise to have it in a public repository soonish. Update: timetrack can now be found at bitbucket.

timetrack stores current date and time in a “timetrack file” whenever it is called, and at the same time determines if the current invocation will close an ongoing session, or start a new one.

If it is determined that the script is closing the session, it will also ask that I briefly describe what I have been working on. The script then calculates how long the session was and writes this to the file as well along with the brief session summary.

timesummer simply reads the same timetrack file, and sums up the hours from all the sessions, and prints it to STDOUT.

It is multi-user capable-ish, since each file is created and stored in the format “.timetrack.$USER”. All in all it serves me pretty well.

switch-hosts.sh

Another project of mine is switch-hosts.sh, a script created to live in /etc/wicd/scripts/postconnect/ and copy /etc/hosts-home or /etc/hosts-not-home into /etc/hosts depending on my location (inside or outside of my home network).

Why I do this is a long-ish kind of story, but if you have ever cloned a mercurial repository from inside a private network and then tried to access it from outside the network, you should be able to figure it out.

The script stopped working. That’s twice now this has happened, but sufficiently far apart that I couldn’t remember why it happened without investigating it.

It all boiled down to me using GET (found in perl-libwww package) to fetch my external IP-address so that I could determine if I am inside my network, or outside it.

GET (and POST and HEAD) doesn’t live in /usr/bin or /usr/local/bin or some place nice like that. No, GET lives in /usr/bin/vendor_perl (or at least it does now; before a system upgrade it lived somewhere else…).

I don’t know why someone (package maintainer, the perl community, whoever…) felt it was necessary to move it (twice now), but I guess they had their reasons. And since I used absolute paths in switch-hosts.sh, so that I wouldn’t need to worry about what environment variables had been set when the script was executed, renaming the directory GET lived in meant breakage…

This isn’t me passive aggressively blaming anyone, but it did kind of irk me that the same thing has happened twice now.

Plz to be can haz makingz up of mindz nao, plz, kthxbai.

I love GET and HEAD, and will continue using them, manually. For the script, the obvious solution was to switch to something which by default lives in /usr/bin and doesn’t move, something like… curl.
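A sketch of what the curl-based part of switch-hosts.sh could look like (the home IP, the hosts file variables and the ifconfig.me echo service are placeholders; the absolute /usr/bin/curl path is the whole point of the switch):

```shell
#!/bin/bash
# Sketch of the curl-based approach. HOME_IP is a placeholder, and
# ifconfig.me is just one of several external IP echo services.
HOSTS_HOME="${HOSTS_HOME:-/etc/hosts-home}"
HOSTS_AWAY="${HOSTS_AWAY:-/etc/hosts-not-home}"
HOSTS_TARGET="${HOSTS_TARGET:-/etc/hosts}"
HOME_IP="203.0.113.7"

# Fetch the external IP; absolute path, so PATH surprises can't bite
external_ip() {
    /usr/bin/curl -sf https://ifconfig.me
}

# Copy the matching hosts file into place depending on location
switch_hosts() {
    if [ "$(external_ip)" = "$HOME_IP" ]; then
        cp "$HOSTS_HOME" "$HOSTS_TARGET"
    else
        cp "$HOSTS_AWAY" "$HOSTS_TARGET"
    fi
}
```

Dropped into /etc/wicd/scripts/postconnect/, a script like this runs on every connect, so the right hosts file is always in place.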

I have found myself working with PHP again. To my great surprise it is also rather pleasant. I have however found myself in need of a templating system, and I am not in control of the server the project is going to be deployed on, and so cannot move outside the document root.

From what I gather, that disqualifies Smarty, which was my first thought. Then I found Savant, and although I am sure that Savant doesn’t sport nearly all the bells and whistles that Smarty does, for the time being, it seems to be just enough for me.

I do not enjoy bashing well-meaning projects, especially not projects I know I could benefit from myself, but after reading the material on the unhosted site, I remain sceptically unconvinced.

The idea is great, have your data encrypted and stored in a trusted silo controlled by you or someone you trust enough to host it, henceforth called “the storage host”.

Then an “application host” provides javascripts which in turn request access to your data. Either you grant access, the application code does something for you, you see that it is good, and all is well; or you don’t grant access and go on your merry way.

The idea is that since everything is executed on the client side, the user can verify that the code isn’t doing anything naughty with your data. Like storing it unencrypted somewhere else to sell to advertisers or the like.

For me, this premise is sound, because I am a developer, a code monkey. I can (with time) decipher what most javascripts do.

Problem: the majority of people aren’t developers (which is not a problem in itself; they shouldn’t have to be). What I’m saying is that only a subset of all people even know that there exists a language called javascript, and only a subset of that subset can actually read javascript (i.e., in perspective, VERY FEW).

For me personally, this concept rocks! I could use this and feel confident in it. But requiring the end user to be the first, last and only line of defense against malicious application providers… (well, of course, the situation right now is at least as bad) isn’t going to fly.

One could experiment with code-signing, and perhaps a browser add-on, and make a “fool-proof” user interface, hiding away the underlying public key cryptography that would be needed, but somewhere along the line the user would still need to know someone who could read the code, could sign it, and then act as a trusted verifier.

My thoughts on what would be easier to teach the end user; public key cryptography or javascript? Neither…

Links

Finally, a random assortment of links I found in various places during the week: