One problem for preserving software is that the original hardware the software ran on might not survive very long. Some people are still keeping old machines like the C64, Apple ][ and others running, but at some point there won't be many left as the original ones wear out or get damaged, and other hardware may already be unusable today. And those machines are certainly not broadly available to the public. Ideally, we'd have the hardware and recreate the full experience, e.g. how you connected the machine to your own TV in the living room and played or worked with it there - but that is pretty unlikely or at least hard to do, esp. with the hardware becoming less and less available, as I mentioned.

But there's one way to bring at least part of the experience to users: We can emulate the old machines and let the preserved software run within that emulator. That doesn't give us the living-room-TV experience, but it offers a better chance of both preserving that way of running old pieces of software for a long time and making the experience broadly available. Now, it's not always easy to get emulators running well, but there are a number of projects out there, and we heard about a few interesting solutions at the software preservation event at the LoC - one of which was particularly appealing to us as Mozillians.

Since the event in May, a lot of work has been flowing into JSMESS, and as Jason has blogged, there are a thousand cartridges available now in the Historical Software Collection of The Internet Archive, and performance within the browser is pretty decent now.

With that, a whole lot of old software is available for everyone, at any time, to try and experience within their own browser!

That's a powerful way to preserve software for the current world and upcoming generations, isn't it?

Jason talked about multiple efforts he's involved in, including his early (and ongoing) work on textfiles.com, collecting writing from the time when people first got online, and some other initiatives I'll mention at the end of this post, but the main focus was on The Internet Archive, the non-profit he works for nowadays, which has public collections of historical digital content as its main mission.

The site and organization are probably best known for the Wayback Machine, which has archived "over 240 billion web pages" going back more than 15 years - see e.g. a Mozilla homepage from around the time when I first encountered the project. But next to that, they have tons of other digital content archived - video, audio, texts, and more. Jason said they are basically seeking to store everything available in digital format that could be of any historical use at some point - preferably making sure it's stored first and worrying about legal questions only as they arise, as it's better to have something and have to take it down than to have lost it to history. He went as far as to say they want to be "the hard drive of the Internet" and store everything anyone gives to them, be it personal documents, software that was published at some point, or other digital content. For example, their software collection contains entire FTP servers of the past as well as CD images and terabytes (!) of software and firmware for old systems to run in emulators.

And there's an "Upload" button on the site as well, inviting me, you, and everyone else to contribute content for them to archive! So, if you have old digital content lying around, go to archive.org and make it available to the public, including the kids of the future, before it gathers enough dust or otherwise degrades to the point that the media can't be cleanly read any more.

If you have really important pieces of history on media that you fear is too dusty and old to still be read cleanly, or where it's hard to find any drive that can still read that media, or you know of such things that might otherwise be hard to recover, you might be interested in another project that Jason Scott is involved in: the Archive Team. That group is dedicated to rescuing old digital content where it's not easy, and to saving history before it's actually lost. They have specialized equipment to read even aged disks and tapes, and they are building up communities to save sites before they die - they even archived most of Geocities before it was shut down! A quite awesome story is also how they helped recover the original "Prince of Persia" source code.

And then, there's one more of Jason's projects that Mozilla folks will probably like: JAVASCRIPT MESS! Jason used Emscripten to port the MESS emulator to JavaScript and run it from a browser. Yes, you can run Atari 2600 or Sega Genesis games in the browser! This is only a beta right now, but it shows how the "browser" (or should I say "web runtime"?) can help us make software history available to future generations!

All those projects can benefit from your help, so if you have anything you can contribute, please do so!

One thing I found interesting at the software preservation summit was that some collectors told us that people investigating preserved software, e.g. for university studies or for museum exhibits, are often not interested in getting the software itself from the collectors they contact, as very often they could already get that via other channels, esp. when it's software that had been widespread at some point - an often-mentioned example that apparently is the cornerstone of all software preservation efforts is DOOM.

What many of those writing works on preserved software, or museums doing exhibits on it, do want from collectors in those cases is artifacts, or if you will "meta-materials", around the software itself - packaging, guides, brochures, ads, posters, magazine reviews, and whatnot. With those pieces, any paper or exhibit on the software becomes way more interesting and can also convey some of the culture around the software.

And that made me wonder somewhat - I know we are preserving all binaries we ever shipped and all code at Mozilla, even our website code, but how many of the physical objects related to our software are we preserving? Well, we don't have packaging, but we had CDs for some stuff (I remember one for Mozilla 1.0), we did T-shirts, stickers, etc. - and there are surely magazine articles, the NY Times ad, and similar items. What of all that do we still have preserved? Do we have some kind of archive at Mozilla for that?

Here's a part of my "personal collection" of Mozilla artifacts - I hope we have a better collection of those things somewhere at Mozilla headquarters.

A larger problem for preservation is wanting to preserve the environment and culture the software was running in, e.g. how it was when you connected the C64 to the TV in your family's home, or even when you ran AltaVista (which has just been shut down) for Internet search. At this level, preserving, reproducing or even emulating the environment and experience of older software becomes really hard - but it's an interesting challenge, esp. for museums trying to educate new generations about our history.

Another, connected topic is metadata about the software itself - from product names/versions and writers/vendors via info on installation media/packages to file names, checksums and settings of the installed software, there is a lot of metadata one can collect along with the preserved binaries and/or code.

For example, NIST's National Software Reference Library (NSRL) - see also this interview by the LoC - is collecting a lot of information about installed software, and also what it leaves behind when uninstalled (as their original purpose is to help the FBI find out what was installed on investigated computers). And this metadata collection might actually provide us with an opportunity: Knowing the names and checksums of libraries installed with valid software can help us identify at least some of the libraries we see correlated with crashes. For that reason, we recently got the Dragnet tool online, which is intended to help us there, and it would be great if metadata from NSRL or similar efforts could be connected to it and help us in our own investigations. So, here's a way that software preservation efforts can directly feed back into our current work on understanding current software and improving future releases of Firefox!
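To make the idea concrete, here's a minimal sketch of checksum-based identification - note that the hash set, file name and function names below are all invented for illustration and have nothing to do with the actual Dragnet or NSRL interfaces:

```python
import hashlib

# Hypothetical hash set in the spirit of NSRL's Reference Data Set:
# checksums of files known to ship with legitimate software.
KNOWN_GOOD = {
    "356a192b7913b04c54574d18c28d46e6395428ab": "examplelib.dll (ExampleSoft 1.0)",
}

def sha1_of_file(path):
    """Compute the SHA-1 checksum of a file, reading it in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def classify(path):
    """Return the known-good description for a file, or None if unknown."""
    return KNOWN_GOOD.get(sha1_of_file(path))
```

With a real hash set behind it, a lookup like this could tell apart libraries that belong to known software from unidentified modules showing up in crash reports.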

Rather than one huge post, I'll do multiple short posts on my impressions and thoughts of the event and the subject, probably over the next few weeks.

The attendance consisted mostly of people from the existing software preservation community in the US; the majority of those people apparently knew (of) each other already. In addition, we had some people from the software creation community - Microsoft's (sole) archivist probably belongs to both the preservation and software communities, then we had a guy from GitHub, and finally, Otto and me from Mozilla.

One thing that I learned with regard to the preservation community is that there are basically three types of projects they operate: museums, archives, and libraries.

Museums keep only a small collection of large milestones in history, but try to get as much on those as possible so they can build up a great exhibit for the public to learn about our and their past. Archives build up large collections of items with the main intent of preserving them as well as possible, usually without any intent to provide them to the public; the items are only available to the occasional researcher. There may be metadata collected on the items that is available to a larger public, though. Libraries are somewhat in between: They build up larger collections of items and try to preserve them, but with the intent of giving some public regular access to them, often in a very controlled manner, e.g. via reading rooms.

At this software preservation summit, we had a number of representatives of all three kinds of projects: museums such as the Computer History Museum, the Museum of Modern Art or the MIT Museum; archives such as Microsoft's, NIST's NSRL (National Software Reference Library - yes, "Library" is a bit of a misnomer there) or the Internet Archive; and libraries such as the Astrophysics Source Code Library, university libraries or, of course, the Library of Congress.

In terms of software preservation, we found that those different organizations, doing different kinds of collections, can not just learn from each other - they can also help each other: Not every one of them wants every piece of software coming in, depending on what exactly they collect, so it may make sense to forward some pieces to other projects.

It was interesting for us as outsiders to the preservation community to see what those people are doing and how they are organized. In future posts, I'll get more into how and where we as software producers can work with them.

As Digital Preservation is part of the agenda of the US Library of Congress, they're doing a workshop on Software Preservation next week, and Mozilla was invited as an expert group. Otto de Voogd and I are in the delegation going there for Mozilla (I'll be roughly in the Washington, DC, area from Saturday until June 2) - and the text below is a guest post by Otto with questions that we would like some feedback on, so we can represent the Mozilla community as well as possible:

On the 20th and 21st of May, the Library of Congress is holding a workshop on the topic of preserving software. Otto de Voogd and Robert Kaiser will be representing Mozilla, putting forward our viewpoint as custodians of a codebase with a significant heritage and importance.

Many questions and thoughts arise. Here's an overview of ours; we look forward to feedback.

- Should archivists keep source code or executables or both?

Executables and source code are both valuable. Executables are valuable because the source code is sometimes not available, or perhaps the build tools are not, and setting up a build environment for older code can be a difficult and complex thing.

Source is valuable to determine how a program works. It also makes it possible to reuse code and algorithms, especially, but not only, in the case of open source software.

- Preserving documentation.

Preserving documentation that goes with software seems logical. Would this need to go as far as preserving discussion threads and entries in bug trackers?

- Preserving environments/platforms.

It seems obvious that without preserving an environment in which the software can run, it is going to be impossible to experience the software. Preserving such an environment should therefore be part of the software preservation effort.

To avoid the physical constraints imposed by preserving old hardware (which would be a preservation effort in its own right), a solution would be to build virtual machines and emulators. As hardware capacity constantly grows, running virtual versions of older hardware should generally be feasible.

To fully recreate an environment, we'd also need to preserve the operating systems and other software tools that the preserved software needs to run. Those, being software themselves, would logically already be included in any software preservation effort.

Preserving documentation concerning environments would also be required. To build virtual machines and emulators, it would be helpful for hardware makers to make technical specifications available. One could envision this becoming a legal requirement, at least for older hardware.

Can we imagine a world where web based emulators would allow an online digital library to serve users worldwide? Users who would be able to run old software in emulators running in their browsers...

- Is everything worth preserving, and if not, how does one go about selecting what is worth preserving?

Does one need to preserve every version of software, just the last version, or all major releases? What about preserving software that has not spread widely? Would there be some threshold, or some other criteria?

- How does one index software and search the library?

There will be a need to gather metadata about software, alongside the preservation of documentation as we already mentioned. This metadata and documentation could serve to populate an index, enabling, for instance, searching for particular features.
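A toy version of such an index could be as simple as keyword search over metadata records - all entries and field names below are invented for illustration, not taken from any real catalog:

```python
# Minimal in-memory index of preserved-software metadata records.
# All entries and fields here are hypothetical examples.
CATALOG = [
    {"name": "ExampleWriter", "version": "2.1", "year": 1992,
     "features": ["word processing", "spell checking"]},
    {"name": "ExamplePaint", "version": "1.0", "year": 1989,
     "features": ["bitmap editing"]},
]

def find_by_feature(keyword):
    """Return all records whose feature list mentions the keyword."""
    keyword = keyword.lower()
    return [rec for rec in CATALOG
            if any(keyword in feat for feat in rec["features"])]
```

A real library index would of course need far richer records and full-text search, but the principle is the same: good metadata makes the collection findable.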

- Can software preservation help in making code reusable?

If there are good ways to actually find relevant and useful code, this could lead to more reuse not only of actual code, but also of algorithms and concepts. It may also become a valuable source for students who wish to learn about actual implementations of software solutions.

At the very least, a minimum of metadata, such as publication dates, copyright owners and licenses, should be available to determine how certain code can be reused. In particular for open source software, we believe that software libraries should strive to make it available without restrictions.

- Preserving data formats.

The software preservation effort should also include an effort to preserve data formats, including technical descriptions of those formats and the tools to read, write and edit them.

- Can software preservation help in the discovery of prior art?

We believe it can, and as such preserving old code could be a great tool in preventing the repatenting of existing software concepts.

Of course we believe that software patents shouldn't exist in the first place, as software is already covered by copyrights, but at the very least prior art is a good avenue to prevent some of the worst abuse of software patents.

- How do copyrights affect software libraries?

A lot of software is licensed to be used on a particular piece of hardware or only available via subscription. How does this affect software libraries? Should there be exceptions like there are for traditional libraries?

In the life cycle of software, the commercially exploitable time is limited - likely anything older than 10 years no longer has any commercial value. Maybe copyrights on software should be significantly reduced, to something like 10 years, which is more than enough to cover the commercially exploitable timeframe of the software life cycle.

Such a limit would greatly enhance the work of software libraries, increasing availability and ease of access as well as removing a lot of the red tape involving requests for permission to keep copies.

- What about software as a service?

And what about software as a service, where neither the source code nor the executables are ever published? How can something like Gmail be preserved, when neither the service's code nor the environment is available to the public?

- Preserving "illegal" or cracked copies?

What if a copy of a piece of software comes from an illegal source? A cracked version with modifications, maybe? Those have value in themselves, as they are a cultural expression.

What if such an illegal copy is the only copy still available? Would it make sense to preserve that too?

It all started on March 31, 1998. Just a few days off from 15 years ago.

Netscape open-sourced the code to its "Communicator" Internet suite, using its own long-standing internal code name as a label for that project: Mozilla.

I always liked the tagline on a lot of the marketing material from that time - under the Mozilla star/lizard logo and a huge-font "hack", the material said "This technology could fall into the right hands". And so it did, even if that took time. You can learn a lot about that time by watching the Code Rush movie, which is available under a Creative Commons license nowadays. And our "Chief Lizard Wrangler" and project leader Mitchell Baker also summarized a lot of the subsequent history of Mozilla in a talk that was recorded a couple of years ago.

Just about a year later, in May 1999 - so 14 years ago - I filed my first bug after I had downloaded one of the early experimental builds of the Mozilla suite, built on the brand-new Gecko rendering engine. This one, and most bugs I filed back then, were rendering issues with that new engine, mostly with the pretty new and primitive first personal homepage I had set up on my university account. After some experiments with CSS-based theming of the Mozilla suite, I did some playing around with exchanging strings in the UI and translating them to German, just to see how this new "XUL" stuff worked. This ended up in my first contribution contact, and me providing a first completely German-language build on January 1, 2000.

A few months after that, in May, I submitted my first patch to the Mozilla project - which was actually a website change. But only weeks later, I created a bug and patch against the actual Mozilla code - in June of 2000, 13 years ago. And it would be far from the last one, even though my contributions to that code stayed small for years - a fix for a UI file here, a build fix for L10n stuff there. My main contributions remained the German localization for the suite and general L10n-related issues. Even when Firefox came along in 2004, I helped that 1.0 release with some localization-related issues, esp. around localized snippets for its Google-based and -hosted start page - and otherwise stayed with L10n for the full suite (while Kadir would do the German Firefox L10n). I wrote a post in 2007 about how I stumbled into my Mozilla career.

As Firefox rapidly became successful and took an increasingly large role in the project and community, I stuck with the suite, as I liked the more integrated experience of email and browser - and I liked the richer feature set the suite had to offer (Firefox cut out a lot of functionality in the beginning to be able to build its new, leaner and more consumer-friendly UI). When, in March of 2005, it became clear that the suite would go into strict maintenance mode and be abandoned by the "official" Mozilla project, I joined the team that took over maintenance and development of that suite - once again using a long-standing internal code name for it: SeaMonkey. In all that project-forming process 8 years ago, I took over a lot of the organizational roles, so that the coders in our group could focus on the actual code, and eventually I was credited as "project coordinator" within the project management group we call the "SeaMonkey Council".

When I founded my own business 7 years ago, in January of 2006, I was earning money in surprising ways and trying to lead the SeaMonkey project into the future. We were just about to release SeaMonkey 1.0 and convince the first round of naysayers that we actually could run the suite as a community project. In the following years, we did quite some interesting and good work on that software, and a lot of people finally realized that "we made it" when we could release a 2.0 version that was based on the same "new" toolkit that Firefox and Thunderbird were built upon, removing a lot of old cruft code and replacing it with newer stuff, including the now-commonplace add-ons system and automated updates, among a ton of other things. I would end up doing a number of the major porting jobs from Firefox to SeaMonkey, including the Places-based history and bookmarks systems, the download manager (including a UI that was similar to the earlier suite style), and the OpenSearch system. With the Data Manager, I even contributed a completely new and (IMHO) pretty innovative component to SeaMonkey. In those times, I think I did more coding work (in JS, mostly) than ever before, perhaps with the exception of the PHP-based CBSM community and content management platform I had done before that.

The longer I was in the SeaMonkey project, though, the more I realized that the innovation I would have liked to see around the suite wasn't really happening - all the innovation to the suite came from porting Firefox and Thunderbird features and/or code, and often with significant delay. I'm not sure anything other than the Data Manager actually was a genuine SeaMonkey innovation, and I only came up with that when trying to finally get some innovation going, back in 2010. I was more and more unsatisfied with the lack of progress and innovation and the incredible push-back we got on the mailing list for every attempt to actually do something new. In October of 2010, I took a flight to Mountain View, California, to meet up with Mitchell Baker and talk about the future of SeaMonkey - and I also mentioned how I wanted to be more at the front of innovation, even though I didn't seem to manage to get the SeaMonkey community there. Not sure if it came out of this or was in the back of her head before, but in one of those conversations I had with her, she asked me if I would like to work for Mozilla and Firefox. I said that this caught me by surprise, but that we should definitely keep that conversation going. Just after that, I met then-Mozilla-CEO John Lilly, and he asked if Mitchell had offered me a job - just to make sure. As you can imagine, that got me thinking a lot more about it, and gave me the freedom to think outside SeaMonkey for my future. I was at liberty to think about my personal priorities in more depth, and it became clear that the winds of change were clearly blowing through my life.

After some conversations with people at Mozilla, I decided I wanted to try a job there, and Chris Hofmann proposed my working on tracking crashes and stability, so I started contracting for Mozilla on the CrashKill team in February 2011, first half-time, finally full-time. So, 2 years ago, I opened a completely new chapter in my personal web story. Tracking crash statistics for our products - Firefox desktop, Firefox for Android, and now Firefox OS - and working with our employees and community to improve stability has turned out to be a more interesting job than I expected when I started. Knowing that my work actually helps thousands or even millions of people, who have a more stable Firefox because of what I do, is quite a reward. And I'm growing into a more managerial role, which is something I really appreciate. And I'm connected to all kinds of innovation going on at Mozilla: a lot of the new features landing (like new JIT compilers for JavaScript, WebRTC, etc.) need stability testing and we're tracking the crash reports from those, Firefox for Android needed a lot of stability work to become the best mobile browser out there - and with Firefox OS, I was even involved in how the crash reporting features and user experience flow were implemented. I'm also involved in a lot of strategic meetings on what we release and when - an interesting experience by itself.

Where will all this lead me in the future? No idea. I'm interested in moving to the USA and working there at least for some time - not just because it would make my daily cycle sane and put most or all of my meetings within the confines of the actual work day in the region I'm living in, but also because I've learned to like a lot of what that country has to offer, from country music to football and many other things (not to mention Louisiana-style Cajun cuisine). I'm also interested in working from an office with other Mozillians for a change, and in possibly becoming even more of a manager. Of course, I'd like to help move the Mozilla mission forward where I can - openness, innovation and opportunities on the web are something I stand behind nowadays more than ever - and Firefox OS as well as associated technologies promise to really make a huge impact on the web of the future. I'm looking forward to quite exciting times!

Being born on a 13th (just like my brother), I've always considered the number 13 as somewhat of a "lucky number" for myself. And today, it's been 13 years since I started contributing to Mozilla!

It's been an interesting ride for sure so far, as a localizer, theme designer, build patch contributor, project leader/coordinator/manager, even JS/XUL author, add-on and web app developer, and nowadays paid-by-Mozilla contributor in stability tracking - just to name a few of the main things.

In those 13 years, Mozilla has changed my life, and enabled me to make a living out of idealism. It's crazy and awesome at the same time, or, I guess, actually crazy awesome!

And now, we're looking forward to achieving great things in "the year 13" that's coming up in just a few weeks, in which we'll be trying to deliver on the momentum we built in 2012 and even ship phones that make "the web is the platform" literally true with Firefox OS!

I'm excited to have been in this community for the long time of thirteen years and to continue strong as part of this great project - and I'm looking forward to making things "moar awesome" in two-thousand-and-thirteen!

The winds of change continue blowing
And they just carry me away.
-- Albert Hammond

Like many others, I've been thinking quite a bit these days about what went on last year and what will or might come up in 2012. (And I figure I should bring in a bit more from my overall personality into my future blog posts and mention or quote songs I have in my mind on a particular topic, so I'll start with that here).

One topic that has been with me throughout the year and will probably continue to be with me is change. A lot of it started with my visit to Mozilla headquarters in Mountain View, CA, in October 2010, actually - I posted about my changing personal priorities back then. And I still remember driving my rental car up to Lake Tahoe, thinking about all those things and listening to the then-just-released Zac Brown Band album "You Get What You Give", and in particular the song "Let It Go", whose lyrics gave me the right mindset for what I was going through and what 2011 would bring: "Save your strength for things that you can change, forget the ones you can't, you gotta let it go."

Following that, I started 2011 by transferring the vast majority of my responsibilities in SeaMonkey over to other people (we have built up a great team there over the last years, including awesome people like Callek, InvisibleSmiley, etc. - kudos to them for being able to take all that over in their free time) and getting the ball rolling on making the project even more sustainable in the future (I hope we'll have news for you on that soon).

Instead, I followed another piece of advice from this song - "When the pony he comes ridin' by, you better sit your sweet ass on it" - and started contracting for Mozilla on the CrashKill team in February, first half-time, finally full-time. With that, my focus changed from SeaMonkey to Firefox and from project management to crash analysis.

For one thing, I ended up growing into that role better than I imagined at first, finding crash analysis more interesting than expected; for another, this change ended up having more influence on my life than I had imagined. In this job I need to communicate a lot with different people: the CrashKill team, the Socorro team that works on the crash-stats server and that I'm coordinating with, and various devs, engineering managers or release managers as the need arises in crash analysis. Unfortunately, with me being a "remotie", all communication needs to be online (or via phone) and is stripped down to the essentials needed for the job. Being a very social person, I miss the additional nuances that face-to-face communication would bring to the table, and more need for communication as part of the job makes that more obvious to me. Then, the whole CrashKill team is based in Mountain View, the vast majority of the Socorro team is spread across the US, and most engineering or release managers are also based in North America, so most of that communication, as well as all my meetings, happens during US working hours, which from my point of view in Europe is in the evening to night hours - which requires my work time to be mostly at the end of the day. I had done work at late hours in the years before, but there was not as much requirement for it then, while now I have to make at least the meetings, and should be available for more conversation on IRC at those times. Making evening appointments becomes quite difficult in that light. And speaking of requirements, while I could basically make my own schedule completely before, I now should put in 8 hours of work per day, and with doing that at the end of every day, I need to do all shopping and other private stuff in the afternoon, leaving me all day with "I still have a full work day to deliver today" in mind - until I achieve that and fall into bed.
This causes its own share of subconscious stress. And I'm doing all the work from my own private apartment, not getting out unless I go shopping or take my usual Monday and Tuesday evenings off for some karaoke. So, I learned that working from home and remotely has its downsides, esp. for the kind of job I'm in. This is one area I need to work on a lot in 2012 and find solutions for - which will be connected with another share of change, I'm sure.

But not only my role and work life have changed - Mozilla went in a direction I had often argued for: it changed to a rapid release cycle, and started planning for that shortly after I started contracting. I commented in the planning phase and tried to help shape this process, and was always convinced it was a good idea, even though we hit more road bumps than expected. I was heavily involved in coordinating to get crash-stats to support rapid releases usefully and also laid out publicly how the new process can improve stability. Mozilla has also revamped its mobile efforts completely - both with a completely new "native UI" version of Firefox for Android, which is in Aurora testing now, and with a completely open mobile stack in the form of Boot to Gecko (B2G), a complete "operating system" based on the browser and open web standards (requiring new WebAPIs), which is also coming together piece by piece now. And next to those changes, we're also working on changing how identity and logins work on the web, and changing the current "silo"ed app store model by bringing open concepts for web apps and markets into the fold that easily allow decentralization and users really "owning" their apps. In the middle of all that, Mozilla has restructured a bit, brought some previously split-off groups back into the common Mozilla fold, hired a lot of new people, lost (as employees, but not as community members) a few high-profile ones who were looking for new challenges, worked on the MPL 2.0, founded exciting new initiatives like WebFWD, and went stronger on marketing that we are a non-profit - clearly a lot of change happening everywhere, with the mission and the Manifesto standing unchanged and as clear as ever over all of it, though.

All this makes it clear that a lot of change has come in 2011, both to me and Mozilla, and that it's still only the seed for what's to come in the year(s) ahead. The winds of change are still blowing, and I'm excited for what they propel and which interesting experiences they drag in for all of us.

Here's more on my 10 years in the project: Exactly 10 years ago today, on January 1st, 2000, I released the first fully localized Mozilla release or milestone in German.

(I actually posted about its availability 2 hours before midnight my time, but didn't have any place to upload files back then, so I consider the next day the actual release day, when others could upload them somewhere to be accessible to the public.)

Yes, right on the "Y2K day" so many people feared, just 15 days after I first posted on the L10n group and was assigned as German localizer, I made a fully localized M12 available to the public - starting a story that is still ongoing, now with a community of German localizers bringing all major Mozilla applications to the largest user base of any locale other than US English, and me still doing the suite part of that, now under the SeaMonkey brand.

To celebrate this anniversary, I added a download page and news story for that release to the German SeaMonkey website today (and the same for M13, which was also still missing).

I almost can't believe I've been serving the German community those builds for 10 years now - and for most of that time, I did all the packaging myself, creating language packs and tearing apart en-US binaries to create German ones by replacing the L10n files - manually in the beginning, with a script in later years. It's only since SeaMonkey 2.0 (including Alpha/Beta) that the Mozilla build machinery has started to produce those for the suite as well, so I don't have to run things locally by myself.
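That repackaging trick - copying an en-US package and swapping in translated L10n files under the same paths - can be sketched in modern terms roughly like this. To be clear, the archive layout and file names here are made up, and my actual script of the time was not this one; the builds of that era used their own package and jar formats:

```python
import zipfile

def relocalize(en_us_package, locale_files, out_package):
    """Build a localized package from an en-US one by replacing
    the listed locale entries with translated versions.

    locale_files maps archive paths to translated local files, e.g.
    {"locale/messages.dtd": "de/messages.dtd"}.
    """
    with zipfile.ZipFile(en_us_package) as src, \
         zipfile.ZipFile(out_package, "w") as dst:
        for entry in src.namelist():
            if entry in locale_files:
                # Swap in the translated file under the same path.
                dst.write(locale_files[entry], entry)
            else:
                # Copy the original en-US entry unchanged.
                dst.writestr(entry, src.read(entry))
```

The appeal of the approach is that code and chrome stay bit-identical to the official build - only the locale resources differ.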

With that, I wish you a successful new year ("Ein erfolgreiches neues Jahr" in German) and hope to continue serving the community with localized builds for a long time to come!