The OpenDocument XML.org web site is no longer accepting new posts. Information on this page is preserved for legacy purposes only.
For current information on ODF, please see the OASIS OpenDocument Technical Committee.

Blogs

As of today, June 27th, I am no longer working for IBM. Last quarter’s widely-reported “resource actions” (lay-offs) hit my group and this time my number came up.

It was a good run, 27 years with one company, something that is not so common today.

Fresh out of Harvard I started working at Lotus Development Corporation in Cambridge, Massachusetts, initially doing technical support, including for the Lotus 1-2-3 C-language developer toolkit. From there I worked on support’s application development team, developing and maintaining our internal information retrieval system, a hodgepodge of a DOS user interface, a search engine (using a Bayesian inference network), and a fax-on-demand system, all over NetBIOS.

From support I transitioned over to development, to the SmartSuite team, where I first focused on Freelance Graphics, which was transitioning from C to C++ on Windows and OS/2, then on a set of Windows ActiveX controls called eSuite DevPack, some Java components, and attempts at an office suite running on a Java-based “thin client” or network computer (eSuite Workplace). It was a time when the thinking, at least in my little part of the world, was that traditional desktop applications were dead, and all future work would be done in Java running in your desktop web browser. From this came the browser wars.

Then, in 1995, IBM came a-knocking and bought Lotus. Our focus, naturally, shifted from desktop to server-based computing, from Java applets to Java servlets. I worked on various projects, from the K-Station Portal (based initially on Domino) to the Apache Xalan XSLT engine to XForms to WebSphere Portal. I developed a framework for document conversions within WebSphere Portal that we called Document Conversion Services (DCS).

Then, one day, I got an odd call, out of the blue, a very senior person asking whether I was familiar with the file formats from SmartSuite and Microsoft Office. Evidently, no one else in the company would admit to having that arcane knowledge. So, I was drafted onto a “special project,” with a few other talented engineers, a real fun group working on various stealthy tasks, the details of which I am still not at liberty to discuss.

Somewhat overlapping the above, I worked on the things that readers of this blog will be more familiar with, the development of the OpenDocument Format (ODF) standard at OASIS and ISO, and the arguments against ISO ratification of Microsoft’s Office Open XML (OOXML) file format. This then overlapped, in part, with my work to establish the OpenOffice project at Apache, based on Oracle’s contribution, to get IBM Symphony contributed as well, and to bring those two efforts together.

Those years were among the most memorable of my career. I was able to work with a lot of talented and enthusiastic people, within IBM, of course, but also at other companies, with non-profits, with academia and government. I was able to travel and see parts of the world I might never have otherwise seen, and speak to a lot of audiences about the importance of open standards. I even testified to a few legislative committees. My business travels took me to Brussels, Berlin, Budapest, Barcelona, Granada, London, Paris, Lyon, Rome, Orvieto, Geneva, Amsterdam, The Hague, Beijing, Seoul and Johannesburg. It was a lot of hard work, but it was meaningful. Open standards and open source matter. I have many fond memories of those years.

Eventually, however, corporate interest in document editors, document standards, “social documents” and similar initiatives fizzled, and I no longer had support for remaining involved in ODF and OpenOffice. I needed to move on, to find a new gig.

I looked internally within IBM for something that would combine my hard technical skills and my soft skills, including working closely with attorneys and an ability to “meet them halfway” when discussing complicated legal/technical topics. Since I’ve been an active inventor throughout my IBM career, with 54 patents to my name, and have a good head for reading and analyzing patents, I spent a few years working as a patent engineer, helping to monetize IBM’s vast patent portfolio, developing technical evidence for infringement, identifying possibilities for patent licensing and assignment, etc.

That’s where things stood as of today, when I handed in my badge and laptop.

As for what is next, I honestly cannot yet say what “Rob 2.0” will be. I plan on taking some time to mull things over and explore my options.

One thing I do plan to do, relatively soon, is start a new blog, a fresh start, on a new path at this domain, preserving this older blog at its current (/blog) URL.

Well, it has been a while since I have posted anything on this blog, a little over a year to be precise. I intend to post more in 2018, but I will likely not keep a regular schedule.
Today I would like to explain my reasons for my candidacy for the board of the Open Source Initiative. I can think of two kinds of reasons for my decision: one is personal, and the other is directly related to the current state of Open Source and software freedom. Let’s start with the first one: I’m currently helping the Open Information Security Foundation and the Suricata project in my capacity at ANSSI, while contributing in a minor way to the LibreOffice project and the Document Foundation.
I’m also helping an exciting blockchain project called NotaryTrade, which relies both on Free & Open Source Software and Open Hardware. These are my “major” involvements at this time, and what this means is that I’m no longer focused on one community and one project like I was several years ago. The way I work and contribute, while remaining the same in many ways, has changed. What is different as well is my vantage point on the policies affecting software freedom, and I believe I could be useful to one of the most important entities in the field, the Open Source Initiative. This brings me to my second reason:

Free/Open Source Software has won. That’s not exactly new. What has NOT won, however, is Free/Open Source Software as a set of principles and ideas. The entire industry is happy to reuse Open Source components but reluctant to admit it is integrating these same components into its solutions. There’s a large part of the IT industry that does not hire contributors to Open Source projects in their capacity as Open Source practitioners. Yet it will gladly reuse components licensed under an OSI-approved license. In other words, a large part of the IT industry treats Open Source Software as an externality it does not have to pay for, yet relies upon for the solutions it sells and distributes.

Another worrying trend is an increase in attacks on the basic legal aspects of Open Source Software (and Open Standards as well): public policy initiatives as well as industry-wide moves aim at weakening the intellectual property tenets of Open Source, and we must ensure that these trends are halted and the wider industry is educated on Open Source and open standards.

The Open Source Initiative is currently in the right position to improve the standing of Open Source in the industry and defend its principles and licenses against damaging policy projects. I would like to help the OSI tackle these challenges to the best of my abilities.

You can find a short bio and a few key “agenda items” on my candidate page, and I’m of course happy to address or answer any comment posted there or here on this blog. I would also like to specially thank the Open Information Security Foundation (OISF) for its support of my candidacy. If you don’t know what the OISF does, visit the Suricata project and you will discover how Open Source plays a major role in cybersecurity. Last but not least, I’m asking for your support, and I hope you will help strengthen the OSI board with your vote.

I have been seeing the increasing popularity of voice-controlled "smart speakers" like the Amazon Alexa series, including heavy use among many of my close relatives and even their very young children. Attending a Gartner conference, I watched analyst Jason Wong give a presentation about "voice first" apps, showing an example from Rhino Fleet Tracking and explaining Amazon's recently announced "Alexa for Business" offering. This was something that could be used for internal business applications, not just for personal home use. Hmm. Internal business use is my area of interest as CTO at Alpha Software.

By the time I reached the airport for the ride home, I had checked out some of the Alexa documentation and saw that it would be pretty easy to hook up Alexa to query a database to explore this area. After a little work back home, I found that connecting to the REST API of our new Alpha TransForm mobile forms data capture system would be even easier. I soon had a demo and made a short, minute-and-a-half video, "Simple business app with Alexa".
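The core of such a hookup is small. As a rough sketch (not the actual demo code, which isn't shown in the post), a custom Alexa skill handler receives a JSON request naming an intent, calls a backend REST API, and returns a JSON response with the text for Alexa to speak. The endpoint URL, intent name, and response field here are invented for illustration; only the Alexa response envelope follows the documented shape.

```python
import json
import urllib.request

# Hypothetical endpoint standing in for a forms-data REST API;
# the real Alpha TransForm URL and auth scheme are not shown in the post.
API_URL = "https://example.com/api/forms/count"

def fetch_form_count(url=API_URL):
    """Query the (hypothetical) REST API for the number of submitted forms."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["count"]

def build_alexa_response(speech_text):
    """Wrap plain text in the JSON envelope the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": True,
        },
    }

def handle_intent(event, fetch=fetch_form_count):
    """Minimal intent handler: answer 'how many forms came in today?'"""
    intent = event["request"]["intent"]["name"]
    if intent == "GetFormCountIntent":  # invented intent name
        count = fetch()
        return build_alexa_response(f"There are {count} forms submitted today.")
    return build_alexa_response("Sorry, I don't know that one.")
```

The fetch function is injectable so the handler can be exercised without a live backend; a real skill would deploy this behind AWS Lambda or an HTTPS endpoint.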

I then got to discuss the video with a variety of analysts and others and think about this area a lot. One thing that struck me was the similarities between Alexa-style voice input and the traditional Command Line Interface (CLI). The CLI, first popularized in the computing world back in the 1960s and continuing today in Linux, Windows, etc., has always appealed to many professional developers despite the advent of GUI and touch-based interfaces. There is something special there. Perhaps voice and the CLI share some attributes that would explain the rapid rise of voice's popularity in smart speakers. I decided to write an essay exploring this.

By any measure, the rise of open source software as an alternative to the old proprietary ways has been remarkable. Today, there are tens of millions of libraries hosted at GitHub alone, and the number of major projects is growing rapidly. As of this writing, the Apache Software Foundation hosts over 300 projects.

The Supreme Court issued an opinion today that restricts the ability of patent owners to choose the court in which they bring an infringement suit. The case is called TC Heartland LLC v. Kraft Foods Group Brands LLC, and the justices unanimously ruled in favor of the new restrictions.

Several months ago, I called on France to learn from America’s mistakes. I told the French that it wasn’t too late to save themselves. They still had a chance to do what we could not – to vote a xenophobic, nationalist candidate out of the race and away from the presidency (though, in their case, Donald Trump is replaced by Marine Le Pen, leader of the far right party, le Front national). I warned them of the complacency felt by so many Americans.

Along with death and taxes, two things appear inevitable. The first is that wireless connectivity will not only be built into everything we can imagine, but into everything we can't as well. The second is that those devices will have wholly inadequate security, if they have any security at all. Even with strong defenses, there is the likelihood that governmental agencies will gain covert access to IoT devices anyway.

So a while ago, I registered the citationstyles.org domain name, and Rintze Zelle and I, with some help from the team at CNMH, moved CSL hosting over to that domain.

As I’ve suggested in an earlier post, however, I have some rather ambitious plans for expanding that site. Following is a bit more fleshed out idea of what I have in mind.

Mendeley has put some resources into a promising new WYSIWYG CSL creation and editing interface. At this point, it’s far enough along to show a lot of promise, but it is still missing a number of key CSL features that it needs to be functional with real-world styles. But I expect this will come soon enough.

I would really like to host this new application at citationstyles.org, and to use it to create a community supported style creation and editing repository. So imagine a few example use cases:

Sarah the chemist starts a manuscript she wants to submit to a journal. She does a quick search in her local application (which is in fact searching a remote repository) for this journal’s style, but finds it doesn’t exist. The interface includes a link to “create new style”, which brings her to citationstyles.org. Once there, she is prompted for some information about her style that helps the application narrow down exactly what she’s looking for, and presents her with four options that it thinks might be close to what she needs. Upon inspecting the example output, she realizes that her journal’s style is exactly the same as a style for another journal. Rather than create an entirely new style, then, she simply clicks a button, enters the new title and other metadata, and the style is ready for her and others to use.

A variant of the first case, where Sarah finds a style very close to what she needs, but with some important differences. She clicks a button to edit a new style based on this existing style, which presents her with a pre-filled style. She quickly identifies what she needs to change, does so, and then goes on her way. The entire process takes her three minutes.

John the psychologist realizes there’s a mistake in the community version of the APA style. He goes to a page for that style, and enters a comment with the relevant information. Another user who has taken responsibility for this style (this user could be someone from the publisher or journal itself, BTW) quickly makes the change, and it is instantly available to hundreds of thousands of users of a number of different bibliographic applications.

A group of scholars form a new open access journal. They want to make it easy for their users to create consistent citations and bibliographic entries. The new editor goes to citationstyles.org to create a new style, simply bases it wholesale on Chicago, and in two minutes is done: the journal’s style is available for anyone to use.

I could add more, of course, but I think this suffices to get across the idea. It is based on my strong belief that academic users—whether they be beginning undergraduates, or senior scholars—really don’t ever want either to:

create styles … unless they don’t exist

edit styles … unless their styles don’t work

In other words, people don’t want to bother with these esoteric details unless they must. And crowd-sourcing the maintenance and evolution of these styles is the sane, practical, thing to do. I want citationstyles.org to be based on this notion. Neither I nor Rintze, however, have the time or skills to realize this vision. So we’d welcome help to make it happen.

Michael Feldstein has a post on the new Repository API in Moodle, and explains that it enables easy import and export of content to/from course sites. But, he suggests, this may well be a solution to a more fundamental design failing; as he puts it:

A fundamental flaw in LMS design is that the course, rather than the student, owns course documents. While it’s great that Moodle makes it easy to export course contributions to places where students can hold onto them after the course gets archived, this mechanism relies on students making specific efforts to save their work. I would prefer to see a system in which the canonical copies of student-created course documents (or faculty-created course documents, for that matter) live in the users’ private file storage space and the course instance is granted permission to access them.

I think this is exactly right, but I see two issues. First, who owns group-created or group-edited documents? I doubt this is an unresolvable issue, but it does add a layer of complexity to the discussion.

Second, I’d want to consider a broader notion of sharing. Consider an example:

I teach a large-enrollment introductory course that is part of the University’s “Top 25” initiative, which seeks to reorient these sorts of typically lecture-based courses around principles of inquiry-based learning. We have a team of people who teach this course and who have worked at figuring out new course modules that we could share among instructors. But the sharing happens (or not, as it were) through a wiki, and the kind of content we have up is not available in a fully ready-made form such that each of us can simply take it and go in our individual courses. Sharing just takes too much work as it is.

I’d like my LMS to make it really easy to share teaching resources among faculty; ideally not only within just a particular LMS instance at a single university, but across universities. Why can’t I, for example, create a course module and make it public? Why shouldn’t I be able to easily borrow work from colleagues at other institutions? And by easily, I don’t mean having to force them to export some damned package, email it to me, and then make me import it. I mean single-click sharing. What if, for example, I could search for particular concepts in my area of geography, and get a list of modules from both my colleagues here, but also other colleagues elsewhere, and simply click to use it in and/or adapt it to my course?

So that’s a use case: I really want to contribute to and borrow from my colleagues’ work in ways that go far beyond what’s now possible. What does it take to make that possible? I’m not exactly sure, but I think it’s likely to require rich metadata and structured content authoring. Sakai 3 will, for example, have a template system that allows for wizard-like creation of new content. I could imagine using those templates to layer RDFa metadata into the content itself, and then somehow collecting that metadata and exposing it through some sort of API (SPARQL?).
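To make the RDFa-plus-SPARQL idea concrete: metadata harvested from RDFa-annotated module pages is just a set of subject-predicate-object triples, and the query I have in mind (“find shared modules on topic X”) is a pattern match over them. The vocabulary, module names, and matcher below are all invented for illustration; a real deployment would use a proper RDFa parser and a SPARQL engine (e.g. rdflib) rather than this toy.

```python
# Triples as they might be harvested from hypothetical RDFa markup on
# course-module pages. All names here are invented for illustration.
TRIPLES = [
    ("module/urban-1",   "rdf:type",  "ex:CourseModule"),
    ("module/urban-1",   "ex:topic",  "urban geography"),
    ("module/urban-1",   "ex:author", "colleague at Miami"),
    ("module/climate-1", "rdf:type",  "ex:CourseModule"),
    ("module/climate-1", "ex:topic",  "climate"),
]

def match(triples, pattern):
    """Match one (s, p, o) pattern; None plays the role of a SPARQL variable."""
    return [
        t for t in triples
        if all(p is None or p == v for p, v in zip(pattern, t))
    ]

def modules_on_topic(triples, topic):
    """Roughly: SELECT ?m WHERE { ?m rdf:type ex:CourseModule ; ex:topic <topic> }"""
    typed = {s for s, _, _ in match(triples, (None, "rdf:type", "ex:CourseModule"))}
    on_topic = {s for s, _, _ in match(triples, (None, "ex:topic", topic))}
    return sorted(typed & on_topic)
```

Searching for "urban geography" returns the matching module subject; in practice the harvested triples would live in a shared store queried over HTTP by each LMS instance.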

We’ve got a completely 1.0-compliant CSL processor in the form of Frank Bennett’s citeproc-js, which is backed up by an extensive test suite. This has just recently been folded into the Zotero trunk code, so should be rolled out to Zotero users in the coming months.

The Mendeley team is also planning to use citeproc-js, though I haven’t heard any update on the timeline.

A new app called Peaya has CSL support, though I know no details (in fact, I hadn’t ever heard of it until just a bit ago, which bothers me)

Andrea Rossato is updating his wicked fast Haskell implementation to be 1.0-compliant; usable, among other things, with the really nice markdown processor Pandoc

What do I take away from this? That the idea of CSL is gaining traction: that citation styles are too much hassle for every application to create its own language and associated styles, and that users don’t really want to think about citation styling; they want stuff to “just work.”

So here’s my vision of where I’d like to be in another year or two:

“CSL support” is considered an important feature by users

A complete and beautifully functional online CSL creation application is up and running, and the result is an explosion of good, correct, and up-to-date styles. Right now we have a bit over 1,100, last I checked; I’d like to see this increase to cover virtually all current journal styles. To do this right means it has to be really easy both to create new styles, and to comment on and subsequently edit existing styles.

Wide and deep (i.e., fully compliant) support for CSL across a range of applications and application types (online, desktop, etc.). This not only includes correct formatting, but also making it really easy to find and use the styles noted above (and passing around files by email does not count).

But there’s still some distance between that idea and the current reality. For one thing, there’s not as much collaboration on CSL among developers as I’d like. Ideally, everyone that implements CSL should have some sort of public commitment to, and benefit from, future CSL development. At minimum, this should involve participating in development discussions. But beyond that, we need people to help with:

web design for the citationstyles.org site

finishing the style creation application and repository (PHP and jQuery skills needed!), and figuring out how best to exploit this in applications

So continual progress, but still a fair bit of social and technical work to do!

My institution is entering the Sakai community at a time that is both awkward and exciting. Sakai is now a two-product world. Sakai 2 is well-developed and stable: the LMS we have now. Sakai 3, on the other hand, is the emergent next-generation LMS: incredibly promising, but not yet ready for wide-scale deployment.

Given our roadmap to transition over the next year or so and have Sakai fully deployed in the Fall of 2011, the obvious question all of us who attended the Sakai 2010 conference were asking was: should we just jump straight to Sakai 3? Ultimately, after all the discussions, we ended up with about four different possibilities:

do Sakai 2, and effectively ignore Sakai 3

do Sakai 3, and ignore Sakai 2

run Sakai 3 for the nice new social-networking features to act as a kind of portal with Facebook-like features, but run Sakai 2 in “hybrid mode” for the more traditional LMS functionality that may not be ready when we need it

similar to the above, but run the two instances completely separately

Each approach has its trade-offs. The first ensures a longer transition to Sakai 3, which I think many of our faculty and students would really like to at least experiment with ASAP. It would also ensure another, somewhat abrupt, transition. The second is probably not realistic in our time-frame; some LMS functionality that some faculty will need will likely not be ready by Fall of 2011.

I got the feeling that our group was more attracted to the last two options, both of which would present faculty and students with the new face and the unique features of Sakai 3, and allow a more incremental and seamless transition to the next-generation LMS functionality as it became available. I also personally gathered that the ultimate decision will have to come down to facts on the ground, as they evolve. In short, we probably ought to concentrate on Sakai 2 now, but monitor the progress of Sakai 3. If the project moves at the pace projected in the roadmap then running 2 and 3 together in hybrid mode may well be a viable option. If not, running them separately initially might make more sense.

Another related important question will be what we use for portal functionality. Sakai 3 could hypothetically serve as a nice, flexible, portal interface. It is substantially more ambitious than the traditional LMS model. Certainly some of our people were thinking about this idea. And other institutions have as well. UC Berkeley, for example, is deploying Sakai 3 as its portal system for the coming Fall. But such a move at my campus would likely require a rethink of what our portal functionality should provide, and unlike Berkeley, we already have a portal constituency on campus. So I can imagine some political challenges as well.

Having just recently been involved in Miami’s decision to move from Blackboard to Sakai, I was asked to attend the annual Sakai conference along with some of our IT and instructional design staff. I just got back last night. Here are some thoughts and impressions.

For some background, I’m an academic whose focus has nothing to do with technology. Nevertheless, I have years of experience in working with open source communities on issues related to academic (mostly research) authoring (see, for example, my work on CSL, which is an outgrowth of work for OpenOffice). But because this work is not central to my academic position, I have tended to avoid investing cash and time resources in attending related technology conferences. With Sakai, though, it’s a little easier to justify my involvement, since it has direct impact on my teaching, and on the broader teaching and learning community at my institution. Aside from a talk I gave at a Code4Lib conference a few years ago, then, this is my first edu technology conference.

So what did I think in a nutshell? I was deeply impressed. The Sakai community is diverse, smart, passionate, and energetic. The sense of mission the community has is almost palpable. It is clear that there is a lot of deep thinking that happens in this world, and that there is a lot of discussion and community engagement around that. At the same time, this seems to be a quite pragmatic community as well. They know what they want to do, and they seem to know how to get there.

In particular, my respect for the Sakai 3 effort continues to grow. Before we made the decision to go with Sakai, I had already spent a lot of time looking at the project: downloading and running the current code, looking at the technical design, reading through the more user-oriented design documents, and talking to the Sakai product manager (Clay Fenlason) about the process by which they were realizing this ambitious vision for a next-generation LMS and collaboration system. So I was already really impressed with Sakai 3 before the conference. But at the conference, you can see how all this work materializes.

I watched a demo of the NYU pilot project (see, for example, this session description), for example, that will be going live in the coming months. Because the lead Sakai 3 UI designer was in the room as well, we could have a collective discussion about details of the work, both now and in the future. What became clear in these and other discussions is that there are some really sharp people working on this project. At no time did a question come up where I got the impression that these people had anything but an absolutely clear focus on what they were doing.

I also went to a session that explained all the work and thinking behind this diagram.

This diagram represents a year of intense work of pedagogical experts from around the world, trying to imagine (and re-imagine) the core principles that should drive the design of a next-generation LMS. The idea is that nothing concrete moves forward with Sakai 3 without justification in these principles.

Here’s an image from the session:

The session drew broad participation. It wasn’t just instructional designers or pedagogy people in the room. The guy you see in the right foreground with the dark blue shirt is Clay, the product manager. There were also a number of programmers in the room involved in the discussion as well. This is really good to see, as there are sometimes obvious disconnects between more user-focused design people, and programmers. There were even a number of faculty participating in the session as well. This is what the Sakai world means when they say that Sakai is by educators for educators.

I was also struck that the design principles noted above, and the way that Sakai 3 is proceeding more concretely, is fully consistent with the educational mission of Miami. This is software that should beautifully enable more student-centered, integrative, learning and research collaboration in ways that are simply not possible in current generation LMSs. So my hope is that my institution fully embraces these possibilities, and contributes what it can to realizing them. Now is the time to think big!

Last week, I was part of a meeting that decided on a recommendation for Miami University’s LMS transition over the next year or so. We ultimately chose among four options:

Blackboard 9 (stay with Blackboard, but move to next version)

Desire2Learn

Moodle

Sakai

Interestingly, there was very little support, if any, for continuing with Blackboard. There’s just been too much frustration with both the software and the company. It’s hard to justify spending so much money on such a mediocre solution, particularly given current budget issues.

Our ultimate choice was Sakai. I can’t say exactly what it was that ultimately organized the consensus around the choice, but my own argument in the meeting was roughly as follows:

all of the current LMSs are more alike than not

the open source options (both Moodle and Sakai) give the institution greater control over our own destiny going forward, with more options for support, for influencing the direction of the software, and for deciding when we want to transition to new versions

Sakai in particular has a really smart forward-looking roadmap in v3 which is shaped by the right, pedagogically-oriented, vision

For me personally the plans for Sakai 3 were a primary differentiator. The tight coupling of a new architecture placed at the service of a shiny new interface that is easy-to-use and flexible, and which is designed based on user-testing from the beginning, is, I think, the right direction. The widget and template-based approach has the potential to make it easier and quicker for new users to get going. The devil will be in the details of exactly how well they implement these ideas, but I am looking forward to seeing how Sakai 3 evolves.

Now the real work will start for the IT staff here. They’ll have to figure out the best way to transition course content from Blackboard, to train faculty and students in how to best make use of Sakai, and set up some kind of governance structure to manage our relationship with this new technology. I’m hoping this can include some mechanism to get IT services staff and interested faculty involved in the Sakai community, and contributing in different ways to its future evolution.

Earlier, I covered some interesting new characteristics of Sakai 3, but I want here to add another. Existing LMSs are hamstrung by a number of assumptions and limitations. To sum them up, today’s LMS tends to be both course-centric and tool-centric. If, for example, students who share two or three related courses want to set up a group, they can’t do it; the LMS assumes students are part of courses (or in some cases, non-course sites). Similarly, the LMS experience is constrained by a focus on discrete tools. If you want students to reflect on some ideas, and then host a discussion on them, you need them to go to two separate places: some webpage-like thing that describes the ideas, and then a separate forum where the discussion may happen. If a student wants to refer back and forth between the two areas, they need to do awkward things like open two windows or tabs, or do the browser back-and-forward button thing. This is a totally artificial limitation that has real consequences.

Thankfully, Sakai 3 does away with these limitations. Groups, for example, may exist independently of course and sites, and so will allow more flexible sorts of online sociability and collaboration among students, researchers, and so forth. On the tools front, Sakai 3 breaks down the walls that have previously divided them. If you want to host a discussion on some content, you can simply create your page, add the content, and then at the bottom of the page add a “discussion” widget. Upon doing so, a discussion thread will be available at the bottom of the page for students to view and contribute to.

The current UI has widgets for a variety of common features: polls (complete with a nice instant-view graph of the results), comments, quizzes, etc. But it also has some clever new ones, such as a Google Maps widget that allows you to embed a live Google Map (though as a geographer, I have to say that I’d really like to see more here, like the ability to ask for a country and have it understand what I mean; this might be more a limitation of Google Maps, though).

Here’s an example of what this looks like with the polling widget. First, we decide to add a poll to our page. We go to the “insert more” drop-down on the right side of our editor …

Once we select “poll” we get a dialog to set it up.

Note that this dialog pops up in place; no need to go to some separate page to manage this. Once we have it all ready and click “insert widget”, and save the page, we then see this, which is also what students will see …

When a student comes across this, again, they don’t have to go to some separate place to take the poll; they simply click their choice in place. Even cooler, once they’re done, the live widget presents the poll results in a graph view.

In turn, this graph will continuously update as other students take the poll!

A couple of months ago, Lance Speelmon at Indiana University presented a demo of this at a Sakai conference in Japan. You can see that starting at about the 23 minute mark or so of this video:

So let’s pause for a second and ponder the implications of this: I will be able to create a page for some topic in a class. I can add some text to present the issues for the topic and link to some background readings. I can then embed any other widget I want right there in the page! Bigger picture, this widget architecture is designed to be easy to work with for developers. So if I have some idea for a great new widget, any on-campus developer with basic web development skills could hypothetically help me create that widget.
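To make the widget idea concrete, here is a minimal sketch in Python of what a widget-based page model might look like. This is purely illustrative: all of the class and method names are my own inventions, not the actual Sakai 3 API, which is built on very different (web) technology. The point is only that a page becomes an ordered list of embeddable blocks, and a new widget is just another small class.

```python
# Hypothetical sketch of a widget-based page model, loosely inspired by the
# Sakai 3 approach described above. Names are illustrative, not real Sakai API.

class Widget:
    """Base class: a widget is any embeddable block that can render itself."""
    def render(self):
        raise NotImplementedError

class TextWidget(Widget):
    def __init__(self, text):
        self.text = text
    def render(self):
        return f"<p>{self.text}</p>"

class PollWidget(Widget):
    def __init__(self, question, choices):
        self.question = question
        self.votes = {choice: 0 for choice in choices}
    def vote(self, choice):
        self.votes[choice] += 1
    def render(self):
        # Render the question plus a live tally, so results update in place.
        tally = ", ".join(f"{c}: {n}" for c, n in self.votes.items())
        return f"<div class='poll'>{self.question} [{tally}]</div>"

class Page:
    """A page is just an ordered list of widgets; mixing content types is free."""
    def __init__(self, title):
        self.title = title
        self.widgets = []
    def add(self, widget):
        self.widgets.append(widget)
    def render(self):
        parts = [f"<h1>{self.title}</h1>"] + [w.render() for w in self.widgets]
        return "\n".join(parts)

# Build a topic page: some text, then a poll embedded right below it.
page = Page("Topic: Migration Patterns")
page.add(TextWidget("Read the background articles, then vote below."))
poll = PollWidget("Which factor matters most?", ["Climate", "Economy"])
page.add(poll)
poll.vote("Climate")
print(page.render())
```

A campus developer adding a new widget would only need to write one more small `render`-able class, which is roughly the low barrier to entry the Sakai 3 design seems to be aiming for.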

When I started pondering what I want in a next-generation LMS, this is exactly the sort of thing I was imagining!

So in trying to come to a conclusion on Moodle vs. Sakai, it’s easy to get wrapped up in the minutiae of feature comparisons and such. It seems to me, however, that it’s important to keep the larger, longer-term picture in view. In this case, that partly involves the strategic directions of these two projects, which give us a sense of where they might be in five years. To wit, below is my understanding of Moodle 2 and Sakai 3. I am in a bit of a hurry with end-of-semester chaos, so please correct me if I have anything wrong here, or if I’m missing important details.

Moodle 2

As I read it, Moodle 2 is a significant change to the platform, but a largely incremental one. The primary change appears to be the addition of a repository API, which provides a flexible way to add access to different kinds of resource repositories. For example, there is a plug-in that uses this API to let users browse and insert images from Flickr from within the standard Moodle editing tools. In addition, there is work on new features, many of which are outlined in the following video:
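Moodle’s actual repository API is PHP, but the plug-in idea it embodies can be sketched in a few lines of Python. Everything below is illustrative (the names are mine, not Moodle’s): each repository source implements one small listing interface, and the editor’s file picker can then browse any of them uniformly.

```python
# Illustrative sketch of a repository plug-in pattern, in the spirit of the
# Moodle 2 repository API described above. Not Moodle's real (PHP) API.

class Repository:
    def listing(self, path=""):
        """Return browsable items for the file picker."""
        raise NotImplementedError

class FlickrRepository(Repository):
    def __init__(self, photos):
        self.photos = photos  # stand-in for a real Flickr API client
    def listing(self, path=""):
        return [{"title": p, "source": f"flickr:{p}"} for p in self.photos]

class LocalRepository(Repository):
    def __init__(self, files):
        self.files = files
    def listing(self, path=""):
        return [{"title": f, "source": f"local:{f}"} for f in self.files]

def file_picker(repositories):
    """The editor's picker aggregates every registered repository the same way."""
    items = []
    for repo in repositories:
        items.extend(repo.listing())
    return items

picker = file_picker([FlickrRepository(["sunset.jpg"]),
                      LocalRepository(["notes.pdf"])])
print([item["source"] for item in picker])
```

The value of an API like this is that adding, say, a YouTube or institutional-archive source means writing one new plug-in class rather than touching the editor itself.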

In other news, there appears to be independent work on making Moodle friendly for mobile devices. Here’s a video of one such example:

From what I can tell, Moodle 2.0 will be ready for deployment sometime later in 2011.

Sakai 3

Sakai 3, on the other hand, is a more radical change: effectively a complete rewrite of the platform. This rewrite involves building the Sakai-related functionality on top of other, more generic, open source code. The new core code, and hence what the Sakai community is responsible for maintaining, is dramatically smaller than the old: at present, a reduction of close to 90% of the code base! In addition, one of the developers on the new Sakai core has also become a developer on the Apache Sling project on which the Sakai 3 core is based. This reflects some smart strategic decisions, and should provide a focused, easy-to-develop-and-maintain foundation.

Following are a few examples, gleaned from the design wireframes (visual mockups, not necessarily running code at this point) and the running demo code, of what this might look like.

Example: Everything is Content

Michael Feldstein does a good job explaining what this all means. But perhaps some pictures will make the implications more immediately apparent. Consider search. Because existing LMSes are organized around both courses and tools, it’s quite awkward to search content (forum posts, blog posts, assignment or page content, etc.) globally. Consider, on the other hand, this proposed search UI for Sakai 3:
So one does not go into, say, a forum and search just that forum. Rather, one has a search interface that is the same whether you search the entire university’s content or an individual course. That integrated search interface looks beautiful, and it will be instantly familiar to anyone accustomed to contemporary web interfaces.
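The reason “everything is content” makes this kind of search easy can be shown with a tiny sketch. Assuming (illustratively, not from Sakai’s actual code) that every forum post, page, and assignment is stored as a node with the same uniform fields, one search path serves every scope, from campus-wide down to a single course.

```python
# Illustrative sketch (not Sakai 3 code) of uniform content search: every
# forum post, page, and assignment is just a node with the same fields,
# so one query works across all of them. Sample data is made up.

content = [
    {"type": "forum_post", "course": "GEOG101", "text": "Discussion of migration patterns"},
    {"type": "page",       "course": "GEOG101", "text": "Migration background readings"},
    {"type": "assignment", "course": "HIST200", "text": "Essay on historical migration"},
]

def search(nodes, query, course=None):
    """One search path for all content types; optionally scoped to a course."""
    return [n for n in nodes
            if query.lower() in n["text"].lower()
            and (course is None or n["course"] == course)]

print(len(search(content, "migration")))             # campus-wide: 3 hits
print(len(search(content, "migration", "GEOG101")))  # one course: 2 hits
```

Contrast this with a tool-centric store, where the same feature would mean writing, and keeping consistent, a separate search for each tool’s private database.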

Example: Widgets for Integrating Different Content

The new interface is based on widgets, which let you quickly add different blocks of features and move them around. Because of the new core foundation, these widgets are also designed to be really easy to develop, so it’s much easier to add new functionality. In this view, for example, you see a widget I’ve added to access my Google Docs documents from within Sakai.

Example: Editing

One design priority for Sakai 3 is to make editing content much easier. Here we see the clean new editing interface.

In addition, all content is versioned, so you can easily step back through changes and see who made which ones. Since all content is treated uniformly in Sakai 3, there are no artificial limitations on where this versioning support applies. Here’s what the UI currently looks like:
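Uniform versioning is conceptually simple once everything is content: every save appends a new revision to the node, whatever kind of node it is. Here is a minimal sketch of that idea, with names of my own choosing rather than anything from the real Sakai 3 implementation.

```python
# Minimal sketch of uniform content versioning, in the spirit of the Sakai 3
# behavior described above. Illustrative only; not the real API.

class VersionedContent:
    """Every save appends a new version; old versions stay retrievable."""

    def __init__(self):
        self.versions = []  # list of (author, body) tuples, oldest first

    def save(self, author, body):
        self.versions.append((author, body))

    def current(self):
        return self.versions[-1][1]

    def history(self):
        # Who made each change, newest first.
        return [(i, author) for i, (author, _) in enumerate(self.versions)][::-1]

doc = VersionedContent()
doc.save("alice", "Draft syllabus")
doc.save("bob", "Draft syllabus with week 2 added")
print(doc.current())
print(doc.history())
```

Because the versioning lives in the generic content layer, a forum post, a quiz, and a wiki page would all get this history for free, which is exactly what the “no artificial limitations” point above implies.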

What I get out of all of this is that Sakai 3 will be more scalable (faster), more flexible, more elegant, and easier to use: a brand-new LMS designed for the needs of the 21st century. The devil will still be in the details of exactly how they implement specific features (gradebook, assignments, etc.) on top of this new core, but I am also really encouraged by what I am seeing of the design process. It demonstrates the attention to detail needed to do this right.

The current roadmap is that it should be ready for large-scale deployment sometime in mid-to-late 2011 [I corrected the year from 2012, per comment below]. Also, there’s some work going on (at Indiana?) on allowing mixed 2-3 deployments: using v2 tools within a v3 context, for example.