Posted
by
ScuttleMonkey
on Monday January 25, 2010 @12:49PM
from the does-it-provide-synergistic-roi dept.

With the recent release of Firefox 3.6, Mozilla has also decided to try out a new development model dubbed "Lorentz." A blend of both Agile and more traditional "waterfall" development models, the new methodology aims to deliver new features much more quickly while still maintaining backwards compatibility, security, and overall quality. Only time will tell if this is effective, or just another management fad. "If the new approach sounds familiar, that's because Unix and Linux development has attempted similar kinds of release variations for iterating new features while maintaining backwards compatibility. HP-UX, for example, is currently on its HP-UX 11iv3 release, which receives updates several times a year that add incremental new functionality. The Linux 2.6.x kernel gets new releases approximately every three months, which include new features as well."

I'm sure it was. It was looking for a flower named "Lorenz" [wikipedia.org], not "Lorentz".

TBH, I have no idea where Moz got the name. The only Wikipedia hit I got was for the Lorentz Transform [wikipedia.org], which describes the equivalence and mutual convertibility of different relativistic frames of reference. Is this Moz's way of saying "we'll all be going at different relativistic speeds, accelerations, and frames of reference"?
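For the record, the transform in question relates the coordinates of one inertial frame to another moving at speed v along the x-axis:

```latex
% Lorentz transformation for a boost with speed v along the x-axis
x' = \gamma\,(x - vt), \qquad
t' = \gamma\!\left(t - \frac{vx}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```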

Can I use the theory of special relativity to get out of missed deadlines? Sure, we are way behind in this frame of reference. But as viewed from a different frame of reference traveling near the speed of light relative to us, we shipped yesterday!

And there was me thinking it was named after EVE T1 salvageable materials... looking forward to the "Burned Logic Circuit" version... oh, wait, maybe we've already had that one; it makes your RAM fry from overuse.

The equation Einstein came up with more than a century ago can be considered a degenerate form of the mass-energy-momentum relation for vanishing momentum. Einstein was well aware of this, and in later papers repeatedly stressed that his mass-energy equation is strictly limited to observers co-moving with the object under study. However, very, very few people seem to have paid attention to Einstein's warnings, nor to any of the more recent warnings.
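Spelled out, the full relation and its zero-momentum limit:

```latex
% Energy-momentum relation; for vanishing momentum (p = 0)
% it degenerates to the familiar mass-energy equation
E^2 = (pc)^2 + (mc^2)^2
\;\;\xrightarrow{\;p \,=\, 0\;}\;\;
E = mc^2
```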

That depends entirely on how you define mass. Invariant mass doesn't change. Relativistic mass (i.e., an object's resistance to deflection in spacetime) does.

But at the macroscopic level invariant mass is a convenient fiction, unless you're dealing with something at absolute zero. If not, then guess what: the invariant mass includes the object's heat, expressed as kinetic energy.

I think you'll find that it is the other way around: release dates will get a lot harder to hit, because less time appears to pass for the fast-moving developers compared to the rest of the planet. Also, mass (not weight!) is an invariant quantity, so there will be no change. Yes, I know that a lot of people think the mass increases, but it does not; the 'gamma' factor in momentum comes from the velocity, NOT from the mass, which is why things like "F = gamma ma" do not work.
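Written out, the gamma factor sits with the velocity in the momentum, and differentiating that is exactly what breaks the naive "F = gamma ma":

```latex
% Relativistic momentum: gamma multiplies the velocity, not the mass
\mathbf{p} = \gamma m \mathbf{v}, \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
% Differentiating p gives a direction-dependent force law:
% F = gamma^3 m a for a parallel to v, gamma m a for a perpendicular,
% so no single "F = gamma m a" holds in general.
\mathbf{F} = \frac{d\mathbf{p}}{dt}
           = \gamma m\,\mathbf{a}
           + \gamma^{3} m\,\frac{\mathbf{v}\cdot\mathbf{a}}{c^{2}}\,\mathbf{v}
```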

the new methodology aims to deliver new features much more quickly while still maintaining backwards compatibility, security, and overall quality.

A style of management is only as good as its manager(s). We've had many, many methods of improving all three of those, but as an industry we routinely and repeatedly turn them down for most applications over cost considerations. A new hybrid model of development won't change this -- continual pressure from inside the organization will eventually subvert any gains at the process level. Senior management has to push this from the start -- only then would this or any other methodology have a chance at achieving its goals.

At Yahoo! we tried this on a few projects and ended up calling it waterscrum. Wanting the dev flexibility of agile and the (perceived) business certainty of waterfall at the same time isn't really possible when it's not understood that the dev methodology has impacts outside of the tech organization. If you're doing agile dev, the marketing materials, sales collateral, etc. are much more difficult to write and lock down when you're looking to make a splash in the market.
For agile to work the entire company needs to be okay with some level of uncertainty, or at least understand that for major market releases you still need to plan a date far in advance. Just because you're launching code doesn't mean you're launching a product, and getting materials locked down is harder to do when, by definition, changes happen more frequently.

Dang! I thought I had the perfect idea of how to mix the waterfall model with agile development. I started writing an article about it some months ago but can't get myself to finish it.

The idea was basically that when you start a project, you must know at least something about the problem the project tries to solve, and there's your goal. When the goal is at least somewhat clear, you write the requirements analysis and architectural specification. You can always come back to the arch-spec, but you have to understand that making dram

Oh. Thanks for the info. I'll definitely check out RUP when I have spare time. My opinion is that you must have a clear vision from the start of at least what problem we are trying to solve with this application. How else can you say when your project is finished? In agile development the goal is not always so clear, and it can change a lot in the process, even though things like Scrum were designed to prevent losing the focus/goal.

The waterfall model is horrible for big projects. I thought everybody knew that and had switched to the spiral model a long time ago. And now they add the only thing to it that is even more horrible? Agile?? Or in other words: spaghetti coding with the motto: “If perfect planning is impossible, maybe not planning at all will work.” No, dammit! It’s just as bad. Maybe that’s why they try to mix them both... to get to the actually healthy middle ground.

But still, it’s silly. We have a perfectly good spiral model. Hell, the whole game industry uses it. (As far as I know.) And it works great, even on those huge 5-year projects. (Notable exception that proves the rule: Duke Nukem Forever.)

Sorry, but that will result in a huge epic failure, and probably Firefox’s death. Mark my words. :/

The waterfall model is horrible for big projects. I thought everybody knew that and had switched to the spiral model a long time ago.

The spiral model is utterly terrible. Since the DoD moved over to it, every one of their projects is over budget, underperforming, and late.

Agile isn't all that much better. The whole point of Agile is that you can have all of these changes... but you can get that with shorter release cycles, and it's pretty easy to game Agile as much as any other model.

Based on my experience so far, I would say to do the technical structure with waterfall and the functional structure with agile. What do I mean by that? Well, most of the time the customer doesn't really know what he wants, which is why blueprinting fails so miserably. But you can often, at a technical level, know what a customer needs. Let's say, for example, the customer wants drop-down fields in an application. You know you'll need a storage backend (database?), a UI front end (web app?), functions to manage the values, listing, sorting, and filtering (single- or multi-value?), plus security, audit logs and so on. You can design a ton of things by waterfall without actually knowing what drop-downs the customer will want.
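As a purely hypothetical sketch of that "technical structure first" idea (the class and method names here are made up for illustration), you can commit to a generic drop-down value backend before knowing which drop-downs the customer will ask for:

```python
# Hypothetical sketch: a generic backend for managed drop-down values,
# designed up front (waterfall) before the actual fields are known (agile).

class DropdownStore:
    """Stores named lists of drop-down values with management operations."""

    def __init__(self):
        self._fields = {}  # field name -> ordered list of values

    def add_value(self, field, value):
        """Add a value to a field's list, ignoring duplicates."""
        values = self._fields.setdefault(field, [])
        if value not in values:
            values.append(value)

    def remove_value(self, field, value):
        """Remove a value; raises ValueError if it isn't present."""
        self._fields.get(field, []).remove(value)

    def list_values(self, field, sort=False, contains=None):
        """Listing, sorting, and filtering -- the generic operations
        you can design without knowing the concrete field contents."""
        values = list(self._fields.get(field, []))
        if contains is not None:
            values = [v for v in values if contains.lower() in v.lower()]
        return sorted(values) if sort else values
```

For example, `store.add_value("country", "Finland")` works the same whether the eventual field holds countries, departments, or product codes; only the functional layer on top needs to change iteratively.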

Agile promises to do that by refactoring, which rarely happens, because it's very likely to break things that were already working, despite the unit tests. They need the documentation from the original waterfall design, and they need the testing from the new waterfall design to ensure quality. One of the things I've noticed suffers most in agile is the documentation, because there's an implicit belief that this will all change again, so people skimp on it even more than usual, planning to document it when it's "final". The result is often kludges made to extend things rather than actually going back to refactor, because people spent very little time thinking about a long-term design in the first place.

Conversely, I have done quite a few implementation projects, and in most the customer has only a list of specifications and no real idea how he'd like it to work. Creating a blueprint accurate enough that technical people could implement from it, and that the customer understands well enough that he's not going to say "well, that's not what I wanted", is like pulling teeth. And at the end of the day, different stakeholders will still have different ideas in their minds of what it's going to be. If you have a decent architecture, then you can do agile on top of that. Want this link to go there? Want to see these things? Can we get a checkbox there? Can you calculate that in a preview? Hopefully yes, but if it goes against the architecture it might need to go through a longer waterfall process.

There's a balance here: on the one side you've got expert systems that try to be ultra-flexible in every direction but only end up as an overcomplicated mess. On the other, you have the projects where nobody took five minutes to think, "Am I trying to solve one special instance of a general issue here?". I've no idea if mixing the two would only make a complete mess of two development methodologies, but I'd sure like to try it out sometime.

And now they add the only thing to it that is even more horrible? Agile?? Or in other words: spaghetti coding with the motto: “If perfect planning is impossible, maybe not planning at all will work.”

This is not meant as a flame, but I don't think you have a clue as to what Agile means here. Possibly because the term has been abused a lot by people who just want to get rid of all processes -- nonetheless, agile does not mean "no process", just a light-weight common-sense process that most mature

And now they add the only thing to it that is even more horrible? Agile?? Or in other words: spaghetti coding with the motto: “If perfect planning is impossible, maybe not planning at all will work.”

It is obvious that you have never worked with a properly implemented agile process.

First of all, spaghetti code is absolutely not accepted. High quality code is imperative to maintain a successful product in the long run, and something methodologies such as Scrum explicitly declare as non-negotiable.

Yeah, of course not. And then, failing at sprint goals is absolutely not accepted either (at least not by management, which is all that matters). Do you know what happens when those two "absolutes" collide?

"High quality code is imperative to maintain a successful product in the long run"

And doing what needs to be done to reach the short run goals is imperative to get this week's paycheck. Again, do you know what happens when those two "imperatives" collide?

I agree with you that nothing is as important for success as a skilled workforce, although this is not limited to management. I would also argue that, in practice, skilled people will work in a similar way regardless of what methodology you put them in.

However, that said, there still are differences between methodologies that are worth considering. Since we were talking about Scrum: one of its most important aspects is not so much to dictate how you should work, but rather to provide tools and processes f

Also, it is the Scrum Master's job to shield the team from pressure from higher management. Stakeholders are allowed to (and must) prioritise tasks, but the influence should end there.

In my experience, people at the level of a Scrum Master may have the responsibility to shield the team, but they rarely have the authority. If the Scrum Master, or even someone a level higher, is unable to deliver to management's expectations, don't believe management is meeting to decide how they failed or how to replace themselves.

If Scrum is not understood and fully endorsed by the entire organisation, including higher management, it is doomed to fail. I agree this is a common problem, but the problem is not really with the methodology as such.

Projects have defined, achievable end states, and should have built-in mechanisms for winding down. Firefox achieved the objectives of its project, to release a lightweight standards-compatible browser, somewhere around version 2, at which point it should have wound down into a maintenance mode if it were strictly a project.

What we have here is an activity organized by a party which has interests beyond producing a good browser as a project. At the very least, the Mozilla Foundation's and

I think a typical yet reasonable school of thought is that the best model depends on the characteristics of the project. Some projects are very fluid and some projects are very constrained. Designing the next cool iPhone game versus programming a perfect clone of last month's cool iPhone game.

Mozilla have fallen into the classic trap of trying to expand its user base via increasing features, as opposed to keeping its user base by increasing quality.

We don't need new features directly in Firefox. Plugins do that. Remember that long ago the project made a conscious choice to take a performance hit to provide third-party access into the browser via the elaborate XUL and plugins frameworks, to minimize pushing code and features onto users who don't need them.

Mozilla have fallen into the classic trap of trying to expand its user base via increasing features, as opposed to keeping its user base by increasing quality.

It works for Microsoft.

We don't need new features directly in Firefox. Plugins do that. Remember that long ago the project made a conscious choice to take a performance hit to provide third-party access into the browser via the elaborate XUL and plugins frameworks, to minimize pushing code and features onto users who don't need them.

It was a minority browser then. Most people do not install plugins, or install very few. Most people do not want to work out which plugins are incompatible with each other. Most people judge a browser by how good it is out of the box.

Actually, if you compile it yourself, can't you turn off most of the bloat?

You or I could probably piece together an understanding of how to do that inside a working week or so via this list of fragmented documentation: https://developer.mozilla.org/Special:Tags?tag=Build+documentation&language=en [mozilla.org] -- but I don't think the rest of their intended audience of hundreds of millions of users should need to do that in order to use an efficient browser. (In the face of mobile phones and other light devices which provide fully capable browsers, pointing out that Firefox uses 200 MB less mem

The elegant solution to too much choice among plugins isn't to revamp the software development workflow, nor is it to load every conceivable feature into the default interface. (If the opposite were true, Firefox would ship with all 10,000 plugins loaded, which it does not.)

Since Firefox is starting to resemble an operating system anyway, it might be time for Firefox distributions, which default to a core consisting of functions expected of every browser, along with the small number of exceptiona

That would be a valid argument if Microsoft and the Mozilla Foundation had similar goals; they do not. Microsoft is a multibillion-dollar global company intent on making money; Mozilla just wants to make a browser and a few related applications.

It was a minority browser then. Most people do not install plugins, or install very few. Most people do not want to work out which plugins are incompatible with each other. Most people judge a browser by how good it is out of the box.

There's a word for that. It's called 'integration'. You add new features as extensions and then you provide browser packages which include a few extensions installed by default. That way, those who do not want a particular feature can remove it.

Mozilla Corporation's goals are substantially to do activities which bring in revenue, as with Microsoft. Mozilla's main vehicle for doing so is to package and distribute a browser through which income is generated via Google searches. To maximize revenues, they need to maximize both market share and usage of their browser.

The new focus, maximizing market share (quantity), could help, but not as much as a new strategy which maximizes both market share and usage (quality). Under those 30 MB or so of binaries

Mozilla Corporation's goals are substantially to do activities which bring in revenue, as with Microsoft. Mozilla's main vehicle for doing so is to package and distribute a browser through which income is generated via Google searches. To maximize revenues, they need to maximize both market share and usage of their browser.

Mozilla Corporation is a wholly-owned subsidiary of the Mozilla Foundation. As such, its revenue-raising activities are limited in scope to assisting in raising money for the Foundation and funding development activities for Mozilla's projects.

As such, Mozilla Corporation is profit seeking only in as much as it furthers Mozilla Foundation's goals.

Under those 30 MB or so of binaries, libraries and other stuff, I'm sure there exists a small feature subset which would give all Internet users a compelling reason to switch to and stick with Firefox, if that feature subset were promoted correctly.

See Epiphany and Chrome/Chromium. These projects are exactly that, but they're based on the leaner and faster WebKit libraries rather than Gecko. Firefox aims to

Under those 30 MB or so of binaries, libraries and other stuff, I'm sure there exists a small feature subset which would give all Internet users a compelling reason to switch to and stick with Firefox, if that feature subset were promoted correctly.

Except for whatever browser comes with Windows, which many users don't know about, and for sites which force users to use IE (ActiveX, etc.), of which there are several. I work in a cyber cafe with about 100 inexperienced users a day. Many of them are actually arrogant with me, saying IE is better, it always works with the site. And unfortunately, with many sites, I am forced to admit it is true -- without IE and Windows, you can't get your job done; things won't work. It happens on government sites and ba

If the plugin architecture has become a problem (and it has, due to analogs of shared memory and a lack of process isolation leading to potential security issues), then they should work to revise or remove it. Moving to a gushing agile waterfall feature stream, or whatever development and release paradigm, isn't a plausible solution.

Firefox reminds me of Windows 3.1 in an uncomfortable number of ways. Besides their co-operative multi-tasking environments and storing system settings in several different places, the

You know that it is silly that every time a new version of FF comes out, every add-on author has to bump the version in his code and resubmit to AMO? Most of the changes from version to version of FF do not affect most add-ons at all, and yet there is this whole thing with add-ons having to be resubmitted and wait in the queue for weeks, when in the end the only change in the new version is the maxVersion tag in install.rdf.
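For anyone who hasn't shipped an add-on: the version pinning lives in a fragment of the extension's install.rdf that looks roughly like this (the version numbers here are illustrative):

```xml
<!-- Fragment of an extension's install.rdf (illustrative values) -->
<em:targetApplication>
  <Description>
    <!-- Firefox's application ID -->
    <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
    <em:minVersion>3.0</em:minVersion>
    <!-- Often the only line that changes for a new Firefox release -->
    <em:maxVersion>3.6.*</em:maxVersion>
  </Description>
</em:targetApplication>
```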

On the other hand, there is now talk of completely changing the system of interfaces between add-ons and the browser. Who has the time and interest to rewrite the same thing over and over again?

That's great, but there are lots of extensions that do in fact break. If users update to a new version of Firefox and their extensions don't work, or cause their browser to crash or otherwise malfunction (not a theoretical problem), they are not happy users.

The Jetpack project is working to create a stable (but admittedly more limited) API for extensions to use to make it possible to sidestep this problem.

How do you know that Mozilla are not improving quality? If you pay attention, Mozilla are improving the quality of the codebase (memory consumption/leak fixes, crash fixes, etc.).

And while plugins do add some features, what about HTML5 support? Support for SMIL animations in SVG? Out of process plug-ins? Better JavaScript performance? Support for additional emerging and evolving standards? Better OS integration on Windows, Mac and Linux? Hardware-accelerated page rendering? WebGL support? And much more.

Why does the Linux community have so much trouble with it, while nobody else does?

FreeBSD, for instance, manages to have several major-number releases in use at any given time. FreeBSD 9.x is in development. FreeBSD 8.x is the recommended production release. But even FreeBSD 7.x is still supported. Not only that, FreeBSD manages to get out several point releases each year, in addition to a major release. But it has none of the problems you mention.

In the world of Free (Gratis + Libre) open source software (FLOSS?), there's little need to waste time patching an older system when everyone has free access to a newer system that's backward compatible. That job is left to distribution owners (like Ubuntu, whose October 2008 "LTS" is still patched by them).

"In the world of Free (Gratis + Libre) open source software (FLOSS?) there's little need to waste time patching an older system when everyone has free access to a newer system that's backward compatible."

It's just that, too many times, it tends to be *not* so backward compatible.

You're both right. New features being added to the stable kernels have done much to reduce stability between kernel versions -- so much so that distros have had to pick up the slack by introducing an increasing number of patches. Have you ever looked at the patchset list for Ubuntu? There have been something like 17 different kernel patchlevels for Karmic Koala since it was released in October. That's more than one patchset a week, and each patchset can have anywhere from 1-10 patches.

Patch levels don't start at 1 the day of release, they start the day they start working on the next branch. The kernel included in the installation CD was at patchset 14, the latest released one is 17 (However there were 2-3 updates that didn't change the patch level). And Lucid is already at -11 (see http://changelogs.ubuntu.com/changelogs/pool/main/l/linux-meta/linux-meta_2.6.32.11.11/changelog and http://packages.ubuntu.com/lucid/linux-image).

It's not the waiting that sucks. What sucks is that the old development model was more unstable. For big projects like Linux with a lot of activity, long development cycles just don't work. You don't have releases, so users don't test it. Once you get out the first stable release, users notice that it's very buggy (but you still don't know all the bugs, because most users and distros are still not using it, because it has too many bugs), and it takes a full year to get the codebase into decent shape. That's what happened with Linux 2.6. They had been dropping thousands of LoC in for a couple of years. Because it was an "unstable cycle", quality was not so important; the main tree was used as a repository for "work in progress" code, and even if quality was important (which it isn't, even if there's a corporate policy that says it must be), you can't measure the quality of the code, because the users are not using it.

The new model, on the other hand, allows new features in every release, but it's much easier to track regressions compared to the previous model. The new features are required to have some quality, they can't have serious bugs, maintainers must agree that they can be merged, and they can only be merged in the first two weeks of the 3-month development period. It allows faster progress, and at the same time bugginess is controlled more easily. Previously, you had a huge diff of several MB, users reporting that the huge diff was causing several bugs and regressions in their systems, and developers had to start debugging the alpha code they had written, and had not tested, two years earlier. IMO, long term, it's much better for everybody. It's not surprising that FreeBSD and Solaris are using this model too; it makes sense for Mozilla to use it as well.

It was worse than that. The lag prompted several maintainers and distros to backport changes, and the result was *two* unstable branches. I recall trying to get a new server online only to discover that the old kernel would crash on boot and the new kernel would crash randomly afterwards.

With the new development model it has been much easier for me to keep stable systems.

It's not about the version numbers but the development model. Basically, "do we add some features in the stable branch and make frequent releases, or do all new features go only into a dev branch that gets stabilized and released only rarely?"