Posted
by
michael
on Sunday September 23, 2001 @02:36AM
from the 6346-or-bust dept.

The famous Anonymous Coward writes: "I saw over on Gnutella News that LimeWire LLC announced that they're releasing the LimeWire codebase under the GPL license and that they've set up limewire.org as a site dedicated to Gnutella and LimeWire development. LimeWire's codebase is currently used by two of the most popular Gnutella clients: LimeWire and SwapNut. As far as I know, this is the first time a formerly closed-source file-sharing codebase this popular has been open-sourced." gtk-gnutella is coming along nicely for Linux, but more competition is always better.

Source code availability alone does not make something open source, any more than being free of charge makes software free. Perhaps it was halfway "shared source" before (but even that's being unfair to "shared source").

Congratulations to LimeWire for releasing an excellent libre software application!

Translating java bytecode back into source code is not very hard. LimeWire, being a java app, was halfway open source already.

Not at all. The thing about that is that you would be breaking their copyright (and ergo the law) if you modified and/or redistributed the code. This way, it's properly free (RMS-sense), as opposed to just crackable.

Only temporary variable names are lost, unless the author used a scrambling program. Class, field and method names are preserved, because the link process for Java occurs at load time, and they are needed for this.

It's ill-informed comments like this that give the open source movement a bad name. By your reasoning, every OS and binary ever released is halfway to open source, since it disassembles easily into assembly code. And plenty of people are fluent enough in assembly to take the "project" over from there.

Being "open" takes intent on the part of the creator/releaser/licenser.

Sure it's a LITTLE slower than a regular app, but LimeWire's latest release fixed a lot of issues that previous versions had (the redraw after 'unhiding' is fixed for the most part), it doesn't crash at startup as much, and it's a little faster than the last rev [digression: but this version won't hold preferences for sharing files]. Until Macphex (also Java) puts a file-type option in the search capabilities, I'm using LimeWire.

The Mac version is pretty solid too. The only problem is that under the Mac's JVM, if you're running out of memory, new objects don't always get created, so new connections and new widgets don't show up when you're low on memory. :)

Just to clarify, there is no "Windows version", "Linux version", or "Mac version". It is a 100% Java program, and all versions are identical. You can run the LimeWire.jar file from any JVM and see the exact same client. The only difference between prepackaged versions is that they wrap the JAR file in an executable launcher for that platform (an .EXE for Windows, an ELF binary for Linux, etc.)

Like many other closed-source Java apps, LimeWire is prepackaged for different platforms to make the client easier to install. For Linux, one gets a nice Bash script that allegedly makes running LimeWire easier. ;-) Similarly, the MacOS version has a nice installer and a nice script that makes the client much easier to start up.

LimeWire is one of my favorite Gnutella clients, and also the first decent windowed Java app I've seen. I commend them for doing this, but I have to wonder how this fits into their business plan. They just made a deal with File Metrics Inc to brand LimeWire tech as SwapNut. But why would they make their source (read: IP) free if their business plan is to license their IP?

The GPL is actually quite useful in cases like this -- as Qt's developers have found. You release the code base as GPL, which allows it to be used in any GPL-compatible code... but if companies want to use the code in their closed products, they have to talk to you and pay you to license it to them under something else.

Don't know about the original poster, but I would have. Lots of nice features that other clients lack. On the worst end, I'd put Mactella. I use LimeWire at home on my Linux box, and at work on my Mac box. Nice li'l app.

The third party programs included with BearShare are optional (all you have to do is un-check the checkboxes during installation) and they do not monitor your internet usage any more than Macromedia does. When you visit a web page with Flash content, the Flash plugin "knows" this and displays the movie. Similarly, when you visit a web page which is cooperating with one of BearShare's third party programs, the program "knows" this and displays an ad.

For the last fucking time, Onflow does not send your browser history to the NSA! Please stop spreading paranoia.

Is it me, or is everyone else reluctant to download some slow Java program with a clunky UI that's a 3.44 MB download plus the 14.4 MB JRE 1.3, over a lean, mean GTK version that's a 157k download I can set up with ./configure; make install?

I mean, I wish the LimeWire people the best, they've obviously put in a lot of hard work and long hours, but it just pains me to see a program that big and inefficient. Is it ever going to be possible to compile a Java program into a small-to-medium-sized, standalone executable? I realize you normally need the Java virtual machine running, but this just seems... messy.

All you Java advocates, this is your chance to defend your language of choice and explain it to me and the rest of the /. crowd. :)

And yes, I have used LimeWire before, albeit quite a while ago.

Sure, this is a little bit off topic, but how often can you say "yay, another program is open sourced"? ;)

So, look at it this way... just like I already have GTK on my system, I already have a JDK/JRE.

I download Limewire (3.44MB vs 157k is a negligible difference these days) and unzip the distribution. Then I run a shell script that sets up the environment and runs the app where it sits. It takes a little time to fire up the JVM, but then it's just fine as far as speed goes.

With a GTK client, I have to compile and install it, then I get to run it.

If it wasn't written in Java, you wouldn't be getting it at all, since it would be Windows-only (or, at best, FLTK or wxWindows).

The download isn't 3.4M, it's about 1M.

The Java runtime download isn't 14M, it's about 9M.

You seem to be assuming that Gnome/Gtk is somehow free while Java needs to be downloaded. Why? The Gnome/Gtk libraries, as well as the C support libraries, are huge downloads. I remember last time I installed a basic Gnome desktop, I needed to download about 20-30M.

Sun's Java runtime isn't slow, although the Java GUI libraries are clearly less efficient than Gtk+. But, then, Gtk+ is also hugely less efficient than Xaw. As machines get faster, we create and use toolkits that are more convenient and less efficient. The Java toolkits have a lot more functionality and are generally easier to program and more robust than Gtk+.

The Java toolkits have a lot more functionality and are generally easier to program and more robust than Gtk+.

For us M$ developers, GDI+ (the graphics interface for .NET) seems to be a lot faster than Java, and a bit easier to program. Either way, computers are supposed to be human-centered, so if the computer has to do more work for the sake of less human work (!= human sloppiness), I'm all for it.

The file sizes I posted are for the Linux versions, and they are correct.

libgtk1.2 is ~615k while libglib1.2 (required by GTK) is ~61k, plus the dev .debs, which are probably not much bigger. (I already closed that window and I'm lazy, heh.) Most importantly, I have all of these .debs already installed, since nearly all the applications I run are GTK-based. People who say that Linux doesn't have a standard toolkit/feel aren't running the right programs, IMHO. :) (I don't use GNOME or KDE, I use Window Maker only.)
My desktop screenshot [steem.com] can be found here, for the curious.

And speed all depends on how fast your Linux box is. Some of us still don't have exactly top-notch hardware, or a top-notch internet connection for that matter. :)

My main gripe, I guess, is that no one is programming things like Microsoft Office or Photoshop for Java. Why is that? Too slow? Licensing too restrictive? Because the public (like me) wouldn't accept it as a real competitor? People see it as an additional thing to install/run? Too big a memory footprint? Or what?

Apparently no one's tried downloading anything with Mozilla, because both the JRE 1.3 .bin file and the LimeWire .bin file open up as plain text in Mozilla. Sun uses a form submit to download from an FTP server -- why? And CNET auto-refreshes to an FTP server; I'm never given a link, so I can't right-click to download or copy the FTP URLs to the clipboard to download using snarf. I had to hit stop on each FTP download and copy the URL from the location bar.

After a while of fighting, I figured out that I need to add the JRE bin dir to my path to make LimeWire run. (This isn't in the JRE install notes at all.)
Now there are the JRE 1.3x and 1.4x; LimeWire doesn't work with 1.4x, so if I have a 1.4x Java program I need both JREs installed and running. The memory footprint of the JRE 1.3, according to top, is 30 MB -- yikes! gtk-gnutella is 3 MB.

Now I'm not saying that gtk-gnutella is the perfect program, far from it. But recently the gtk/glib libraries for win32 have become pretty good. I've seen a few cross-platform Linux/win32 programs using these graphics toolkits. www.videolan.org (a DVD player for any OS out there) is one that comes to mind right away.

libgtk1.2 is ~615k while libglib1.2 (required by GTK) is ~61k, plus the dev .debs, probably not much bigger.

You are comparing a minimal set of Gtk+ runtime libraries with the JRE, but that's comparing apples and oranges. The JRE isn't just a bunch of GUI libraries; it's a huge set of powerful libraries, a runtime compiler and optimizer, and a lot of other stuff that you couldn't get for GNOME if you wanted to. In real life, people install "the GNOME environment" and "the JRE", and those are the sizes you need to compare.

My main gripe, I guess, is that no one is programming things like Microsoft Office or Photoshop for Java. Why is that? Too slow? Licensing too restrictive? Because the public (like me) wouldn't accept it as a real competitor? People see it as an additional thing to install/run? Too big a memory footprint? Or what?

Well, who is going to do the programming? People on the Gnome and KDE projects seem quite anti-Java because they have some belief in the superiority of C and C++. (Now the Gnome people are going off on a C# tangent, which, being a Java clone, I suppose is better than C, but it's too little too late.) And why would any commercial vendor bother?

After a while of fighting, I figured out that I need to add the JRE bin dir to my path to make LimeWire run.

For the Windows (and probably MacOS) version of the JDK, you just click on it to install it.

The file sizes I posted are for the Linux versions, and they are correct.

Well, then the Linux packaging isn't very good.

And speed all depends on how fast your Linux box is. Some of us still don't have exactly top-notch hardware,

Of course, you can always aim low and try to produce software for older computers. But how is open source ever supposed to lead with that kind of attitude?

Now I'm not saying that gtk-gnutella is the perfect program, far from it.

Far from it, actually. The UI has numerous serious problems. I mean, come on, using a list box for tabs? Truncated text labels? Using list boxes for displaying statistics? A window that won't resize to anything narrower than 1027 pixels (how are people on older laptops supposed to use it)? No menu bar? Why didn't the author use the right kinds of widgets for the job and make the window resize properly? Gtk-Gnutella could be a poster child for how making programming too hard leads to serious design and implementation problems, and Gtk+ C code is intrinsically so interwoven that these kinds of problems are hard to fix.

Now, I'm not saying that Java is the perfect programming language, or that LimeWire is the perfect Gnutella client. But Java is a whole lot better than Gtk+ in terms of programmability and portability.

Something like Gtk-Gnutella should be written in Python, Tcl, or Java, not a low-level language like C. Python and Tcl are great for single-programmer projects (they allow very rapid development), while Java is better for large multi-programmer projects (programming in Java is much slower, but it's quite a bit easier to coordinate among many programmers).

I don't know what "pre-installed" is supposed to mean. Gnome/Gtk+ are optional packages in almost all distributions.

Java doesn't. I just wonder why that is so.

You might ask why Java isn't part of the RedHat or Debian package systems. There is actually lots of demand, but unfortunately, Sun's licensing policies make this difficult. It's too bad Sun can't market themselves out of a paper bag. However, the Sun JDK is trivial to install. Also, Debian, RedHat, and other systems include several other Java implementations.

Could it be that *gasp* Java is not popular?!

Java, while clearly not perfect, is wildly popular: it is taught widely in colleges, is part of the AP exam, is used extensively in research, and is one of the most popular platforms for building enterprise applications. It's a shame that the most vocal Linux proponents seem to be so hostile to it. And it's particularly regrettable that people like de Icaza are off on a wild goose chase with a less mature Java clone called "C#".

As pointed out by other responses, your arguments about library requirements and file size are more or less moot points.

More importantly, we should look at the program's versatility and ui.

Java has worked hard to become usable over many os/hardware combos. For the most part it succeeds, albeit at a cost in speed. GTK+ is designed primarily for X clients, and isn't usable by the majority of filesharers. The Limewire programmers can program in a relatively nice language and develop an entirely cross-platform result at no extra cost.

UI is what matters now to people. Limewire has a well thought-out and easy to use UI. This is important for the masses -- the same ones who'll be sharing the files you want. Lean and mean doesn't equate to much when people really just want simple and effective.

Bottom line: use whatever floats your boat, but most of us will stick with something we can use off of any modern machine with no need for elbow grease.

And that's exactly my problem with Java. I avoid Java programs whenever possible, as the UI tends to be slow and clunky, and just doesn't fit in with all the other UI apps I use (it won't follow themes or UI conventions).

I _do_ like the fact that I can run stuff that I otherwise wouldn't have had accessible, but as soon as there is a native application to do the same thing, the native one just is so much nicer.

Once the Gnome and KDE people have agreed on some interoperability standards (drag and drop, themability and UI functionality), I'll have KDE apps to consider as native as well. Would it be _that_ difficult to reimplement the Java UI in a native manner as well?

"and just doesn't fit in with all the other UI apps I use (it won't follow themes or UI conventions). "

It's funny that you should mention this in a discussion on GUI applications for Linux. If there's such a thing as a standard look and feel for Linux, I have yet to encounter it. There are several desktop environments, each of which comes with its own widget set, its own way of theming it, its own component model (if any at all), and its own look and feel. Generally you need all of them in order to run common desktop applications. There's no way you can target all those environments as a programmer. And applications written for one environment integrate extremely poorly with the others (beyond the point of being able to display the user interface).

With Java you want to abstract away from it all so that it works on all platforms. That means you can't rely on native things to work consistently everywhere.

Limewire has achieved that. It's a simple, elegantly designed UI that works the same on each platform. Most of the native competitors pale and look clumsy in comparison. Being cross-platform is vital, since Gnutella works better when more hosts share files. The LimeWire people just have to design the GUI once and can focus on adding new features (which they do).

Admittedly there's a problem with integration with the native platform. However, on Linux it is absolutely unclear what exactly this native platform is. Should Sun integrate the JDK with GNOME, with KDE, with Motif, with X? Should they create separate JDKs for each environment? What about versions of each environment? The problem is that there is no standard, and consequently all Sun can do is target the lowest common denominator. They don't have that problem on Mac OS X or win32. The JDKs on those platforms generally integrate much more nicely. They use the file dialogs, the printing facilities, the native 3D, 2D, and multimedia libraries, the clipboard, and so on. Achieving the same on Linux is nearly impossible since there are multiple implementations of each of those components. However, that is a Linux problem and not a Java problem. IMHO this is the primary reason that Linux on the desktop is still not happening outside the developer community. Also, I am very pessimistic about these issues being addressed in the near future.

It's also worth pointing out that you CAN have a native look and feel if you want. It's a one line code change. We could easily make it an option in the LimeWire GUI, but we like our cross-platform L&F much better.
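For the curious, the "one line" being described is presumably Swing's pluggable look-and-feel mechanism. A minimal sketch (the class name is mine, not actual LimeWire code):

```java
import javax.swing.UIManager;

public class LafSwitch {
    public static void main(String[] args) throws Exception {
        // The one-line switch to the host platform's native look and feel
        // would be: UIManager.setLookAndFeel(native);
        String nativeLaf = UIManager.getSystemLookAndFeelClassName();
        System.out.println("Native L&F class: " + nativeLaf);

        // The cross-platform choice: the "Metal" L&F, which renders
        // identically on Windows, Mac, and Linux.
        UIManager.setLookAndFeel(
                UIManager.getCrossPlatformLookAndFeelClassName());
        System.out.println("Active L&F: "
                + UIManager.getLookAndFeel().getName());
    }
}
```

Every Swing component picks up the installed look and feel, so the choice really is a single call made before the GUI is built.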

Native integration is not that difficult. We've added some native code for Windows (system tray, file launching, etc.) since that's the vast majority of users. We'd welcome any volunteers to add native support (like file launching) to other platforms. Hey, I run LimeWire on Linux myself.

I would have to say that gnut is probably the best gnutella client I've used. It's fairly small, lightweight, and easy to use. Lately it's just running as a server though since I've run out of HD space. Hmmmm, time to fiddle with the burner I suppose.

When everything looks bleak -- terrorist attacks, loss of lives, liberty, and even FREE SPEECH, and open-source projects either folding (going closed-source) or being yanked due to legal pressure and such -- this is indeed good news!

gtk-gnutella is coming along nicely for Linux, but more competition is always better.

No offense Michael, but I disagree. I don't know how it is with file sharing systems on Linux, but Windows is glutted with the things. I've used a few and my college roommate experimented with tons of the things. I don't want a lot of variety, I just want a simple interface and a simple system that finds what I want and is relatively lawsuit proof.

Google is the ideal for web searching and something approaching that caliber for file searching would be wonderful. Make it easy, stable to use, and uncomplicated, then get everyone to use it (or make it interoperable with other networks) so that you have the best chance of finding what you want.

No offense Michael, but I disagree. I don't know how it is with file sharing systems on Linux, but Windows is glutted with the things.

Limewire supports and uses the gnutella network. The competition he was discussing was with the different gnutella programs, just like Eudora, Netscape Mail, Outlook and many others support the same standards, but provide different interfaces and features.

I do agree, though, that peer-to-peer file sharing needs to be standardized. It's just as bad as with the network file sharing protocols, like NFS, SMB, AppleTalk, etc... I've seen eDonkey, Gnutella, Napster, and several others I can't remember. Some introduce nice ideas, but the overall community would be better off if that energy went into one standard. Why can't Gnutella be adapted to support eDonkey's fragmented download procedure?

Oh well, it's yet another example of how the software industry can't learn from the mistakes of the hardware side. Standards have greatly improved the hardware market, imagine what will happen when almost the entire software market sees it this way.

Standards have greatly improved the hardware market, imagine what will happen when almost the entire software market sees it this way.
I'm not really sure what you mean by this. Standards have led to a lot of cheap hardware, which is good for consumers, but bad for business. Whenever you make it easy for your competitor to substitute their parts for yours, the business is going to lose sales, and because of pricing pressure, lose some of their profit margin. Why else would Microsoft use SMB instead of NFS? It's so that it is difficult to replace a few of those NT boxes with Solaris or Linux. By not using standards, you keep people dependent on your software, which is good for the software company. So on one hand, they can support standards and lose money, and on the other, they can use their own methods and (potentially) make tons of it.

>And the TUCOWS and C|NET search pages don't serve you how?
Try too much fucking hype for various "services" that don't have anything to do with searching or anything else you may be doing at the moment.

Gnucleus is an open source Gnutella client, and of all the ones I've tested so far (LimeWire, SwapNut, BearShare, Gnutella [Classic], Gnut) it's the f***in best. Like every other client it takes some time to connect, but after Gnucleus is connected it's really fast. It's Windows-only for now, but the developers say it should work great in Wine, because it uses the Windows MFC. I haven't tried that yet.

But when will the leading P2P sharing programs work with each other? How about a "plugin" system. I would like one program that works with all the systems.

Also, for all the talk of GUIs, all the current programs I have seen suck. If you want to see real innovation in intuitive and functional interfaces, see the headway that Apple Computer has been making with some of their appliance applications, such as "iTunes" and "Sherlock."

A plug-in system would facilitate specialization by developers who want to make new algorithms, implement new protocols, or create new interfaces.

OK, I have used LimeWire in the past and I like it a lot, but the CPU load makes me cry. If you share a lot of files, the CPU load becomes unbearable and slows down your system. I have looked at gtk-gnutella, I have toyed with Phex (another Java client), I have compiled gnut, and so on. But only recently I found the right app for my KDE desktop: Qtella [kde.com].

It has all the features one would need. Of course, it is a lot faster than LimeWire.

Finally, one thing I would like to see: a pure and true Gnutella server daemon. No GUI. No nothing. Even gnut requires logging in. So how can I start a Gnutella client over ssh? How do I control it? Not possible; the program closes as soon as I drop the ssh connection. Now that would be a nice feature in a Gnutella client.

I have used Limewire in the past and I like it a lot, but the CPU load makes me cry.

It's written in Java. Not a flame on Java but it's the truth. The coders are extremely talented and have done an incredible job, but there's only so much you can do (performance wise) for windowed Java apps.

This isn't exactly true. Java and C/C++ are very similar in performance if you use them the same way.

There's no reason that I am aware of why Java and C can't have identical speed if coded carefully. However people use Java because they don't have to and don't want to code that way.
(There is some handwaving above -- some of the Java I/O libraries are a bit slow; that's a library issue rather than a language issue, though.)

The performance I've seen has suggested that the problem isn't Java, but simply poor programming. The searches are essentially full-text searches of a lot of rather small texts. Limewire starts taking up significant CPU time with text on the range of a very small website, like 3000 words. I suspect it doesn't have any optimizations of the search algorithm, but just walks through the words checking one-by-one for a match.

Admittedly, maybe you could get by with this in C/C++. But the blame still doesn't lie with Java.
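To illustrate the kind of optimization being suggested, here is a minimal inverted index over shared file names in Java. This is a hypothetical sketch, not LimeWire's actual indexing code; the class and method names are mine:

```java
import java.util.*;

public class ShareIndex {
    // Map each word to the set of shared file names containing it,
    // so a query only touches the relevant posting lists instead of
    // scanning every file name linearly.
    private final Map<String, Set<String>> index = new HashMap<>();

    public void add(String fileName) {
        for (String word : fileName.toLowerCase().split("\\W+")) {
            if (word.isEmpty()) continue;
            index.computeIfAbsent(word, k -> new HashSet<>()).add(fileName);
        }
    }

    // Intersect the posting lists: a file matches only if it
    // contains every word in the query.
    public Set<String> search(String query) {
        Set<String> result = null;
        for (String word : query.toLowerCase().split("\\W+")) {
            if (word.isEmpty()) continue;
            Set<String> hits = index.getOrDefault(word, Collections.emptySet());
            if (result == null) result = new HashSet<>(hits);
            else result.retainAll(hits);
        }
        return result == null ? Collections.emptySet() : result;
    }

    public static void main(String[] args) {
        ShareIndex idx = new ShareIndex();
        idx.add("Free Bird.mp3");
        idx.add("Bird Song.ogg");
        System.out.println(idx.search("free bird"));
    }
}
```

With an index like this, query cost scales with the number of matches rather than the total number of shared files.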

I have used Limewire in the past and I like it a lot, but the CPU load makes me cry

LimeWire uses a negligible amount of CPU on my machine. Sharing a lot of files should not make a lot of difference, since LimeWire uses a rather sophisticated indexing mechanism. Perhaps you're using an outdated JVM? Or you've set the JVM max heap size (-mx) too small?

Finally one thing I would like to see: A pure and true gnutella server daemon.

Check out the core package of the LimeWire project. There's a minimal command-line interface version buried in there. Probably not hard to get it to do what you want.

Thanks for your work on LimeWire, it's a great Gnutella client! I enjoy using it on MacOS 9.2.1, it is nice and fast and usable.

What can you tell us about the MacOS X 10.0.4 support? It is the exact opposite, slow to load, slow to run and consumes massive amounts of CPU. That and the widgets look funky, buttons don't line up with Aqua title bars and when you resize it splatters everywhere.

gtk-gnutella is coming along nicely for Linux, but more competition is always better.

As has already been said, gtk-gnutella is not doing anything nicely; it seems to crash after just a few minutes of use. What others didn't seem to mention is that Napshare [sourceforge.net], while it looks almost identical to gtk-gnutella, has no stability problems whatsoever, even though it's version 1.0 * 10^-7 or something =-) I guess that shows that version #s really don't mean squat. Try Napshare if you want an X11 Gnutella client; it fits the bill quite well.

Limewire.com seems to be slashdotted or otherwise
unavailable (even tried the google cache), but there is a good article [209.10.179.92] from digitalmusicweekly.com about Limewire LLC and how the Limewire client fits in. Basically, they want to make money from servers (or something like that), and never wanted to charge money for the client in the first place. So GPL'ing it makes lots of sense - they don't lose anything and they might gain development help, more users, and stuff like that.

One thing I've always wanted was to just specify a file and leave it to go get it and download it itself.

In particular if 5 sites have the file I should be able to connect to all 5 of them (or try to) and download different parts of the file in parallel; the protocol allows you to start wherever you want to.

The total load on the network is the same because I'm only connected to each server for 1/5 the time, but I would usually get it faster.

Of course, sometimes one of the files is corrupted or something -- it's possible to check the ends of the fragments and splice them correctly, or ignore any bits that don't fit.
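The range-splitting arithmetic behind the five-server idea is simple. A sketch in Java (names are illustrative; a real client would also handle servers disappearing mid-transfer):

```java
public class RangeSplitter {
    // Divide a file of `length` bytes into `n` contiguous byte ranges
    // [start, end] (inclusive), one per server, suitable for
    // HTTP-style Range requests. The last range absorbs any remainder.
    public static long[][] splitRanges(long length, int n) {
        long[][] ranges = new long[n][2];
        long chunk = length / n;
        for (int i = 0; i < n; i++) {
            ranges[i][0] = i * chunk;
            ranges[i][1] = (i == n - 1) ? length - 1 : (i + 1) * chunk - 1;
        }
        return ranges;
    }

    public static void main(String[] args) {
        // A 100-byte file split across 5 servers.
        for (long[] r : splitRanges(100, 5)) {
            System.out.println(r[0] + "-" + r[1]);
        }
    }
}
```

Each connection then requests only its own range, so the total bytes transferred are the same as a single download, just spread across servers.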

Swarm Distribution: Mojo Nation breaks up the task of delivering content among many agents across the network, each of which contributes as much as it can to the collaborative effort -- even low-bandwidth, dial-up users can deliver a small portion of a requested resource. Speed and reliability are enhanced because several peers work together rather than one peer working alone.

It seems that MojoNation already implements such functionality. I seem to remember reading about a Gnutella client that was working on this, but at the moment I can't remember which one. If not, it would be a great feature to have. Considering that the big problem with Gnutella, according to all of the technical reports, is the wasted bandwidth and the chatty protocol, any way to use the Gnutella network more efficiently is a big improvement.

Yes, this concept isn't new -- although, as you say, I don't think it is available for Gnutella yet.

Another thing that Gnutella doesn't make use of right now is partial downloads. If I've downloaded half the file, it usually doesn't appear in anyone's search. If fragments are stored in the searchable directory we can effectively get more results, which means better download speed for everyone, because there's probably lots of upload bandwidth out there right now going spare.

I don't think this will necessarily make it more bandwidth efficient however, but it would make it much, much, more useful.

In particular if 5 sites have the file I should be able to connect to all 5 of them (or try to) and download different parts of the file in parallel; the protocol allows you to start wherever you want
to.

I was just thinking about that last night. What is needed is for clients to include a globally unique ID for each file (perhaps length combined with an MD5 of the data) along with the name. That way, searches could quickly and easily determine exactly which servers had the same file (even if the naming was different).

The user would enter a name. Client displays search returns. User picks one. Client then searches on the GUID of the file to get a list of all servers having exactly that file. Now, go to download.

That approach brings several advantages:

Multi-source downloads can now be done with confidence that the file won't be corrupted in the process.

When a transfer is resumed, the file won't be corrupted even if it resumes from a different server.

Servers having only a portion of the file can usefully offer the part that they have (by providing a byte range with the search return). In that case, the GUID would be that of the entire file, not just the portion actually available.

The first two features should interoperate quite nicely with existing clients. The last would need to send a different search result packet type so existing clients don't get confused.

As a side benefit, for more automated seek-and-download, the user supplies a list of desired search terms, and the client sends out the search request and allows some time for results to come in. It can then choose the most common GUID returned, for some assurance that it isn't grabbing a mis-identified or corrupt file (surely, most users will delete or rename such files that they download).
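The length-plus-MD5 identifier suggested above can be sketched in a few lines of Java. The GUID format here (decimal length, a dash, then the hex digest) is my own invention for illustration, not a real Gnutella extension:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class FileGuid {
    // Identify a file by its length combined with an MD5 digest of
    // its contents, so two servers with identically-sized, identical
    // files produce the same ID regardless of the file's name.
    public static String guid(byte[] content) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(content);
            StringBuilder sb = new StringBuilder();
            sb.append(content.length).append('-');
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // MD5 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(guid("hello".getBytes(StandardCharsets.US_ASCII)));
        // -> 5-5d41402abc4b2a76b9719d911017c592
    }
}
```

Including the length alongside the digest also lets a client reject truncated copies cheaply, before hashing anything.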

A GUID would be good, so would a hash. However if a file gets truncated a simple hash gets all messed up.

I don't think the file will be corrupted if it comes from different servers. By ensuring you have an overlap in the fragments you collect you can ensure that they are the same file. The chances of two different files being the same over say, 128 consecutive bytes is very low for most files (mp3, mpeg, binaries).

Also, if you introduce some randomness in where you start requesting each fragment, it becomes more difficult for someone to deliberately construct files that only match in the middle, but all the other bits sound like a cuckoo clock...
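The overlap check described above is only a few lines of Java. This is a hypothetical helper, not code from any real client:

```java
public class FragmentOverlap {
    // Verify that two fragments agree on a shared overlap region:
    // the last `overlap` bytes of `first` must equal the first
    // `overlap` bytes of `second`. A mismatch suggests the fragments
    // came from different files despite matching names.
    public static boolean overlapMatches(byte[] first, byte[] second, int overlap) {
        if (first.length < overlap || second.length < overlap) return false;
        for (int i = 0; i < overlap; i++) {
            if (first[first.length - overlap + i] != second[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] a = "....ABCD".getBytes();
        byte[] b = "ABCD....".getBytes();
        System.out.println(overlapMatches(a, b, 4)); // prints true
    }
}
```

With 128-byte overlaps, as suggested, an accidental match between unrelated files is extremely unlikely for typical binary content.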

The best system is actually a hierarchical hash. First you hash all the 256-byte blocks. Then you write the hashes consecutively and hash each 256-byte block of that, write those consecutively and hash that, and so on, until you have a single hash of the hashed hashes. That is the file GUID.

All this gets prepended to the file, and then people can then download the blocks in any order and be sure they've got it all right.

I think this is how MojoNation works, but I haven't checked. The protocol is proof against deliberate tampering with the file, although it isn't proof against people misrepresenting a file's contents in the first place -- still, you can always play the file before you've finished with most browsers.
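The hierarchical (Merkle-style) hash described above can be sketched like this in Java, using MD5 and 256-byte blocks as in the comment. Whether MojoNation actually computes it this way is unverified; this is just an illustration of the scheme:

```java
import java.io.ByteArrayOutputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class TreeHash {
    static final int BLOCK = 256;

    static byte[] md5(byte[] data, int off, int len) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            md.update(data, off, len);
            return md.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    // Hash each 256-byte block, concatenate the 16-byte digests, and
    // repeat on the concatenation until a single digest remains. That
    // root digest is the file GUID; any tampered block changes it.
    public static byte[] rootHash(byte[] level) {
        do {
            ByteArrayOutputStream next = new ByteArrayOutputStream();
            for (int off = 0; off < level.length; off += BLOCK) {
                byte[] d = md5(level, off, Math.min(BLOCK, level.length - off));
                next.write(d, 0, d.length);
            }
            level = next.toByteArray();
        } while (level.length > 16);
        return level;
    }

    public static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        // A file shorter than one block reduces to a plain MD5.
        byte[] data = "hello".getBytes(java.nio.charset.StandardCharsets.US_ASCII);
        System.out.println(hex(rootHash(data)));
    }
}
```

The useful property is that a downloader who also has the intermediate hash levels can verify each 256-byte block as it arrives, in any order, rather than only after the whole file is complete.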

Linux' native file system, EXT2FS, is known to lose data like a firehose spouts water when the file system isn't unmounted properly... All the drawbacks of the ancient EXT2FS file system remain in EXT3FS

Cheap hardware designed to be put in a $500 PC that a user shuts down every night is generally not designed to run 24/7. Try doing your tests on a quality workstation or server. Yes, Linux has bugs. Yes, you can help by documenting them so that kernel developers can reproduce them consistently. No, this doesn't stop Google from using a Linux system.

A Linux user has to live with badly coded tools which have low performance, mangle data seemingly at random and are not in line with their specification.

Are you referring to the GNU tools? In that case, why do Solaris admins routinely install GNU software [sunfreeware.com] on their machines?

a lot of them spit out the most childish and unprofessional messages

Example?

If you don't answer these questions in the next version of this troll, even more of us will refuse to bite.

"Beta" == not ready for production use. Despite what you want it to mean in your little fantasy world.

Beta == complete, but not heavily tested, software. It may contain bugs, though it isn't supposed to.

What's your point? Didn't have the guts to compare to an NT-kernel based Windows?

Comparing NT to linux...

NT crashes more often. (CNet ran a test of web servers: over one year, two Linux servers did not require a reboot, whereas the NT-based servers had to be rebooted every two weeks.)

NT is less scalable (try running Win2K on a 486!), while Linux runs on everything from handhelds to IBM supercomputers.

Linux is POSIX compliant, whereas NT is only "trying to be" POSIX compliant. (UNIX is the one true way TM)

Linux is open: anyone can change it to suit their special needs, and exploits can be fixed much more quickly. (You don't have to wait a year for Service Pack 7.)

"Microsoft sponsoring Windows helps improve it"

I think you did not get it. Sun distributes a set of standard tools (C compiler, shells, etc.), yet most Sun admins do not use the Sun versions but install the GNU versions instead, because they are more robust and have more features. This is like everyone installing WinAmp instead of the WM7 crap, because it loads faster and has more features.

"Beta" == not ready for production use. Despite what you want it to mean in your little fantasy world.

Interesting how age-old stuff gets different semantics when used by a Linux advocate.

Nah, that's the point! Even the beta stuff is generally better than the MS supposedly-production-ready software.

Do you run them 24/7, or do you shut them down after surfing for two hours?

Yep. What's your point?

The point is probably that many of the Windows bugs come out of the woodwork when the system is left on, which is why Windows servers have such bad continuous-uptime stats.

Just asserting things the way you'd like them to be doesn't make them true.

Now I really can't see where this one came from. The only suggestion made recently was that you switched off your XP/NT boxen after a short while, and you just confirmed it!

Sorry, but I don't. I couldn't find "child"ish or "juv"enile or "imma"ture or anything similar in the article you mentioned. Could you please explain?

Being pedantic doesn't help your case at all.

About what is he/she being pedantic? The only reason that those boot messages are being nixed is that Linux gives too much debug output on startup, rather than any immaturity or childishness of the comments themselves.

The steep learning curve compared to about any other operating system out there is a major factor in Linux' cost.

Sadly, true. Unfortunately, you can't fix it without admitting that there is a problem, and I haven't had any luck convincing anyone that Linux has serious user-friendliness flaws. Can't see the forest for the trees, I suppose...

Installing new programs/configuring old ones, I'll give you. You have to wade through a bunch of man pages and websites to even figure out which of the 200 /etc files you should be looking in.

However I recently installed Redhat 7.1, and it was at least as easy - if not easier - to do a default workstation install than Win98 (the last one I installed). All my hardware was auto-detected, and works perfectly - which is commendable for a system for which many companies still do not provide drivers.

If I were an average Windows user who didn't want anything more than out-of-the-box usability, then this would have been perfect for me. As soon as install was done, I had web access, e-mail, an office suite (not perfect, as people will point out, but still good), more built-in card games than Windows can shake a stick at, etc, etc. It took me a complete screw-up of KDE to figure out rpm, but now that I can use that, I don't have to worry about installations anymore either.

The biggest learning-curve problem is that there is no easy-to-use GUI version of a lot of command-line stuff, so people still have to know their way around the command line. Even that's disappearing though, so I think the learning-curve problem will be no worse than Windows' in a short while.

Linux requires a *lot* of maintenance, work doable only by the relatively few high-paid Linux administrators that put themselves - of course willingly - at a great place in the market.

Bullshit, plain and simple. Get the services installed, leave everything else off, and the systems just run. Witness our RADIUS server, numerous fileservers and firewalls, all with hundreds of days of uptime; the only maintenance is a script which rotates logfiles and emails unusual activity.

Like any other OS, the admin is responsible for monitoring the security mailing lists and installing patches. And like any other OS, you get what you pay for in an admin.

Linux' native file system, EXT2FS, is known to lose data like a firehose spouts water when the file system isn't unmounted properly.

That's a bald-faced, flat-out lie. I run Linux on this laptop and have repeatedly had the volume turned down too low to hear the battery alarm crying out. I've lost power at least three or four dozen times this year with no data loss.

Where EXT2 does lose data badly is when the metadata store gets corrupted (power dies while it's being updated, or the drive gets bad sectors in those areas). However, I also know that Reiser, NTFS and VFAT will die horribly under those circumstances too.

Factor in also the fact that crashes happen much more often on Linux than on other unices.

Let's see some hard numbers. I've been running 2.2.x kernels for literally YEARS without crashes. Quit running alpha drivers and unstable kernels and your stability will increase. This is just common sense.

The steep learning curve compared to about any other operating system out there is a major factor in Linux' cost.

So you consider a Win32 admin someone who can go to windowsupdate.microsoft.com? Or a SCO admin someone who can call the support hotline they pay for? I don't understand (nor have you given proof of) the increased TCO for Linux.

(an aside: The Code Red fix wasn't included in any patches available from there. So whose fault is that, Microsoft for not making security a priority, or the click-happy "admin" for not knowing better?)

I could go on and on and on, but the conclusion is clear. Linux is not an option for anyone who seeks a professional OS with high performance, scalability, stability, adherence to standards, etc.

I dunno, I've had no problems setting up and casually[1] admining firewalls, SMTP/IMAP/POP servers, LDAP servers, web servers and plain old fileservers. Like I said, once it is up and running, there is next to zero maintenance. This can be done with any unix; For me, Linux makes the most sense and none of my clients have had complaints about "increased costs of their Linux servers." I don't know whether you're a Win2k, SCO, Sun, QNX or *BSD troll, and frankly I don't care. Your post is so full of shit that I just had to feed you. FUD is FUD.

[1] - I use the term "casually admin" to describe what I do: monitor the security lists, provide updates as necessary and receive the emailed logs. The only time I ssh in is to change the configuration based on a customer's request or to perform security updates. To me, this is exactly what server administration should be.

First off, I'd like to point out that Linux is not the same as Open Source software.

An important factor in Linux' cost is its maintenance. Linux requires a *lot* of maintenance, work doable only by the relatively few high-paid Linux administrators that put themselves - of course willingly - at a great place in the market. Linux seems to be needing maintenance continuously, to keep it from breaking down.

From this I conclude that you have never had to administer an MS-based network. We keep up with the latest stuff, use all-MS solutions, and our sysadmin has to put out fires semi-daily. So much for claiming to represent those in the trenches.

Add to this the cost of loss of data. Linux' native file system, EXT2FS, is known to lose data like a firehose spouts water when the file system isn't unmounted properly. Other unix file systems are much more tolerant towards unexpected crashes. An example is the FreeBSD file system, which with soft updates enabled, performance-wise blows EXT2FS out of the water, and doesn't have the negative drawback of extreme data loss in case of a system breakdown.

I use dodgy hardware a lot of the time, and my machines frequently get nuked by power cuts. I have never had anything that fsck has not fixed automatically. Ever (OK, one exception - I had to enter the root password and follow the simple on-screen instructions to run fsck manually). I have, however, seen several fs's get nuked completely by a power-off, and they were all - guess what? - Windows FAT partitions. I can see the word "scandisk" appearing on your lips, but that didn't do a thing - and because of the behind-the-scenes and non-configurable system startup, when it touched something vital, I had to bloody reinstall the whole OS rather than just the bit which had failed.

The upcoming 'solution' to this, EXT3FS, is nothing more than an ugly hack to put journaling into the file system. All the drawbacks of the ancient EXT2FS file system remain in EXT3FS, for the sake of 'forward- and backward compatibility'.

EXT3 doesn't try to fix these (as far as I can see) nonexistent grave problems. It is simply what you say it is - a hack to get journalling onto EXT2. Incidentally, journalling does give far better crash recovery, so I can't really see what you're whining about there.

This is interesting, considering that the DOS heritage in the Windows 9x/ME series was considered a very bad thing by the Linux community, even though it provided what could be called one of the best examples of compatibility, ever. When it's about Linux, compatibility constraints don't seem to be that much of a problem for Linux advocates.

See my earlier comments about comparing DOS/FAT filesystems with EXT2. Plus, of course, the objection is mostly that MS chose such a cruddy OS to build themselves around (8.3 filenames? Yeeuurgh!), rather than just emulating it (which is what they do now with the NT codebase, and is far less brain-damaged).

Crashes in Linux are a regular thing, and nobody seems to know what causes them, internally.

Examples? A reference to some of the downtime-statistics pages would be useful, as last time I checked I found Linux-hosted sites were far harder to push over than Win2K ones. (This is in addition to personal experience with our network).

I have worked with old and buggy as well as bleeding-edge kernels, and I have still never had a crash apart from with dodgy memory (which also nuked Winblows on startup with no diagnostic info whatsoever), and the teardrop attack, which is now defended against.

The steep learning curve compared to about any other operating system out there is a major factor in Linux' cost. The system is a mix of features from all kinds of unices, but not one of them is implemented right. A Linux user has to live with badly coded tools which have low performance, mangle data seemingly at random and are not in line with their specification.

As you accuse others of evidence-free FUD, could you come up with a defence of this please? What buggy stuff, apart from the things labelled beta? What badly-implemented UNIX features?

On top of that a lot of them spit out the most childish and unprofessional messages, indicating that they were created by 14-year olds with too much time, no talent and a bad attitude. And as for specifications, Linux is considered one of the reference POSIX implementations.

The talent in abundance is indicated by the fact that you rarely see any of these error messages. Plus, of course, I far prefer to see an "oops" and an apology from the programmer when a crash occurs, rather than Windows' cold wording and habit of blaming it all on the "current application".

I could go on and on, but the conclusion is clear. This is an uninformed troll, possibly an astroturf, with little grounding in reality or experience.

OK, so it's a troll. But it's a Saturday afternoon, I'm bored, and so I'll bite.

First off, Linux includes many programs from many authors, and many different licenses, many of which initially look the same, but have drastically different implications.

Want to edit a file? Better get a lawyer on retainer to make sure the license allows you to edit a proprietary document. Or that using the FTP server doesn't make everything you make available public domain.

No OSS licence I know of does things like making its raw data or output public domain. As for editing source, no OSS licences restrict editing source, else they wouldn't be open source licences (see opensource.org). And of course, if you compare this to closed-source products, which you can never edit at all, even the mythical restrictive licences you are referring to would be an improvement.

Another problem is the security, or lack of it. Linux boasts enhanced security since anyone can view the source (A claim that hasn't been backed up by research). While it is true that the source code is available for viewing, the lack of standards in coding and sheer complexity makes it difficult to verify security.

Well, some people seem to have managed well enough to make it several times more secure than any commercial OS I've seen! Anyway, you can't check proprietary source at all, so why are you whining?

Additionally, Linux most often comes precompiled from a distribution, which could have added secret "backdoors" to the software.

True, this is a possibility, but it's never been shown to have happened. Commercial vendors, though, can include backdoors, and have (FrontPage, anyone?).

As for the bumph about recompiling, most recompiles go just fine with the default options. And as for introducing backdoors, by your own assumptions that's impossible - other people working on the project would have spotted them.

This is a blatant troll with no regard for the facts, but hey, as I said, I was bored:^)

Well, if you were to somehow search and find it (I couldn't find it for some reason when searching), there was a story on /. about a Linux distro written in assembly. It's about as barebones as you're gonna get.