
angry tapir writes "I recently had a chance to interview Kaj de Vos, the lead developer of Syllable: An open source desktop operating system that's not based on Linux nor one of the BSDs. There's a write-up of the interview here, which includes some background on the project. I have also posted the full Q&A, which is very long but definitely worth a read."

On both of the TechWorld articles, I see an icon named "Prompt" and a window titled "Syllable Terminal". On web.syllable.org, the "Prompt" icon has been renamed "Terminal". Where is this "console" and what games does it play?

He actually explains it in TFA, but long story short, he wanted a server OS that was compatible with both software written for Syllable AND the vast body of server oriented software out there for Linux. The only realistic way of doing this was basically customizing a distro.

He could have gone the Windows or OS X route and basically just layered the server services on top of the kernel as an application, but that would have required re-implementing at least parts of all those services to make them compatible with Syllable. Maybe the maintainers will do that someday, but for the time being their solution allows them to concentrate on further developing the desktop OS while still having a server OS that fits into the ecosystem.

Instead, he went the other Windows route and has two different operating systems for desktop and server. History has shown us that this is fucking retarded. It made sense back in the DOS days with Novell, and even in the WfW 3.11 days, but not now.

Instead, he went the other Windows route and has two different operating systems for desktop and server. History has shown us that this is fucking retarded. It made sense back in the DOS days with Novell, and even in the WfW 3.11 days, but not now.

I do not follow you... so it is super easy to install a standard Linux distribution on an "Android mobile"? And it is likewise easy to install "Android the OS" on a standard desktop PC? Or iOS on a Mac, or OS X on an iPad?

He actually explains it in TFA, but long story short, he wanted a server OS that was compatible with both software written for Syllable AND the vast body of server oriented software out there for Linux. The only realistic way of doing this was basically customizing a distro.

Replace 'server' with 'desktop' in each instance there and you have a fine argument for creating a Linux based desktop. Why was this argument convincing for servers but not desktops?

You seem confused. The project releases a matching server edition that does use Linux as the kernel. The desktop OS does not. It's based on AtheOS, a new OS written from scratch, which has built up a fair amount of application support over the years for a minor OS.

If the compiler, C standard library, and C++ standard library are from the GNU project, then perhaps you can just call it "GNU/Syllable". There's no Linux in Syllable, just like there's no GNU in Android (which gives GNU/Linux a useful meaning now that there are competing userlands for Linux).

Nope, I'm not confused. I followed it back when it was announced and was quite interested in the KHTML port at the time. I'm referring to new as in not a fork of an earlier codebase. For instance, Unix was a new OS from scratch, based on the ideas of Multics. MS-DOS was not new (being based on a purchased codebase).

Different meaning of new. You could also use new to refer to original ideas, but in this case, I was referring to codebase.

Not really sure what you're saying, but Syllable is both a custom Linux distro (their server edition) and an operating system built from AtheOS, with its own kernel (their desktop edition). The server edition is basically just there so that you can have a server with all the capabilities of Linux but a UI similar to the desktop OS. It's been a while since I've used Syllable, so things may have changed a bit, though.

It's a hobbyist OS. If you don't want to use it, don't. Most people trying it out probably think it's cool to experiment with these kinds of things. They know not to expect it to run all the applications they are used to, or that the few that may run once things like GTK are ported over will be the latest version or run as well as on the latest Linux desktop.

Look, it's a hobby or niche OS, alright? I'm sure they know they will NEVER take on the big three, but it makes them happy, and I'm sure the few dozen guys that use it are happy too. I mean, hell, they still sell OS/2 as eComStation, so SOMEBODY must like these niche OSes or nobody would bother, right? And it is the SERVER that is built on Linux; the desktop is based on Amiga IIRC.

I just wonder how much of this is hanging onto the past and playing "what if?", because I know that feeling. I was using OS/2 Warp ba

No kidding... Not to mention their marketing department didn't quite get the Star Trek-based code names they were using... Half their material for "Warp" looked more like a bad-acid-trip kind of warp than "warp speed". And I have a poster somewhere from IBM that says OS/2 will "obliterate your work". Really... I don't think they "got it" at all...

Look, in reality, if a new OS today isn't Linux or BSD (at least source code) or Windows compatible, forget about it. The apps make the OS these days. The words "critical mass" keep coming to mind.

I think it's interesting precisely because it's not Linux, BSD, or Windows "compatible".

BeOS, as wonderful as it is/was, did not come at the right time and could not get the apps to make it useful enough to appeal to the masses.

Actually, BeOS came at just the right time to hop onto the Mac-compatible bandwagon. The fact that the major application vendors didn't write for BeOS doesn't mean that minor publishers didn't release a bunch of useful applications [birdhouse.org].

Be's problem then is the same problem Apple has now -- MacOS ran fine on BeBoxes then, and Windows runs fine on Macs now. What differs is that Apple isn't starting from scratch; and yet it has demo

Why did they code it in assembly? Given that the x86 world is, as the interviewers stated, divided between Windows/OS X and Linux/BSD, couldn't they have done it in C, and let other microprocessor vendors, with architectures like MIPS, Power, ARM, et al., spin boxes with it?

To get any boost of performance over C, you have to be an extremely good assembly coder... to get a consistent 3x boost, you are either writing very sloppy C, or you're extremely good at assembly and using a pretty poor compiler or poor compiler settings. It actually takes an amazing amount of effort to beat a compiler these days, because compilers have rules to spot non-obvious stalls and such, whereas the human has to rely on analyzing every bit of that by hand.

Also, a system where every component is 3x faster is still only 3x faster overall; there is no Captain Planet performance magic whereby, with the power of assembly combined, you get a 20x speed-up... not to mention that many desktop operations are I/O-limited (especially the ones where you actually notice the slowdown), and assembly doesn't magically make that faster.

Finally, someone did try it - MenuetOS - and they were able to make quite a compact and fast OS. But they also cut out an awful lot of what goes into a modern OS to do so. Syllable itself is not written in assembly like MenuetOS, which was actually the example used above.

There are a very, VERY few cases where assembly can be considerably faster than C, mostly where the programmer really wants to store (or not store) specific values in the CPU cache. AFAIK standard C has no instructions for explicitly controlling what data is cached in the CPU; the programmer is relying on the CPU, and to a lesser extent the compiler, to intelligently cache the data. And 99.9% of the time, if the programmer understands the basics of the cache (how big each of the caches is, how the cache rep

Most compilers (including gcc and MSVC) support these as intrinsics (which are usually fairly standardized per processor platform), though, so you don't actually have to go down to the assembly level to access them (the same with SIMD instructions, which are another place you can get large gains over vanilla C), and the intrinsics are exposed as normal functions and types.
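To make the intrinsics point concrete, here's a minimal C sketch (the function name is mine, not from any project discussed here) that sums an array four floats at a time using SSE2 intrinsics. The intrinsics compile to the same instructions you'd write by hand, but the compiler still does register allocation and scheduling for you:

```c
#include <emmintrin.h>  /* SSE2 intrinsics, supported by gcc and MSVC */

/* Sum n floats, four lanes at a time; a scalar loop handles the tail. */
float simd_sum(const float *a, int n)
{
    __m128 acc = _mm_setzero_ps();
    int i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(a + i));  /* unaligned load */

    float lanes[4];
    _mm_storeu_ps(lanes, acc);                       /* spill the 4 lanes */
    float s = lanes[0] + lanes[1] + lanes[2] + lanes[3];

    for (; i < n; i++)                               /* leftover elements */
        s += a[i];
    return s;
}
```

Note the tradeoff the thread is describing: this is still portable C in the sense that any SSE2-capable compiler accepts it, unlike inline assembly, which has compiler-specific syntax.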

Intrinsic code is also more standard than inline assembly, which differs between compilers. You can take the x86 intrinsic code written on MSVC

"To get any boost of performance over C, you have to be an extremely good assembly coder..."

Well, that may be true, but it's a distinction without a difference. I find most serious assembly-language "coders" from the CS side of the house are excellent programmers/engineers. They are going to write MUCH faster assembly programs than any compiler will generate. They understand modern coding patterns... OOP, state charts, algorithmic complexity... but they adapt techniques to fit a given problem on a given machine

Crypto algorithms are often done in assembly, not just for performance but to make sure the math is absolutely correct as well. Otherwise you probably need a lot of flags and extra code to guarantee the compiler doesn't optimize in a way that can compromise the encryption. Also, there are quite a few people who know the profile of crypto algorithms inside and out, and thus how to use every bit of power available in the CPU. How many people know a window-management or SDL library in quite the same way?

"And yes, asm will usually get you a 3x boost over C - and the performance diff. is cumulative, so having a desktop that's 10-20x faster may be possible."

What?? This is like saying that if you managed to make four relay sprinters three times as fast, the resulting times on 4x100 meter relay would be 1/20 of the original. I'm sorry, but in reality, rather than magic pixie dust land, the maximum theoretical improvement is still only 3x, and unless you improve absolutely EVERYTHING by this amount, your improve [wikipedia.org]
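The parent's point is Amdahl's law. A quick sketch of the arithmetic in C (the function name is mine):

```c
/* Amdahl's law: overall speedup of a system in which a fraction p
   of the total runtime is accelerated by a factor s. */
double amdahl(double p, double s)
{
    return 1.0 / ((1.0 - p) + p / s);
}

/* amdahl(1.0, 3.0) == 3.0  -- even if ALL of the code runs 3x faster,
   the system as a whole is exactly 3x faster, never 20x.
   amdahl(0.8, 3.0) ~= 2.14 -- speed up only 80% of the runtime by 3x
   and the overall win is already well below 3x. */
```

The second case is the realistic one: any I/O waits or unoptimized components cap the overall gain no matter how fast the rewritten parts are.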

Okay, I'm new to this obviously, and know squat about it, but what's the difference between MenuetOS and Syllable? I read the above description:

Syllable: An open source desktop operating system that's not based on Linux nor one of the BSDs

So which is the OS - Menuet or Syllable - and what's the difference? Is it something like a DOS/Windows 3.11 paradigm? And my original comment applied to either - be it Menuet or Syllable - since, as some posters commented above, modern compilers are advanced enough that they'd beat, not just equal, hand-written assembly code.

The difference between Menuet and Syllable is they're different operating systems. They're unrelated. Menuet was mentioned in the article as an example of another OS that wasn't based on Linux or *BSD.

I did this some 20 years ago on the 8085. Assembly programming is fine for small jobs, when you have only a few registers to deal with and don't have to monitor the statuses of various flags, which registers are in use, etc. But for modern CPUs, which have plenty of them, and where one would go nuts keeping track of what goes where, just use a higher-level language, such as C or even Java, and let the compiler generate the assembly code.

Also, in what way do you think www is dying? I'd wager that it's the default prefix for >99.9% of the internet.

If i'm typing or copying and pasting a domain name into a web browser (i.e., one i got from somewhere other than Google), i always leave off the www, because the results give me an insight into the company whose web site i'm looking at. If the domain name without www doesn't work at all, i know they don't know what they're doing and are best avoided if possible. If it works but redirects to the www version, i know they sort of know what they're doing, but are living in the 90s, so definitely shouldn't be a first choice.

Ugh. The redirect option is perfectly sensible, but putting the other thing just plain does violence to the conceptual integrity of the domain name system and I shudder at the thought that there are now people in the world who would consider one 'unprofessional' for not doing it. There are protocols that aren't HTTP, you know.

Yes, and there's a perfectly good way for determining which protocol someone wants, that being the port number. By all means route different services to different machines internally, but the details of that shouldn't be exposed to external users; there should be one, and only one, public-facing domain name, on which all your public services are available.

[...] putting the other thing just plain does violence to the conceptual integrity of the domain name system [...]

I think that's a bit melodramatic! I'm not quite sure which "other thing just plain" you're talking about, but maybe that's implied by the next bit.

and I shudder at the thought that there are now people in the world who would consider one 'unprofessional' for not doing it. There are protocols that aren't HTTP, you know.

Of course i know there are other protocols than HTTP. But if you enter a domain name or URL without the protocol, all web browsers default to HTTP - and they have done for a very long time. What we're talking about is web addresses, not gopher URLs.

Of course i know there are other protocols than HTTP. But if you enter a domain name or URL without the protocol, all web browsers default to HTTP - and they have done for a very long time. What we're talking about is web addresses, not gopher URLs.

All browsers, yes. But these days (especially in response to adverts) people find themselves entering the data not into the browser but into a generic search bar on their phones. Many of these will start a Google search without a "www" or "http://" prefix.

But it sounds like what you are effectively complaining about is nothing more than marketing, and I'm going to have to completely disagree with you. Putting no "www" or "http://" at the start is what looks unprofessional. It's the type of crap I expect of artists who

All browsers yes. But these days (especially in response to adverts) people find themselves entering the data not into the browser, but a generic search bar in their phones. Many of these will start a google search without a www or "http://" prefix.

Have you ever watched a non-techy person enter a URL into a web browser? In my experience, they all type it into the search bar, not the URL bar. I don't know how they cope with Chrome's lack of a separate search bar!

Fine, so the browser defaults to using HTTP. Now, if only it could tell to which HOST you intended to connect within that domain.

Oh, could it be the one serving web pages? And what should we call these World Wide Web page hosts? If only there was some sort of moniker to distinguish them from, say, a file server, or an advertising server...

Now, if only it could tell to which HOST you intended to connect within that domain.

Oh, could it be the one serving web pages?

Every organization operating on the Internet should have a primary public-facing view of the organization through the Internet, its "store front" so to speak. As the World Wide Web has overtaken Gopher, this public-facing view has come to be a web site. Therefore, the organization's bare domain should be an alias (CNAME) for the host that provides this public-facing view.

If only there was some sort of moniker to distinguish them from, say, a file server, or an advertising server...

Servers providing large file downloads (generally HTTP on a high-bandwidth plan instead of a low-latency plan) can have separate hostnames wit
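One wrinkle worth noting: the DNS itself constrains the scheme the parent describes. A CNAME at the zone apex isn't allowed to coexist with the SOA and NS records that must live there (RFC 1034), so in practice the alias usually points the other way. A sketch of a conventional zone layout (all names and the address below are placeholders, not from any site in this thread):

```
; example.com zone fragment (placeholders throughout)
example.com.      IN  A      203.0.113.10   ; apex gets an A record; a CNAME
                                            ; here would clash with SOA/NS
www.example.com.  IN  CNAME  example.com.   ; the www host aliases the apex
ftp.example.com.  IN  CNAME  example.com.   ; so can other service names
```

This gets the effect the parent wants (one canonical host, many names) while staying within what the DNS permits.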

If it works, but redirects to the www version i know they sort of know what they're doing, but are living in the 90s, so definitely shouldn't be a first choice.

Speaking as somebody who was a web developer for most of the 2000s, I had a lot of experience running sites without the www. and clients *complaining* that it wasn't there. It was an expectation that all web sites must use it. Leave it out of the URLs that people type in (on your letterhead or adverts, for example) and people add it themselves. Allowing two forms of the URL, one with and one without, creates unnecessary complications when dealing with cookies. Therefore, redirecting makes everyone happy. E
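For anyone wanting the redirect behaviour described above, a minimal sketch in nginx terms (example.com is a placeholder; Apache can do the same with a Redirect or RewriteRule directive):

```nginx
# Canonicalize the bare domain onto www (placeholders throughout).
server {
    listen      80;
    server_name example.com;
    return      301 http://www.example.com$request_uri;
}

server {
    listen      80;
    server_name www.example.com;
    # ... the actual site configuration lives here ...
}
```

The 301 keeps cookies and search indexing on a single canonical hostname, which is exactly the complication the parent mentions.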

Yeah, i was a web developer for the last two years of the 2000s (and still do a little bit now and then) and i know what you mean. But it was possible to convince people by then - i'm sure it wasn't possible a few years earlier.

But this is a very good reason why it's such a good indicator of whether the people running the business know what they're doing or not - if they do know what they're doing, they'll take advice from their web dev. If they think they know better than the web dev, then they're clearly

Again, I think you missed the point. It's not the stupidity of your clients that should influence what you are doing; it's the stupidity of your clients' clients. You're setting trends at the expense of the end user... you don't work for Mozilla now, do you?

If I were a "web developer" I'd build two websites one at domain.com and one at www.domain.com. I'd figure out which one was the one with the dumb people and which one was the one with the smarter people, and develop the pages accordingly. If they were indistinguishable, or became that way, I would merge them back together and use another prefix,

I had this conversation with a dev yesterday. He has to have separate virtual hosts for everything and then has to have a www. version of each. Add in that he'd set up a bunch of them as A records rather than CNAMEs and I have a lot to clean up.

To summarise: the thing that makes this different from everything else is that the parts of an application are split up Unix-style. For example, instead of having two or more applications that each handle your photos and take out the red eye, the desktop would have this functionality written once, and applications would simply glue all these standard pieces together.

My only criticism of this is that we already have it in the form of libraries. Perhaps what this guy is after is something more standardised and higher-level than that, but I don't see how that's not doable in Linux.

Thanks for the explanation. I went to their "about" page (http://web.syllable.org/pages/about.html [syllable.org]), and after about 3 paragraphs of mythology and squishy backstory they still said nothing about what the project is, what problem it solves, or what it does differently than other OSes. It probably says so further on, but skimming didn't yield anything, and it sounded too much like an infomercial to continue.

If it wasn't so late at night maybe i'd have more focus, but that page really needs a punchier intro.

An Intent provides a facility for performing late runtime binding between the code in different applications. Its most significant use is in the launching of activities, where it can be thought of as the glue between activities. It is basically a passive data structure holding an abstract description of an action to be performed.

Didn't Apple do this in the early '90s with OpenDoc? That wasn't exactly a resounding success, in fact the only OpenDoc apps I remember just packaged the entire app into one container, defeating its purpose.

This reminds me a lot of some of the papers Hans Reiser put out on his plans for ReiserFS: having things that could plug directly into the file system to handle file formats at a lower level than libraries. Like, you could get a JPEG on your system and use the old way of opening an app that supports JPEGs and loading /file.jpeg into it.

Or you could allow a kernel file-system plugin that would let you open /file.jpeg/raw in an image editor that would get the raw data. Or it could open /file.png/raw

The most interesting part, in my opinion, is the attempt to make programs more modular, into building blocks. I was going to try to summarize how, but the article says it much better than I can:

"On indoor pictures, you want to remove the 'red eye effect' caused by the flash. On outdoor pictures, you notice the horizon isn't straight and you would like to correct that.

"These are common, but technically complicated manipulations on pictures. The correction of red eyes may be offered by multiple applications on your system. The straightening of horizons may require you to buy yet another image manipulation application.

"Why can't you plug in the camera, have its icon appear on your desktop without extra software and click on it, then click on a picture and be offered one option to correct red eyes and one option to straighten a horizon?

Clearly there are difficulties doing this, but it seems like something useful if you can figure out a way to make it work.

"Why can't you plug in the camera, have its icon appear on your desktop without extra software and click on it, then click on a picture and be offered one option to correct red eyes and one option to straighten a horizon?

Because it would be stupid to do it on the camera. It's much better to import the photos onto your computer (and, ideally, into a photo management tool) before you start working on them.

You don't need any extra software to do that in Linux - in fact, f-spot, among others, will import the photos, manage them, and remove red-eye or straighten the horizon. I don't understand what the problem is.

Or maybe they were giving an example of how it could be useful for functionality offered by potentially multiple programs to be accessible from anywhere, in any program on your system?

Maybe they were, maybe they weren't. But, if they were, the article certainly didn't make it clear.

Of course it is always possible to write a program that contains all the functionality you need. Their point is that this should not be necessary in the first place.

That could almost make sense in an ideal world. But in the real world, to make use of that functionality, you'd have to write every application from scratch, just for that OS. So Firefox, Chrome, Thunderbird, Evolution, LibreOffice, Gimp and any of the other cross-platform software that most people use most of the time, will never use the OS's built-ins, making them redundant and pointless.

We already do this a lot on many OSes. This is software as a component. For instance, today most APIs offer a lot of really high-end functions, some of which would have been entire programs in the past. A good example is text areas, which are in effect a simple text editor. Video and audio codec systems like the ones on Windows, the Mac, and Linux are other examples. If you want to play back video, you do not need to have every codec known to man supported in your program, but instead can use the installed codecs on

You don't need any extra software to do that in Linux - in fact, f-spot, among others, will import the photos, manage them, and remove red-eye or straighten the horizon. I don't understand what the problem is.

Sounds philosophically a lot like the UNIX pipeline. Late to the game as I am, I've really been impressed with what I can do with those little component applications working in concert. The idea of teaching a computer to do your job for you, without having to create the software from scratch, needs to be revived.
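The pipeline analogy can be made concrete with the same primitives the shell itself uses. A toy sketch (the function and its behaviour are mine, not from Syllable): a parent process feeds text to a child "filter" process that upper-cases it, two stages connected by pipes.

```c
#include <ctype.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run `in` through a child process that upper-cases it: a tiny
   two-stage pipeline built from pipe()/fork(). Fine for inputs
   that fit in the pipe buffer; a real pipeline would interleave
   reads and writes. Returns 0 on success, -1 on error. */
int upcase_via_pipe(const char *in, char *out, size_t outsz)
{
    int to_child[2], to_parent[2];
    if (pipe(to_child) < 0 || pipe(to_parent) < 0)
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {                       /* child: the filter stage */
        close(to_child[1]);
        close(to_parent[0]);
        char c;
        while (read(to_child[0], &c, 1) == 1) {
            c = (char)toupper((unsigned char)c);
            write(to_parent[1], &c, 1);
        }
        _exit(0);
    }

    close(to_child[0]);                   /* parent: the producer stage */
    close(to_parent[1]);
    write(to_child[1], in, strlen(in));
    close(to_child[1]);                   /* EOF tells the child to finish */

    size_t total = 0;
    ssize_t n;
    while (total < outsz - 1 &&
           (n = read(to_parent[0], out + total, outsz - 1 - total)) > 0)
        total += (size_t)n;
    out[total] = '\0';
    close(to_parent[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```

Swap the child's loop for an exec of any filter program and you have exactly the composition model the comment is praising: small autonomous components glued together by the OS.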

Why should an application farm out work to external, autonomous processes? Who's going to control priority, threading, and all the complexity of organizing and managing the execution of all these different parts? Or is it expected to work like a UNIX pipeline, in series?

By having all those services available to all applications as sub-systems or extensions of the operating system, applications can take advantage of them without having to manage ex

Exactly. However, I would take it a step further and suggest that the overall idea to push this much application level functionality into OS libraries should be Considered Harmful.

"Don't write your own red eye correction code, it's built into the OS! Oh, wait, now I see that your new version only works correctly with version 1.5 of the library that's not in my current OS release. Guess I have to upgrade the *whole OS* to install your new software."

OS X takes your approach. It has its upsides, but also its problems: they had to push out new versions of absolutely everything when the libpng vulnerability was found. It also means Macs tend to need more RAM than other systems; if multiple dynamically linked applications use the same library, the OS only needs to load one copy into memory, and bundling the library with each app forfeits that sharing. You might not care about 3MB vs 150MB of disk space, but it's still relevant for RAM.

What's needed is for libraries to be strict in their versioning, and for the

Granted, this isn't a panacea. I realize that all these different apps would need to be updated independently, but I consider this ramification to be conceptually consonant with my viewpoint (ideally using binary diff patches, of course).

As you pointed out, Mac OS does this "right" insofar as most apps are portable to another machine if you just copy the .app "file".

Seriously, when was the last time you had a "DLL hell" problem? - while Linux in particular lags behind.

Haha, Linux is exactly the reason I started seriously wishing for statically compiled binaries. When I compare, for example, getting a recen

PC-BSD [pcbsd.org] does it without breaking compatibility with the underlying FreeBSD. If you install a PBI package, it contains all the dependencies necessary for that given program and is installed in a separate dir. If you want to use the ports tree or install prebuilt packages, just use the default tools.
In fact, the problem you mention is probably one of the reasons I don't use Linux on servers - on multi-purpose machines, I usually use FreeBSD with a bunch of jails, one for each kind of service (e.g. database, ma

I would expect to spend less time waiting for statically compiled apps to load from disk, aggregated over my lifetime, than I have already had to spend in dependency hell trying to get two different binaries to play nicely. Not to mention less stress.

You make a good point, but I was exaggerating size requirements for effect: my statically compiled version of the svn binary was < 3 MB. I can think of no good reason why binaries like that shouldn't just be statically compiled. Besides, as the other post