
crankymonkey writes "The GNOME desktop environment could soon gain support for building and extending applications with JavaScript thanks to an experimental new project called Seed. Ars Technica has written a detailed tutorial about Seed with several code examples. The article demonstrates how to make a GTK+ application for Linux with JavaScript and explains how Seed could influence the future of GNOME development. In some ways, it's an evolution of the strategy that was pioneered long ago by GNU with embedded Scheme. Ars Technica concludes: 'The availability of a desktop-wide embeddable scripting language for application extension and plugin writing will enable users to add lots of rich new functionality to the environment. As this technology matures and it becomes more tightly integrated with other language frameworks such as Vala, it could change the way that GNOME programmers approach application development. JavaScript could be used as high-level glue for user interface manipulation and rapid prototyping while Vala or C are used for performance-sensitive tasks.'"
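For a flavor of what the tutorial covers, a Seed "hello world" looked roughly like this at the time. This is a hedged sketch based on early Seed snapshots: the `Seed.import_namespace` call and the `signal.*.connect` syntax are assumptions from that era and may differ in later releases.

```javascript
#!/usr/bin/env seed
// Minimal Seed/GTK+ sketch (2009-era API; names may have changed since).
Seed.import_namespace("Gtk");
Gtk.init(null, null);

var window = new Gtk.Window({title: "Hello from JavaScript"});
window.signal.hide.connect(function () { Gtk.main_quit(); });

var button = new Gtk.Button({label: "Click me"});
button.signal.clicked.connect(function () { print("clicked!"); });

window.add(button);
window.show_all();
Gtk.main();
```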

Javascript is just one more language, but it's a VERY popular language and a hell of a lot more people know it and use it than C# or GNU-C or anything else. Gnome is about providing a programming environment for normal people to use, and if Javascript allows that then they will use it. However...

Gnome's base libraries are C and probably are always going to be programmed in C. This is because C is very universal: it's relatively easy to port to other platforms, and it can be included in most other languages as modules.

Don't forget that C produces smaller and faster binaries than pretty much any other language except very good assembly. For code that's called hundreds if not thousands of times per second, and where latency is a factor, you want small and fast. Good C delivers that.

C++ and javascript aren't mutually exclusive. In fact, I'm checking slashdot right now during a break from debugging a home project that makes use of both of them. I'm quite fond of the mixture of a C++ backend with a javascript frontend that can be used over the web. In this particular case, it's an electric vehicle simulator that lets you specify your vehicle details and plot a route over Google Maps. The frontend uses form POST requests to call the simulator to run the CPU-intensive simulation on the backend (where it has access to many gigs of heightmap data). The backend talks to the frontend by returning javascript function calls with the results asynchronously.

I've done several projects of this nature before. One weakness is that if the backend takes longer than two minutes, the connection gets dropped. Not a problem on this project, but on a web-based Povray interface I did in the past (lets you customize car paint jobs, then renders the car in a variety of scenes), it was. The solution is simply to have the frontend take responsibility for periodically fetching the results from the backend.
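The periodic-fetch idea can be sketched like this. The fake backend, the function names, and the result fields are illustrative stand-ins, not the poster's actual code; in the real setup each poll would be an HTTP request rather than a method call.

```javascript
// Stand-in for the slow C++ backend: it reports "not done" for a few
// polls, then hands back a result object.
function startSimulation() {
  var ticksNeeded = 3; // pretend the simulation takes three polls to finish
  return {
    ticks: 0,
    poll: function () {
      this.ticks += 1;
      return this.ticks >= ticksNeeded ? { rangeKm: 182 } : null;
    }
  };
}

// Frontend side: instead of holding one long request open until some
// proxy drops it at the two-minute mark, keep asking until the result
// shows up (or we give up).
function pollUntilDone(job, maxPolls) {
  for (var i = 0; i < maxPolls; i++) {
    var result = job.poll();
    if (result !== null) {
      return result;
    }
  }
  return null; // gave up
}

var result = pollUntilDone(startSimulation(), 10);
```

The key design point is that each poll is a short, independent request, so no single connection ever lives long enough to be dropped.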

All in all, I find it a very nice balance between the cross-platform, web-accessible functionality of an HTML/Javascript frontend and the extreme speed of a C++ backend.

I'm quite fond of the mixture of a C++ backend with a javascript frontend that can be used over the web.

C++ has the advantage that, unlike C, there's less of an impedance mismatch between it and Javascript. Javascript is optimised for manipulating DOM-type structures, which have a very natural expression in C++ and, with a small amount of template-assisted boilerplate, are fairly straightforward to bridge.

There's a pervasive myth that C++ is just objects on top of C. Or that because you can use objects, that means that you *have* to use them everywhere.

C++ is useful for tons of things. Someone who doesn't know how to use templates will create a nightmare's worth of code in a situation that calls for them. People who don't use object data structures in areas that aren't performance critical (and oftentimes even where they are, since there are a lot of optimizations in the std libraries that a lot of people miss out on when reinventing the wheel) often create a memory management nightmare and leaky code. People who don't use const correctness slow down their code and are at more risk for bugs. And on and on down the line.

You misunderstand the concept of cache hits. Whether you get a cache hit or miss isn't dependent on how much memory your *program* loads up; it's based on what's being executed at a given point in time. If you have a core loop iterating through some data structure, if you're not calling any big libraries, then they're not affecting cache hits for your core loop; the cache is going to be dominated by that data structure. Quite the opposite, C++ actually often has a *greater* chance of cache hits because data stays local to an object, and when a block of data is read from memory to the cache, you're more likely to get variables that you need cached.

Again C != C++.

C is a subset of C++. If C can do it, so can C++. Anyways, why did this become a C vs. C++ thread?

(I won't even mention Objective C, which is an abomination unto Nuggan)

And why exactly, if I may ask? I'm not an Objective-C expert, but to my knowledge it's exactly what most people think C++ is: C with support for objects, period. For many tasks it's actually a really nice language that combines some of the main advantages of C++ (objects, inheritance, etc.) with the advantages of C (small and clean syntax, fast compilation, fast execution).

So if you don't need full-fledged C++, or know you aren't proficient enough to use its powerful features properly (like so many developers), Objective-C is worth a look.

Javascript is just one more language, but it's a VERY popular language and a hell of a lot more people know it and use it than C# or GNU-C or anything else.

While the language is VERY popular, I disagree that a "hell of a lot more people know it" than most other GNU languages. The vast majority of coders have no idea how to correctly write Javascript. In fact, you can't even say that Javascript is an Object Oriented, LISP-like functional language on Slashdot (of all places) without ten or twelve people trying to tell you you're wrong.
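For what it's worth, the functional side is easy to demonstrate: functions are first-class values with lexical closures, so Lisp-style higher-order code is natural. A trivial sketch (all names here are just for illustration):

```javascript
// Functions are ordinary values: build new ones out of old ones.
function compose(f, g) {
  return function (x) { return f(g(x)); };
}

var inc = function (n) { return n + 1; };
var dbl = function (n) { return n * 2; };

var incThenDbl = compose(dbl, inc); // computes dbl(inc(x))
var result = incThenDbl(5);         // dbl(6) = 12
```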

Which sucks. Because Javascript is an AWESOME language. Plus the modern VMs (as opposed to the last-generation interpreters) are getting quite fast. Fast enough to use JS for anything short of compute-intensive applications. Even professional video games could use it as a scripting language with the right underlying APIs. (See my sig for how far it's come with Web games.)

My hope is that as Javascript shows up in more places, developers will take the time to sit down and truly understand the language. And maybe we can even get a few books on the market that don't suck. ;-)

I've been programming a lot in Actionscript 3.0 (the backend scripting language for the newest version of flash). This is basically a refined version of javascript, and I have to say it is the most pleasing language to program in I've touched in quite a while.

I think that breaking projects into two levels is often the way to go. Write the crucial, high-speed, massive-volume bits in C or C++, test them thoroughly, and then write the rest of the app (where performance doesn't matter and reliability and ease of development do) in a higher-level language.

Umm, because it's precisely the *right* language for the job. C++ restricts your binding options to other languages pretty dramatically, especially if you use parts of C++'s object model such as multiple inheritance. In short, no, C++ is the wrong tool for the job, Qt's use of it notwithstanding. Although it's true that Qt does have bindings for many languages, the simple fact is that these bindings, such as PyQt, are against the *C* wrapper on the C++ API. So an entire extra layer in the way. Gnome and GTK's gobject-based object model is just about one-to-one with most models of object-oriented languages. Hence you can use just about any language under the sun to develop Gnome apps. All without losing any features of the API.

If you do want to program in C with GTK, you can and it's very easy, actually. Memory leaks are minimized because of the coolness in glib. However, if you're not a C programmer then you shouldn't be programming GTK/Gnome apps in C at all. Use something more powerful like Python. In fact I can't think of any reason (except embedding on small systems, or core libraries and programs) where it's appropriate to write a Gnome app in C. There are tons of awesome bindings available. Use them. Write in C++ (which is a much nicer API at times than Qt's moc-ified native apis) if you want.

So yes. Gnome is able to target the languages that normal developers want to use. Whether that's C#/.Net, Python, C++, Ruby, or C.

If Gnome wasn't written in C, what language would you suggest? How would you provide extensive, 100% coverage of the API in any arbitrary language? Writing GTK itself in C# seems pretty silly. Same for Python, Ruby, etc, unless you want to restrict the entire toolkit to just one language.

I wish people would drop this stupid bindings argument. It's brain dead. Firstly, bindings have to be maintained, properly, if they are to be of any use to developers. That takes effort. Secondly, why bother bolting object orientation and other language bindings on afterwards when you can have a language built with proper object orientation in the first place, since that's what your software requires? Vala shouldn't even really be necessary.

Oh, and the Smoke bindings that KDE uses have proved you wrong. So pffffffffffffff.

The problem with attracting developers is that so many of them these days have gone on to develop web applications with awful scripting languages like Javascript, Python, Perl and PHP. Developers know these languages.

Bringing developers to the platform is what's important right now. The libraries have gotten better and better, and now it's time to have some real, awesome applications to use them. Part of that means having developers that actually want to build for it.

The javascript engine is an embeddable interpreter (that is independent of a web browser), and it is common to combine it with C++ guts. The web browser is just the most well-known example of this combination.

On many projects I export key data structures via SWIG into a scripting language, then use the scripting side to quickly develop and test new algorithms without the compile/link cycle. I've done this on numerous projects.

I'm not defending it, I was just asking if I condensed the point of the project correctly.

As for writing JS to make GTK+ code... if you've ever coded a GUI by hand, you know it's a pain. I realize that tools like Visual Studio, Eclipse, et al are supposed to take care of this, but some people like to code GUIs from the CLI for some perverted reason. I only had to for a class, and I never want to do so again.

That said, if I understand the point of the project correctly, I think it's hugely pointless.

As for writing JS to make GTK+ code... if you've ever coded a GUI by hand, you know it's a pain. I realize that tools like Visual Studio, Eclipse, et al are supposed to take care of this, but some people like to code GUIs from the CLI for some perverted reason. I only had to for a class, and I never want to do so again.

I feel like the tough part is simply working out the layout of the thing - the nested containers, the widgets that go into 'em, etc. From there, hooking up code to the widgets seems like not such a big deal.

You can get through the layout phase with a tool like Glade... Or you can code it by hand. I'd agree that coding it by hand is a real chore, but it's only really awful if you have the code/compile/test/fail/repeat cycle in there. (If it's just code/test/fail/repeat, that's not so bad... but I'd still rather use a tool.)

The main reason being, you then have an easily-scriptable commandline version, and an easily-usable GUI version. Bonus is that you won't need any GUI installed at all on a server in order to use the commandline version.

You've also decoupled logic from presentation, which is generally considered A Good Thing -- it makes replacing the GUI easy, and it makes competing GUIs possible, without having to dig into any of the core logic.

Granted, it would be better to take the whole system into account when writing either -- it's a lot easier to write a GUI for a commandline app which was written with that in mind, than one which was written with nothing beyond a VT100 in mind. But the advantage still stands.

C++ is awfully convoluted, maybe. JavaScript is pretty simple and straightforward, aside from a few minor gotchas. Most of the problems with JavaScript are browser API issues and not with the core language itself. It's pretty much the opposite of convoluted.

Actually, yes, JS is much more convoluted than it has to be, partly because it pretends to be so many things at once. Non-local local scope is a good example. For a Java-looking language, you'd expect code blocks to introduce scope, but they don't. For example:
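A minimal illustration of that non-local scope: `var` is scoped to the enclosing function, not the block it appears in, which is exactly the opposite of what the Java-style syntax suggests.

```javascript
// A block does not introduce a new scope; var is function-scoped.
function blockDemo() {
  if (true) {
    var x = 42;   // looks local to the if-block...
  }
  return x;       // ...but is perfectly visible out here
}

var fromBlock = blockDemo(); // 42, not a ReferenceError
```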

Javascript gets a bad rap for a lot of reasons. Most notably is the fact that Javascript and the DOM are conflated in most people's minds, despite the fact that the DOM is not a part of the Javascript specifications--in fact, while Javascript can manipulate the DOM, it's the browser which provides the bindings. It's not Javascript causing the incompatibility, it's the browser. An analogue might be having incompatible implementations of libc--you wouldn't blame the C compiler for the problem, would you?

There's also a developer problem. People see the C-like syntax and start coding as they would in C. Javascript is a functional language, and it makes use of that in significant ways. Worse, the expected semantics of block-level scope differ from C, and that's a very big gotcha for a new programmer.
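The classic place this bites a C-trained programmer is closures created in a loop. A sketch of the gotcha and the usual functional fix (creating a fresh scope per iteration):

```javascript
// All three closures capture the very same function-scoped i.
var fns = [];
for (var i = 0; i < 3; i++) {
  fns.push(function () { return i; });
}
var naive = fns.map(function (f) { return f(); }); // [3, 3, 3], not [0, 1, 2]

// Fix: an immediately-invoked function creates one scope per iteration.
var fixedFns = [];
for (var j = 0; j < 3; j++) {
  fixedFns.push((function (n) {
    return function () { return n; };
  })(j));
}
var fixed = fixedFns.map(function (f) { return f(); }); // [0, 1, 2]
```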

That's not to say that Javascript is without problems. There are numerous quirks which I consider errors in the specification. Nonetheless, it's really quite an elegant language for the most part, and it's certainly possible to develop libraries to handle the quirky cases.

I would argue that the two reasons to choose a language these days are A) syntax and B) libraries. If the syntax frustrates you, is bloated, or makes it difficult to write large apps (VB, COBOL, XML-languages, etc.), you'll demonize it. However, even if you have a clean syntax, if there are insufficient libraries (that are practically trusted across different platforms), then you can't do much more than trivial hello-world apps, or bind tightly to a handful of platforms.

But javascript is an awfully convoluted language. Why does it become easy when you put a language like that into the equation?

I don't know, I used to think javascript was a mess, but having learned a good bit more of it recently, it's really a much more elegant, flexible, and well designed language than a lot of people give it credit for. Personally, if I wanted scripting built into my Desktop I would choose python for the documentation, ease of coding and power, but you could do a lot worse than javascript.

(You can save yourself the trouble of creating an HTML file by using this page [tlarson.com] to test it.)

Prototype-based OOP can do almost everything Class-based OOP can do, except that it is far more flexible at runtime. You lose some compile-time checking, but it's been found that strict compile-time checking doesn't offer nearly as much error-catching benefit as was originally envisioned.
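A quick sketch of the runtime flexibility being described (the names are purely illustrative): in a class-based language the method set is fixed when the class is compiled, whereas a prototype can be extended after instances already exist, and every instance picks the change up.

```javascript
function Dog(name) { this.name = name; }
Dog.prototype.speak = function () { return this.name + " says woof"; };

var rex = new Dog("Rex");

// Extend the prototype at runtime: rex gains the method retroactively.
Dog.prototype.fetch = function () { return this.name + " fetches the ball"; };

var line = rex.fetch(); // "Rex fetches the ball"
```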

If you're absolutely in love with compile-time checking, consider a Javascript 2.0/ECMAScript 4.0 compiler [jangaroo.net]. That will help catch typing errors up front while still creating code that's deployable in Javascript 1.x/ECMAScript 3 VMs.

Unless you can come up with a thoroughly researched and peer-reviewed paper which accounts for all the more recent approaches to static type checking (like typeful programming, dependent types, etc.) supporting this, I'd say you were full of shit.

I'd say you weren't paying attention. I didn't say that static typing never catches errors, I said that it does not catch as many errors as originally envisioned. As nearly any programmer can attest, it's a rare treat to have a program operate correctly after the first compile. More often than not, you need to perform iterative development and debugging to ensure the correctness of the code. The unfortunate result is that developing for a statically-typed vs. dynamically-typed language makes little difference to this process.

That being said, there are some advantages to a statically-typed language. e.g. There are no untested code segments with typing errors waiting to blow up. The code may blow up for other reasons, but typing won't be one of them. (Unless you force a cast, that is. Casted objects can blow up quite nicely.)

The other area where it's a good idea is when your code will share out or access a standard interface outside your project. In such cases, typing can create a contract that ensures the correct use of all APIs.

That's why I suggested the use of a JS2.0 compiler for times when typing is important. JS2.0 is a softly typed language. Typing can be defined to ensure correctness, but typing is not required. This allows for interfaces and APIs to be exposed properly while leaving the individual developer a free hand to design his code in a classless fashion.

I'm not going to write an entire dissertation on this, so see Bruce Eckel's excellent article [mindview.net] on this issue for a decent introduction to the compile-time checking issue.

Besides, JavaScript gets what is arguably the most important feature of any language, namely scoping, completely and utterly wrong.

You've just pointed a finger at the very thing Javascript gets absolutely correct. While the scoping system may seem weird and even incorrect to coders with experience in other C-style languages, the Javascript scoping system is what welds the OOP aspects of the language together with the functional aspects of the language. If you changed the scoping system, you'd completely destroy Javascript's object system in addition to gimping its functional aspects. Which would leave you with the just the crappy procedural code that your average web developer creates for neat webpage effects.

I'd say you weren't paying attention. I didn't say that static typing never catches errors, I said that it does not catch as many errors as originally envisioned. As nearly any programmer can attest, it's a rare treat to have a program operate correctly after the first compile. More often than not, you need to perform iterative development and debugging to ensure the correctness of the code.

I'm not actually sure that's so. Yes, with many popular statically typed languages that's the case. On the other hand, I've heard plenty of stories of Ada, Haskell, and other language programmers finding that, indeed, if they can get it to compile it works as they intended. This, of course, raises its own issues: you may have to do some hoop jumping and work to manage to get that compile to actually work. That means those languages might not be so ideal when you just want to muck about or quickly prototype something.

The wheel may have been invented, but different variations have their use: steering (cars), scrolling (mice), tires (cars), entertainment (Ferris, of fortune, roulette). You know, let me check something... yup, it exists: http://en.wikipedia.org/wiki/Wheel [wikipedia.org]. People like you would have stopped after the first wiki page ;) (j/k, aiming for funny here, my friend; I don't think anybody has been enlightened by this post, though some may have been flamed by accident).

Besides, you know people, they have to keep re-inventing the wheel, in their favorite color.

It's why we never get anywhere.

Well, really, the reason why we never seem to get anywhere is because accomplishing meaningful tasks is hard. Putting together a good app is a lot of work. Getting people to use it is more work. And without clear leadership, there could be a dozen people trying to solve the same task, and as a result coming up with different solutions and competing with each other. "Getting somewhere" depends on clear leadership. Someone has to be able to take the available coding talent and steer it in a useful direction.

It'd be quite nice for people like me who _can't stand_ python. Purely a personal preference, but I just can't stand any language that has specific rules about where you can and cannot put a space. Hell, I had a python script I was writing the other day that wouldn't run because in one place I had used a tab to indent after an if statement rather than a series of spaces. I like my damn curly braces! They're easier!

The source of your problem was the inconsistent use of tabs, not Python (and is a problem you will continue to experience until you do what every top notch developer eventually does in their career, which is to stop using tabs and ban them completely from all of your projects.)

I used to joke about Python: "friends don't let friends use white space for a control mechanism". I had the exact same attitude as you about Python but, since I am fairly intelligent, I was able to figure out that the curly braces are redundant when the indentation already shows the structure.

I don't know Python, but the grandparent post seemed to indicate there was a syntax bug because of the presence of a tab in place of spaces. This means that two programs can, visually, be absolutely identical and yet not behave consistently? Who would design a language like that?

No. Inconsistently mixing tabs and spaces is a syntax error. The program simply won't run, and it will inform you as to why (much like how your C compiler will probably let you know when you've used a lower case l [wikipedia.org] rather than an upper case I [wikipedia.org], even though they look the same in your font).

Actually, the reason for the brackets in print() is that it is now a function where it used to be a statement. All the other statements are still there (like return or def, etc). The decision had nothing to do with delimiters. The point the GP was trying to make is that most people who have coded for any amount of time indent consistently as part of their routine. I have been tripped up many times by code that was indented to look different from the actual flow of control. But then I prefer begin/end to braces anyway.

I discovered recently that the grudges I hold against Javascript (I used it when CSS was still science fiction) are no longer valid and that a lot of the features I like very much about python exist as well in Javascript:

a = {}
a["this"] = "is a hash table"
l = []
l.push(a)

This is (very nearly) valid as both Python and Javascript; Python would spell the last line l.append(a). Both languages are very similar. Javascript's image suffers from its earlier implementations. It is now a much more convenient language than it used to be. Python is fine for a lot of things and is still my language of choice, but Javascript has been promoted from "over my dead body" to "preferable to many other alternatives".

JavaScript is actually a sweet programming language. It's got a very clean design, nice straightforward syntax, and good support for both OO and FP. (I think people get a bad impression of it from seeing it used by people who learned to do stupid web page tricks from JavaScript for Dummies. Also, people who believe in the One True Way of OO tend not to like it because it doesn't do OO the same way as Java. There are also many horrible problems with DOM incompatibilities, none of which have anything to do with JS per se.)

The thing is, I don't understand the logic of using JS for high-level tasks and Vala (basically glorified C) for low-level stuff. JS is a very small, austere language. The whole advantage of having a high-level language is lost when you use something as bare-bones as JS. JS is also much, much slower than Perl and Python, so you'd end up having to do only a very small percentage of your programming in JS, and the rest in Vala, in order to get decent performance.

To me it makes a lot more sense to write 100% of your program in, say, Perl. (s/Perl/Python/g or s/Perl/Ruby/g if that's what turns your crank.) You pull in some CPAN libraries, many of which have the time-critical stuff written in C for good performance, but you don't have to touch the C. If there does turn out to be some very time-critical loop that you really want to optimize, and it's not something generic that's available in CPAN, then you write it in C and interface it to your Perl program. You end up writing 99.9% of your own code in a nice high-level language, and 0.1% in a crufty low-level language, and you get good performance.

To me the most interesting part of the whole article is the idea of using Vala rather than C as a low-level language. Manual memory management sucks.

But, JavaScript for desktop GUIs? That just gives me an odd feeling inside...

Maybe you have not heard of this new little app... It appears that there are a few people out there who even like it. It's a bit hard to find, because for various reasons it has been renamed way too many times... The last name, IIRC, is Firefox. Google should be able to find it for you.

I actually have been doing this for a number of years and I have done commercial projects using it. I started with Lua but lately I have been using Javascript via Tracemonkey in an attempt to get more buy-in. Javascript looks good because of its widespread web use. Javascript is still a pretty crappy and convoluted language that will probably never be able to perform as well as Lua(JIT) though.

I use it for Windows apps as well. I have my own custom bindings for Win32, FLTK, Gtk+, and Qt. Qt is my favorite right now since they're making it LGPL.

Don't kid yourself though, it will not perform anywhere near as well as an old-fashioned C/C++ application. I still use C or C++ when I need top performance. A lot of applications don't need it though, and the end-user can't tell (my scripted software runs fine even on old 266 MHz laptops with 128 MB of RAM).

Javascript is still a pretty crappy and convoluted language that will probably never be able to perform as well as Lua(JIT) though.

Actually, the modern JITs like V8 and Tracemonkey are a whole lot better than they used to be; V8 actually approaches the speed of LuaJIT for some tasks.

Of course, V8 uses about two orders of magnitude more memory, and Mike Pall is currently working on LuaJIT 2 which will be faster and more lightweight, so it's very much damning with faint praise. But, while Javascript is still pretty awful, it's not nearly as awful as it used to be.

Gnome guys; don't consider extending or improving XUL. That javascript+gtk widgets model hasn't managed to produce anything worthwhile [wikipedia.org], now has it? Obviously, you can do it better! Probably 3-4 times over the next 15 years. Good luck!

The Qt API has QtScript, an ECMA scripting engine. While by default Qt isn't fully scriptable, there is a "bindings generator [trolltech.com]" that makes the whole Qt API available to the scripting engine.

Jakub Steiner has recently been wanting access to a CSS-type method of styling GTK apps rather than using the traditional widget-mangling stuff. I totally agree...in fact, look at some of the web apps out there that have already far eclipsed desktop applications in visual design, usability, and just overall experience.

If the desktop is going to make a comeback, things like this JavaScript effort and ideas that have their roots in web-team-on-a-deadline-style efficiency are going to have to be ported over.

Embedded scripting isn't anything new, even in the GNOME environment. Scheme is the scripting interface generally available for GNOME applications. This development just allows javascript to be embedded in applications. I think this is a good thing. First, javascript is a very good language. Most problems people associate with javascript have to do with the browser and NOT javascript. Second, javascript is known by a lot more people than scheme. It's probably the most widely known and used scripting language in existence. Combine that with the fact that we now have three high-performance javascript implementations that are still improving, and I think you have a pretty good case for javascript on the desktop. This will only make extending GNOME applications easier. I think GNOME is in good hands if development focuses on Vala/Javascript application programming.

Javascript lacks a clear way of enforcing interfaces. Any part of the program can extend or modify the prototype of any other object on the fly and wreak havoc by invalidating reasonable assumptions which other programmers had about that object. Javascript also lacks multithreading support (no way to synchronize in the language itself). Closures are nice if you know how to use them, but otherwise they are a serious memory leakage hazard. Last but not least, there's the problem that Javascript is the VisualBasic of the web.

Javascript lacks a clear way of enforcing interfaces. Any part of the program can extend or modify the prototype of any other object on the fly...

Many of us would consider that a feature, not a bug. But, actually, you can enforce interfaces fairly easily with closures, if you really need them.
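A sketch of the closure technique: private state lives in the factory function's scope, and callers only ever get the fixed set of methods, so there's nothing outside the interface to poke at. (The account example is illustrative, not from the original post.)

```javascript
function makeAccount(openingBalance) {
  var balance = openingBalance; // truly private: visible only to the closures below

  return {
    deposit: function (amount) { balance += amount; return balance; },
    getBalance: function () { return balance; }
  };
}

var acct = makeAccount(100);
acct.deposit(50);
var b = acct.getBalance(); // 150
// acct.balance is undefined; no caller can reach the real balance
// except through the two methods that form the interface.
```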

More importantly:

invalidating reasonable assumptions which other programmers had about that object.

If you're making assumptions you want others to be aware of, the right place to do so is in documentation. Otherwise, you've got the unsolvable problem of idiot-proofing your code -- they will always build a better idiot.

If people are deliberately breaking the rules you've laid out, you're going to have problems anyway. No language can actually prevent that.

Javascript lacks a clear way of enforcing interfaces. Any part of the program can extend or modify the prototype of any other object on the fly and wreak havoc by invalidating reasonable assumptions which other programmers had about that object.

There's a reason it's called "JavaScript" and not just "Java", you know...

Too many exceptions and too vague. No reliable standard implementation.
Semicolons as line endings, for example - in C++/php/perl you need them always, in python you don't. In javascript, sometimes you need them, sometimes you don't.
I really tried to give JS a shot, but when you have the guy that created it saying this or that part of it is a hack, it kinda dims your confidence.
Many programmers I have met, myself included, will pick up php, python, even perl or C++ faster and easier than JS.

You don't need semi-colons at the end of lines--you need them at the end of statements. And in Javascript, they're always required for the end of statements. What happens is that if the Javascript interpreter encounters an error condition in the parsing of the code, it will go in and see if adding a semi-colon would make the error go away. If so, it will automatically add the semi-colon for you.

This was a design decision that was implemented to make Javascript easier to use.
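That said, automatic semicolon insertion has one famous gotcha worth knowing. The standard illustration: a bare `return` followed by a newline gets a semicolon inserted immediately, so the value on the next line is silently never returned.

```javascript
// ASI inserts a semicolon right after this bare `return`,
// so the object literal below is never reached.
function broken() {
  return
    { value: 1 };
}

// Keeping the opening brace on the same line avoids the trap.
function works() {
  return {
    value: 1
  };
}

var a = broken();       // undefined
var b = works().value;  // 1
```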

The main purpose of this project is to enable easy embedding of Javascript into a GNOME application for scripting purposes, on the basis that lots of people know javascript so it makes a good extension language. The fact that you can write entire applications with it is just a (disturbing) side-effect.

But if you really want to frighten yourself notice that these applications are run just like any scripting language in unix - with a shebang header line. So javascript init scripts are now yours to have.
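Something like this, assuming Seed installs an interpreter binary named `seed` and provides a `print` builtin (both are guesses based on the project's early snapshots):

```javascript
#!/usr/bin/env seed
// A JavaScript "shell script": the interpreter skips the shebang line.
print("running as an init-style script, for better or worse");
```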

Like JavaScript, Perl and Python (and Ruby, TCL and even PHP, for that matter) are interpreted, object-oriented languages that you can quickly build simple applications with. However, they're all a lot nicer and cleaner than JavaScript (with the possible exception of PHP). JavaScript desktop applications are trying to solve a problem that's already solved in a superior way.

In the more common case, if it takes a hundred cycles to perform an operation in C, and ten thousand cycles to perform the same operation in something else, it still took less than a millisecond. I don't know about you, but if a GUI app responds in a tenth of a second, it's fine.

Do you have any specific examples of interpreted programs that are "too slow"? Are you sure it's due to being interpreted?

What part of "Active Desktop" was a good idea? Why are we attempting to recreate that?

Seems more like Active Desktop was a bad implementation of a good idea. (For other examples, see UAC -- I use sudo, and I like it fine, but I can't stand UAC, which is the same idea.)

But this isn't even the same idea -- it is not about setting your desktop background to some website. It is about writing new applications in a different language.

At the very least, I hope steps and measures are taken to ensure that there is NO code that can be hidden, and that there is a complete console allowing the viewing and editing of all Javascript code, complete with the ability for users to DISABLE it.

...why? Do you expect the same thing from C or Python?

Because, as I understand it, that's all that's happening here -- you can develop a desktop application in JavaScript, just as you can in C, Python, Ruby, or whatever else.

The problem with Active Desktop wasn't the Javascript language, it was the mixing of the OS and the Browser and the security problems that came from that mixing. Javascript had nothing to do with it; in fact, many of the malware related to ActiveDesktop was written with VBScript, not JS.

The article is talking about using Javascript as a scripting language to help build GTK applications. This is no different than using any other scripting language, such as Perl or Python. Just because it's Javascript doesn't make it a security problem.

Are you nuts?
You obviously don't understand how this framework is designed. This isn't even about web apps.
Where is there any hint that this will run untrusted? Since when does anything in Linux run untrusted?
Silverlight for great justice?

The problem with Javascript + GTK is that it isn't portable to other platforms, like Windows. At least that I am aware of.

GTK+ [gtk.org] is [gtk-osx.org]. Webkit certainly is (see Chrome -- webkit-based, Windows-only browser -- or Safari, which runs on Windows, OS X, the iPhone...) I can't confirm right now whether seed itself has been ported, but I see no reason why it wouldn't be.

Is Qt currently more portable? Maybe -- last I checked, Gimp, at least, was shipping with a GTK+ for OS X which required X11, while Qt has a native Mac GUI. But to say that either one "isn't portable to other platforms" is just willful ignorance.