
JThaddeus writes "An article in TechWorld Australia summarizes the latest opinions on JavaScript from ThoughtWorks: 'There is no end in sight to the rise of JavaScript... "I think JavaScript has been seen as a serious language for the last two or three years; I think now increasingly we're seeing JavaScript as a platform," said Sam Newman, ThoughtWorks' Global Innovation Lead.' The article touches on new additions to JavaScript tools, techniques, and languages built on JavaScript. As the fuller report (PDF) says, 'The ecosystem around JavaScript as a serious application platform continues to evolve. Many interesting new tools for testing, building, and managing dependencies in both server- and client-side JavaScript applications have emerged recently.'"

I don't see how that needs to be the case. Everything except IE pre-10 supports Web Workers. Firefox is moving toward the process-per-tab model popularized by Chrome, and the refactoring project is codenamed Electrolysis.

Yet any god damn thread that hogs a CPU still locks the entire god damn UI of Firefox, because it's not multi-thread capable. Fix the fucking UI and the other bugs that still haven't been fixed instead of trying to keep up with Google's insane version numbering.

I'll get hate from the programmers but fuck it, truth is truth. Wanna know what is wrong with JavaScript? The entire damned concept, that is what!

I mean, think about it. If I told you, "I have an idea! Just to get some information, you're gonna have to run strange code from over a dozen places; you don't know these places or whether they have malware, and you have to trust ALL of them or you can't have the information," you'd say that is seriously fucking STUPID, wouldn't you? That is how the modern ad-driven web works: calling shit from God knows where with zero control or fine-grained permissions, and it's all run by default... STUPID.

What we need is either a new language built from the ground up with the realization that there are seriously nasty people out there trying to fuck your systems up, or JavaScript seriously needs a rewrite with the modern situation in mind. As it is now, everybody keeps trying to fix the bullet wounds of a failed design with bandaids like sandboxing and low-rights mode, but at the end of the day, the very concept of clicking on a link and suddenly getting code from a dozen sites you don't know and executing that code? It's just a dumb idea.

Heck, you're right and didn't even have to go into how f-ed up JavaScript is as a language... "dynamic scoping"... really??? That idea sucked in Lisp and I thought we got past it in the 60's after that cock-up. And now we're supposed to take this bastard child seriously?

To be fair, that isn't really the fault of JavaScript in and of itself, it's more of a failing of browser security policies.

If a same-domain-only policy were enforced, similar to how it is with frames, that would remove a *lot* of the issues you're talking about. Of course, that would introduce quite a number of problems, but I think it would solve more (like, for example, sites not taking ten fucking minutes to load a single page because they're reliant upon a thousand different ad servers, tracking systems... seriously.

Why should having many tabs open in separate processes cause any problem? No modern operating system will actually require the entire browser codebase to be loaded 20 or 200 times in that situation, and any overheads for context switches should be imperceptible with this kind of application.

In general, the code itself should automatically be shared between those processes. The main things kept separate would be the data in each tab/process and the system resources and permissions.

As someone who actually has 2 decades in the industry, you don't sound like someone with that kind of time at all. We middle-aged folks look at NoSQL and think "network databases" or CODASYL databases are coming back. We weren't attached to Java EJBs and expected a lighter-weight version; Ruby seems like a scripting language...

Stop trying to pretend to be older. Older people saw shit when you were young that isn't used anymore and that's hard to fake.

I started back in 1978 and I remember someone coming in to pitch a database technology in 2006 for which they had patents pending and it would replace relational databases. They kept describing it using catch phrases and turning rows into columns and I just couldn't grasp WTF they were talking about. I finally asked them to draw a picture and they mapped it out on the white board.

I then asked if they had ever read about IMS and hierarchical databases. They had not. I wished them good luck on their patents and sent them packing.

I have decades of industry experience in Silicon Valley as well. And really, your post comes off more as a bunch of whining than an actual critique. There are always going to be new technologies, and the people who are heavily invested in the previous generation of technologies will always groan about these dag-gone kids with all their newfangled ways of doing things. Heck, I've been guilty of this kind of thinking myself.

But you know what? When we entered the industry as young whippersnappers, the previous generation of programmers said the exact same kinds of things about us.

"We would truly be better off without JavaScript, without NoSQL, without Ruby on Rails, and without the hipsters."

It's all very well and good to have opinions, but you make some strong specific complaints here without actually giving reasons for those opinions. Many people disagree with you.

What is your problem with JavaScript? JavaScript DOES, in fact, have some very serious problems. For a classic example that was linked to here just yesterday, try

[5, 10, 1].sort();

I happen to share your dislike for JavaScript, but I can give actual reasons for having that opinion.
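For anyone who hasn't run it, here is what the example above actually does in a conforming engine, along with the usual fix (a minimal sketch; runnable in Node or any browser console):

```javascript
// Array.prototype.sort, with no comparator, converts elements to strings
// and compares them lexicographically -- so "10" sorts before "5".
console.log([5, 10, 1].sort());                 // [ 1, 10, 5 ]

// The fix: pass a numeric comparator explicitly.
console.log([5, 10, 1].sort((a, b) => a - b));  // [ 1, 5, 10 ]
```

The surprise isn't a bug in any particular engine; it's the specified default behavior.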

NoSQL has its problems as well. [theregister.co.uk] While for some narrow uses it does offer great performance, if you step outside those narrow bounds even a little bit, you are forced to manually code many of the functions normally handled by a relational database. I have never had to implement a huge distributed high-performance database, so in my case the NoSQL use-case is limited to very simple local non-relational data stores. But again, I can articulate reasons for not caring for it: its theoretical basis is weak, proper use-cases for it are rare, and it creates a lot of manual coding that relational databases handle on their own.

But now we come to Ruby and Rails (which are 2 different things). Yes, Ruby is a dynamic language, with all the (well-known) shortcomings of dynamic languages. But other than that, what do you have against them? Please be specific. We already know you don't like them. The question is: why?

NoSQL has a great future. NoSQL with true ACID, as the posted article describes, will be a blessing.

Here is the problem with a crappy, ultra-expensive solution from Oracle or Microsoft: it can't scale, and prices go up on an exponential basis if you try to make it do so. A few years ago, ZDNet put a price tag on running youtube.com on Oracle's database instead of Google's NoSQL solution. The price tag was almost $8,000,000,000!!

NoSQL does not mean no SQL. It means not only SQL. For quick webscale performance you need low latency...

Words have meaning and purpose. What words would you use to express the following concepts?

- a collection of tools which allow you to build a new component through their leverage, while not themselves becoming part of the component (or at least not being used in its operation)

- a collection of functional components which you will use as part of the operation of a new component

Currently, the words are "platform" and "system". I'm happy to switch to other words if t

But did that really have anything to do with it being a byte code interpreter, as opposed to a lousy implementation of one? Or perhaps applets having access to library functions that were too difficult to make secure?

In other words, is a byte code interpreter necessarily less secure than say a JavaScript interpreter? I honestly don't know, and would be interested if anyone can explain this.

It isn't the byte code interpreter that's the problem; it's running in a low-level state. Let's take a simple example: division by zero. If you send a division by zero to an x86 processor, something has to catch that exception or the entire system crashes. So either:

a) you have to have a wrapping layer which prevents virtual instructions from causing exceptions in the kernel or OS (JavaScript), which is slow, or
b) you have to have a complex system where things move up and down abstraction layers freely (C++...
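For what it's worth, JavaScript sidesteps the division-by-zero trap entirely at the language level: numbers are IEEE-754 doubles, so the operation yields special values instead of a hardware exception. A small sketch (the `intDiv` helper here is an illustrative example, not a standard API):

```javascript
// IEEE-754 semantics: no trap ever reaches the program.
console.log(1 / 0);   // Infinity
console.log(-1 / 0);  // -Infinity
console.log(0 / 0);   // NaN

// Integer-style division must be guarded explicitly by the programmer.
function intDiv(a, b) {
  if (b === 0) throw new RangeError('division by zero');
  return Math.trunc(a / b);
}
console.log(intDiv(7, 2)); // 3
```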

Why do the worst technologies that are just barely able to solve the problem always make it? Is the developer community collectively really this stupid? I fear it is...

Because technologies that just barely solve a problem allow people who can just barely do the job to barely solve the problem. People who can barely do the job are less expensive than people who do the job right. Unfortunately, enough people are willing to live with "just barely".

I am not a Java guy (I have the basics, but I still don't 'get' Java like some fans do). I asked a guy who works at a hardware company (that does some Java acceleration stuff) what the big draw was. His reply, pertaining to server-side Java code in business at least, was that it could tolerate bad programming and still 'run OK enough' for business use. Lots of cheap *very bad* programmers in the world, and Java works well for them. malloc and no free -- wow! You can be as stupid as you want and your app will still run...

They don't always make it: Many do not make it at all. Survivor bias and all that.

JavaScript thrived because the alternatives were arguably far worse. Java applets were terrible. ActiveX was a platform-specific disaster. Flash is heavy. JavaScript allowed you to do the very minor things most web developers wanted at the time without having to turn your website into a plugin that disregarded base web technologies.

Yes, the language design is pretty silly. The function declaration syntax is silly. It tries to look like a member of the Algol tree, but its internals behave more like Lisp. The automatic type conversion system is the source of many jokes. But it still beat its competition at the time, because it was built into the browser and it talked directly to the page's DOM, and the competition did not. Today we'd have little trouble designing a better language than JavaScript for what we currently do with it, but our best bet to get something like that working is to build a language that compiles to JavaScript and then hope browsers start building VMs for that language directly, skipping the JavaScript step. Still, not bloody likely.
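For the record, here are a few of the coercion quirks those jokes tend to be about, all following from the ECMAScript ToPrimitive and abstract-equality rules (try them in any console):

```javascript
console.log([] + []);   // ""                 (both arrays convert to empty strings)
console.log([] + {});   // "[object Object]"
console.log(1 + '2');   // "12"               (+ prefers string concatenation)
console.log(1 - '2');   // -1                 (- forces numeric conversion)
console.log('5' == 5);  // true               (loose equality coerces)
console.log('5' === 5); // false              (strict equality does not)
```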

Because it's easy to mistake one's personal biases for sound judgment about what is "good".

I won't sit here and defend every design decision in javascript, but it's a lot more sophisticated than meets the eye. I think of it more like Lisp than Java; it encourages (among skilled programmers) a functional programming style, which turns out to be both under-used as a programming paradigm and very nicely fitted to the kind of event-driven tasks people use javascript for.

If you aren't writing higher order functions in javascript (functions that take functions as values or return them as values) you aren't fluent in javascript and aren't qualified to pass judgment on it.
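To make the point above concrete, this is roughly what higher-order functions look like in practice (a small sketch; the `once` wrapper is an illustrative example, not a built-in):

```javascript
// A function that takes a function and returns a new one:
// the wrapped fn runs at most once, and its first result is cached.
function once(fn) {
  let called = false, result;
  return function (...args) {
    if (!called) { called = true; result = fn(...args); }
    return result;
  };
}

const init = once(() => 42);
console.log(init()); // 42
console.log(init()); // 42 (the wrapped function is not run again)

// Functions as plain values, composed like data:
const double = (x) => x * 2;
const inc = (x) => x + 1;
const compose = (f, g) => (x) => f(g(x));
console.log(compose(double, inc)(3)); // 8
```

This style maps naturally onto the event-driven callbacks the parent mentions.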

You are modded funny, but if this really is a joke, then there certainly is some melancholic undertone to it.

The web is being developed for "the average" developer that just wants to get things done. This is a pity. Because if the web were developed for the advanced developer (read: computer scientist), those developers could lift the web to a higher level, and there would be much more traction.

We just have to wait until Javascript gets sufficiently powerful to function as an intermediate language. From that

if the web were developed for the advanced developer (read: computer scientist), the "web" as we know it would be confined to a few geek niche markets and the rest of us would be using some Microsoft- or Adobe-pushed proprietary technology instead.

I thought loading faster was the difference between the user staying on a document and the user hitting the Back button to return to someone else's document. Web search engines have recognized this and have started to penalize slow-loading documents.

Actually, users care a great deal, especially as we start moving to touch. There is nothing in nature that causes your visual image of your finger to lag your finger. Latencies over 1 ms are distressing to the human subconscious under those conditions. They experience negative emotions. 0% of current systems are that fast, even using low-level languages.

Latencies over 1 ms are distressing to the human subconscious under those conditions.

Do you have more info on that? Seriously, no snark, but I'll admit that I'm skeptical. I've never heard of human perception time being less than 30 ms. I hate slow responses in UIs with a passion, but 1 ms?


See, for instance, John Carmack's analysis of head mounted display latency. Under 20 ms is acceptable for most people. 30 ms is too high, and leads to motion sickness.

For ~1 ms latencies, experiment with a mouse on an old CRT running at 180Hz. Try out a hardware rendered mouse, then a software rendered mouse. You can indeed see the difference.

Also for time perception that accurate, see a competent analysis of a fighting game. Many fighting games feature reaction windows no more than 5 ms wide, and some

Many fighting games feature reaction windows no more than 5 ms wide, and some have windows ~1 ms wide. And players can hit them.

I've heard of fighting games (e.g. Street Fighter) with 1-frame windows, but those games ran at 60 fps, i.e. ~16.67 ms per frame. That is more in line with your figures from John Carmack's experimentation. Your 1 ms figure does not sound credible; can you provide a reference?

I just started on a web project that is heavily javascript (even though it's an ASP.Net app). It feels like I stepped back in time 10 years. I'd have code not execute. Why? Some error somewhere in the code that was ignored. I had a typo on a property set somewhere else. No error. Why? Because javascript created a new property with the typo and set *that*. And every change requires me to run the app to see if it works, because I don't have a compiler to check the basics out ahead of time. And then, because I'm debugging an app in one window, when I'd open a new one to read the news, I'd hit other, non-ignored errors on those pages (advertisements) because 90% of commercial pages these days have error-laden javascript because people rarely check for error conditions.
Horrible language.
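The silent-typo behavior described above is indeed the default, though it can be partly tamed: in strict mode, assigning a new property to a sealed object throws instead of silently succeeding. A minimal sketch of the difference (property names here are made up for illustration):

```javascript
'use strict';

// Default behavior: a typo'd property name just creates a new property.
const loose = { retries: 3 };
loose.retrys = 5;                 // no error; the typo silently "works"
console.log(loose.retries);       // 3 (unchanged -- this is where the bug hides)

// Object.seal + strict mode turns the same typo into a TypeError.
const sealed = Object.seal({ retries: 3 });
sealed.retries = 5;               // fine: existing property
try {
  sealed.retrys = 5;              // typo: new property on a sealed object
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```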

It implies that the user is a filter that takes input from the chair to input it into the computer, and does not use the monitor feedback system at all. Go ahead and draw the control flow diagram some time.

You might want to look at TypeScript if you're already using Visual Studio. It infers types, type checks your code, is open source, and supports writing plain JavaScript. When using Visual Studio, you can do the things you're used to doing like "go to definition" and "find all references". If you decide to annotate your definitions with types, it can do type checking and catch errors which is really useful when you need to refactor a lot of code. The video at the bottom of http://www.typescriptlang.org/ [typescriptlang.org] is

Try getting a good IDE, like PyCharm (which does javascript as well). It gives you pretty good static checks on things like var name and property name typos. Not 100% of course since that's undecidable.

Sounds like you are not using the correct tools and environment, which makes debugging harder. And using that same setup to read news with a coding environment is just stupid. I don't read news with my development database or web server, so why would I risk infecting my development environment from known malware vectors of even legit advertising? You conclude with "horrible language", but your comments support "horrible programmers" all the way around. I'm making no defense here, only pointing out that your o

My monkey programming in C is going to leave at least one dangling pointer. Does that make C a horrible language?

The difference is that there are good reasons for that design tradeoff in C. The point of C is that it's fast as hell and gives you almost complete control. You need that to implement a kernel, which is what C was originally designed for. The downside is that you have to watch out for things like dangling pointers.

In my 35 years of professional programming, getting good at dozens of languages, I've only run across 2 I've actively disliked. Javascript is one of them (tcl was the other).
JS is a crap language that IMHO can't be fixed. If they ever add an honest garbage collector to the base language then most programs will delete themselves upon execution.


Lol. And the garbage collector would then send out an email to every web designer who says "I know how to code in Java" when they mean Javascript and clean up that confusion once and for all.

I've been forcing myself to get good at writing JS lately (if only because Node looks like it'll make all my other skills irrelevant in the web development market). It.. just.. feels.. wrong. Nothing in the language lends itself to building architected solutions. Maybe the testing tools have caught up with other languages...

Arguably, it should have been Python, which is a better language. But Python has a problem. Python's little tin god, Guido van Rossum, is in love with his own implementation, CPython. CPython is a naive interpreter. (A "naive interpreter" is one which does the specified operations in the specified order, with little optimization across operations.) In CPython, everything is a dictionary and a lot of time is spent doing lookups. This allows everything to be dynamic. In Python, one thread can patch objects in another thread while both are running. Objects can gain or lose elements on the fly. Even running code can be "monkey-patched".

The price paid for that excessive dynamism is that a Python compiler is hard to write, and an optimizing Python compiler can't optimize much. Google tried in-house to make Python faster, and their "Unladen Swallow" failed humiliatingly. (A different group at Google then developed Go, aimed at the same problem of producing something good for server-side processing.) The PyPy crowd has tried, hard, to make an optimizing Python compiler, and with an incredible amount of complexity under the hood has made considerable progress, but not enough that PyPy is used much in production.

Pascal went down for a similar reason. Wirth was in love with his elegant recursive-descent compiler. But it didn't optimize, couldn't handle separate compilation, and had no way to handle errors other than aborting. Python seems to be headed for similar irrelevance. It hasn't even been able to replace Perl, which ought to be as marginal as "awk" by now.

Javascript has the same dynamism as Python, so why is Javascript still much faster?

I suspect it is because it is easier to identify a consistent low-level type interpretation — including clearly delineated points where you need to throw the code away and recompile — with Javascript than with Python. That's what you need to do a decent compiler.

Python has definitely replaced Perl where I work. It's also very quickly catching up to R (statistical/quantitative analysis), but from what I hear there is still a network effect keeping R on top for bleeding-edge research.

I'm kind of surprised that there is not one good comment about the benefits of javascript up above this yet. I mean, you can offload sooo much data processing to the client CPU. With the latest in web storage and the SQLite port to JS, I can actually create a friggen database server running on the client. WebRTC and WebSockets are seriously about to change everything in the next 1-2 years... I'm curious how many of the above posts are by folks who actually do web development? It is pretty much indispensable th

All the features in JS that the parent is talking about are very important. But they're important and good for web applications, not web sites. Everyone here seems to assume that all people do with HTML/CSS/JS is write web sites, but that's no longer the case. Applications which were traditionally written in WinForms/WPF/C++/mobile apps are starting to get offloaded to the web, which means we need those features in JS. I mean, why do you think the canvas tag and others were introduced?

Platform is the wrong word. Something like node.js might be considered a platform, but not JavaScript itself. JavaScript is flexible, C-like, has first-class dictionaries, and JSON makes them super simple to serialize. It's one of those languages whose flexibility can actually be a hindrance, because you end up having to get pretty deep to find the structure... maybe it is a platform.

My beef with JS is that it is a regression from the time of PASCAL or Ada (or should I say ALGOL?), where good programs are nicely structured by means of proper data structures. This means:

1.) Reliability is unnecessarily low. You never know how many feet an object in variable "monkey" might have. Usually two, and sometimes just one, zero, or 312.

2.) Security: a lack of type safety regularly results in insecure programs. Plus, it is quite difficult to reason about program safety in the absence of strong typing.

3.) Hard to optimize: strongly typed languages result in programs which can be quite easily optimized for execution efficiency. Compare that to JS, where the optimizer has to "infer" the "typical runtime type" and then optimize for that "dynamic" type constellation. Of course, exception code must handle the "differently typed corner cases", too. That implies a bloated optimizer/JIT compiler, plus all the nice zero-day exploits which result from it. Donning my tinfoil hat, I would say JS is a godsend for the TLAs.

In summary: JS is a regression from the state of the art of 1970s. "IT" is actually forgetting the great achievements in Software Engineering that resulted in Algol. With JavaScript, computing has gone even further down the path of sloppiness and anti-reliability. Let's face it: 99% of people are in this profession for the love of money, not for the love of constructing reliable and correct systems.

Dipl.-Ing. (BA) Informationstechnik Frank Gerlach, Gäufelden, Germany

As a counter-concept, look at this invention of mine: http://sourceforge.net/p/sappeurcompiler/code-0/HEAD/tree/trunk/doc/manual.pdf?format=raw

If you are going to argue for a serious counter proposal you may want to get an account.

Regardless, the battle between typed and untyped languages on the web was lost during the CGI days, when Perl replaced C. Too much of the data coming in is untyped. http://happstack.com/docs/cras... [happstack.com]

As for Sappeur: reading the manual, you don't seem to be considering the problem domain at all, just creating an alternative strongly typed language with some features that differ from C++ or Java.

One thing I personally like about Javascript is that it covers all three of the currently most popular programming paradigms.

You want an imperative style of development? Javascript can do that, check.

You want an object-oriented style of development? Javascript can do that, check.

You want a functional style of development? Javascript can do that too, check.

Some would argue that by covering so many different paradigms, it ends up covering none of them as well as languages that are designed for a specific paradigm from the ground up, and I wouldn't really refute this point... but it easily does all three of them well enough to still be profoundly productive when developing in any of them, and this means that a programmer is relatively free to pick the paradigm that best models the original problem when designing a solution. This, in my experience, results in shorter development cycles, and frequently much less buggy code.
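As an illustration of the point above, here is the same trivial task (sum of squares) written in each of the three styles; all three compute the same result:

```javascript
const nums = [1, 2, 3, 4];

// Imperative: explicit loop and mutation.
let sumImp = 0;
for (let i = 0; i < nums.length; i++) sumImp += nums[i] * nums[i];

// Object-oriented: state and behavior bundled together.
class SquareSummer {
  constructor(values) { this.values = values; }
  sum() { return this.values.reduce((acc, v) => acc + v * v, 0); }
}
const sumOO = new SquareSummer(nums).sum();

// Functional: composition of pure functions, no mutation.
const square = (x) => x * x;
const sumFn = nums.map(square).reduce((a, b) => a + b, 0);

console.log(sumImp, sumOO, sumFn); // 30 30 30
```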

No, you're absolutely right - being able to choose a mode of programming is neat, and Javascript does lend itself to doing neat things. But it still feels like a language that someone quickly hacked together. And the freedom to pick a paradigm means your fellow coders get to pick whatever happens to be in their clue bucket for the day. At least with a language that focuses on imperative or functional coding, you can be reasonably sure that the guy sitting next to you has a similar view of reality as you do. "Multi-paradigm" is a bit like saying "post-modern", with all the positive and negative connotations. I prefer my languages neo-classical :)

It's sad we're still using a single client-side language instead of having the option of running bytecode in the browser. Obfuscated JS is just as difficult to read as bytecode, and the browser could also have an automatic bytecode-to-text "viewer".

The complaint, though, is that people aren't using Javascript as a carpenter uses a hammer. They're using it like a carpenter would use a hammer to mill and tool a high-performance aluminum engine block. Of course your first thought would be "DUH! A carpenter's hammer is the wrong tool for that job; you need a CNC mill for that." Which is the complaint's point. And we already have plenty of the right tools for that job; the problem is that Javascript developers go "But those tools are too complicated!". No,