eldavojohn writes "A very lengthy and somewhat meandering essay from Crista Videira Lopes has sparked some discussion of where new programming languages come from. She's writing from the viewpoint of academia, under the premise that new languages don't come from academia. They've been steadily moving away from large companies (with the exception of Java and .NET) into the bedrooms and hobbies of people she identifies as 'designers' or 'lone programmers' rather than groups of 'researchers.' Examples include PHP by Rasmus Lerdorf, JavaScript by Brendan Eich, Python by Guido van Rossum and — of course — Ruby by Yukihiro Matsumoto. The author notes that, as we escape the computational and memory constraints that once plagued programming languages and forced ultra-efficient syntax on them in the name of hardware, our new languages come from designers with seemingly little worry about whether a budget CPU can handle a large project in the new language. The piece is littered with interesting assertions like 'one striking commonality in all modern programming languages, especially the popular ones, is how little innovation there is in them!' and 'We require scientific evidence for the claimed value of experimental drugs. Should we require scientific evidence for the value of experimental software?' Is she right? Is the answer to studying modern programming languages to quantify their design as she attempts in this post? Given the response of Slashdot to Google's Dart, it would appear that something is indeed missing in convincing developers that a modern language has valid offerings worthy of their time."

>Van Rossum was born and grew up in the Netherlands, where he received a master's degree in mathematics and computer science from the University of Amsterdam in 1982. He later worked for various research institutes, including the Dutch Centrum Wiskunde & Informatica (CWI), Amsterdam, the United States National Institute of Standards and Technology (NIST), Gaithersburg, Maryland, and the Corporation for National Research Initiatives (CNRI), Reston, Virginia.

Wrong premise.

http://en.wikipedia.org/wiki/Yukihiro_Matsumoto

>He graduated with an information science degree from University of Tsukuba, where he was a member of Ikuo Nakata's research lab on programming languages and compilers.

Pretty much every C compiler with the partial exception of gcc compiles C code into a functional-style imaginary assembly-like language, because that allows more optimization algorithms to work.

Really? Because I've worked on several C compilers, and that's not something I've ever seen. Unless you're talking about SSA form, in which case you're wrong on several counts. First, GCC does use SSA form and has for about five years. Second, SSA form is usually quite restricted: memory is not SSA, for example, so it's not very like a functional style at all. I'm not even going to talk about your conflation of vtables with object orientation.
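For readers who haven't met SSA: it's a compiler intermediate representation in which each variable is assigned exactly once, with "phi" functions merging values at control-flow joins. A schematic sketch (pseudocode, not any particular compiler's actual IR) — note that the memory store stays a side-effecting operation rather than becoming an SSA value, which is why the comparison to functional style is strained:

```
// C source:                    // SSA form (schematic):
//   int x = a;                 //   x1 = a
//   if (c)                     //   br c, then, join
//     x = x + 1;               // then:
//   *p = x;                    //   x2 = x1 + 1
                                // join:
                                //   x3 = phi(x1 [entry], x2 [then])
                                //   store x3, p      ; memory is NOT in SSA
```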

What on earth are you on about? The language has nothing to do with threading, that's down to the OS.

Nonsense. Well, sure, if you have 1024 threads doing totally unrelated things then the language doesn't matter, but then you may as well be using separate OS processes and getting some isolation for free.

Back in the real world, threads need to communicate and they need to share data. How the language represents this has a massive impact on scalability.
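To make that concrete: C++11 (still "C++0x" at the time of this thread) bakes a memory model and sharing primitives into the language itself. A minimal sketch; replace `std::atomic<int>` with a plain `int` here and the program has undefined behavior (a data race), which is exactly the kind of thing the language, not the OS, decides:

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Spawn `nthreads` threads that each bump a shared counter `iters` times.
int parallel_count(int nthreads, int iters) {
    std::atomic<int> counter{0};
    std::vector<std::thread> workers;
    for (int i = 0; i < nthreads; ++i)
        workers.emplace_back([&] {
            for (int j = 0; j < iters; ++j)
                counter.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& t : workers)
        t.join();
    return counter.load();
}
```

With the atomic, the result is deterministic; with a bare `int` the count can come up short, or worse.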

Exceptions too. If you're parsing a complex data file several layers deep then error handling will make C code enormously complex. With C++ you just throw an exception and let stack unwinding free all the temporary data for you.

Except that C++ exceptions are tricky beasts; this is a classic "hard to shoot yourself in the foot, but if you manage it you'll blow your leg off" situation. Aside from how easy it is to get exceptions wrong (e.g. when your exception types are part of an inheritance hierarchy), there are also hidden "gotchas" like this:

SomeClass::~SomeClass() { log.print("Destroying SomeClass Object"); }

See the problem? Wondering why this is relevant to exception handling? The body of this destructor might throw an exception, which is OK sometimes but deadly if the destructor was called during stack unwinding triggered by another exception: a second exception in flight causes std::terminate() (and, by default, abort()) to be called.
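The standard defensive pattern is to let nothing escape a destructor. A sketch (the `log.print` call from the snippet above is swapped for a real stream so this compiles):

```cpp
#include <iostream>
#include <stdexcept>

struct SomeClass {
    ~SomeClass() {
        try {
            // Anything here that can throw must be contained: if an
            // exception escapes a destructor while unwinding is already
            // in progress, std::terminate() is called.
            std::cout << "Destroying SomeClass object\n";
        } catch (...) {
            // Swallow it; a destructor is the wrong place to report failure.
        }
    }
};

// Throwing past a local SomeClass runs ~SomeClass() during unwinding;
// returns true iff the original exception is still catchable afterwards.
bool unwind_survives() {
    try {
        SomeClass s;
        throw std::runtime_error("boom");
    } catch (const std::runtime_error&) {
        return true;  // reached safely because the destructor cannot throw
    }
    return false;
}
```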

The C++ standard library also has (or at least, as of C++98, had) poor support for exceptions. You must explicitly activate exceptions in some classes; "new" may or may not throw; the set of exceptions that might be thrown is very limited. There are some parts of the C++ standard library that require you to check return values or to check class members on your own (sometimes this is a good thing -- throwing an exception at end of input would be horribly annoying).
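The "explicitly activate" opt-in the parent describes is the iostreams one. A sketch; by default a failed stream operation just sets flags you must remember to test by hand:

```cpp
#include <fstream>
#include <ios>

// Returns true iff a failed open is reported via an exception.
bool open_fails_loudly(const char* path) {
    std::ifstream in;
    // Opt in: without this call, a failed open is silent -- you would
    // have to notice it yourself by checking in.fail().
    in.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    try {
        in.open(path);  // on failure, now throws std::ios_base::failure
        return false;
    } catch (const std::ios_base::failure&) {
        return true;
    }
}
```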

C is not much better, since error states are advisory and people often ignore them (how many times do you see people fail to check the return value of printf?). What C++ needs (and perhaps this is in C++0x) is a better definition of exceptions, one that does not cause programs to abort (which is even worse than checking return values), and better use of exceptions in the standard library. Unfortunately, this would create all sorts of headaches for compiler writers, who would have to rethink their code generation strategies, so I do not think we will see it happen any time soon.

Sounds like someone's mad that they spent a lot of time getting a PhD only to find out that a PhD wasn't necessary to be successful in computing.

From TFA:
"[T]here appears to be no correlation between the success of a programming language and its emergence in the form of someone’s doctoral or post-doctoral work. This bothers me a lot, as an academic. It appears that deep thoughts, consistency, rigor and all other things we value as scientists aren’t that important for mass adoption of programming languages. "

If someone doesn't come from a scientific background it's simply impossible for them to have deep thoughts and rigor in what they produce? I doubt it. I'm fairly certain that deep thought, consistency, and rigor were put into the programming languages mentioned in the article.

We need less elitism like that shown in this article. I may be jaded by the fact that at my university, our professors were horribly clueless about seemingly every modern computing concept and were still using Pascal to teach programming. When all academia taught was Pascal and even worse languages and old concepts, it's no wonder that people created other languages to get things done.