Posted by Soulskill on Friday December 09, 2011 @03:55PM from the use-once-then-discard dept.

snydeq writes "Fatal Exception's Neil McAllister writes in favor of new programming languages, given the difficulty of upgrading existing, popular languages. 'Whenever a new programming language is announced, a certain segment of the developer population always rolls its eyes and groans that we have quite enough to choose from already,' McAllister writes. 'But once a language reaches a certain tipping point of popularity, overhauling it to include support for new features, paradigms, and patterns is easier said than done.' PHP 6, Perl 6, Python 3, ECMAScript 4 — 'the lesson from all of these examples is clear: Programming languages move slowly, and the more popular a language is, the slower it moves. It is far, far easier to create a new language from whole cloth than it is to convince the existing user base of a popular language to accept radical changes.'"

Seriously, choices are always better. My toolbox (normal tools, not software tools) contains two different types of hammers, two different wooden mallets, several different screwdrivers... If you learn to use the right tool for the job, the different choices make sense. If you are stuck on the mentality of "All I need is a bigger hammer" and "All I need is XXXX programming language" then you probably are not using the right tool for the job.

I have to respectfully disagree that progress would stall if there weren't new languages all the time. One can't help but wonder what progress would be made if the effort spent on trendy languages was invested in established languages. If a concept is good, it will appear in whatever languages the industry is actually using, regardless of whether or not a trendy new language implemented it first. That said, testbeds are certainly a good thing to have.

No language is perfect. The idiocy of language designs stems from the fact that few, if any, programming languages were designed by anyone who had ever read a book on psychology, ergonomics, or human factors.

There's a saying floating around the internet that "Languages should be easy to read and understand and incidentally be compilable by computers." That about sums it up.

THE COMPUTER DOES NOT MATTER. It is a means to an end. Its only purpose is to serve humans. The languages designed to provide a system-level interface to that machine need to be designed around what a human understands, the way a human understands it. Slavish devotion to a hardware design, or even an object model, is plain stupid if it makes your product nearly unusable (e.g. the WPF datagrid).

For web server-side scripting, you can replace, say, PHP with Python or Ruby with relatively little pain. Sure, you're rewriting all of your logic, but in the end, the pain of moving languages is only as massive as the size of your projects.

For stuff that's closer to bare metal, this is true of replacing anything with anything else too, assuming the linker gives you a binary in the required format. Not a big deal, right?

The problem with Javascript is that it's the only language we have for web frontend development, and it's horrible too. It's deceptive. It looks simple, but making dynamic changes to HTML elements requires having some idea how classes work so you can do operations on the DOM. Sure, there are frameworks that might simplify this stuff, but for artistic and creative people (read: largely bad at math), this is problematic. It's very CS202, and it requires thinking rather linearly.

Just because you're using C++ doesn't mean you need to write some glorious object-oriented, dynamically-dispatched, exception-throwing, operator-overloading, self-reflecting monstrosity. C++ provides several very fundamental features which make it hugely superior to C: inline functions, better const semantics, reference types, and templates. If you don't want to write enterprisey crap, don't. But don't chuck out the baby with the bath water.

If you read the story, you'll note that the COBOL programs in question have been around for three decades or so. Most programs which have been continuously used for 30 years tend to be pretty solid regardless of the language.

Yes. In most languages, objects are implemented like C (or even assembly language) structures. The language just adds a hidden pointer parameter to the object's methods. Sometimes method calls are made through indirect pointers. All of this is perfectly compatible with the way real-world CPUs work, including their built-in hardware stacks.

Functional languages, OTOH, are big on closures and the like. These don't map onto hardware stacks, and there are huge numbers of elaborate hacks in functional language implementations to try to cram the high-level concepts onto the procedural machine without taking the massive performance hit of allocating every value on the heap.

One of the problems with new languages is that everyone starts out stupid.

You clearly don't have a CS background, but rather are a programmer. If you understand the fundamentals you're not going to be "stupid" in any language. Programmers are simply trained to use one or more tools. I have a cousin, for example, who has a Master's degree in Music. Even with an instrument he's wholly unfamiliar with, like an obscure tribal instrument, he can generally figure it out and play it. That's the difference between him and some guy who taught himself to play guitar.

Of course it does. Every programming task has to care about performance. What's changed is that the most important type of "performance" is different for every task. Most of us aren't doing large-scale numeric simulations.

If you're programming desktop GUI applications, responsiveness is usually more important than throughput. If you're programming mobile devices, battery efficiency is more important than any other consideration.

I think it was P.J. Plauger who pointed out that if the program to process the monthly payroll takes three months to run, it's useless.

What I think you meant to say is that for most programs, whether or not they meet their performance criteria is not limited by CPU cycles. That's certainly true. Most programming tasks can afford to spend some cycles in return for correctness, programmer productivity, or ease of maintenance.

Performance doesn't matter any more. Correctness and quick development does. FP provides that in abundance. (Of course, correctness is just another way to say "quick development" nowadays, but whatever...)

Really, performance doesn't count? That must be nice. The two worlds that I have lived in (control systems and financial transaction processing) have performance as king, because in both cases, missing specific performance numbers means large explosions or large fines from networks. Those are just two areas; there are quite a few others, but I can only speak of the two stated above.

1) Two words: undefined behavior. You'll find it around every corner in C or C++ (two very different languages, of course) -- this leads to unreasonably hard-to-find bugs. In C++ it's also extremely hard to avoid such behavior consistently -- compilers are happy to exploit it for optimizations, but somehow can't provide warnings for all cases where you are (unwittingly) relying on UB.

I have found that ~90% of the "undefined behavior" is caused by people not properly checking argument values. That is the nature of imperative languages; if you don't know or understand that, I question whether you should be writing code, sorry.

2) Really? Haskell and OCaml do not rely on any of those things you mentioned. Difficult? Perhaps, but see my point #1. Besides, who would you like making your software... someone who's just "learned java" or someone who knows what the fuck they're doing?

See the above point of my argument...and nice language.

3) So all FP languages which don't perform as well as C (or within an order of magnitude, at least) don't perform as well as C. What an insight. Btw, Haskell is also within an OoM of C. Also, see the top of this post.

Sarcasm really doesn't help make your point here.

4) How hardware works is fucking irrelevant. If the compiler for language X can optimize "fib N" to a constant expression, it doesn't matter if your C compiler can generate code which executes a million iterations of a fib-computing loop per second. Certainly, we're not quite there yet, and in the C world there's no hope of doing this beyond *really* simple examples (aka not fib), but FP could conceivably get further. (TC is a barrier, but you can still do useful computation even without TC.)

Actually, I have found that understanding just how hardware works makes finding solutions to problems a whole lot easier. Computers function in a particular manner, and I have found that they mirror life more closely than functional languages do. Now granted, that is my perception, but the fact that functional languages are still used in only a few disciplines surely reinforces my opinion.

After rereading the parent comment, I think your perceived attitude of the author is way out of line. He stated his case clearly AND WITHOUT PROFANITY. I have been developing software for 17+ years, and after all that time, paradigms come and paradigms go, languages come and languages go just like management styles. What matters the most is the person at the keyboard designing and developing the solutions. I can't even count the number of languages that have come and gone through the years, but C and C++ have always been there. I have stopped fighting the fight of "..this language is better because..." and just learned to use both of those languages better. I produce products faster with far fewer defects so I am happy.

Guess at this point I just need to yell "GET OFF MY LAWN" to complete my old grumpy statements.

Pure FP is not winning, but elements of FP have sneaked into all the major imperative languages of the day. C# has had lambdas for 6 years now, VB for 3 years. C++ has just got them, and Java is getting them in the next release. All of these also have (or, in the case of Java, will have) their equivalents of map/filter/reduce.

Only in fantasy-land has SQL implemented "write once, run anywhere". You hint at this problem yourself where you say "with a few mods, it works on nearly any relational database".

Whilst there is an SQL standard, implementations of SQL vary massively. Professionally I've used SQL-Server, Oracle, MySQL, PostgreSQL and SQLite - all have profound differences, and moving anything beyond absolutely trivial SQL code from one to another requires rewriting the query. We're talking about a language here where major implementations don't even agree on string concatenation syntax...

Every SQL implementation has its customisations and variations from the standard. It's almost impossible to write any kind of decent SQL code without making use of these custom variations, thus ruining the portability of the SQL code.

You will be better off if you learn a range of programming paradigms. Knowing how to solve programming issues in a variety of ways will help your problem solving in whatever language you are using, even if it does not support the most appropriate paradigm for the job.

Having said that, the best way to learn different paradigms is to use languages that are different from each other. Learning only languages that share paradigms will not stretch your abilities that much. For example, in the big picture, C++ and Java are not that far apart.

My personal experience is that Lisp/Scheme is different enough from any of the C-derived languages that it forces you to learn to think in a new way. Learning Scheme will make you a better C++ coder. I still haven't spent the time to learn Haskell, but I plan to do so. I think it will improve my abilities no matter what I am working on. Lazy, pure functional programming is far enough removed from what I normally do that I expect to learn a lot of new ways to think about coding.