Posted by timothy on Sunday March 20, 2011 @06:37PM
from the user-friendly-is-hard dept.

An anonymous reader writes "Java is performant, widely adopted and eminently portable; however, its syntax is largely inherited from C++, along with some of its esoteric unfriendliness. Mirah aims to place a friendly face on Java through the implementation of a syntax whose primary concern is developer friendliness (think Ruby/Python/Groovy) and the route of least surprise. The result is a truly cogent alternative syntax delivering readability, expressiveness and some compelling new language features."

"Prepend" isn't a word either, but technical people use it a lot because there is a specific meaning there that needs a word -- to append at the beginning. Strictly speaking you could use "prefix" as a verb, but that word has a connotation of adding a small fixed string to the beginning of one or more items. "Prefix all international phone numbers with a + symbol." "Prepend the header before sending the request."

Similarly there is a need for a concise expression meaning "of adequate performance" without stretching to "high-performance" (especially since High Performance Computing [wikipedia.org] has a specific meaning of its own). Unfortunately, in the modern language of hyperbole, terms like "adequate" and "acceptable" have negative connotations along the lines of "not really good enough but better than nothing". So, we, as an industry, have invented a jargon word "performant" to express the idea that a thing has a level of performance sufficient that you don't need to worry about it and can look for optimisations elsewhere in your system.

Nothing bothers me as much as the incorrect usage (well, what I consider incorrect usage) of words like biweekly, bimonthly, etc., to mean twice per X. These words mean every two X, not twice per.

Unfortunately, years back the dictionary added these incorrect and ambiguous usages, so that now these words can "officially" mean either twice per, or every two. Two conflicting definitions make them useless. (For those of you who may be wondering, the proper way to say twice per X is with the prefix semi-, as in semiweekly.)

Hate to tell ya, but once the dictionary adds a definition to a word, that definition is no longer incorrect.

Dictionary definitions reflect modern and changing language usage by the public. It's like a democratically elected office for words: members of the general public choose words based on misinformation, stubbornness, and the way their parents, friends and teachers choose their words. Dictionary editors are like the electoral college: they look at what the general public has chosen to use for words and definitions, then choose for themselves which of those make it into the dictionary. Not everyone will agree on the new word or definition, but in a few generations it will be history and no one will care. When was the last time you heard anyone bitch about Grover Cleveland being elected?

Prepend is a neologism, true enough, but it has a clear meaning and a unique definition, said definition being easily discoverable both by inquiry and from context.

Now let's look at "performant". It seems to be an adjective. Adjectives ending in -ant, -ance, or -ent (depending on the preceding consonant) generally mean "meets a minimal requirement"; see "conformance", "compliant", etc. These words also carry a requirement of a secondary reference to an external entity: one must be conformant or compliant with something.

"Prepend" isn't a word either, but technical people use it a lot because there is a specific meaning there that needs a word -- to append at the beginning.

"Prepone" is a similar one, meaning "to move a scheduled date forward" (as opposed to "postpone", where the date is moved back). It's commonly used in India among English speakers, but most North American English speakers I know have never heard it.

Well, "performant" is a word in French and in Dutch, and it exactly means what you'd think it means.

Are you so stupid that I really need to remind you that the article was not written in either French or Dutch? It was written in English. I'll accept that perhaps the author is not well-versed in English, and makes a few grammatical errors. But only a fool would suggest that because a similar word exists in another language, English speakers might understand it. For a very simple example, the German word "Doktor" does not really mean what most English speakers would think.

I use "performant", and I have been speaking and writing English for perhaps 25 years. What word would you use instead if you want to say this is more performant than that? Or if you simply want to say "a" is performant?

Perhaps we Europeans "invented" that word by simply assuming it already existed in English... as your parent pointed out, the word is not only French and Dutch, it is also German.

This is just ridiculous. "Ain't" is a word culled from common usage, and there is no way formalizing it will overcome its common usage meaning, which is [negation of] ["to be" conjugation]. The conjugation of "ain't" to any pronoun is "ain't". And if you lived anywhere that "ain't" is actually used in common speech, you would know this. "He ain't" is perfectly valid, if "ain't" is valid.

Your explanation is reductionist, and exactly what would be expected from someone who wants English to be a "living language."

Mirah looks to me so far like a waste of effort. It has somewhat nice syntax, granted, but if you really want to use Ruby syntax with the JVM, there already is something that does that: JRuby.

If you just want simplified syntax, Groovy is just as simple and looks more familiar to Java programmers.

If you want simplified syntax and powerful new programming tricks, Scala and Clojure do this far better. If you ignore the Scala libraries and half its features, you get everything that Mirah was designed to do.

The language designers should do a better job explaining why this is worth paying attention to.

From what I gathered on their website, Mirah was created by one of the main JRuby developers in order to create a language that fits in better with the JVM's capabilities and the Java ecosystem than a straight port of Ruby. The end result should offer better performance and cleaner integration with existing Java code than JRuby, while providing nicer syntax than Java.

Groovy is slow as snot, and I wouldn't use it for anything other than perhaps a user scripting language for a java application, and even for that I think there are better options. No clue how Mirah compares to Scala. That was my first question as well.

"Groovy is slow as snot, and I wouldn't use it for anything other than perhaps a user scripting language for a java application, and even for that I think there are better options. No clue how Mirah compares to Scala. That was my first question as well."

You use Groovy if you like to do some dynamic stuff; I don't think it will be faster at anything else anyway. If you don't want the dynamic stuff but still like Groovy, there is Groovy++.

Mirah is statically typed, like Scala or Java, so there's no late binding. That was an explicit goal in designing the language. The fact that you looked at examples and didn't realize Mirah is statically typed makes me smile, since it shows the syntax truly does cover up the typing.

Do you honestly think that after taking on Google for the non-Java-but-Java-like innards (I forget the name... Davlik?... something like that) of Android they are gonna let Cacao or any of the other JVMs slide?

This part of your (apparently) ADD-inspired rant is beside the point. Languages that run on top of the JVM are orthogonal to whether or not some particular JVM is legal. Even without open source JVMs (gasp) there will still be performant JVMs from Oracle and IBM. The world goes on. ;-)

Most likely the Dalvik tempest in a teapot will be resolved via a typical cross-licensing arrangement between Oracle and Google.

Scala significantly extends the Java object model. Consequently, only a few basic Scala notions map directly to Java and can be used from there. A lot of advanced stuff is not easily accessible.

Clojure is Lisp - 'nuff said.

Mirah seems to be mainly about syntactic sugar. Judging by the few samples on the front page, it brings Java roughly to the level of C# 4, except with a nicer syntax. But it's still strongly typed (unless you use "dynamic"), and its notions map directly onto Java's.

The original goal of Mirah was to create a language that looked nice, compiled down to a form as direct and fast as Java, and did not require you to drag a runtime library along with you. You take Mirah code in and get JVM bytecode (.class files) out. There are no extra dependencies; you're not shackled to an extra jar file just because you wrote "hello world".

Mirah has much of Ruby's syntax only because we liked Ruby's syntax. The Ruby class libraries are not there, and Mirah is not Ruby. It's statically typed, with Ruby's clean syntax and some of Ruby's surface-level features (like simple iteration and closures).

I guess you're right, we need to do a better job explaining why it's useful. I have an article coming that emphasizes that this is simply a "javac" alternative that happens to have Ruby syntax, and hope to clean up the web site too.

I think the main defect of Ruby is its library. The library lacks naming consistency and suffers from confusing name aliases; the Java library is a lot better.
The syntax of Java is poor: it is not very concise, and the syntax of recent additions (generics, enums) is not very clear.
I think a next step would be to write a translator from Java to Mirah, to stop using the JVM (either switch to Dalvik or OpenJDK), and then to say goodbye to Oracle.

One "new" language (for some value of new, it's a few years old but was only recently released on a large scale) is Gosu [gosu-lang.org], which has some very nice features. It is closer to C* syntax, which is nice from my perspective (I understand maybe not from others lol).

I'm a fan of terse, maximally expressive languages as I think they maximise productivity both for development and maintenance. Gosu seems close to a sweet spot.

If the goal is "to place a friendly face on Java through the implementation of a syntax whose primary concern is developer friendliness (think Ruby/Python/Groovy)", then perhaps the "route of least surprise" would be to offer some assistance to Jython [wikipedia.org] (python running on Java), as it's been around for over a decade already, but seems to have been a bit neglected in the past couple of years.

My understanding is that Mirah is designed to compile down to Java byte code, while Jython runs a Python interpreter on top of the JVM. There are a number of philosophical and technical differences between these paradigms.

What's more, JRuby exists, and Mirah doesn't compete with JRuby. In fact, it seems one of the motivations for its creation was to make it easier to hack on JRuby itself, which is, after all, quite a lot of Java code.

Coming from someone who writes a lot of Python and currently has a lot of Python code in production, the Python language is a steaming pile of shit. Yep, I said it. It's as if Guido specifically went out of his way to choose the worst possible option at every design decision. The only things Python has going for it are a comprehensive library, "better than Ruby" performance, and mindshare.
I would much rather code in Ruby, Java, C/C++, Lisp, Scheme, Smalltalk, or assembly than Python. Sadly, about 50%

Well... it depends on what languages you started with. Looking at that, the C++-based code is perfectly readable, but I can't make heads or tails of the other. In fact, it reminds me of Perl and Objective-C: just start hitting all those shifted characters; they each signify something special.

Really? I mean, you're missing such basic things as do you want the ghosts to catch Pacman every time, or do you want him to be able to escape? Do you want Pacman to keep moving after the joystick is returned to center, or do you want him to stop? The problem with every simplified language I've seen so far, is that you still need to be able to express what you want precisely, and that adds complexity to your language. You have to figure out what you want the computer to do in the corner cases.

Unless you have AI behind your language, but then it's not really the same.

In other words, the real problem of programming is usually not the language.

The real problem tends to be that computers have to get information that's far more specific than humans do. For instance, pretend you're moving to a new apartment, and you tell your buddies who are helping you to "put the dresser over there in that corner." Simple, clear instruction, right? But your buddies will figure out for you that they can't put it right up against the wall because there's a radiator there, that the drawers need room to open, and so on.

Is it just me who finds Ruby even more cryptic than Perl? Reading Why's (Poignant) Guide [uniqpath.com], I loved the presentation of the book, and really wanted to love the language, but every time he said "read this code out loud, it makes perfect sense, doesn't it?" all I could think of was "you, my dear little cartoon foxy friend, have clearly been snorting too much of the good white stuff. I'm going back to Python now" :-(

I don't remember where I read it, but I read somewhere that Ruby works as a "programming skill amplifier." As in, if you're a great programmer, Ruby allows you to write beautiful code, but if you're a poor programmer, Ruby will allow you to write the most hideous thing that your processor has ever seen.

And I agree. For better or worse, I think it's a testimony to the power that lies in the language.

Yes, that's right, I prefer C++ syntax and coding style. Efficient, does exactly what I tell it to and makes sense in my head. All this Python/Ruby etc just makes my head blow up. I often code in those languages and I see them as very useful (and easy to work with), but I still prefer C++ syntax.

I am a guy who loves computer languages. I had a lot of fun with Ada, for gawd's sake. I am pursuing Erlang at the moment. Thought about Haskell, but it was just too big to play with...

So, keeping that in mind... Java and the JVM were a non-starter for me.

Every time a new object-oriented language comes out, the purists start with "we don't need multiple inheritance" and so on, and they always end up having to hack it back in as some half-conceived junk (see "interfaces" providing, at a later date, all the much-shat-upon "complexity" of multiple inheritance with none of the ability to provide a default implementation; so then you add delegation, which is all the default implementation with none of the inheritance, etc.). "Java doesn't have pointers," my pasty white behind: every object is a pointer in Java, you just can't use them properly. But they do manage to use pointers to prevent first-class object copying, so then they added clone(), etc.
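
To make the interface-plus-delegation dance concrete, here's a minimal Java sketch (class names are made up, purely for illustration): the interface can't carry a default implementation, so you write a concrete helper and forward to it by hand.

    // Hypothetical example: the interface cannot supply a default
    // implementation, so a helper class provides one and other
    // implementers delegate to it manually.
    interface Greeter {
        String greet();
    }

    class DefaultGreeter implements Greeter {
        public String greet() { return "hello"; }   // the "default implementation"
    }

    class LoudGreeter implements Greeter {
        private final Greeter delegate = new DefaultGreeter(); // manual delegation
        public String greet() { return delegate.greet().toUpperCase() + "!"; }
    }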

Then they "didn't need" proper destructor behavior, we have a finalize that would run at some time in the future, but really the code out to know when the last object reference is going out of scope so it can call a destructor manually if it wants. That was a stopper for me.

See they coded their "every other language should have remained pure" hubris into their virtual machine, they encoded it into their hardware, from inception they designed their system to be limited and resistant to repair. So no thank you. So now Ruby fans want to take their niche language and cram it into the fundamentally flawed Java VM. Ooooohhh sign me up!

I swear the language feels like it was designed on a dirty napkin by someone who had no grasp of scope or symmetry.

Might as well be Python. (I am old enough to remember RPG and COBOL coding forms.) Saving one apparent character, e.g. "}" (because in Python ":[newline]" is the same as "{", so what did we save again?), for the privilege of using white space, and counting tabs, as a control structure. And save the "but now we have editors to help us so that doesn't matter" tripe; we had coding forms to "help us" too. And I suppose it's okay to hack off a foot because you can always get a peg-leg to help you?

Why is it that each new generation of "language designers" insist on reinventing the same old square-wheels of the previous generation and calling it new?

Now get off my damn lawn... (yes, this rant makes me feel old, but come on, people, imagine where we would be going if you would just stop trying to reopen the same tapped-out mines...)

"Java doesn't have pointers," my pasty white behind: every object is a pointer in Java,

Every object is a reference in Java. There is a world of difference between a name which refers to some object, and an integer which might refer to some object but which you can still do integer math on -- and the object might not even be there anymore...

It's almost as if you're trying not to see the advantages. No more segfaults. No double-frees, no crazy-ass debugging where the wrong method gets called because your pointer is pointing to the wrong (or a corrupt) vtable, and you really have to try to get a memory leak.
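
A minimal sketch of the difference in plain Java (names are mine, just for illustration):

    public class RefDemo {
        public static void main(String[] args) {
            StringBuilder a = new StringBuilder("hi");
            StringBuilder b = a;      // same object, second name -- no copy made
            b.append("!");
            System.out.println(a);    // prints "hi!": both names see the change
            a = null;
            // a.append("?");         // would throw NullPointerException -- loud
            //                        // and traceable, not a silent segfault
            // a = a + 1;             // won't compile: references aren't integers
        }
    }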

they do manage to use pointers to prevent first-class object copying, so then they added clone(), etc.

And the number of times I should've just passed the original object, vs the number of times I really didn't want it to implicitly clone something? Again, I have to give this one to Java, with the caveat that the interface to clone() kind of sucks. Ruby has dup, and all objects have it by default. Implementing clone() in Java is a pain, and if an object doesn't implement it, you're SOL.
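
For the record, here's roughly what that ceremony looks like (a sketch with a made-up class; the checked exception is the part that sucks):

    public class Point implements Cloneable {
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        @Override
        public Point clone() {
            try {
                return (Point) super.clone();  // shallow field-by-field copy
            } catch (CloneNotSupportedException e) {
                // Can't happen -- we implement Cloneable -- but must be handled.
                throw new AssertionError(e);
            }
        }
    }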

the code ought to know when the last object reference is going out of scope so it can call a destructor manually if it wants. That was a stopper for me.

Really? This?

Think back to all the destructors you've ever written in C++. How many of them can you count that did more than free memory? In other words, how many destructors have you ever written which aren't entirely replaced by the garbage collector?

I can pretty much count them on one hand. Filehandles, DB handles, etc. Yes, it sucks, but having to close a filehandle vs having to free every bit of memory I ever allocate? I'll take the filehandle.
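
In other words, the manual "destructor" in Java is a finally block. A minimal sketch (the file name is made up):

    import java.io.FileReader;
    import java.io.IOException;

    public class FirstChar {
        public static void main(String[] args) throws IOException {
            FileReader in = new FileReader("data.txt"); // hypothetical file
            try {
                System.out.println((char) in.read());
            } finally {
                in.close(); // runs even if read() throws
            }
        }
    }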

So now Ruby fans want to take their niche language and cram it into the fundamentally flawed Java VM.

Wait, what?

You haven't mentioned a single issue with the JVM. Your complaints have been about the Java language. Surely you can tell the difference?

Oh, alright, you had one other complaint: You don't like the lack of proper destructors. Guess what? Ruby doesn't have them either. Ruby has finalizers, just like Java. Of course, I don't see anything about the JVM's design that prevents a language from implementing destructors anyway.

It's also funny how you, like most of Slashdot, seem to have missed the point: JRuby exists, and is pretty much neck and neck with the official C Ruby implementation in terms of performance. It's just as stable, and almost everything that works in one implementation works in the other -- kind of like how you can have multiple C compilers.

This article was about Mirah, which is not Ruby, nor trying to be. It's a way to make Java suck less, at least syntactically. If your gripe isn't with the syntax, you probably won't care about Mirah.

And for a bit of balance, here are the features I really, really miss in Java:

Closures

Better setters/getters

Operator overloading (or something other than the retarded handling of == vs equals)
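
To illustrate a couple of these (the == vs equals() trap, and what passes for a closure), here's a small Java sketch; names and values are arbitrary:

    import java.util.Comparator;

    public class Gripes {
        public static void main(String[] args) {
            // == compares references; equals() compares contents.
            String a = new String("mirah");
            String b = new String("mirah");
            System.out.println(a == b);       // false: two distinct objects
            System.out.println(a.equals(b));  // true: same characters

            // Java's stand-in for a closure: an anonymous inner class.
            Comparator<String> byLength = new Comparator<String>() {
                public int compare(String x, String y) {
                    return x.length() - y.length();
                }
            };
            System.out.println(byLength.compare("a", "bb")); // negative: "a" is shorter
        }
    }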

Issuing signals on the _last_ release of a mutex that is coupled with a condition variable instead of on every release of a nested mutex.

Unlocking and dismissing shared object libraries (e.g. undoing dlopen()) when, but not before, the last instance of any/every object dependent on the shared object file goes out of scope.

Preventing "Cruft" in my heap by doing "deep" memory frees of complex structures as soon as I no longer need them instead of at "some random time in the future if ever".

Changing modes and states on devices using ioctl() etc. (e.g. when the last "raw" use of the controlling terminal goes out of scope you put the terminal back into line mode until/unless you need to bring it back into raw mode.)

Resetting hardware on last use.

Emulating devices and subsystems that, by definition, reset themselves on last use.

Doing all of the above with "exception safety" without having to write an ass-ton of "finally" blocks (though I _do_ wish C++ had "finally" 8-).

Doing all of the above in "deep structures" so that my objects are true active objects instead of just nested hunks of memory.
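
For contrast, here's the shape that scope-based cleanup takes in Java -- the "ass-ton of finally blocks" mentioned above, one nesting level per resource, where a destructor would do it implicitly (file names are made up):

    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;

    public class NestedCleanup {
        public static void main(String[] args) throws IOException {
            FileReader in = new FileReader("in.txt");       // hypothetical paths
            try {
                FileWriter out = new FileWriter("out.txt");
                try {
                    out.write(in.read());                   // copy one character
                } finally {
                    out.close();  // second cleanup level, by hand
                }
            } finally {
                in.close();       // first cleanup level, by hand
            }
        }
    }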

You are like a blind guy asking "when was the last time you really used your eyes for anything but reading" because you have never heard of art; when you presume destructors are "really just for freeing memory" you demonstrate a horrific limitation in your understanding of object, functional, and event-driven programming. Just because you don't understand the non-beginner ways to use a construct doesn't mean the construct is only used the way a beginner would use it.

Meanwhile:

I lived through the "P-System" and "P-Code"; the shortcomings, cost overheads, and raft of annoying assumptions built into the JVM are a "given" to me. Sorry for not doubling the size of my rant to make you happy. I work with too many system internals to think the JVM is a win. You go back to treating your heap as executable, and over-stressing your CPU's translation look-aside buffers, and leave me alone... 8-)

In counterpoint:

I think closures are overrated, but I don't disparage them, because I recognize that just because "they have never been particularly necessary or useful to anything I have done" doesn't mean they are unnecessary or useless to persons other than I. Plus you can pull off more or less the same thing in C and C++, for a limited number of variables, by returning a pointer to a nested function, but, like, eeew... Closures do typically require an executable data segment, which I find impure, but all of Java requires an executable data segment, so who am I to judge. Closures are just the latest brand of secret sauce for throwing memory at a problem instead of logic. 8-)

You are like a blind guy asking "when was the last time you really used your eyes for anything but reading" because you have never heard of art; when you presume destructors are "really just for freeing memory" you demonstrate a horrific limitation in your understanding of object, functional, and event-driven programming. Just because you don't understand the non-beginner ways to use a construct doesn't mean the construct is only used the way a beginner would use it.

Also, freeing memory manually in C++ is almost always the wrong way of doing things. There are plenty of adaptable containers in the standard library, and when you want to write your own and actually do need to keep track of memory directly, there are smart pointers that deallocate whatever object or array they're pointing to when they go out of scope. Raw pointers and manual deallocation are only needed in very special cases, like when you're writing lock-free data structures or have to deal with C APIs.

C++ really isn't about manual memory management, it's about scope-based memory management. You don't have to free things manually, and yet you can be completely certain of when a certain chunk of memory is deallocated.

I agree that most of the C++ I have seen has been pretty abysmal, but that's not the paradigm's fault, nor a fault of the language. The average professional in _any_ field is, by definition, pretty average, and tautologically therefore not "excellent".

I can say the same thing for overly clever C programmers. There is a reason that the Obfuscated C contest is more popular than the obfuscation contests of any other language.

This I don't get. If you want "developer friendliness" and "least surprise", use a syntax with the (relatively) minimal set of keywords/tokens to accomplish your task. Ruby has basically incorporated the syntax and conventions of every major programming language of the last 30 years...

And I guess you could call it "developer friendliness" if you want to let people freeform program in whatever style they want, with no two developers tending to use the same syntax for the same implementation -- but at this point in my career (i.e., having worked for half a dozen companies and realized that what you write now may exist for decades), I consider a major component of "developer friendliness" to be "easily comprehensible and maintainable by the next developer".

g() won't compile because "std::vector<T>::iterator" inside g() is not explicitly coupled to the template definition of g(), so it could be anything. Add "typename" and then it is unambiguous. Might as well ask why "any car" and "every car" are not the same sentiment in English.

struct s is in no way dependent on class T, so there would be no difference between an s<A> and an s<B>, so the "template <class T>" has no meaning or purpose. The fact that you have to remove the syntactic no-op noise is not that complex, and every language has its equivalent quirks.

I may be the only programmer in the world that is willing to admit it, but having programmed in Java since version 1.0, I really like the syntax. And yes, I do find Java a lot of fun, especially for serious enterprise development.
The thing I presently hate the most about Java is that new 'Oracle' thing.

I like big brackets and I can not lie
You Python programmers can't deny
When a module's checked in with a bitty interface
And a dozen different kinds of brace
You whip out static code analysis
And do a little logical bris...

If C/C++ were so developer-unfriendly you would not find them under the hood of pretty much every new language that claims 'developer friendly, easier, etc.' that came out in the last 20 or so years. Take Java, .NET, etc.: all have some sort of C under the hood. It's not that C/C++ is not developer friendly; it's that a lot of developers (apparently) are simply not that good to begin with. While Java, C#, JRuby, etc. make things 'easier' (according to some people at least), they ALSO hide a lot of things that a GOOD developer _should_ know in order to understand what is going on under the hood.

I see and hear a lot of kids fresh out of university who have never even used C/C++ (which frankly baffles me to this day) ask questions like "What is a reference?", or say "Oh, I thought the GC would take care of that."

If C/C++ were so developer-unfriendly you would not find them under the hood of pretty much every new language that claims 'developer friendly, easier, etc.' that came out in the last 20 or so years. Take Java, .NET, etc.: all have some sort of C under the hood.

To be perfectly fair -- the first thing you do with a new chipset after you have an assembler is to create a C compiler. Once you've bootstrapped the C compiler, subsequent versions of the C compiler can be written in C. Only then do you begin building the other languages on top of C (unless you have a good cross-compiler -- in which case I suppose you can compile Ruby, Java, Perl, etc. right for that chipset -- but the device drivers on that new hardware won't be written in one of these languages.)

This kind of syntax is great if you're doing scripting. Small programs. For anything large though, that looks absolutely horrid. I can't see myself using a language like that for anything more than a few dozen lines, and even then I'd rather use regular Java.

Seriously, what is it with people wanting to get rid of the braces, for example? I can't tell you how many times I've been trying to debug Python code and determined that the whole damn program was failing because someone forgot a single space. Usually the compiler doesn't even complain. There's a reason C and Java and so many other languages use the braces. It's not because they wanted to piss off programmers.

Mirah has semicolon trouble. It's a "you don't need semicolons, except where you do" language. Note that some of the import statements in Mirah end with semicolons. Either there's some obscure reason for that, or the code they're showing doesn't really work.
How do they do multiline statements? Please, not backslashes at the end of the line again. (Or worse, a syntax where a backslash followed by invisible whitespace, then a newline, has different semantics than without the whitespace.)

Python's indentation-based syntax seems to work out better, at least since the compiler got smart enough to understand when mixed spaces and tabs introduce visual ambiguity.

Mirah also seems to have a "you don't need declarations, except where you do" mindset. Historically, that's a bad decision. Almost every language that started out without declarations later backed into having them. FORTRAN, BASIC, C, and Perl all started out without much in the way of declarations. Python is one of the few typed but declaration-free languages that has succeeded. Arguably, Matlab is another.

Is Mirah really a front end to Java, or simply a dynamic language that targets the Java virtual machine? You can do something similar with .NET, running IronPython on the .NET VM.

10-15 years ago, I would definitely agree with you. Unless you are building a hugely complex application, Java is fine for lightweight to mid-range software. For instance, I'm currently building a fairly lightweight cross-platform Java-based frontend for GnuPG (something that has been sorely needed for a long time) and my application runs as fast as anything else on the system even when it has several threads running at once.

I would find it easier to believe Timothy is Australian, based on the hours he posts and the fact that 80% of his stories involve Australia somehow (not complaining here, but it definitely stands out).

I'll skip on Java being a horrible language, everybody who isn't a Java fanboy and has used a fair number of languages already knows this.

But nearly as fast as C++, or just 10% slower? That's not my experience at all. Of course, first I could comment on how calling a language slow or fast is at best strange, as it often really depends on the compiler / interpreter / assembler (OK, not that one), and what exactly you are trying to do with it.

He did share them... stop trying to "solve" the problem of programmers needing to know how to program by writing languages that try to cover for dummies. We have plenty of languages that dummies can use safely. The problem has been solved.

Computer languages are like power tools. They can be "so safe, but no safer" before they start losing function. They just _don't_ make a band saw that any 12-year-old can use in complete safety. Same for a jointer, circular saw, planer, or just a good old knife.

Tools are _not_ supposed to be _safe_. They are only supposed to be "no more dangerous than necessary". We make cheap plastic toy versions of tools for kids to practice with, but eventually they either go away or they have to learn to use the real thing.

I will believe in these "for dummies" versions of languages the day a contractor shows up at my house with a Fisher-Price nail gun that will actually hold up sheet rock but is safe enough to hand to my neighbor's 5-year-old.

The "clever new idea" is that people don't _deserve_ to operate in fields where they are unwilling or unable to learn the skills required. This is just as true of my profession (programming etc) is it is true that my programming acumen doesn't mean that I should be able to walk into CERN and have a go at the LHC even though I know squat about high-energy physics.

Objective-C and C++ were created at roughly the same time, with the early work done without any knowledge of each other. OOPC (the early version of Objective-C) was written around 1981-82; "C with Classes" (the early version of C++) was written around 1979-83. The first books documenting Objective-C and C++ were released in 1986 and 1985, respectively. Objective-C was standardized (as part of OpenStep) in 1994, while C++ became an ISO standard in 1998.

Apple didn't choose to use it because C++ wasn't different enough; they chose to use it because that is what NeXTstep was written with, back when Objective-C and C++ were both still in their infancy.

Apple didn't choose to use it because C++ wasn't different enough; they chose to use it because that is what NeXTstep was written with, back when Objective-C and C++ were both still in their infancy.

There were better reasons than this. C++ is not well suited as a systems programming language. That's why there's still not a single major OS that uses the C++ object system at the system level. BeOS was the single failed experiment in this regard - remember the loveliness of "reserved slots"?

Objective-C's dynamic features make it a much better foundation for systems programming. Its relative simplicity is also attractive.

I like static typing when I'm developing applications. I like dynamic when building web apps. There may not even be a rhyme or reason to that beyond the fact that it's just the way I've always done things.

That said, loose typing (PHP, JavaScript) always gives me more of a headache than it solves.

I'm not sure you'd call this an "argument", but I do most of my coding in Python. Now that I've gotten used to its syntax, whenever I have to use a bracketed language like C++ or JavaScript I get annoyed with having to deal with all these superfluous braces. Maybe it's just personal preference, but I don't miss them in Python.

Problem is: when I look at your python code, I don't know if I'm looking at spaces, or tabs, or some combination of both. Not without a hex dump, or something. And one invisible character out of place, and god-only-knows what sort of unexpected results I get.

Also, I probably can not cut-and-paste your code into mine, and have it work, without substantial modification.

Then there is the serious issue of emailing code, or cutting-and-pasting from a web-site.

Too bad that practically everybody on Slashdot thinks of BASIC as GW-BASIC. Most versions of BASIC that are less than 30 years old actually have it right: no curly brackets, and no counting spaces and tabs either.

As for the bracket columns, why does it really matter to you if somebody else does it? I personally prefer the opening bracket at the end of the first line (assuming the first line is short, not long) because it saves a near-empty line, so (a) more code fits on my screen without messing with readability and (b) when glancing over code I don't mistake it for a blank line, which would usually indicate some logic break (much like paragraphs in a "real" language). I see no real problem with it.

Slashdot, Wikipedia, and Google are "proper nouns"; as such, they have a definition based on their usage. Names (i.e. proper nouns) are magical in languages: they define themselves. That is, there is no uncertainty when a new string of letters or phonemes is assigned as the proper name of a unique entity.

As used, "performant" is apparently an adjective. When used without a proper preceding or trailing contextual definition it has no actual meaning. We are left to infer a meaning because of the similarity of t

Hey, fuck off! There is nothing wrong with modern Pascal implementations like Delphi. Its OOP model is better than C++'s or any of those other bastardized bits of cruft. If you want real OOP like it was envisioned, then stick with Lisp or Smalltalk, but if you want to get work done...