
snydeq writes "Charles Nutter, Rich Hickey, and Gavin King each discovered that 'simplicity' doesn't mean the same thing as they developed Ruby, Clojure, and Ceylon, respectively. 'Languages that are created with similar goals in mind may yield highly disparate final results, depending on how their communities understand those goals,' writes Andrew Oliver. 'At first, it surprised me that each language's creator directly or indirectly identified simplicity as his goal, as well as how differently the three creators and their languages' communities define what simplicity is. For Ruby, it is about a language that feels natural and gets out of your way to do what you want. For Clojure, it is about keeping the language itself simple. For Ceylon, it is a compromise between enabling the language to help, in King's words, "communicating algorithms to humans" and providing proper tooling support: the same general goal, three very different results.'"

Where is the information about Ruby or Ceylon? There are Clojure code snippets designed to illustrate the Clojure philosophy of "simplicity," yet no equivalent Ruby or Ceylon code. Overall, this article seems to be devoid of content...

instance_methods returns an array. select iterates over it, calling a code block on every element. If the block returns true, it adds the element to the array it will return at the end of the loop. |var| is how the element is passed to the block. Blocks can span multiple lines; it's usual to wrap those in do..end, but { } still works.
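For readers who don't write Ruby, here's a minimal sketch of that pattern, using a made-up array rather than instance_methods so the example is self-contained:

```ruby
numbers = [1, 2, 3, 4, 5, 6]

# select calls the block on every element; elements for which the
# block returns true end up in the returned array. |n| is the element.
evens = numbers.select { |n| n.even? }   # brace form, usual for one-liners

# do..end is the conventional form for multi-line blocks:
big_evens = numbers.select do |n|
  n.even? && n > 2
end

puts evens.inspect      # => [2, 4, 6]
puts big_evens.inspect  # => [4, 6]
```

The same shape works on instance_methods, e.g. String.instance_methods.select { |m| m.to_s.start_with?("up") }.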

On the other hand, that's one of the nice things about Ruby. There are different ways to get things done. There are a few people who might consider that a flaw, but that does not seem to be the general consensus.

Well, we only just started on Ceylon and it isn't even finished yet, so it's a bit early to start comparing how many times it gets mentioned relative to the rest, but people have to start some time. :)

If it gains traction, then it will have to deal with feature creep (keeping up with the new hot languages), standard library bloat, backward compatibility, and differing interpretations of the spec by compilers and developers. Then it becomes no longer simple.

Java is the classic example. It's hard not to giggle or mutter "WTF?" when you read Sun's original position paper claiming the language was "simple".

"Wasn't C once considered a relatively "high" language when it first emerged and is now more of a "middle" language?"

No, C has never been considered a "high-level" language in the computer science world when compared to its predecessors such as BASIC and Pascal. (Say what you want about BASIC, but it *is* a high-level language, and vastly more so today than when it first appeared.)

People who insist that C is a "high-level" language (you did not do that) make me cringe. At best, C is a "mid-level" language, lying somewhere between a high-level language and assembly.

I'm not sure that C is that old. FORTRAN, COBOL, and LISP (and a number of others) are all older than C and are higher level than C. Not to mention that the LISP enthusiasts probably consider all other languages to be lower level languages.

"I'm not sure that C is that old. FORTRAN, COBOL, and LISP (and a number of others) are all older than C and are higher level than C."

This is precisely what I was saying. It came later, but was lower-level.

And it was, for the specific purpose of being more performant. It is not easier to use, and it is not "simpler" to learn to use wisely. It is just more efficient at the compiled level, while being drastically less "efficient" at the code level. Personally, I can barely stand to look at it.

It must have been a really long time ago, as I wrote Win98 VxDs in C and in high-level assembly (assembly with complex macros, though none more complex than recursive substitutions). My C driver, which did the exact same thing as my HLA one, was bigger and less understandable than the other one. It turned out that way because function pointers get ugly quickly. A big contrast to assembly, where there are no types, only word sizes and alignment.

According to my anecdotal knowledge, C is only an ubiquitous portable assembly language; nothing more, since a good set of macros in assembly can be more terse than C if you target only one type of CPU.

"Win98" and "a long time ago" have no business being remotely close to each other -- when we're talking programming language lineages, many of the developments still interesting today happened between the 60s and the 80s, and some (LISP) date back to the 50s.

"According to my anecdotal knowledge, C is only an ubiquitous portable assembly language; nothing more, since a good set of macros in assembly can be more terse than C if you target only one type of CPU."

I would tend to agree with this. That was exactly the purpose of C. It wasn't "high level", but it played well cross-platform while Assembly did not. So it was "higher level" than Assembly, in that it abstracted out much of the hardware interface.

One way I look at programming is as a form of decision compression. Instead of writing a zillion "if then" statements to solve a problem, you write a lot fewer statements.

Just as there is no compression algorithm that's best at compressing all data, it will be unlikely for anyone to come up with a "decision compression language" that will be the best at compressing "everything". To make things more complicated, you often need to change certain stuff in the future, so you shouldn't pack everything too tightly, even if the language allows it.
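A contrived Ruby toy of that "decision compression" idea: the individual if/then decisions collapse into data plus one rule (the cutoffs and names are invented for the example):

```ruby
# Uncompressed: one explicit decision per case.
def grade_verbose(score)
  if score >= 90 then "A"
  elsif score >= 80 then "B"
  elsif score >= 70 then "C"
  else "F"
  end
end

# Compressed: the decisions live in a table, the rule is stated once.
CUTOFFS = [[90, "A"], [80, "B"], [70, "C"], [0, "F"]]

def grade(score)
  CUTOFFS.find { |cutoff, _| score >= cutoff }.last
end

puts grade(85)          # => B
puts grade_verbose(85)  # => B
```

Notice the trade-off the paragraph above describes: the table packs the decisions tightly, but changing the rule later means thinking about every row at once.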

Last but not least, I prefer a language not because of the code I need to write, but because of all the code I won't need to write (and debug, and document, etc.). In other words: the libraries and modules are important. Even if a language is very good and simple, and you only have to write a third of the lines to do something, it still is not as good if you have to write everything you may need yourself (database connectors, XML parsers, web clients, big-number support, strong crypto, etc.). In contrast, a language that is three times more verbose but has libraries for nearly everything you need would actually result in you writing a lot fewer lines, and, if the libraries aren't crap, supporting and documenting a lot fewer lines too.

So a language that makes my life simple, isn't necessarily a simple language;).

Use of functional programming, and macros to build DSLs, reduces the code you need to write and can simplify things.

You then need good ffi (foreign function interfacing) to utilize external libraries.

My favorite system (currently) is Gambit-C Scheme. It supports define-macro as well as hygienic macros. It compiles to C, so the FFI is simply writing "in-line" C code if needed. Best of all, it has a 20-year history behind it.

I prefer Perl to Lisp for the reasons I gave in my second-to-last paragraph. I find it easier to find and use the libraries (CPAN). Yes, there's CLiki, but it's far from as good in problem/domain coverage, documentation, etc.

It makes me mad to see a link to an article about the "9 top languages" in which some major (established) players in the field, such as Haskell or OCaml, are not mentioned, while languages-to-be get some nice coverage.

Creating a programming language boils down to being fashionable, rather than doing something neat.

And yet none of these languages is actually simple. Ruby is readable and consistent, Clojure is sparse but confusing, and Ceylon is unknown (ok, maybe it's simple, but I'm not going to learn it to find out).

Its syntax is nothing like Java's. Its structures are nothing like Java's. Both being high-level languages, of course it is almost inevitable that they deal with some of the same data structures, like strings and arrays. But program structure is markedly different.

When it was first becoming popular outside Japan, various comparisons showed Ruby to average about 20% as many lines of code to get the same job done as Java programs that did the same things.

Simple leads to different results because it usually means something more like "quality". Simple is in itself not an absolute value. Instead, the simplicity of something is a ratio of its value to its sucking. So what they're really saying is "I'd like to achieve high value outcomes with the least amount of sucking along the way." There's a lot of ways to do that.

He's a smart guy and that's a good talk, but his arguments about simplicity are a little weird. Trying to use the etymology of the word 'simple' as a justification for design choices? I found Clojure to be a language where first you had to make the leap-to-lisp before it was easy, and making that transition wasn't helped by having the JVM/standard lib/Java syntax as prerequisites (not to mention lein/ant/maven/ide configs/etc...).

Heh, consider what this description says about Clojure's simplicity: "Leiningen

I've seen it as well, and I recommend watching everything by Rich Hickey you can find on the web. He's incredibly insightful and a great presenter. However, if someone only watches one talk of his, I think the talk from Strange Loop would be the best. The second talk I'd recommend is Are We There Yet? [infoq.com]

I can hardly wait for Ruby 2.0. They have promised at least the ability to bytecode-compile scripts and that should do a great deal to promote it for making desktop apps. Currently, it is not usually used for desktop apps because your code is 100% exposed.

Even MacRuby does not really compile, but exposes your raw code. JRuby does, of course, but it's not 100% compatible and requires JVM. JRuby is an admirable project, don't get me wrong... but having native bytecode compilation would be tremendous.

"It's trivial to decompile Java bytecode, and even decompiling machine code isn't all that hard. It really doesn't matter, just use Ruby for desktop apps if you like it."

It is trivial for knowledgeable people to decompile bytecode. It isn't trivial for the majority of commercial software customers. And it is far more trivial to simply read the raw code from non-compiled programs.

Further, don't confuse the task of decompiling with the task of making sense of the decompiled code. In most cases there are no meaningful variable names; instead they get named things like "integer0", or whatever the decompiler decides is a good designation. And it is not generally well-formatted.

That's interesting. The creators of C# have a somewhat similar philosophy: they say they would like it to be a "pit of success", meaning it should be easy to write correct code. But that doesn't mean they removed features that can be abused.

As a consequence, the things you mention (pointers, gotos, operator overloading) are all included. But for example in the case of pointers they are "hidden" (they have to be in an "unsafe" block).

On the other hand, fall-through switch cases, for example, are not allowed in C# at all; they decided those were not worth all the bugs they cause.

Maybe that would be a good idea in an ideal world. But in reality, such behavior would be deeply confusing for people who know C, C++ or Java. And I think that "it should be easy to write correct code" applies to people who already know another language too.

Also, from my experience, fall-through is not that useful anyway. I don't think I ever wrote code in C# where it would be useful. Having two cases for the same code sometimes is useful, and C# does support that.

I actually kind of like that. It also enables you to have 3 or 4 cases that all need different minor initializations (say they all want to initialize a starting condition to different values) to then jump to a common case, which was actually a frequent pattern in assembly programming that's unfortunately difficult in modern languages.
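For comparison, Ruby's case/when takes the same stance C# does: no fall-through, but one branch can match several values, which covers the "several cases, same code" pattern. A contrived sketch (the method and symbols are invented for illustration):

```ruby
def day_type(day)
  case day
  when :sat, :sun                      # several values share one branch
    :weekend
  when :mon, :tue, :wed, :thu, :fri    # each branch exits; no fall-through
    :weekday
  else
    :unknown
  end
end

puts day_type(:sun)  # => weekend
puts day_type(:wed)  # => weekday
```

What neither language offers directly is the "different setup, then jump to common code" pattern described above; there the shared tail usually becomes a separate method.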

That's why there is no easy way to explicitly do things such as pointers, gotos, and operator overloading.

The reason there were no pointers was that pointer manipulations were highly machine-dependent. Java emerged out of Oak, and the slogan "write once, run anywhere" was key to its popularity.

Goto -- came from the whole philosophy that goto leads to bad code.

Operator overloading and multiple inheritance are both examples where subtle shifts in code can lead to enormous shifts in how the compiler views the code. One of the key aspects of Java was making sure that side effects to changing code were contained.

"Operator overloading and multiple inheritance are both examples where subtle shifts in code can lead to enormous shifts in how the compiler views the code."

That and I've still not really seen many, if any convincing arguments where multiple inheritance is a good idea. We've had a few MI zealots extol its virtues here and give us examples of why MI was essential to their project. The problem is, each time they've done so, on Slashdot at least, they've only served to prove they have absolutely no idea.

Program in Java? Everywhere you see interfaces, that's multiple inheritance, they just restricted you to only inherit from the interface, not the implementation. Which means every class that implements it has to rewrite that code. Depending on the interface and the class, that may or may not be a good idea. But I'll frequently find myself writing very similar code for multiple classes that implement the same interface.

What they really needed to do was just block diamond inheritance: inheriting from two classes that share a common ancestor.
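Ruby's module mixins, for what it's worth, address exactly this complaint: the shared code lives in one place and is inherited by every includer. A hedged sketch with invented names:

```ruby
# The module carries a real method body, not just a signature,
# so including classes don't each rewrite it.
module Describable
  def describe
    "#{self.class.name}: #{label}"   # relies on the includer's #label
  end
end

class Book
  include Describable
  def label; "a book"; end
end

class Film
  include Describable
  def label; "a film"; end
end

puts Book.new.describe  # => Book: a book
puts Film.new.describe  # => Film: a film
```

This is roughly "interfaces plus inherited implementation", without opening the door to full multiple class inheritance.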

"Program in Java? Everywhere you see interfaces, that's multiple inheritance"

Yes, but when most people talk about MI they're talking about the ability to perform actual inheritance of real classes, rather than the ability to implement interfaces - it's that that I'm referring to.

"Which means every class that implements it has to rewrite that code. Depending on the interface and the class, that may or may not be a good idea. But I'll frequently find myself writing very similar code for multiple classes that implement the same interface."

Interfaces are about type checking and making your compiler happy; MI is about sharing implementation (which is mixed with type checking in most programming languages). Part of the confusion about interfaces and MI in Java is caused by tutorials and docs which wrongly mention interfaces as a way of doing MI (Sun training, I'm blaming you!).

The biggest problem with MI is the implementation: it leads to lots of "exceptional" cases in the handling of instance variables and method dispatch. If you google for

There are cases where the system libraries define something as a class when it should have been defined as an interface, perhaps with a default abstract implementation. InputStream / OutputStream spring to mind.

Sure, but in an OO language it makes far less sense, which is what I was referring to - specifically why the lack of support for actual MI in Java isn't a problem - apologies for not making that explicit.

In an OO language you'd recognise that algebra defines actions upon an object, and so you'd simply implement algebra the interface against a Matrix class, and define the Matrix specific implementations of algebraic actions there. Then anything that is a matrix, is a matrix, and anything that is a matrix, can have algebraic actions performed upon it.

I don't see how OO changes it. To me you would run into a problem with not wanting to implement everything twice or 100x. For example the connection between sin(x) and the Taylor series I would have in Algebra not Matrix. But if I want to compute a sin function on a matrix space I'm going to want to use the Taylor series. Why should I have to re-implement all that code?

___

Or to be less mathy. I have "imports" and "cars" as classes why not inherit both for Toyota objects?

I don't see why you'd have to implement it many times, you only implement it if it changes, and then you have to implement it.

Most of the solution revolves round breaking down the problem and reusing those broken-down chunks. If you're implementing a function that has many, many lines of code and then complaining about needing to reimplement it, then the chances are that you could have broken that function down more and reused parts of it.

"Or to be less mathy. I have "imports" and "cars" as classes why not inherit both for Toyota objects?"

I don't really understand this example. Are you saying you might have imported cars? If not why not have Toyota inherit from ImportedCar, which inherits from Car? If your assertion is that you might have other types of imports than cars, then your base class is Import, from which Car inherits. I don't really see the problem?
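In Ruby the usual compromise is one class hierarchy plus a mixin for the cross-cutting trait, which sidesteps the Import-vs-Car dilemma entirely; a toy sketch (all names and numbers invented):

```ruby
class Car
  def wheels; 4; end
end

module Import                          # cross-cutting trait, not a superclass
  def customs_duty(price); price * 0.1; end
end

class Toyota < Car                     # a Toyota is-a Car...
  include Import                       # ...and also acts as an Import
end

t = Toyota.new
puts t.wheels              # => 4
puts t.customs_duty(100)   # => 10.0
```

A Suit class could include Import too, without sharing any ancestry with Car, so neither hierarchy has to fork into Import and Domestic.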

"And with that structure how do you handle a GM or a Brooks Brother's Suit? You can't have Import as a base class for Car because not all cars are imports. You can't have Cars fork into Import and Domestic because you also need Import / Domestic for Suits and no Suits are Cars."

This is the problem though: you can keep on adding cases until you break a solution and say "Hey look, I told you multiple inheritance was the solution!" but you're still completely wrong. In this case you need to then question whether inheritance is the right model for the domain at all.

"This is the same case from the start. Its just taken several rounds for you to see why you can't use a single hierarchy."

But I've done exactly that. I've given you a solution. You're using a really weak argument here.

"I don't want a taxes data structure I want hundreds of methods and objects having to do with imported vs. domestic. Imported objects may have far service offices. They have shipping times. They may have multiple conflicting law sets that apply to them."

In an OO language you'd recognise that algebra defines actions upon an object, and so you'd simply implement algebra the interface against a Matrix class, and define the Matrix specific implementations of algebraic actions there. Then anything that is a matrix, is a matrix, and anything that is a matrix, can have algebraic actions performed upon it.

While I can't speak for Haskell's implementation of algebra, I'd have to say that mathematically, an algebra is far more than just a definition of "actions upon an object" in the OO sense. An algebra also defines the results of those actions, while in OO, an interface only defines the type signatures of those actions. So you can happily define your interface as "supports-add-and-subtract" without defining that "x - x must equal zero". This is only half of an algebra, if that.


That and I've still not really seen many, if any convincing arguments where multiple inheritance is a good idea.

There is value in declaring that instances of a class can participate in some pattern or other. The concept of interfaces is a way of doing this that is used in Java (and, in variations, in a number of other languages too). However, it uses a different dispatch model from direct inheritance: straight indexing into a vtable won't work (the point of dispatch doesn't have enough information at compile time, so a more complex, and somewhat slower, lookup is required).

"There is value in declaring that instances of a class can participate in some pattern or other. The concept of interfaces is a way of doing this that is used in Java (and, in variations, in a number of other languages too)."

Agreed, you at least need the facility to do that.

"Ontologically, multiple inheritance is not a problem either: it's just the is-a (well, strictly the is-a-specialization-of) relationship."

Indeed, but how often is something actually two things? That's what multiple inheritance implies.

I tend to think "simplicity" was part of the goal in not including those things. As you point out it was more simplicity for the compiler, but still simplicity for the developer flows out of it in this case.

Take language A and use libraries from language B where A and B have totally different ideologies about everything. We need a good LISP and we need a good set of modern libraries for that LISP. But yeah mixing them kinda sucks.

I'd rather see Ada come into common use. Ada actually has a lot of uses, supports some of the more exotic programming paradigms, it's easy to read and it'll smack you in the head if you write something in a bad way (as in, it won't even compile).

Comparing Ada to COBOL is an insult. Yes, it was based on Pascal, but you should educate yourself about the language before drawing conclusions about it. Ada isn't meant for your fancy programming. Ada is meant to get the job done. And it's very good at that.

Ada wasn't bad, and certainly capturing more bugs at compile time is wonderful. One of the things I love about Haskell is that generally if the program compiles, it does what you wanted it to. I save a ton of time on debugging.

A new Ada-like language, procedural with light object orientation, static and strong compile-time checks, extensive libraries, and financial backing would be good.

As an aside, Ada doesn't have closures, and it doesn't have tail recursion... even in the 1970s this was the reason, ironically

And that's the common misconception. The latest standard versions of Ada are fully capable of dealing with objects.
Actually, Ada pretty much fits the description you've given, except for the extensive library. The financial backing is already there due to the military projects attached to it. If you want reliable software for potentially dangerous things, Ada is the only acceptable choice.

And Ada doesn't support tail recursion for the simple reason that well written software shouldn't need recursion. Additionally it's actually terribly inefficient, Ada was also meant for embedded systems. Do you realise what happens every time you call a function? Your processor puts the program counter and other registers on the stack and then jumps to the function call.

And Ada doesn't support tail recursion for the simple reason that well written software shouldn't need recursion. Additionally it's actually terribly inefficient, Ada was also meant for embedded systems. Do you realise what happens every time you call a function? Your processor puts the program counter and other registers on the stack and then jumps to the function call.

When a language supports "tail recursion" that actually means it does "tail recursion elimination". Which means that the processor does NOT put anything on the stack per iteration. There is nothing inefficient about tail recursion when the language supports it.
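To make the distinction concrete, here is a tail-recursive sum next to the loop that tail-recursion elimination effectively turns it into. Plain Ruby (which, by default, does not perform the elimination) is used purely as notation:

```ruby
# Tail-recursive: the recursive call is the very last thing evaluated,
# so there is nothing left on the stack to come back to.
def sum_rec(n, acc = 0)
  return acc if n.zero?
  sum_rec(n - 1, acc + n)   # tail position
end

# What an eliminating compiler effectively produces: same work,
# constant stack depth.
def sum_iter(n)
  acc = 0
  until n.zero?
    acc += n
    n -= 1
  end
  acc
end

puts sum_rec(100)   # => 5050
puts sum_iter(100)  # => 5050
```

Without the elimination, sum_rec pushes one frame per step; with it, the two definitions compile to essentially the same code.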

Tail recursion to me simply means using a recursive call as the return value of a function. It doesn't imply optimization.
So you might want to consider how rewriting what the programmer wrote goes against the Ada philosophy. You should keep in mind that predictability is important. Hence tail recursion really would lead to actual recursion in Ada, with possible stack overflows.

And Ada doesn't support tail recursion for the simple reason that well written software shouldn't need recursion. Additionally it's actually terribly inefficient, Ada was also meant for embedded systems. Do you realise what happens every time you call a function? Your processor puts the program counter and other registers on the stack and then jumps to the function call.

That's what happens without tail recursion, which is why you want tail recursion. What happens with tail recursion is the call gets rewritten as a jump, so nothing extra is pushed onto the stack.

To me, tail recursion doesn't imply the compiler rewriting what the programmer wrote; it simply means using recursion in the return line. And the problem with that is that Ada will do exactly what you wrote. That's pretty much the entire point of the language. If you read over the code, it's completely predictable what will happen on execution. Tail recursion as you state it would lead to unpredictable behaviour. As such, Ada will not allow it.

To me, tail recursion doesn't imply the compiler rewriting what the programmer wrote; it simply means using recursion in the return line.

A "tail recursive call" is one where the recursion happens in the return line. For any language that allows recursion at all, "tail recursion" as a language property means the language rewrites tail recursive calls as iteration during compilation or execution.

In any case, if the language wanted to allow recursion and offer reliability, it could just flag recursive calls it couldn't rewrite as iteration.

Ruby's syntax is exactly why most people who like it, use it. And it's not as if its syntax were unusual; almost all of it was "borrowed" from existing languages, though in a generally consistent way, so it is still coherent.

But okay. You don't like the syntax. That's your prerogative. About being "slow and unreliable", however:

It is no slower than other modern "dynamically typed" languages, though it is generally true that, as a group, they are slower than compiled languages like C or even Java.

Ruby is far from consistent in my opinion. But that's subjective so I'll skip that one.
Chaining dot operators doesn't add to the readability of a language as some people seem to think it does. Yet for some reason the Ruby crowd seems to assume this is a good idea.

It actually is slower than the competing languages. Python (also dynamically typed) is a lot faster. In fact, in certain cases even PHP beats Ruby at speed of execution. And your assumption that Java can only be run by a virtual machine is just plain wrong.

"Ruby is far from consistent in my opinion. But that's subjective so I'll skip that one.
Chaining dot operators doesn't add to the readability of a language as some people seem to think it does. Yet for some reason the Ruby crowd seems to assume this is a good idea.... It actually is slower than the competing languages."

It must all be kept in perspective. If you think Ruby syntax is inconsistent in comparison to Python, you need your head examined.

And while I will grant that Ruby is -- a little, not a lot -- slower than Python on benchmark suites, code maintainability is important, and if you want to compare readability (and, as I mentioned, actual syntactical consistency), Ruby is the clear winner. Significant-whitespace languages simply aren't as readable as the alternatives. I know some die-hards disagree, but blind stubbornness doesn't change that.

I'd say in terms of readability it might actually be ranked below SPARC assembly (and that's not a compliment).

What do you have against SPARC assembly? It is an extremely straightforward three-address instruction set without complications. The only slightly challenging part is register windows. It is by far my favourite instruction set to write for (admittedly, I don't really write assembler anymore...).

I don't have much against SPARC assembly, in fact. I'm simply not fond of its syntax. It's not easy on the eyes, and very hard to skim over to get a rough idea of what the code does. Often confusing naming combined with abuse of %. It just doesn't add to the readability. More than once I ended up having to rewrite code simply because one symbol was missing, often one that didn't seem to have much use other than to annoy the programmer. So the first language I think of when comparing something in terms of readability is SPARC assembly.

I agree that was the case on the older systems. Luckily with the increase in processor speed it's not that much of a problem any longer. But I must say I have noticed that Java does seem to work more smoothly on Intel processors. It might have something to do with the compiler Sun/Oracle has been using.

I agree that was the case on the older systems. Luckily with the increase in processor speed it's not that much of a problem any longer.

I can only assume you've never used Eclipse.

Dumb programmer approach to Java:

1. Don't even think about memory allocation. Forget to free up references to objects you're not using and lose references to objects you still need, so your program leaks memory or randomly stops working.
2. Notice that the program freezes regularly when the garbage collector runs.
3. Increase the amount of RAM allocated to the program to stop the garbage collector running and compensate for those memory leaks you forgot to clean up.

The slowness in Eclipse is down to SWT, which doesn't even use the garbage collector. For proper Java code, the incremental garbage collector that has been the default for many years now prevents the kinds of slowdowns and pauses that used to be common (and afflicted other languages, like most Common Lisp implementations).