Java is definitely the #1 language to hate on these days. C, and to a lesser extent C++, is like the retro-hipster language now, but Java is the corporate language. It's the Man's language. If you want to be in with the cool crowd, talk about how awful Java is.

The trendy academics these days are all about Haskell. And the more illegible you can make your code with obscure operators that no one knows the meaning (or precedence) of, the better. The ultimate goal is to write your entire program using nothing but operators.

Ok, on a more serious note, having a comprehensive standard library is extremely important. A language can have the best design and features in the world, and it won't mean a thing if I find myself having to re-implement basic algorithms from CS 101. And my reimplementations are likely to be slower and have more bugs than what I'd get in a standard library.

Other features:

Having a clean and readable syntax is nice. That means no Perl or PHP. Python is great here. Syntactic sugar can be great (and should be used) when it's done right, but can be ugly when done wrong.

I've bounced back and forth on static versus dynamic typing. My current view is that static typing is worthwhile if only for self-documentation, which becomes increasingly important as a program grows larger. A hundred-line script probably doesn't really need type information, but a thousand-line program definitely does. This is mostly for documenting the types of function parameters, return types, and class members, though. For local variables, type inference is great. In any case, strong typing is a must. The only exception to strong typing I like is automatic conversion to string (using a toString function) for string concatenation.
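For what it's worth, a sketch of this split in Python, whose optional annotations document the signature while locals stay inferred (the function and names are invented for illustration):

```python
from typing import Optional

# The annotated signature documents parameter and return types for readers;
# locals inside the body need no annotations at all.
def find_user(users: dict[str, int], name: str) -> Optional[int]:
    user_id = users.get(name)  # local variable: type is inferred
    return user_id

print(find_user({"alice": 1}, "alice"))  # 1
```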

Automatic memory management is great 95% of the time. For the 5% where manual memory management is necessary for performance, smart pointers like those in C++11 are much better than manual allocating and freeing.

List comprehensions (and related comprehensions) are godlike. This is probably the feature I miss most when I'm writing Java.
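In Python, for example, the comprehension is a one-liner where Java (pre-streams, anyway) needs an explicit loop:

```python
nums = range(10)

# Collect the squares of the even numbers in one expression.
even_squares = [n * n for n in nums if n % 2 == 0]
print(even_squares)  # [0, 4, 16, 36, 64]

# Dict and set comprehensions follow the same pattern.
square_of = {n: n * n for n in nums if n % 2 == 0}
```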

Generics are essential (Go, what were you thinking?). I shouldn't have to reimplement the same data structure multiple times or use type hacks.
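As a rough sketch of what generics buy you, here is a single type-parameterized stack written with Python's typing module, instead of one reimplementation per element type:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """One implementation, usable for any element type."""
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

# The same class works for ints, strings, or anything else.
ints: Stack[int] = Stack()
ints.push(42)
print(ints.pop())  # 42
```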

Lambdas are pretty cool. Mostly for supplying short functions to things like sort.

I have recently warmed up to the idea of monads (though the way that Haskell presents them is awful), mainly for one use in particular: the Maybe (or Option or whatever) monad is great when you're trying to get something, but the process of getting it can fail at several steps along the way. Instead of having to check for each failure individually, you can write your code as if everything succeeds, and then check and handle failure once at the end.
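A minimal sketch of that pattern in Python (the Maybe class and the lookup helper are invented for illustration): each step in the chain can fail, but there is only one check, at the end.

```python
class Maybe:
    """Toy Maybe monad: wraps either a value or nothing."""
    def __init__(self, value=None, present=False):
        self.value = value
        self.present = present

    @staticmethod
    def just(value):
        return Maybe(value, True)

    @staticmethod
    def nothing():
        return Maybe()

    def bind(self, f):
        # If any earlier step failed, skip the rest of the chain.
        return f(self.value) if self.present else self

users = {"alice": {"address": {"city": "Springfield"}}}

def lookup(key):
    # A step that fetches `key` from a dict, failing gracefully.
    return lambda d: Maybe.just(d[key]) if key in d else Maybe.nothing()

# Write the chain as if every lookup succeeds...
result = (Maybe.just(users)
          .bind(lookup("alice"))
          .bind(lookup("address"))
          .bind(lookup("city")))

# ...and handle failure exactly once, at the end.
print(result.value if result.present else "not found")  # Springfield
```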

So grading on this list (eight points), Java would get a 7/8 (lambdas and monads were added in Java 8, though I haven't tried them yet), C++ would get a 4/8, and Python a 6/8. This doesn't really reflect my actual opinions of the languages, though.

A good programming language is different from other programming languages. There are no "better" vs. "worse" differences, though there are trivial vs. profound ones. Great programming languages are profoundly different.

I once developed an assembly language and corresponding computer architecture that, rather than operating on random-access registers for working memory, used a circular register list. The language and architecture were perhaps unique in that this was an interesting yet terrible design decision.

By your criteria for a good language, I suspect that my assembly language was truly a great one.

Derek wrote:I've bounced back and forth on static versus dynamic typing. My current view is that static typing is worthwhile if only for self-documentation, which becomes increasingly important as a program grows larger. A hundred-line script probably doesn't really need type information, but a thousand-line program definitely does. This is mostly for documenting the types of function parameters, return types, and class members, though. For local variables, type inference is great. In any case, strong typing is a must. The only exception to strong typing I like is automatic conversion to string (using a toString function) for string concatenation.

My problem with types is that most languages have too damn many of them. I don't give a flying fuck if this is a short or a long or an int or a float or a double. It's a number! (OK, maybe there are some uses where delineating integers v. non-integers could be useful, but that doesn't excuse how many types of number there are in many languages [yes, I get that it's for legacy/high-performance considerations, but for what I use them for, it's cruft]). Personally: string, number, and array are all you need. Maybe have a separate integer for counting purposes.

I think a true integer type is essential; e.g., I think JavaScript's "everything is a double" approach is terrible. Two reasons:

First, on a subjective note, I basically could not write the software I work on every day in that system, because we have to be able to represent integers that can be up to 64 bits, so a double doesn't have enough precision. (If you give me char* I could do it, but only by pretending that's a 64-bit integer.) If we wrote in JavaScript, we'd literally have to define our own number class and call add(a,b) everywhere instead of a+b.
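The precision problem itself is easy to demonstrate: an IEEE double has a 53-bit significand, so beyond 2^53 it can no longer represent every integer. In Python:

```python
# A double (Python's float) has 53 significand bits.
big = float(2**53)
print(big + 1 == big)  # True: adding 1 is lost to rounding

# Python's int, by contrast, is exact at any size.
print(2**53 + 1 == 2**53)  # False, as it should be
```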

Second, I think the "right" way for numbers to behave is to have arbitrary-precision integers. And if you have that, then you either need to have separate integer/float types or go the "everything is an arbitrary-precision rational" approach. Which I guess you could do if you really want, but it seems like overkill in my book.

There are a lot of cases where you want fixed-size integers. Graphics was already mentioned, as well as anything performance critical, not to mention memory addressing, but also many ciphers require arithmetic in modulo 2^32 or 2^64. If you can't implement Salsa-20 without mod operators fucking everywhere, then it's an annoying language. Ada is nice in that while it doesn't provide you with pre-defined unsigned integers, you can define your own unsigned ints, e.g. "type u64 is mod 2**64;".
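In a big-int language without fixed-width types, you end up masking by hand; here is a sketch of the kind of add-and-rotate step that ciphers like Salsa20 use, with explicit mod-2^32 arithmetic in Python:

```python
MASK32 = 0xFFFFFFFF  # simulate a u32 in a big-int language

def add32(a, b):
    # Addition mod 2**32 -- the mask is the "everywhere" part.
    return (a + b) & MASK32

def rotl32(x, n):
    # 32-bit left rotation, again with manual masking.
    return ((x << n) | (x >> (32 - n))) & MASK32

print(hex(add32(0xFFFFFFFF, 1)))   # 0x0 -- wraps around
print(hex(rotl32(0x80000000, 1)))  # 0x1
```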

So...

1) Static typing and dynamic typing both have their advantages, but type safety is required: 1 + "1" should be an error.
2) Bounds checking is a must.
3) There should be libraries available that support your needs, and that aren't a goddamned mess.
4) Some sort of clean way to handle errors (i.e. not goto).
5) It should be consistent and readable, but not too verbose; i.e. neither "@->?=+7" nor "if a equals 86 or b equals 99 then" should be mistakable for valid code.

Tooling. The machine should be doing as much of the grunt work for me as possible. At minimum it should be doing the formatting, detecting dead code, telling me when I'm reimplementing standard language features, getting rid of unused dependencies, and linting.

Perf. I don't particularly care how fast the core language is. It's probably fast enough, and if it's not I can hack out the slow bits and implement them in another language. What is essential, though, is the ability to figure out what the slow bits are. A good benchmarking tool is a requirement. And the bottleneck is IO, likely as not, so let's not forget about that.

A consistent design philosophy. Don't try to be all things to all people, language designers *coughscalacough*. Have some courage of your convictions. Reject feature requests that don't belong in the language.

Not too much syntax. Cancer of the semicolon and all that.

A good concurrency model. That's potentially very good for perf, obviously, but I also find it easier to reason about medium-to-large programs if they're broken into time-independent chunks.

Numeric types depend on what the purpose of the language is. For a primarily scripting language, big ints are good, but for a primarily systems language fixed-width types are important. You could still have big ints, even as the default int, but fixed-width types need to be available. I don't like the C/C++ method of making the size of integers depend on the compiler target, though. An int should always be the same size, no matter how the code is being compiled. Making the size explicit in the type, like int32 or uint64, is a plus.

In any case, integers and floats should be separate types.

Thesh wrote:2) Bounds checking is a must

This goes without saying these days. C is the only language that still doesn't really do this.

Thesh wrote:4) Some sort of clean way to handle errors (i.e. not goto)

I agree. I think classic exceptions or a similar approach are the best way to go. Go's solution of multiple return values, one of which is an exception that must be explicitly checked, is not acceptable. It leads to too much boilerplate.

I don't like Java's checked exceptions, though; they don't work well with polymorphism, or even with simply modifying the code in the future. I've written way too much code consisting of "catch all exceptions and wrap them in a RuntimeException". I would make all exceptions unchecked instead.

I don't see how that's significantly different from traditional exceptions, unless the except is required for every call (no passing the exception up the callstack?).

The problem with the Go model is that every function call that can cause an exception has to be followed by an explicit, manual error check. And if you can't handle the error and want to pass it up the call stack, then all of your callers have to have an explicit, manual error check too. That makes writing correct error-handling code an O(stack depth) operation; that's a ridiculous amount of boilerplate.
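The contrast is easy to sketch in Python (all function names invented for illustration): with value/error pairs, every intermediate caller repeats the check, while with exceptions the intermediate frames write nothing at all.

```python
# Simulated low-level operations.
def read_file_go(path):
    return None, FileNotFoundError(path)   # Go style: (value, error) pair

def parse_go(data):
    return data, None

def read_file(path):
    raise FileNotFoundError(path)          # exception style

def parse(data):
    return data

# Go style: every intermediate caller must check and re-propagate by hand.
def read_config_go():
    data, err = read_file_go("app.conf")
    if err is not None:
        return None, err                   # boilerplate at this level...
    parsed, err = parse_go(data)
    if err is not None:
        return None, err                   # ...and again for every call
    return parsed, None

# Exception style: intermediate code says nothing about errors at all.
def read_config():
    return parse(read_file("app.conf"))    # failures bubble up on their own

_, err = read_config_go()
print(type(err).__name__)  # FileNotFoundError
```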

And we already know that we can't trust programmers to do this. How many times have you seen a call to malloc without checking if the return value was null? Those programmers are going to forget to check the error value of Go functions too, and now what good is your exception handling model?

With my proposal, you are forced to use "call" if a function can result in an exception; if you leave out the except, it is the same as having it but leaving it empty. It is basically two return values, as they don't bubble up, forcing you to check if an exception occurred, but with each return value being mutually exclusive and the check handled by the mandatory syntax.

Thesh wrote:With my proposal, you are forced to use "call" if a function can result in an exception; if you leave out the except, it is the same as having it but leaving it empty. It is basically two return values, as they don't bubble up, forcing you to check if an exception occurred, but with each return value being mutually exclusive and the check handled by the mandatory syntax.

See, no, I think that's terrible. Because now let's say that I change some function ten calls deep. Maybe it's not even my code, it's some library I use and I can't change it. Or maybe it is my code, but it's called by a framework I don't own. Now everything in between has to be changed to handle the exception, or the exception is going to get dangerously ignored. Neither option is acceptable to me.

If a function didn't have an exception, but now does, it is basically a different function and requires code changes to add the "call" syntax or it is a compilation error. The point of the design is that you can't accidentally ignore exceptions, and you can't use the result of opening a file without verifying the open function was successful.

Thesh wrote:If a function didn't have an exception, but now does, it is basically a different function and requires code changes to add the "call" syntax or it is a compilation error. The point of the design is that you can't accidentally ignore exceptions, and you can't use the result of opening a file without verifying the open function was successful.

The problem is that it leaks implementation details to higher functions that don't and shouldn't need to care about them. I should be able to write a mid-level function that doesn't care whether it's operating over a file, network, or local memory, with any failures simply passed along to higher-level code that does care and can deal with the error. Mid-level code doesn't need to explicitly check for errors, but if opening a file fails, then an error handler at some higher level is executed and the result is never operated on by the mid-level function.

If you don't support this sort of operation, then programmers will just make macros for try-catch-rethrow. And if your default behavior for not writing a handler is to silently ignore the error without propagating it, you're going to get serious problems when programmers forget to rethrow the error.

I don't think it's as big of a problem as you make it out to be. It's not much different from C exception handling; if you have a module that is prone to failure due to permissions or the like, you probably won't have those dependencies hidden so deeply within the libraries, but instead they will be major parts of your program. The most common exception you will need to handle in one of those libraries is probably out-of-memory errors, which should probably just be fatal in most cases (and I can't even think of the last time I ran out of memory).

Thesh wrote:I don't think it's as big of a problem as you make it out to be. It's not much different from C exception handling; if you have a module that is prone to failure due to permissions or the like, you probably won't have those dependencies hidden so deeply within the libraries, but instead they will be major parts of your program. The most common exception you will need to handle in one of those libraries is probably out-of-memory errors, which should probably just be fatal in most cases (and I can't even think of the last time I ran out of memory).

But C exception handling is bad. In particular, it's extremely prone to programmer errors. You don't want your exception handling to be "not much different from C exception handling".

And the reason I can say that this does actually matter is that I have written catch (Exception e) { throw new RuntimeException(e); } far too many times in Java to "handle" checked exceptions, when all I actually want is for the exception to be logged at the top level without having to mark every function and every interface along the way as throwing every possible exception.

Thesh wrote:There are a lot of cases where you want fixed-size integers.

I have a couple responses.

I'm not saying that arbitrary precision is the only thing the language can have. Ideally, it would provide the building blocks for one to be able to create custom fixed-width types, and provide them in a library, assuming that compiler-optimization complications could be dealt with satisfactorily.

But I definitely think that the "default", "what you get if you don't ask for something different" int type should be a real integer; if I say 3**100, the result should be 515,377,520,732,011,331,036,461,129,765,621,272,702,107,522,001.
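For what it's worth, Python already behaves this way; its default int is a true arbitrary-precision integer:

```python
# Python's default int is a real integer: no overflow, exact results.
n = 3**100
print(n)               # 515377520732011331036461129765621272702107522001
print(n.bit_length())  # 159 -- far past any fixed-width type
```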

Thesh wrote:With my proposal, you are forced to use "call" if a function can result in an exception;...

I am maybe willing to accept this under the condition that it is possible to write something like foo(bar()), where either or both can throw an exception, as an expression. The inability to do that with return codes is a major reason that, while I dislike exceptions, I despise not having exceptions. I also think that it should not start a block; i.e. in your example, x should be visible after the call block is done.

But I'm not sure how much I agree with Derek about intermediate functions. I do kind of agree that I should be able to write functions internal to a module without worrying about that.

Thesh wrote:It's not much different from C exception handling;

In some ways that's true... but I don't view that as a good thing. C's error handling is terrible almost top to bottom. Return codes are awful (and used about as consistently as your typical politician's platform), errno is a disaster, goto being the best option in many cases is not comforting, ...

EvanED wrote:But I definitely think that the "default", "what you get if you don't ask for something different" int type should be a real integer; if I say 3**100, the result should be 515,377,520,732,011,331,036,461,129,765,621,272,702,107,522,001.

I guess it depends on what you are doing with your code. At work, I pretty much use integers as identifiers, counters, hashes, indices, etc. Any math I do is limited to decimal types (currency), and I'm not dealing with numbers larger than a few hundred billion (and that's for reports where only 4-6 significant digits matter). That said, if you are doing math heavy/sciencey stuff I could definitely see arbitrary precision as useful; maybe that makes more sense for dynamically typed languages, leaving fixed width integers for statically typed languages where you have to specify anyway.

EvanED wrote:I am maybe willing to accept this under the condition that it is possible to write something like foo(bar()), where either or both can throw an exception, as an expression. The inability to do that with return codes is a major reason that, while I dislike exceptions, I despise not having exceptions. I also think that it should not start a block; i.e. in your example, x should be visible after the call block is done.

Yeah, I figured you could have any number of "except" statements with any number of types in the call block. There is no reason why mixing functions that had exceptions with those that don't would be a problem either.

EvanED wrote:In some ways that's true... but I don't view that as a good thing. C's error handling is terrible almost top to bottom. Return codes are awful (and used about as consistently as your typical politician's platform), errno is a disaster, goto being the best option in many cases is not comforting, ...

Well, I'm talking purely from the point of view that C doesn't have errors that bubble up. That's not the problem with error handling in C; the problems are that you aren't forced to handle errors, and that you have to clean up manually (no RAII or garbage collection), which is what makes exception-style handling in C really annoying. Having a block of code that says "everything in here is absolutely peachy, no matter what" is the main objective.

Thesh wrote:EvanED wrote:But I definitely think that the "default", "what you get if you don't ask for something different" int type should be a real integer; if I say 3**100, the result should be 515,377,520,732,011,331,036,461,129,765,621,272,702,107,522,001.

I guess it depends on what you are doing with your code. At work, I pretty much use integers as identifiers, counters, hashes, indices, etc. Any math I do is limited to decimal types (currency), and I'm not dealing with numbers larger than a few hundred billion (and that's for reports where only 4-6 significant digits matter). That said, if you are doing math heavy/sciencey stuff I could definitely see arbitrary precision as useful; maybe that makes more sense for dynamically typed languages, leaving fixed width integers for statically typed languages where you have to specify anyway.

So I agree that they only help in edge cases. But at the same time... edge cases are exactly where bugs lurk.

Bounds checking eliminates the main cause of severe problems from integer overflow (where you perform some arithmetic operation and then allocate a buffer whose size is the result -- and in the case of an integer overflow get a memory block that is too small, winding up with a buffer overflow). But in general, I think that if we, as an industry, want to start writing software that isn't terrible, we need to start accepting more help from the system in terms of correctness. And I think that things like arbitrary-precision integers (and decimal floating point, while I'm at it) are enough of a step in that direction to make them worth it.
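The classic failure mode is easy to simulate (here faking 32-bit size arithmetic with a mask in Python): the size computation wraps around, the undersized allocation "succeeds", and the later writes run off the end.

```python
MASK32 = 0xFFFFFFFF  # simulate 32-bit size arithmetic

n_items = 2**30        # e.g. an attacker-controlled count
item_size = 8
size = (n_items * item_size) & MASK32  # wraps around in 32 bits

print(size)                 # 0 -- we'd allocate a zero-byte buffer
print(n_items * item_size)  # 8589934592 -- what we actually needed
```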

Thesh wrote:As I said above, failure to allocate memory should just be fatal anyway, especially if you are allocating a small amount of memory. For the vast majority of use-cases, there is no way to recover.

Usually, yeah, but there are some applications that may want to fail more gracefully than that. For example, a program could log memory profiling information before failing.

Moreover, there frequently is a more sensible way to handle an out-of-memory error than just choking and dying. I still remember getting deeply involved in a project in Audacity and trying to apply an effect to a large chunk of audio, only to have it run out of memory for its temporary buffer, crash, and lose all my unsaved work. Wouldn't it have made more sense to just cancel the goddamn operation!?
