Posted
by
Soulskill
on Wednesday June 19, 2013 @05:32AM
from the hello-world dept.

Aardappel writes "Lobster is a new programming language targeting game programming specifically, building on top of OpenGL, SDL 2 and FreeType. The language looks superficially similar to Python, but is its own blend of fun features. It's open source (ZLIB license) and available on GitHub."

Amusingly, this is somewhat the answer to your question: most programming languages avoid Unicode characters because they increase the risk of corruption when code is transmitted between systems. Unfortunately, there are still far too many applications, sites and programs that don't properly support Unicode, which means bugs could arise in source code for no reason other than loading it, manipulating it, and saving it in the wrong text editor.

But I agree, it's a sad state of affairs that we can't rely on proper Unicode support even now.

This is one of my favourite things about .NET. All strings are Unicode (UTF-16) by default. You don't have to do any fancy trickery to get the language to interpret your string as UTF, and all the functions (assuming no bugs) work properly for international characters. In most other languages, you have to remember to precede the string with some character to signify that it's Unicode, strange things start happening when you mix Unicode and non-Unicode strings, and some functions don't work properly with Unicode strings to begin with. The same goes for base-10 decimal numbers. It's a native type. You don't have to import some library and write a = b.add(c) every time you want to add a couple of numbers (which gets really messy with more complex math).
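For comparison, Python 3 took the same path on the string side (all strings are Unicode by default) but not on the decimal side, which illustrates exactly the friction described above. A minimal sketch:

```python
from decimal import Decimal

# Python 3 strings are Unicode by default -- no prefix or trickery needed
s = "naïve café"
print(len(s))   # counts characters, not bytes

# But exact base-10 arithmetic still requires an imported library type,
# unlike a language where decimal is a native, built-in type
print(0.1 + 0.2)                        # binary float: 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2"))  # exact decimal: 0.3
```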

Bull. Microsoft's refusal to interpret byte strings as UTF-8 is the problem. The fact that you have to use "wide characters" everywhere is by far one of the biggest impediments to I18N.

Unicode in bytes with UTF-8 is *TRIVIAL*. Look at the bytes and decode them. Variable length is not a problem, or if it is, then you are lying about UTF-16 being so great, because it is variable length as well! And if there are errors you can do something *intelligent*, like guess an alternative encoding (thus removing the need
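The decode-and-fall-back approach described here can be sketched in a few lines of Python (the choice of Latin-1 as the guessed legacy encoding is just an illustration):

```python
# Decoding UTF-8 really is just looking at the bytes
raw = "héllo wörld".encode("utf-8")
text = raw.decode("utf-8")

# And if the bytes are *not* valid UTF-8, you can do something
# intelligent, e.g. fall back to a guessed legacy encoding
legacy = "héllo".encode("latin-1")    # b'h\xe9llo' -- not valid UTF-8
try:
    text2 = legacy.decode("utf-8")
except UnicodeDecodeError:
    text2 = legacy.decode("latin-1")  # guessed alternative encoding
```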

There's an interesting reason, though. Consider building a trie around Unicode chars. Granted, this may not be a major reason, but UGH! There are a lot of advantages to having a small alphabet. The early languages usually didn't even allow both upper and lower case. Well, memories have expanded, processors have sped up, etc. But Unicode is still too verbose for many algorithms to work well. And using bytes and UTF-8 yields different problems.

Any data structure programmer worth their paycheck knows that a trie is an abstract structure which can be realised in many different ways. It is logically a tree of "nodes", where each "node" is a map from a digit (where the key is expressed in some radix) to another node. That map can be implemented in multiple ways. The simplest is an association list (sorted or unsorted), but it could be a simple array, a binary search tree (often realised as a terna
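One of those realisations, sketched in Python with each node as a plain dict (the `insert`/`contains` helper names are just for illustration): because the per-node map is sparse, the alphabet size doesn't matter, and full Unicode keys cost nothing extra.

```python
# Minimal trie sketch: each node is a dict mapping one character
# to a child node; a sparse map, so alphabet size is irrelevant.
_END = "_end"  # sentinel key marking the end of a complete word

def insert(trie, word):
    node = trie
    for ch in word:
        node = node.setdefault(ch, {})
    node[_END] = True

def contains(trie, word):
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return _END in node

trie = {}
insert(trie, "cat")
insert(trie, "caté")   # Unicode keys work exactly the same way
```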

OK, I'm not a compiler builder. And a hash table would be better for a symbol table. And I was thinking about a slightly different representational problem, for which a Trie would also not be the correct data structure to use, but which seemed to have the same problem. (It's actually an n-dimensional list structure...though less general than that implies. And I'm probably going to slap a restrictive upper limit on n...at least if I can figure a way to do so that won't choke things up.)

Been there, done that. Look specifically at APL [wikipedia.org] in the 60s. Functions were represented by single characters which you needed a special keyboard to type. For example, instead of typing the string floor, instead it was represented by what is now Unicode Character 'LEFT FLOOR' (U+230A) [fileformat.info] and required a special terminal to reproduce them. This limited where you could input and also display APL code.

One evolution of APL was the A+ [wikipedia.org] language leading finally to K [wikipedia.org] in the 90s. Having these special character requirements was too much of a pain in APL so all special characters were replaced by tuples of ASCII characters that were already common. In K, 'floor' was now expressed as _: which is no easier to guess the meaning of if you don't know the syntax, but now you need only standard ASCII to represent it.

'Son of K' was Q [wikipedia.org] which comes full circle replacing _: with the keyword floor.
Iverson's argument in developing APL was that the terseness achieved by using notation (single characters) meant that you could express concepts more concisely. This in turn meant that complex concepts were easier to visualise. There's a lot to be said for this, but I think Q now provides a much happier medium between the two perspectives.

Shut your mouth when grownups are talking. ;) What's wrong with Perl? It only looks odd to you because you don't know the language. To me (a 20-year C programmer and 10-year Perl programmer) it's extremely straightforward.

You've been programming for at least 20 years. That means you've started when things weren't buried behind seven layers of abstraction but had to be done by hand. In languages that didn't help you all that much, but didn't get in the way of letting you get things done either. So, like me, you've seen things those young whippersnappers wouldn't believe.

Anyway, about perl, I've never seen why it got such a bad rap for excessive punctuation. The sigils on variables aren't that weird, even BASIC used them when

"You've been programming for at least 20 years. That means you've started when things weren't buried behind seven layers of abstraction but had to be done by hand. In languages that didn't help you all that much, but didn't get in the way of letting you get things done either. So, like me, you've seen things those young whippersnappers wouldn't believe."

Those languages still exist, and real programmers learn them and even more powerful stuff that makes you a far better programmer... Like Assembler.

My boss would have defined "better" as the 10x programmer who got done in 1 month what I'd said would take 10, but left zero documentation, unit tests or comments, and code so brittle that the slightest deviation from spec brought the entire mess crashing down around our ears. Sure, he was 2x as expensive, and it took me nearly 12 months to sneak something past the powers that be that reduced my daily support request queue back to what it was prior to him coming and working his magic, but go

The strangeness of Perl at times is that the excessive punctuation changes meaning in context. It makes sense once you know the rules, but even then you can run across something very strange and head-scratching if you're not actively using it all the time. That is, there's a lot of overloading. So almost every time I use Perl I'm still referring to my dog-eared O'Reilly quick reference guide and manual, despite having used Perl since 1989.

How is $storage in perl any less ambiguous? It might be a simple scalar, a scalar reference, an array reference, a hash reference, or an object of indeterminate type. At least in Ruby you know - for certain - that it is a class instance variable.

It hides formatting information in whitespace, something that no sane person would do.

It also ends lines at the newline rather than at a ';', which means that you can end up with long lines at times, where normally you would just hit Enter and continue on the next line.

In general though, any language that depends upon white space for anything other than separating elements is just asking for trouble.

It also ends lines at the newline rather than at a ';', which means that you can end up with long lines at times, where normally you would just hit Enter and continue on the next line.

Python uses newline as a statement delimiter only if all bracketing constructions (...) [...] {...} are closed. The arguments of any function call, for instance, can be split over multiple lines, as can the elements of a list or dictionary or a long expression. And back when print was a statement (Python 2) as opposed to a function (Python 3), it was my common practice to do something like this:
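The original example appears to have been lost in formatting; a sketch of the bracket-continuation behaviour described above (in Python 3 syntax):

```python
# Inside (), [] or {}, Python continues a statement across lines --
# no backslashes or semicolons needed
total = sum([
    1,
    2,
    3,
])

# Function call arguments can be split the same way
result = max(
    total,
    10,
)
```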

Python only forces you to indent in the way any sane person would indent anyway. That's not evil.

It is when you have to send code through a channel that strips whitespace from the start of each line. With languages that use curly brackets or BEGIN/END, you can pass the code through something like GNU indent to restore the sane indentation. With Python, the block structure is just lost. And if you have your Slashdot posting preferences set to "HTML Formatted" rather than "Plain Old Text", Slashdot is one such channel, as <ecode> loses indentation in "HTML Formatted" mode.

I agree that the meaning of this one-liner is not easy to guess, but there are other, more fundamental things that bother me in Lobster. One is why they should make a difference between = to assign and := to define and assign. The first assignment should define. Most languages just do that and everybody is happy. The second rant is about the Pythonish end-of-line colon. The : is ugly. It still strikes me as bad taste when writing Python: if a statement looks complete at the end of the line, then it should

I agree that the meaning of this one-liner is not easy to guess, but there are other, more fundamental things that bother me in Lobster.

I think you're agreeing to something the GP didn't say. By virtue of the subject, he's referring to the number of times you have to use the SHIFT key to type up that line, slowing your programming down. Understanding the line is a different question.

Actually, having something like `len(x)` instead of `x.len()` has some benefits. Check out Guido's rationale [effbot.org] for why it was done that way in Python:

There are two bits of “Python rationale” that I’d like to explain first.

First of all, I chose len(x) over x.len() for HCI reasons (def __len__() came much later). There are two intertwined reasons actually, both HCI:

(a) For some operations, prefix notation just reads better than postfix — prefix (and infix!) operations have a long tradition in mathematics, which likes notations where the visuals help the mathematician thinking about a problem. Compare the ease with which we rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of doing the same thing using a raw OO notation.

(b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn’t a file has a write() method.

Saying the same thing in another way, I see ‘len’ as a built-in operation. I’d hate to lose that.
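The protocol Guido alludes to is worth seeing concretely: a class opts in to the built-in `len()` by defining `__len__`, so callers keep prefix notation and the integer-result guarantee (the `Queue` class here is just an illustrative example):

```python
# len() is the built-in operation; classes opt in via __len__
class Queue:
    def __init__(self):
        self.items = []

    def push(self, x):
        self.items.append(x)

    def __len__(self):   # this is what the built-in len() calls
        return len(self.items)

q = Queue()
q.push("a")
q.push("b")
print(len(q))   # prefix notation; len() guarantees a non-negative int
```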

Yes, I know his argument. I just don't buy it, and I could build counterarguments, but it doesn't matter. In the end, even programming languages are not completely rational, and our tastes are even less so. To me Python looks similar to C and tastes of the '80s, with all those unnecessary double underscores (Guido should have used a keyword for that). Maybe that is why it is getting successful as a system language. But there is the inconvenience of paying attention to spaces when copy-pasting code around, which is a pain

Yeah, ultimately it's a matter of taste. I am happy and productive programming in Python, so I know that I like it =). I agree with the spaces thing. That's the one downside I can think of to having the space indentation.

Thanks, but no thanks; I prefer to stay with statically typed languages. I know that the "kewl" kids love dynamically typed languages, but it becomes a horror for maintenance. I'll be sticking with UDK in the meantime.

It really depends what you are doing. For many projects, scripting with some OOP is good enough (all those web projects, RoR, etc.). Having short code in an expressive language leads to fewer bugs.

Static typing is extremely useful because it catches all mistakes of a certain class. However, you still have to unit test for other mistakes. So if you are unit and integration testing well, the benefit of static typing is small, and you are capturing more mistakes than static typing alone would.

For projects where you have contract-like, long-term stable interfaces/APIs, yes, use static typing. But don't pretend it's for every project.
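The claim above can be made concrete: a function can be perfectly well-typed and still wrong, so only a test catches the bug. A minimal sketch (the `mean` functions are hypothetical examples, not from the original):

```python
# Both versions have identical signatures and types -- a static type
# checker accepts both -- but only one is correct.
def mean_buggy(xs):
    return sum(xs) / (len(xs) - 1)   # off-by-one; types all check out

def mean(xs):
    return sum(xs) / len(xs)

# A unit test catches the logic error that no type system can:
assert mean([2, 4, 6]) == 4
assert mean_buggy([2, 4, 6]) != 4    # the bug a type checker misses
```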

A robust, statically typed language is for the framework and core functionality. Dynamic typing is for scripting languages: as the name implies, for running short, often-modified scripts in a well-defined context.

It's the same reason why people use virtual everywhere, or make every class a template: it's the latest 'trick' they've discovered, and they think it's the silver bullet solution to everything. 12 months down the line, the painful maintenance nightmares they've created will encourage them to do things differently next time.

"It really depends what you are doing. For many projects, scripting with some OOP is good enough (all those web projects, RoR, etc.). Having short code in an expressive language leads to fewer bugs."

Are you sure you're not conflating two different things here? It sounds like you're saying some languages are better for short, more expressive code, but that's not the same as static vs. dynamic typing.

The only increase in code from static typing is explicit conversion, and I do not see how this extra code can increase bugs. On the contrary, it's often what decreases bugs in applications written with static typing, because the developer has to explicitly declare and perform the possible conversions. In contrast, with a dynamically typed language you're relying on the interpreter to guess, which is much more error prone.

If you perform a conversion in a statically typed language and it's wrong, you know the second you try to execute, but in a dynamically typed language you may not know there's a problem until you hit some edge-case input, which is more likely to get out into production due to the subtle nature of it.

Do you have any examples of the classes of problem you believe dynamic typing avoids but static typing doesn't? You assert that if you unit and integration test a dynamically typed language, you capture more mistakes than you would with a statically typed language. I don't think that's ever the case. Static typing makes the capture of certain errors explicit in the implementation; the faults are unavoidable when you attempt execution. Dynamic typing relies on you stumbling across the error during execution, which means capturing it with unit tests is only as good as your unit tests, which will rarely be as good as explicit and inherent capture of errors.

I agree that dynamic code has its place: where you want to make quick, dynamic changes and see them instantly, or where you don't care about code quality because you're just doing prototyping or proof of concept. But I think dynamic code is always inherently more error prone, and I think it's a fallacy to pretend otherwise. I've never seen any evidence that dynamically typed code is less error prone than statically typed code, so I'd be intrigued to see some, because I don't see how an inherent ability to capture a certain class of errors, coupled with tools for finding every other class of errors, can ever be worse than no inherent ability to capture that class of errors with the same tools for the other classes. It just doesn't make sense.

If you perform a conversion in a statically typed language and it's wrong, you know the second you try to execute, but in a dynamically typed language you may not know there's a problem until you hit some edge-case input, which is more likely to get out into production due to the subtle nature of it.

Dynamic typing doesn't mean those languages are typeless. Type errors like trying to add a string to a number still get caught at runtime. Unlike static languages, where a wrong cast can make the code compile and the program will never complain afterwards, leaving you wondering where those segfaults are coming from.
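The "dynamic but not typeless" point can be shown in two lines of Python, which raises a `TypeError` at the exact point of the mistake rather than carrying on with corrupted data:

```python
# Python is dynamically but strongly typed: an invalid operation
# raises a TypeError at runtime instead of silently proceeding
try:
    result = "3" + 4            # mixing str and int is an error
except TypeError:
    result = None               # caught at the point of the mistake

# By contrast, an explicit, checked conversion is fine:
ok = int("3") + 4
```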

Do you have any examples of the classes of problem you believe dynamic typing avoids but static typing doesn't? You assert that if you unit and integration test a dynamically typed language, you capture more mistakes than you would with a statically typed language. I don't think that's ever the case. Static typing makes the capture of certain errors explicit in the implementation; the faults are unavoidable when you attempt execution. Dynamic typing relies on you stumbling across the error during execution, which means capturing it with unit tests is only as good as your unit tests, which will rarely be as good as explicit and inherent capture of errors.

Static error checking is a shallow way to test your code, and will only catch simple syntactic errors that usually don't even occur in a dynamic language with a less complicated syntax. Regardle

Are you sure you're not conflating two different things here? It sounds like you're saying some languages are better for short, more expressive code, but that's not the same as static vs. dynamic typing.

The only increase in code from static typing is explicit conversion, and I do not see how this extra code can increase bugs. On the contrary, it's often what decreases bugs in applications written with static typing, because the developer has to explicitly declare and perform the possible conversions. In cont

I'm not sure that a lot of those things are really static typing overhead, for example even in some dynamic languages you still have to type var.

A lot of the things you mention are more related to OOP than specifically a result of static vs. dynamic typing, for example most dynamic languages still have interfaces. Take point 7 also for example - C++ doesn't require a mandatory class container for static methods, constants and globals, this is entirely a language specific thing.

Hm, count me among the skeptics, too. The problem is that "dynamic typing" creates fundamental performance bottlenecks, which is not good for games. The golden rule is to compute as much as possible at compile time using a strong type system, including type checking, type inference, bounds checking, and overflow checks. Heck, with a strong enough type system you might even be able to avoid most runtime exception handling (see e.g. the design goals of ParaSail). What you want is to encourage the programmer to use very

How many games have you written, exactly? I've worked on AAA games from 1995 to today, and most of the industry is using dynamically typed languages for scripting, and has been since the days of QuakeC. The iteration time is so much faster because the compiler doesn't have to work all that shit out up front. Iteration time is king in game production. Runtime is important too, but we all know (right?) that only 10% of your code is responsible for 90% of your runtime. The other 90% of your code can bloat b

We're talking about a new language; the claim that fast, easy development cannot be combined with strong typing and compile-time checking is totally unjustified. There is absolutely no reason why a language with "dynamic types" is, could, or should lead to easier development or faster development cycles, particularly not if automatic type inference is available. In fact, the opposite is true, due to improved error checking at compile time in a strongly and statically typed language.

Thanks, but no thanks; I prefer to stay with statically typed languages. I know that the "kewl" kids love dynamically typed languages, but it becomes a horror for maintenance. I'll be sticking with UDK in the meantime.

As the project becomes larger, you get more and more of the code devoted to converting values between different type systems and serializations and all that stuff. It's boring code, but often just slightly too complex for a computer to do for you without some oversight. Going to a looser dynamic type system greatly reduces this overhead.

That's not to say that strict static types are useless; they're very useful when developing the components that the dynamically-typed language sticks together. Indeed, using

Alternatively you could just use the Python OpenGL bindings [sourceforge.net] (or pick your favourite language). From the project home page I can't see any reason why this language is better than the many existing, stable, and optimised languages for accessing OpenGL.

Why do people keep reinventing the spoon? Is it all CS-majors that feel they need to make a mark on the world?

So that they can delude themselves that their also-ran game programming language is going to catch on and become all the rage, as if all the big game developers are going to throw away their uber-expensive proprietary development environments and rewrite their engines in some shitty new open-source language that has shit for documentation, a billion bugs, no IDE support, and a micro-fraction of the libraries available for even the lamest existing language.

Why do people keep reinventing the spoon? Is it all CS-majors that feel they need to make a mark on the world?

So that they can delude themselves that their also-ran game programming language is going to catch on and become all the rage, as if all the big game developers are going to throw away their uber-expensive proprietary development environments and rewrite their engines in some shitty new open-source language that has shit for documentation, a billion bugs, no IDE support, and a micro-fraction of the libraries available for even the lamest existing language.

Throw enough shit at a wall and eventually something will stick. I'm pretty sure that's how PHP got any use at all. :)

Is compiler design still part of a healthy CS diet? That means thousands of languages are being pumped out every semester. It takes a special kind of ego to think any of them are worth a damn.

Actually, the wide variety of commercial home-use 3D printers out there seems to follow this model pretty closely. Get smart enough to make one and all of a sudden they think their version is so relevant

I just looked up some Fortran code. It doesn't look very C-like. No semicolons, no curly braces, some functions take bracketed parameters while others do not, and the example code on Wikipedia contains a lot of things that just make no sense to me. Like 'IF (IA) 777, 777, 701' - what does that do? There is no variable I can find called IA. It may be a good language once you've learned it, but it doesn't look remotely like C. If anything, I'd say it shows some similarity to BASIC.

The author should be commended for creating and releasing this publicly, in contrast to the whining and complaining found here. As a personal project, it may be improved, abandoned, or rewritten, or simply enhance skills that will lead to other contributions.

All of us have half-finished, useless projects out there, which have the potential to be something nice if we spend another 30 man-years of effort and rewrite them a few times. Nothing wrong with that. But posting ninja self-promoting submissions to Slashdot about them... that's pathetic.

All of us have half-finished, useless projects out there, which have the potential to be something nice if we spend another 30 man-years of effort and rewrite them a few times. Nothing wrong with that.

That is not always good. Finishing your projects properly is a very important skill for an engineer, an artist, or anyone really. Half-finished stuff gives a bad impression of your work and makes you feel uncomfortable about not completing it.

Just spec your projects before starting and assess whether you can realistically complete them, and you're good.

Great, another one of those wannabe languages. There are already a lot of other alternatives out there. Just use one of the classic languages with the same libraries this one uses; you'll be glad you did.

Why not? Experimentation is useful and gives many languages that die quickly but also ideas that spread and end up in languages that stick. Just imagine saying use the classic languages at the time of Cobol and Fortran, or at the time of C later on. No C, no Perl, no Python, no Ruby, no Java, no PHP (oh well...), no JavaScript. All of them got ideas from other languages and spread their ideas into newer languages or into contemporary ones (PHP has traits nowadays).

It makes sense if a language explores new ideas or has a groundbreaking implementation. There is no reason to experiment with languages whose design and implementation are both sub-par compared to multiple existing ones. That said, everybody should try writing their own language at least once in a lifetime; it is a very good experience and you learn a lot about why other languages have certain quirks. It is just that you should not try to sell your 'baby' on Slashdot...

"Why another language?" Because I can? I can't wrap my head around the thinking that creating new languages is somehow a problem for our development ecosystem. No one forces you to use them. And as others have so kindly mentioned already, this one will probably die in obscurity, solving your problem before it even started.

"What's the point when it's not a major innovation?" Better mainstream languages are the product of an evolutionary process of designs