XML literals were a fad that I'm glad Java didn't follow. Lambdas are as much a fad as functions themselves are. Heck, they're more battle-tested than this newfangled "OOP" thing everyone keeps talking about.

For rapid development and prototyping, Python is often better, especially since properly written code can document itself (Python is basically pseudocode). However, for more complicated (game) graphics work, Java is better since it has simpler, more up-to-date libraries. (Clutter is a great library, but it is meant for visualization, not gaming.) LWJGL is good for OpenGL (and has some abstractions), and canvases are good for simple 2D games (or as viewports for other renderers).

Lambdas aren't inner classes, and because they're not specified or implemented that way, there is potential for significant performance improvement.

I have been thinking about this for a while, but I couldn't make up my mind, so I'll ask: what use cases are there in which anonymous inner classes are actually a performance problem? I'm using them for listeners and the like, and they have never been a bottleneck, or even a minor problem, in my programs.

Maybe you can show me some examples where anonymous inner classes are a performance problem and how lambdas will help there.
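To make the comparison concrete, here is a hedged sketch of the two forms under discussion: a listener written as an anonymous inner class versus the lambda form proposed for Java 8. The `ClickListener` interface and the names here are hypothetical, purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ListenerDemo {
    // A hypothetical single-method listener interface, standing in for
    // the kind of listener types discussed above.
    interface ClickListener {
        void onClick(String buttonId);
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();

        // Pre-Java-8 style: anonymous inner class. Each occurrence compiles
        // to its own Outer$1.class and allocates a new instance here.
        ClickListener anonymous = new ClickListener() {
            @Override
            public void onClick(String buttonId) {
                log.add("clicked " + buttonId);
            }
        };

        // Java 8 style: a lambda. It is translated via invokedynamic rather
        // than into a class file, so the JVM is free to cache one instance
        // or specialize at link time instead of allocating per use.
        ClickListener lambda = buttonId -> log.add("clicked " + buttonId);

        anonymous.onClick("ok");
        lambda.onClick("cancel");
        System.out.println(log); // [clicked ok, clicked cancel]
    }
}
```

For typical GUI listeners the difference is negligible, which matches the observation above; the translation strategy only starts to matter when such objects are created in bulk on hot paths.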

There aren't really any performance issues with inner classes - they're just classes after all, like everything else in Java - but the lambda stuff is supposedly going to make certain bits of really complex code that deals with multithreading for performance be much easier to write (and get right).

As for Occam... yes, you can easily see why it failed, but IMHO the problem with programmers wanting coherent memory models is that they/we've been brought up to believe that that is how computers are supposed to work, whereas I'm pretty much certain computers would be much better off with non-uniform local memory architectures from the ground up, sending data around by passing messages (a la VBOs in OpenGL, for example). A language designed around this paradigm is quite easy to get to grips with, provided it doesn't look completely stupid, like Occam.

There aren't really any performance issues with inner classes - they're just classes after all, like everything else in Java - but the lambda stuff is supposedly going to make certain bits of really complex code that deals with multithreading for performance be much easier to write (and get right).

Lately I've been experimenting a lot with multithreading, and at the moment I think that functional code is much better suited to concurrency than object-oriented code. So I assume that features from functional languages will indeed help there.

There aren't really any performance issues with inner classes - they're just classes after all, like everything else in Java.

Except lambdas!

There's a bit about the performance of lambdas vs. inner classes in the second slides link I posted above, pages 31-34. In a lot of current uses of inner classes I guess they're not really a performance issue. However, tied in to other things that are coming in Java 8, such as internal (and therefore parallelizable) iteration of collections, they could be.

Yeah, that is a good read, and it explains how lambdas might speed things up by a few microseconds a call; I suppose that's of more interest to people depending on multicore computing in the future. (It will be a few years yet before it's really relevant to gaming, according to the Steam hardware survey.)

Reading this, I get the impression that Java 8 will no longer be the Java we know, but a critter of new and old functionality with a confusing mix of options. Just like C++ is a beast, offering so many ways to do things and requiring so much knowledge to use it well.

I like(d) Java because it was pretty straightforward in the concepts it offered.

There were primitive types, and objects.

Now they add a third category, lambdas. It seems this will have a deep impact, because APIs that were built on the old two concepts no longer look right, and there is pressure to change them. So not only does the core language change, but there is pressure to change all the libraries too.

I'm very careful with changes. This case looks like a change which has almost unpredictable impact on the future development of Java.

Stability is also a virtue for a language. Call me old-school, a late adopter, and living in someone's basement. My opinion at this point in the discussion is that lambdas have the capability to change Java very thoroughly, and I feel sad about losing a simple and still mighty object-oriented language this way.

You have to admit there was a certain elegance in the simplicity of the language back in the 1.4 days. There was very little magic, just simple components that were very easy to understand how they worked.

Or put it this way: I am still constantly amazed that even though Java is about as simple as an OOP language can get, truly basic fundamentals, such as how (and why) Java's object model and GC work, are often totally beyond many C++ programmers.

Lambdas have been around since the '30s... this ain't new-skool stuff. In "real" usage since the late '50s... truly not new-skool stuff. Since I haven't said so in this thread: I couldn't care less about Java having lambdas or not, because it doesn't address any of my personal major concerns. But it's been a glaring hole in the language... and it only seems fad-like because C++ & Java have been the rare exceptions in not having lambdas.

You have to admit there was a certain elegance in the simplicity of the language back in the 1.4 days. There was very little magic, just simple components that were very easy to understand how they worked.

Certainly! And although I really do appreciate using some of the additions to Java (and I wouldn't want to go back to 1.4, for generics alone, flawed as they might be), they often come with some 'gotchas'. They often obfuscate what you're really doing while not really adding anything worthwhile beyond code brevity. They often feel like workarounds for problems that are inherent to the platform and arguably shouldn't be fixed by just hiding them behind nicer syntax.

And IMHO annotations are often really abused (especially in some enterprise stuff), to the point that they open up a whole Pandora's box of potential runtime issues that could/should have been caught by the compiler (or be in configuration instead of source code). But hey, we do test-driven 'agile' development now, right? That makes *everything* better.

But I digress.


Or put it this way: I am still constantly amazed that even though Java is about as simple as an OOP language can get, truly basic fundamentals, such as how (and why) Java's object model and GC work, are often totally beyond many C++ programmers.

Maybe that's because C++ isn't necessarily an OOP language, while Java aims to be strictly that. As such, I actually like them both, but for very different reasons. I often learn a lot of non-OO but really useful things from C++ programmers too.

I want them to release a Java that breaks backwards compatibility... one that adds operator overloading, structs, and unions, removes the generic-erasure bull crap, and implements generics the way C++ templates are implemented (minus the HORRENDOUS syntax - keep Java's the same). Several core Java API developers have mentioned that the chains of backwards compatibility are going to keep Java from competing with new languages that have learned from its horrible design decisions. And for god's sake, give me some function pointers... I know that is ACTUALLY coming in the next version of Java (under a different name).

@princec: I have zero doubt that we would be much better off with transputer-like configurations. The cost vs. power of GPUs vs. CPUs demonstrates this. But I can't see it happening, as it's too much of a paradigm shift... see how people get their knickers in a knot over tiny changes like (fill in the blank of any recent or near-future Java addition). Transputers for the desktop require software to be completely rewritten (from the base languages & OS up), and the same for the hardware. And back to the language issue... it seems really tricky to me. Take sproingie's suggestion of message passing: OK, you could do some actor-based language (for instance), and that could address a subset of use cases (pretty good for general-purpose programming), but it doesn't seem like you'd get good coverage of, say, signal processing. For stuff like this it seems like you'd really want some data-oriented language. Could the two be merged? Perhaps. The cost of R&D and the horror stories of PS3 programming certainly seem unlikely to motivate any company into adventuring down this road. (The rumored PS4 specs indicate they've gone back to a classical architecture.) As for GPUs exposing something along these lines... that's more likely, but to do so means exposing some hardware details which are currently hidden behind the scenes. Since supercomputers these days are being built out of a f*ckton of high-end GPUs... never say never.

@ClickerMonkey: forward compat is a bigger problem than back compat. Jigsaw and defender methods should greatly help with both. Unions? As in C: can't happen... not type-safe. Structs: on the table, in some undefined way. Operator overloading: too much resistance... but it doesn't really matter, because you can roll it yourself (or use an alternate JVM language). Getting rid of type erasure: in the works. C++ templates: not at all related to generics. Templates are poorly designed macros... having macros would be very nice.
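For anyone unfamiliar with the term, "defender methods" are what eventually shipped in Java 8 as default methods: an interface can carry a method body, so new methods can be added to an interface without breaking existing implementors. A small sketch (the `Shape` and `Square` names are made up for illustration):

```java
public class DefenderDemo {
    interface Shape {
        double area();

        // The defender/default method: existing implementations such as
        // Square below inherit this body without changing at all, which is
        // exactly how it helps forward and backward compatibility.
        default String describe() {
            return "shape with area " + area();
        }
    }

    static class Square implements Shape {
        private final double side;
        Square(double side) { this.side = side; }
        @Override public double area() { return side * side; }
    }

    public static void main(String[] args) {
        System.out.println(new Square(3).describe()); // shape with area 9.0
    }
}
```

This is also the mechanism that let Oracle retrofit methods like `forEach` onto existing collection interfaces without breaking every third-party implementation.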

Both 'doSomething' with every 'Foo' in 'fooList', but the first has stricter requirements than the second: the first must be processed sequentially in the natural order of 'fooList'; the second need not be. Additionally, chained statements can be transformed if written in the second style, where they cannot if written in the first.
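The two snippets this post compares aren't reproduced in the thread; a plausible reconstruction (with a hypothetical `Foo` type) of external versus internal iteration would be:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class IterationDemo {
    // Hypothetical stand-in for the 'Foo' of the discussion; it records
    // which elements were processed so the effect is observable.
    static class Foo {
        static final List<String> processed = new ArrayList<>();
        final String name;
        Foo(String name) { this.name = name; }
        void doSomething() { processed.add(name); }
    }

    public static void main(String[] args) {
        List<Foo> fooList = Arrays.asList(new Foo("a"), new Foo("b"), new Foo("c"));

        // First style: external iteration. The call site owns the loop, so
        // the elements must be visited sequentially in the list's order.
        for (Foo foo : fooList) {
            foo.doSomething();
        }

        // Second style: internal iteration. How the action is applied is up
        // to the concrete type of fooList, which is in principle free to
        // reorder, fuse chained operations, or parallelize.
        fooList.forEach(Foo::doSomething);

        System.out.println(Foo.processed); // [a, b, c, a, b, c]
    }
}
```

Both loops do the same work here; the point is only that the second form hands control of *how* the iteration happens to the collection.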

Both 'doSomething' with every 'Foo' in 'fooList', but the first has stricter requirements than the second: the first must be processed sequentially in the natural order of 'fooList'; the second need not be.

I wouldn't say that naturally follows: doSomething would have to be a pure function to be reordered like that, which would make a foreach kind of pointless as opposed to a map. Scala requires you to specifically state your parallel intent with "par":

fooList.par.map(_.doSomething())

Even Data Parallel Haskell requires you to use parallel arrays (and gives you syntax for them) instead of doing it automagically. The only language I can think of off the top of my head that's parallel by default is Fortress, and it was designed that way from the start.

I suppose you could say GLSL is also parallel by default, but more in the sense of executing multiple instances of an otherwise serial shader program. Still, what with the output capabilities increasing with things like transform feedback, that might just be good enough.
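For comparison, the streams API as it eventually shipped in Java 8 takes the same opt-in stance as Scala's `par`: sequential by default, with parallelism stated explicitly at the call site. A minimal sketch:

```java
import java.util.Arrays;
import java.util.List;

public class ParallelDemo {
    static int sequentialSum(List<Integer> xs) {
        // Sequential by default: elements are visited in encounter order.
        return xs.stream().mapToInt(Integer::intValue).sum();
    }

    static int parallelSum(List<Integer> xs) {
        // Parallelism is opt-in, analogous to Scala's .par; the library
        // may split the work across fork/join worker threads.
        return xs.parallelStream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
        System.out.println(sequentialSum(numbers)); // 15
        System.out.println(parallelSum(numbers));   // 15
    }
}
```

Since summing is associative, both forms give the same answer; a side-effecting `doSomething` would not be safe to move to the parallel form, which is exactly the purity point made above.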

The cost of R&D and the horror stories of PS3 programming certainly seem unlikely to motivate any company into adventuring down this road.

I'm not sure you can call the PS3 a transputer, or even close to one. I might be wrong, but my understanding of the PS3's Cell processor is that it's basically a traditional IBM PowerPC-like CPU with 8 satellite cores that are good for signal processing and such. I thought those horror stories had mostly to do with the state of the development tools in the early years, and that the GPU was sort of underpowered and needed 'help' from the Cell processor to offset that, which meant hardcore low-level programming of the Cell's SPUs (which was obviously not on many developers' CVs).

@ReBirth: Looks like lots of languages. @erikd: The PS3 model isn't transputer-based, but it is much closer than what we currently have and has similar issues. And yeah, a big part of the problem seems to have been tools... I want to think that the estimates on R&D costs were around 2 billion USD... they probably would have been better off spending a bit more on the software side. The failure of a non-traditional architecture isn't going to encourage folks to take the huge risk of walking that path, especially since Sony is moving back to a traditional architecture... it's a pretty strong statement that the experiment was a failure and that it's too risky to continue down that road.

And a (non-embedded) transputer-based system is a harder nut to crack, since it would have to deal with scalability (as opposed to a fixed-hardware embedded system): changes in scratch memory sizes per cell; the number and configuration of the communication channels between cells (say, going from 4-way planar, to 6-way planar slices, to a 16-way hypercube); potentially moving from individual cell configurations to blocks of cells with common resources; etc. Now, I don't think these issues can't be handled, but it would require (as I said) ground-up retooling, including the base languages, which would be a major paradigm shift for programmers (who never resist change). Couple in the R&D cost and risk and I can't see anyone attempting it. Except, potentially, if GPUs, which are increasingly being used for general-purpose programming, get close to being able to perform basic global illumination in realtime, and usage by supercomputers causes them to start making baby steps in this direction.

@sproingie & nsigma: Yeah, I'm doing a poor job of saying what I'm attempting to say. I think the root of the problem is that understanding why lambdas/closures are interesting requires personal experience. Like, how do you explain to someone without deep experience of LISP why it's so powerful? You could say: "Well, code really IS data" and "Well, it's trivially meta-circular"... but that doesn't really explain anything, does it? Well, lambdas allow another mechanism to treat code as data, and behavior can be passed as a variable to methods. No good... sounds like function pointers. OK: in my first example the action (iteration in this case) is handled by the user of the type's code; in the second it's handled by the exact type of the 'fooList' in question. If multiple types are called at that site, then how the action is performed is type-dependent, and if you change a type, you don't have to rewrite all of your call sites to change the behavior, which you would if you went the first route.

I'll make another attempt, which is doomed to failure. In my trivial example, the action being performed in the first case is user code on some type. In the second case it is a mixture of language-dependent behavior (what transforms can legally be applied by a specific runtime & specification) AND the concrete type of 'fooList'. In my example, the code mandates that the action is sequential in the first case; in the second it does not. (Remember, this is a trivial example poorly attempting to illustrate a point.) The implementation of 'forEach' for a given type is free to do whatever it wants to perform the action... it's the programmer's job to choose the given type (so it could use fork/join, as a single example). Clearer? I doubt it.
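A hedged sketch of that last point: a type that owns its own `forEach` can choose *how* the action is performed. The `ParallelList` below is entirely hypothetical (not a real library class); it fans the action out to a thread pool, yet call sites look identical to those for any other `List`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

// Hypothetical type: callers still write list.forEach(action), but this
// concrete type decides the action runs on worker threads, not in a loop
// the caller controls. Swapping in a plain ArrayList changes the behavior
// without touching a single call site.
public class ParallelList<T> extends ArrayList<T> {
    @Override
    public void forEach(Consumer<? super T> action) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<?>> futures = new ArrayList<>();
            for (T item : this) {
                futures.add(pool.submit(() -> action.accept(item)));
            }
            for (Future<?> f : futures) {
                f.get(); // wait for completion, propagate any failure
            }
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        ParallelList<Integer> list = new ParallelList<>();
        list.add(1); list.add(2); list.add(3);
        AtomicInteger sum = new AtomicInteger();
        list.forEach(sum::addAndGet);  // same call-site shape as any List
        System.out.println(sum.get()); // 6
    }
}
```

Note the accumulator has to be thread-safe here, which is the flip side of handing iteration control to the type: the caller loses ordering guarantees and must supply an action that tolerates that.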

As for Java 2.x, I really just don't see that happening. The only thing likely to break backward compatibility is going to be a whole new language, and I'm afraid that's likely to be some form of Javascript.

Let it break backwards compatibility! It is a curse, not a blessing. The mindset that languages must grow while maintaining source-level backwards compatibility is ridiculous. It is not as if compilers for the old language versions disappear, or that you could not write tools to at least partially automate conversion. On the other hand, if you have an ever-growing "standard" language definition, you will not only have a poorly designed ad hoc language (like C++), but you will have a single de facto "official" compiler with lots of bloat and corporate lock-in. (Think of Oracle's HotSpot being so irreplaceable, and the battle of industry titans to prevent write-once-compile-anywhere becoming a reality for anything besides their own basically proprietary technologies: Flash, Objective-C, HTML5, C#.) Many new changes make code harder for humans and computers to read. The alternative is to make language changes that express programmer intent better, so code is still easy for humans to read (even if it is now marginally harder to write the first time) and easy for the compiler to optimize (even if you had to add extra reserved words).

So instead of having stateless types with static methods to mirror classes and interfaces, you get lambdas. Instead of structs, you get escape analysis (which is great, but would be expected even if the language had structs). Instead of improved generics, you get type inference. All these things make Java more complicated and make alternative runtime implementations harder. It is irritating that de facto standard makers waste their time pursuing half-baked improvements when the same amount of time invested in improving the language's old features would solve the same problems in a straightforward way.

I agree with the sentiment that Java should not look like Javascript, but let's make super-paranoid assumptions about the direction of Java and the incompetence of Oracle: couldn't Java 1.X (as X approaches infinity) turn into that very same language? Wouldn't it be worse if Java gave you no option because there was never a fork?

Ideally, Java 2.X would look like Java 1.7 without the mistakes and annoyances that are obviously undesirable in hindsight, and less like Java 1.∞. Java 2.X could feature conservative changes to the language and major changes to the standard API. Java 1.∞, with its abhorrent mix of C#, Python, Javascript, and other nightmares, could be developed as a new JVM language with a new name. (Hey, that's a good idea!)

However, one has to consider that the credibility of this language is already at a low point. Completely ditching the old framework of backward compatibility would shut out a lot of customers. People might question Oracle's actions even more as to why they had to ditch the concept. If anything, a whole new language should develop from Java under a different name, with the changes you described, and leave this version to sink with the ship.

The biggest problem with forking anything is that the support is also split. A small number of people will update their systems to accept the new changes; the vast majority will just run older versions of the code, expecting that everything works as it should. You can see this happening with OpenGL: half of the user base has 3.x, and the rest are running 2.x and below. Splitting the user base like that makes it twice as annoying for developers, because now we have to work twice as hard learning both ways just to reach the entire user base.

So, in one way, Java's decision to uphold backwards compatibility is a noble one. It keeps the user base fairly unified, so we can be sure the programs we create reach a vast majority of the people. It is one of the major reasons I'm standing by Java: you know your code is going to work when you distribute it.

A lot of the features Java has today, though, are really taken for granted. I actually approve of the little improvements here and there, and I accept the risks Java is taking to keep the user base consistent across all platforms.

That is a good point. Maybe Oracle should be ditched. Of course, backwards compatibility is only helpful in the short run. C++11 is backwards compatible with B by virtue of being backwards compatible with C++, which is backwards compatible with non-standard C++, which is backwards compatible with C, which is backwards compatible with pre-standardized C, which is backwards compatible with B. Some backwards-compatibility issues of Java are hurting us now, and others may hurt us in the long term.

OpenGL is a little different: you need different hardware if you want to code in one or the other, and you need to code in whichever one your hardware supports. Java would not have that problem, since its syntax is sane and well-structured enough that updating code would not be as insane as attempting to fix C code. And you could run both on the same computer, or the same VM.

I usually think of a language as being a tool. It's okay to have more than one tool in a toolbox. I think that I would rather use multiple languages in the same project (for example, Java + OpenGL shaders, or C + SQL, or Java + ANTLR) instead of integrating junk that hinders efficiency just in case you need a hammer that can also file your taxes. Community is something I hadn't considered. Do you think it is worse to have language diversity (for lack of a better/neutral term) or to have internal fragmentation like C++ is infamous for?
