
crabel writes "In Java 1.6.0_21, the company field was changed from 'Sun Microsystems, Inc.' to 'Oracle.' Apparently not the best idea, because some applications depend on that field to identify the virtual machine. All Eclipse versions since 3.3 (released 2007) up to and including the recent Helios release (2010) have been reported to crash with an OutOfMemoryError due to this change. This is particularly funny since the update is deployed through automatic update, and suddenly applications cease to work."

I don't know why they're blaming Oracle. This is clearly a fuck-up made by the Eclipse developers.

If any other piece of software checked the platform it was running on and didn't handle unexpected cases properly, it wouldn't be the platform developer's fault. The blame would rest solely with the application developer.

Yes and no. While it's not the best practice to rely on some field assuming it'll forever remain static, if you read the bug report in TFA (surprise, surprise), you'll find this:

This causes a severe regression for programs that need to identify the Sun/Oracle HotSpot VM such that they know whether the "-XX:MaxPermSize" argument needs to be used or not.

So, the reason they examine it in the first place is to know whether or not they need to set specific values that are supported by the Sun/Oracle JVM. It's not optimal, but I can't exactly fault them for that.

Yes. It is. You shouldn't detect whether you can use a feature based on the User Agent, you should detect based on the presence or absence of that feature. Anything other than that is absolutely the web developers' fault.
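The same principle applies on the JVM side: probe for the capability, don't match a vendor string. Here's a minimal sketch in Java (the class and method names `FeatureProbe`/`hasClass` are mine, not from any of the projects discussed) that tests whether a class exists instead of parsing `java.vendor` or `java.version`:

```java
// Hypothetical sketch: prefer probing for a capability over vendor-string
// checks. Class.forName() either loads the class or throws, which tells us
// directly whether the feature is present on this VM.
public class FeatureProbe {
    static boolean hasClass(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.util.ArrayDeque shipped with Java 6; probing for it beats
        // string-matching the runtime's version or vendor properties.
        System.out.println(hasClass("java.util.ArrayDeque"));
    }
}
```

This is the Java analogue of checking `document.addEventListener` instead of sniffing the User-Agent.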

While there certainly are cases where you have to identify whether a user is on IE, particularly IE6, your example is less than spectacular since the very link you provided is an explanation of how to handle margins without having to detect the User Agent.

There are still far smarter ways to code something like this. Create a function with a return type YOU can rely on and have it determine the platform specifics. Then if crap like this happens you only have to change one function and suddenly everything works again. MUCH smarter than constantly checking some browser string...
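As a sketch of that "one function" idea (all names here are hypothetical, not Eclipse's actual code): funnel every VM-specific decision through a single method, so a vendor-string change breaks exactly one place instead of being scattered through the codebase.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: centralize VM detection so a rename like
// "Sun Microsystems, Inc." -> "Oracle" requires touching one method only.
public class VmArgs {
    /** Returns extra launch flags for the detected VM, or none if unknown. */
    static List<String> extraFlags(String vmName) {
        List<String> flags = new ArrayList<String>();
        if (vmName != null && vmName.contains("HotSpot")) {
            // HotSpot-only option; other VMs may refuse to start with it.
            flags.add("-XX:MaxPermSize=256m");
        }
        return flags;  // unknown VM: safest to pass nothing extra
    }

    public static void main(String[] args) {
        System.out.println(extraFlags(System.getProperty("java.vm.name")));
    }
}
```

Keying off `java.vm.name` rather than a Windows EXE resource field would also have dodged this particular breakage, since (per the bug report) the Java properties were never changed.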

How does that technique solve the problem where a feature exists but is implemented differently?

From the bug reports I gather that in this specific case, the problematic checks were a workaround for other JVMs that did not implement a specific option ("-XX:MaxPermSize=256m") and did not start at all when it was used. Looks like a poor workaround to me, when we've been using installation-time checks as a de facto standard for such things (i.e. GNU Autoconf) for more than a decade to avoid such issues. Eclipse could simply have tried to start the JVM with said option at installation time and, if that failed, launched without it.
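That installation-time check could look something like this sketch (class and method names are mine; this is an Autoconf-style probe, not anything Eclipse actually ships): launch the target `java` binary with the flag plus `-version` and see whether it survives.

```java
import java.io.IOException;

// Hypothetical install-time probe, in the spirit of an Autoconf check:
// run "java <flag> -version" and treat a non-zero exit as "flag rejected".
public class FlagProbe {
    static boolean vmAcceptsFlag(String javaCmd, String flag) {
        try {
            Process p = new ProcessBuilder(javaCmd, flag, "-version")
                    .redirectErrorStream(true)
                    .start();
            return p.waitFor() == 0;  // HotSpot exits non-zero on bad -XX flags
        } catch (IOException | InterruptedException e) {
            return false;  // couldn't even launch the VM
        }
    }

    public static void main(String[] args) {
        System.out.println(vmAcceptsFlag("java", "-XX:MaxPermSize=256m"));
    }
}
```

Cache the answer at install time and the launcher never needs to look at a vendor string again.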

From the bug reports I gather that in this specific case, the problematic checks were a workaround for other JVMs that did not implement a specific option ("-XX:MaxPermSize=256m") and did not start at all when it was used.

According to the docs, MaxPermSize is the "Size of the Permanent Generation". So... why does Eclipse care about this? In my experience, needing any of the XX options to work, as opposed to Xmx (which sets the maximum memory the VM will allocate), is a sign of the program breaking the Java memory model.
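For the distinction being drawn here: `-Xmx` bounds the ordinary object heap, which a running program can observe via `Runtime.maxMemory()`; on pre-Java-8 HotSpot the permanent generation was a separate pool with its own `-XX:MaxPermSize` cap and was not included in that number. A quick sketch:

```java
// Minimal illustration: Runtime.maxMemory() reflects the -Xmx bound on the
// object heap. On pre-Java-8 HotSpot the PermGen pool sat outside this
// figure, which is why the two are tuned with separate flags.
public class HeapInfo {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MiB");
    }
}
```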

But then, the other ways aren't terribly reliable. I remember, once upon a time, trying to find "The Right Way" to deliver XHTML with an XML mime type for browsers capable of it, and as HTML for everyone else.

There isn't a right way.

The closest I got was the Accept header. The problem here is that every single browser out there sends a */*, because every browser can accept downloads. At the time, I remember one browser (can't remember which, maybe Safari) sent a */* and nothing else -- while others sent a string explicitly mentioning a few and assigning priorities to them.

The problem was, there wasn't any way for me to specify my preference on the server side, and there certainly wasn't a good way for a browser to say what it natively supports, what it can open in external programs, and what it can only download and bother the user about. All I could do was follow the browser's own preferences, and feed it whatever it ranked highest -- and even then, I'd have to prefer text/html (even though I really prefer application/xhtml+xml) for those browsers which don't specify preferring html to */*, but really don't support xhtml...

At the end of the day, my options were pretty much to either stop caring about the standards, or interpret them in a very non-standard way, or use User-Agent detection, or just give up and serve it as text/html.

And that's just getting the thing to render. It only gets messier from there...

So yes, it's my fault, as a web developer, that I might fall back on user-agent detection -- and, in particular, I'm likely to detect IE so I can work around some of its many deficiencies. It's also the fault of the standards for not defining clearer ways to negotiate capabilities. It's also the fault of browsers for not following what standards do exist.

I certainly try to avoid browser detection and focus on feature detection, as you suggest. But your blanket statement, like many blanket statements, is just wrong.

The bug report states that this is a file attribute only present under Windows, which makes it seem an odd thing for the Eclipse developers to rely on:

"An engineering side note: The "Java" property values for java.vendor and java.vm.vendor were never changed in the jdk6 releases and will remain "Sun Microsystems, Inc.". It was understood that changing the vendor property values could impact applications and we purposely did not disturb these vendor properties. The Windows specific exe/dll file "COMPANY" value is what is at issue here, not the Java properties. It came as a surprise to us that anyone would be inspecting or depending on the value of this very platform specific field. Regardless, we will restore the COMPANY field in the jdk6 releases. Note that the jdk7 releases will eventually be changing to Oracle, including the java.vendor and java.vm.vendor properties." (Posted 2010-07-22)

The application didn't do that deliberately; it was a side effect of launching the JVM with too small a maximum size for that space. Because the option to change the VM size is specific to the individual JVM implementation, you can't just guess which flags to pass. They did the reasonable thing when they couldn't identify the JVM and didn't pass any option; unfortunately that meant it ran out of memory.

Why is a virtual machine being run with a heap size limit by default anyway?

Because the developers are too lazy to read the documentation. I've lost count of how many Java developers have complained to me about slow performance. Show them how to launch a JVM without the default values, point out the information in the docs, and suddenly I am a genius. Had a whole project group ask for 16 J2EE servers because they used the default heap size, and the web apps were abysmally slow.

Not really. Compilers are probably better at deciding what should go in registers than programmers are. But programmers are probably better at releasing garbage promptly than garbage collectors. So manual memory management will generally need less memory than GCs.

This is why Obj-C now has optional garbage collection for Mac programming, but it's not allowed for iPhone programming.

Isn't this exactly the sort of thing that reflection is designed for? As an analogy, it's like looking specifically for "Microsoft Internet Explorer" when writing web pages, instead of checking whether document.addEventListener is available. It's flaky and easy to break when the platform gets updated.

I'm not sure if there's anything in reflection that'll tell you specifics about the virtual machine options or not, but that's not really what reflection was designed for. Reflection is primarily used to find out information about, and to manipulate, classes.
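For what it's worth, reflection proper is indeed about classes, but the management API (java.lang.management) does expose some VM internals, including the names of the memory pools; on a pre-Java-8 HotSpot one of those pools is the permanent generation. A small sketch (standard API, though note this only helps once the VM is already running, which doesn't solve the launcher's problem):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// List the VM's memory pools; on pre-Java-8 HotSpot this includes a
// "Perm Gen" pool, on other VMs (or Java 8+) it won't.
public class PoolList {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName());
        }
    }
}
```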

I seem to remember some applications not fully working with Blackdown and possibly others facing some breakage with other JVMs. So while it's stupid to rely on the vendor field in general, I can sort of understand why they'd examine it for purposes of compatibility. It goes both ways.

Should they? Sometimes different implementations of a "standard" behave differently. Sometimes that is because of a bug, sometimes because of an ambiguous specification in the standard. Sometimes it's because of something beyond the scope of the standard (e.g. Sun's VM needs to be told up front how much memory it's allowed to use, or memory-hungry applications will fail).

When this happens you have to identify what you are running on so you can tailor your app's behaviour to that of the implementation it is running on.

Poor planning. Eclipse should not be pulling key VM info from a 'company' field; there should be a more specific way for applications to acquire the VM information they require.
That was a poorly thought-out situation from the get-go, but Oracle was mightily short-sighted for making this change without much testing of compatible apps. Mind you, it isn't their fault as such, but pissing off everyone using Eclipse is mightily retarded.
While we're on the subject of retarded: automatic updates? You deserve what you get if you trust those. You should be damn sure an update is solid, stable, and won't give you a BOHICA experience before you apply it. No sympathy for auto-update users... that's just bad planning as well.
So: Oracle: minor thumbs down. Eclipse devs: thumbs up overall (except for bloating), but thumbs down for this one. Auto-update users: not bothering with a thumb, too busy ROFLMAO.

Use NetBeans. I use both NetBeans (by choice) and Eclipse (by necessity, for work) and find Eclipse powerful but unstable, with a truly awful interface (that only IBM could love!) from a usability point of view. NetBeans is much simpler and more straightforward for getting things done.

Yet judging from this discussion, it has a reputation for being flaky...?

For me it usually crashes at least every couple of days or becomes so screwy that I have to exit and restart. And it has often refused to start up at all until I deleted the old workspace and recreated it.

It feels like a rock solid piece of software. Yet judging from this discussion, it has a reputation for being flaky...?

I'm pretty sure it's a case of RealPlayer syndrome.

For years and years RealPlayer earned a special corner of hatred for many sysadmins. It was a pioneer in broken crapware and users who installed it deserved to be shunned if not verbally abused. Now, years later, it doesn't matter if RealPlayer has utterly mended its ways and is the best software out there -- for many experienced administrators it remains the spawn of an infested pool of the lowest scum and has no business being installed anywhere.

Poor planning, perhaps. But I wonder what the deal is with Oracle being so over-eager to plaster their company name all over the place. Wherever you go, java.com, java.sun.com, javadocs, the "ORACLE" Logo is everywhere and this happened only a week or two after the takeover.

But I wonder what the deal is with Oracle being so over-eager to plaster their company name all over the place.

It means that the marketing/branding people in the company carry more clout than anyone with actual product knowledge. This is certainly not unique to Oracle, with most large publicly-traded companies worrying more about their "brand" than the product.

It was poor planning in the JVM specification that there is no standard way to specify the requested heap size. So Eclipse tries to figure out the JVM as best it can, so it can pass the correct parameters to it. In this case, they could not determine the JVM, so I guess they just used the default heap size. I am not sure there is anything Eclipse could do differently (except maybe issue an 'unknown JVM' message, which doesn't help the users any more than possibly running out of memory).

I can remember trying to install programs to D:\ rather than C:\ - that caused no end of problems due to developers hard-coding paths and just assuming that Windows, and their own software, would be installed on the conventional C:. That anyone would ever use another drive letter didn't seem to occur to them. If I remember correctly this happened to me with a version of MATLAB (or something in that family).

I remember seeing "how to" articles for various languages there to determine the drive the app is on, and the drive and directory where Windows is. Programmers who didn't learn from these things were ignorant.

Their fix was lying and claiming that the company is still Sun Microsystems, and you think this isn't still news? As far as I can tell, that is an incredibly shitty work-around and the real problem still exists.

Of course, the "real problem" isn't that Oracle changed the company field, it's that "Java programmers still continue to use poor programming practices despite layers and layers of 'best practice' crud". Seriously, isn't the great appeal of Java supposed to be that you can avoid shit like this?

I don't get it. Why would you design the VM to have a fixed size address space in the first place? Anybody here remember the reason? And how come there is no standard option to change that size so Eclipse has to resort to platform-specific hacks to do it? 128M ought to be enough for everybody, I guess...

One reason... security. It prevents an unstable application from growing out of control and causing the whole system to start paging, which with a GC becomes a disaster, dragging the whole system to a halt and making it unresponsive. So you set a heap size to "more than you'll ever need" so that it aborts if something goes wrong. There are technical advantages too. But still... I agree. The fixed heap limits are more of a pain than a benefit, especially when the default setting for the client JVM was 64MB until recently because it hadn't been changed since around 1997.

Ok, here's some research. First, there are actually two memory areas in the VM, the heap and the Permanent Generation [sun.com]. It is a PermGen overflow that's causing Eclipse's problems, not heap overflow. As I understand from the linked article, PermGen is a place where VM data is stored: stuff like the class structure, method bytecodes (is that just a copy of the executable?), heap content information, etc. PermGen info used to be stored in the heap, but was moved out as a performance optimization.

To Oracle's credit, when Eclipse devs reported the issue (http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6969236), Oracle reverted the change within two days (http://hg.openjdk.java.net/hsx/hsx17/baseline/annotate/1771222afd14/make/hotspot_distro). They could have argued that it was Eclipse's fault for depending on the value in the first place, and that rebranding their VM is something they should be allowed to do. But they put the best interest of other applications first.
Still, it raises an issue that no one has really bothered with before. There are many HotSpot "vendor specific" options that are very commonly used. Almost every large application has to configure heap sizes. There should be a standardized mechanism to define these options and thus avoid these very problems.

Indeed: Oracle now employs my former Sun boss, David J. Brown, who pointed out on the order of ten (10!) years ago that one needed to distinguish between stable and unstable interfaces. He then labeled all the interfaces with the number of their standard document, and all the unstable ones SUNWprivate.

And yes, you can ask the JVM if any of the Java interfaces are supported, so switching on the name of the company that produced them is unnecessary. The company name is neither a necessary nor a sufficient test.

There should be a better way, indeed. I used to work doing Java development.

Java was built with the braindead/naive idea that it would run equally on all platforms. So when we found that the implementation on some obscure platform had a bug, we couldn't just #define a way around it like you can in C. Therefore you have to use more hackish tricks like these.

If you really want to horrify Java coders, think about what would happen if all the com.sun libraries got renamed to com.oracle. =) =)

Ignoring the 'one line change', does it seem appropriate that changing a company string should cause an "Out of memory" error? I realize the OOM error happened about 8 stack frames later but I mean, seriously ?

Ignoring the 'one line change', does it seem appropriate that changing a company string should cause an "Out of memory" error? I realize the OOM error happened about 8 stack frames later but I mean, seriously ?

Apparently, the VM sniffing was done to determine whether to use a particular mechanism to adjust memory settings. So, while it probably should have thrown up a "this is an unknown VM and things might not work" type of error at least the first time it encountered the unknown VM, it's not entirely surprising.

Ignoring the 'one line change', does it seem appropriate that changing a company string should cause an "Out of memory" error? I realize the OOM error happened about 8 stack frames later but I mean, seriously ?

It was a "clever hack" that turned out to be a bug waiting to happen. This is generally acknowledged within Eclipse. Shame on Eclipse, kudos to Oracle for bending over backwards to help a competitor.

Read the bug. It's not the heap or the stack that is running out of memory (something that is completely within the developer's control), it's the space the VM uses internally for storing class definitions. All big apps (i.e. those with a lot of classes) that use Sun's VM have to configure the permanent generation space size or they hit this issue. As this configuration is vendor-specific and Eclipse is designed to support multiple VM vendors, the only way to tell whether the custom Sun -XX option should be set is to identify the vendor.

Shh! Don't tell Oracle that the uname command returns SunOS, or all hell will break loose.

The obsession with removing the Sun name from everything is petty in the extreme, to say nothing of tacking Oracle on where inappropriate, i.e. "Oracle Solaris". It's as if Larry were a kid who felt the need to stamp his name on all of his possessions.

Quite so. It's also a potential marketing error. Sun's hardware and software engineering, pre-Oracle, had one of the best reputations in the industry (even if their sales organization wasn't so highly regarded).

McDonald's owns Chipotle, but that doesn't mean you can only buy McBurritos there, because that would likely send exactly the wrong message. Just like McDonald's, Oracle's brand has various negative connotations. Another example: Microsoft is very careful about this - its Xbox marketing material keeps the Microsoft brand in the background.

Yes, Oracle owns it; they bought it. You may justify it however you want, but it doesn't make it right. What Oracle is doing is dishonest; it is akin to replacing the manufacturer's label with your own.

I don't stamp my name on someone else's product even after I purchase it, and nor do most other companies after acquisitions where they continue to sell products which are clearly not of their own creation. If the company brand has recognition, it is usually kept, and if not, the changeover typically takes years.

Someone in our company ran into this several weeks ago, and I had kind of a fun time tracking down the problem. The summary and most of the comments are missing a lot of details and nuance, which actually make this problem kind of interesting.

1) It wasn't even running out of memory

Sun/Oracle's VM implementation (HotSpot) has a concept of a permanent generation, which is separate from the rest of the heap and has its own maximum size. This generation holds stuff like the code cache and interned strings. Whether or not this is a good concept is debatable, and as far as I know, they are planning to do away with it in the future as JRockit and HotSpot merge. At any rate, this is the space that was filling up. This probably didn't happen very quickly on a normal Eclipse distribution, but with a lot of plugins installed (and thus a lot of classes being loaded) it crashed pretty quickly.

2) This is only because of somewhat subtle differences between the various VMs

HotSpot is the only major JVM I know of that has a PermGen space - J9 (IBM) and JRockit (Oracle, via BEA) don't have this concept. Thus the requirement to be able to behave differently based on which VM you are using. Being able to behave properly on multiple VMs is especially important for Eclipse because not only do they have a lot of people using it on HotSpot, but because it is the basis for IBM's RAD, they have a ton of people using it on J9 as well.

3) This problem is in the launcher, not Eclipse itself

So, the crux of the problem is that Eclipse needs to start a VM, and has to know the proper flags to pass to it *before* it starts up. A few people have suggested trying reflection or other runtime methods as a better way to solve this, but this ignores a) Once the VM has started up, you can't change the heap or PermGen sizes, and b) As far as I know, there is no way to query the VM at runtime to figure out what its underlying heap structure looks like - that is an implementation detail.

So, while it does kind of suck that Eclipse was relying on a vendor name, it is trickier to solve than it appears at first glance. The only really graceful ways I can think of to solve this problem rely on some changes to the VM spec.

An engineering side note: The "Java" property values for java.vendor and java.vm.vendor were never changed in the jdk6 releases and will remain "Sun Microsystems, Inc.". It was understood that changing the vendor property values could impact applications and we purposely did not disturb these vendor properties. The Windows specific exe/dll file "COMPANY" value is what is at issue here, not the Java properties. It came as a surprise to us that anyone would be inspecting or depending on the value of this very platform specific field. Regardless, we will restore the COMPANY field in the jdk6 releases. Note that the jdk7 releases will eventually be changing to Oracle, including the java.vendor and java.vm.vendor properties.

Quite a lot of software development tools and build scripts also broke when Richard Stallman changed the gcc target "i386-pc-linux" to "i386-pc-linux-gnu". GCC development had long since been taken over by other people but RMS just had to commit his little political agenda to the build, and broke a lot of builds in the process. Same thing here.

This will override the Eclipse launcher's default set of JVM arguments with a custom set. The MaxPermSize is the issue: if the Eclipse launcher can't identify the JVM, then it doesn't know to specify a larger permanent generation size for the Sun/Oracle JVM.
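The override being described is the eclipse.ini -vmargs mechanism; a typical custom set from that era looked roughly like this (values illustrative, not the exact snippet from the original comment). Everything after -vmargs is passed straight to the JVM, bypassing the launcher's own detection:

```ini
-vmargs
-Xms40m
-Xmx512m
-XX:MaxPermSize=256m
```

Note that hard-coding -XX:MaxPermSize this way reintroduces the portability problem on VMs that reject unknown -XX options.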

To those people saying that this was a lousy design decision by the Eclipse devs:

Since a nonstandard switch is required at launch by the JVM, the only way to know what set of switches to pass is to query the JVM vendor string. It's not a clean solution, but it's a solution dictated by the platform.

Open source lets the community fix these breaking bugs only if there's still a community left for the project, and hiring a developer to fix them is one way of "paying for support"... basically, be nice to the developers and they'll be there for you. Let them move on and you've got no support left.

If NT or 2000, look for the DOS prompt program here...
If 95 or 98, look for the DOS prompt program here...
If XP, look for the DOS prompt program here...

Only problem is that Vista was out at the time, and its OS string failed all three ifs, so that led to a fail. Worse yet, this was outside of the domain that I'd be allowed to fix, and the search for who was the maintainer-of-record for this program kept coming up empty. I had to call marketing and tell them to hold off on declaring the whole system Vista-ready because we had a small programming bug and a big organizational malfunction.
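The failure mode above can be sketched in a few lines of Java (class and method names are mine, and the shell-path logic is illustrative): a chain of exact-match version checks with no default silently breaks on every OS release that didn't exist when the code was written. The fix is a sane fallback branch.

```java
// Sketch of brittle OS-version sniffing, plus the default branch that the
// program described above was missing. Strings follow Java's os.name style.
public class OsCheck {
    static String shellPathFor(String osName) {
        if (osName.startsWith("Windows NT") || osName.startsWith("Windows 2000")
                || osName.startsWith("Windows XP")) {
            return "cmd.exe";
        }
        if (osName.startsWith("Windows 95") || osName.startsWith("Windows 98")) {
            return "command.com";
        }
        // The missing piece: a default so Vista (and whatever comes next)
        // still resolves to something sensible instead of failing outright.
        return osName.startsWith("Windows") ? "cmd.exe" : "/bin/sh";
    }

    public static void main(String[] args) {
        System.out.println(shellPathFor(System.getProperty("os.name")));
    }
}
```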

Let's see... you can download the JDK for free... which by definition is a 'development kit'. You can obtain a java editor at no cost. You can obtain a java IDE and debugger at no cost. Where do you get the impression that the development tools for Java require licensing, exactly?

Oracle, why didn't you just operate Sun as a subsidiary and brand instead of trying to merge it all in?

That's actually not a bad idea, and I'm surprised they didn't do this. Well, mostly, but it does depend on how, exactly, the other company was involved in the merger/acquisition [wikipedia.org]. I wouldn't be too surprised if in a few years Oracle spins off Sun as a wholly-owned subsidiary. But don't expect much; it's worth more to Oracle to be able to put their company logo on machines sitting in your data center. Free advertising.

Do you trust apt/yum/portage/whatever on your Linux/BSD distro of choice? Same thing... you trust that the developer's code-signing and key management policies are solid, and they won't dick you by releasing something really bad.

If you're not turning on automatic updates on Windows boxes (and even MacOS and Linux boxes), you might be part of the problem. Yes, you should have centralized patch review and deployment in place for all the machines you manage... but make sure it is all of them.