Posted
by
Hemos
on Monday August 02, 1999 @09:33AM
from the trying-to-outmaneuver-intel dept.

IQ writes "This NYTimes article discusses Sun's latest chip, known as Microprocessor Architecture for Java Computing, or MAJC. Looks like a huge, fast MultiDiePackage with a lotta chips."
Fits in well with Sun's continuing attempt to route around Intel - these chips look like they are philosophically aligned with Jini. More specs will be coming out later this month. (Free login required @ NYT).

Thanks for weighing in with a detailed, thoughtful comment on the subject. Like you, I too feel that it would be interesting to see specific-language chips and see how well they DO work, rather than just speculating on it. -- Michael Chermside

I'm seeing a lot of posts saying that chips designed for a specific language are a Bad Thing. I tend to agree.

BUT I think this argument overlooks something important. A Java chip would not interpret Java at the brace-and-semicolon level, it would read Java bytecodes. Java bytecodes are basically machine language for a microprocessor that exists only in software. It is only logical to make such a chip in hardware eventually.
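To make the "machine language for a microprocessor that exists only in software" idea concrete, here is the same concept demonstrated with Python's VM rather than the JVM (an analogy, not Sun's toolchain): the standard `dis` module disassembles a function into the bytecodes its virtual machine executes.

```python
import dis

def add(x, a):
    return x + a

# Print the bytecodes the CPython virtual machine executes for add();
# the JVM works the same way with its own instruction set
# (iload, iadd, ireturn, ...).
dis.dis(add)
```

A "Java chip" would simply be hardware that executes instructions like these directly, instead of a software loop dispatching on each opcode.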

Furthermore, if the specs for a "Java chip" are open, what is to keep compiler writers from implementing back-ends which write Java bytecodes? I'm not a compiler writer, but it seems like it would be quite possible to implement, for example, a C or C++ compiler which writes Java bytecodes instead of x86/68000/Alpha/Sparc/whatever machine code. Such a compiler would make the "Java chip" usable by people who don't like writing Java.

I seem to recall seeing at least one compiler that takes a non-Java language (Perl, I think) and compiles it to Java bytecodes. Also, I know there is one regular slashdot reader who is doing Java programming at the assembly level -- any comments? If a Java chip sees widespread use, anything-to-bytecode compilers would seem inevitable.

In the current issue of Scientific American [sciam.com] (August 1999), the Oxygen Project is explored. It describes an approach to making a chip (along with other programming and devices) more efficient and faster by using configurable logic gates and compiling the wiring automatically on the processor. Basically, customizing the wiring for each application.

The chip is called Raw. It is covered in the 4th part of the article, Raw Computation.

The EE Times article suggests that MAJC won't interpret Java bytecodes; bytecodes would have to be interpreted, or compiled into native machine code, as on most other platforms. (It also suggests that other languages could be compiled into its native instruction set as well.)

Sun have had processors around for a while now which have been designed to execute Java bytecodes directly...

...such as picoJava.

The interesting stuff is the VLIW aspect...

...but with an allegedly-VLIW (assuming VLIW isn't just being used as a marketing-speak alias for "buy this, it's c00l", as e.g. RISC appears sometimes to be used) instruction set, it appears that this chip isn't designed to "execute Java bytecodes directly".

...lending itself to bytecode environments in general (not just Java)

In what fashion does an underlying VLIWish instruction set lend itself to bytecode environments better than does a non-VLIWish instruction set?

Basically it looks like they're making a stab at a new *style* chip architecture, not just overclocking some knackered design a la Intel.

Well, to be fair, Intel are also working on what they consider a new style of instruction-set architecture, even if it appears that many of the basic ideas for it came from HP.

(I.e., if you were just jumping on the "MS" part of "MSNBC", and inferring that this was some Evil Microsoft FUD Plot, note that the article looks as if it might be a re"print" of a Wall Street Journal article, not something put out directly by MSNBC.)

Common Lisp, like Scheme and most other modern Lisp variants, uses static scoping by default. (Though dynamic scoping is available too.) Dynamic scoping is useful in a few cases, but usually just causes problems. More important is that objects have dynamic extent. Features like function closures, continuations, etc., are what make Lisp Lisp. That and the ()'s...

Okay, I'm not up on my Vague-Tech-Reporting lingo. Does this mean we'll be seeing a CPU that runs Java bytecodes natively? If that is the case, kudos to Sun! I'm really impressed with Java as a language, and I would like to see Java programs run at something resembling native binary speeds.

Of course, if this isn't what Sun is proposing, could someone tell me what this means?

The EE Times also has an article about next-gen server technology [eet.com] from IBM (via Sequent), and some info about Sun's next-gen stuff. As usual, Sun are saying very little. From what I've heard separately, though, Sun are working on both a form of NUMA and something else called COMA (Cache-Only Memory Architecture). They might be doing both (on the same machine) for their next-gen server, Project Serengeti, because NUMA is good for some types of applications while COMA is good for others; by doing both, end-users can choose which memory architecture best suits their needs.

Wasn't the goal of Java to provide a robust cross-platform environment for running application software? If Java performance requires specialized chips, it would seem to defeat the purpose. Sun has no idea of what to do with Java. They have not made it very easy to port Java to different architectures. For some platforms, the porting work has gone on for years with no end in sight. It is the classic ``solution in search of a problem''. It's time for Sun to get Java back to basics and finish what was initially promised.

I was just reading an article about this over at msnbc [msnbc.com]. It was more in-depth than the nytimes article. There were a few things that really caught my eye at msnbc

...Sun officials already audaciously refer to MAJC as the most important semiconductor architecture of the next 20 years. In part, that's because the chip is particularly well-suited, they say, to handling the enormous streams of visual and audio data expected in the multimedia age. In addition, MAJC should yield a family of microprocessors that are easy to program using Sun's Java language, that can be used in everything from cheap consumer devices to Internet server computers, and that over time will grow even more powerful, and more quickly, than rival chips...

...Sun, for instance, claims that within several years, it should be possible to generate an interactive computer-animated movie like Toy Story in real time using a single MAJC chip...

It's too bad the article on MS-NBC is very biased - look at all the words loaded with negative connotations:

"Thanks to an unusual design..." Unusual according to whom? The author? Computing industry experts? This must be considered biased opinion unless a source is given.

"Sun figures it can sell cheap versions of the chip for use in inexpensive consumer-electronics" Cheap chips, inexpensive electronics? Cheap has strong negative connotations, while inexpensive is generally considered a Good Thing(tm). It's interesting to note that it wasn't "inexpensive versions of the chip for cheap consumer electronics".

"It's still a risky bet, though..." According to whom? This person's stockbroker? To be fair, the article backs up this statement with examples of past failures, but printing it as a statement of fact is bad form.

"Intel, which has also moved aggressively into communications-related chips, will also pose a competitive threat."...and so will TI, and ~Transmeta~ (?) and all the other companies not aligned with MS, and not mentioned by the reporter. This statement points to Intel in a positive light, but fails to mention competitors. Another example of possible bias.

"Sun officials already audaciously refer to MAJC as "the most important semiconductor architecture of the next 20 years.""...and Microsoft audaciously assumes that every computer sold will have Windows on it. Audacious is a loaded word, and should not be used in a news story, unless someone else is quoted as saying it.

"Analysts are reserving judgment on such claims until Sun formally discloses the details of the architecture on Aug. 16."...but MS-NBC isn't. It's jumping right out of the gate with lots of unsubstantiated claims, and un-sourced opinions.

To be fair, there is some positive stuff in the article too-

MAJC chips should be able to display complex graphics and handle digital-communications tasks at extremely high speeds -- far faster than a general-purpose Intel chip, for instance.

the chip is particularly well-suited, they say, to handling the enormous streams of visual and audio data expected in the multimedia age.

The main problem with specialized hardware is that the people building it tend to get run over by the Silicon steamroller.

"After two years of development, we're proud to announce the new HyperAccel 3000 CPU with hardware support for Snobol. It runs at 100MHz and provides a 4x speedup over general purpose processors for Snobol applications. What's that you say? Intel makes 500MHz CPUs now? #@!$"

I'm not saying that the above scenario will happen in this case. It's usually the small outfits who can't afford to keep up that get burned, and Sun isn't that small. Further, in some situations, the speed gain is so large, that even if you use previous generation fabrication, you can still win: witness the 3D graphics market. But you are competing against parts that have enormous sales volumes and all that implies. If it comes down to a chip with Java support that gives a 1.5x speed-up vs. a commodity CPU, I'd bet on the commodity CPU. It will probably be available at 1.5x higher speeds at comparable cost.

"MyGarage Software just announced a JIT compiler that is 1.5x faster than any earlier Java compilers for x86 CPUs? #$@!"

Certainly any programming language can be compiled into JVM bytecodes; however, the instruction set of the JVM has been designed around Java's object model. Other languages, such as C++ or Eiffel, that use a different object model are thus going to be at a disadvantage. These languages can be (and in the case of Eiffel, are) compiled into JVM bytecodes for those cases where such is necessary, but while we still have a choice (that is, while Sun hasn't completely bowled over the marketplace with Java), we should demand a Virtual Machine that is not predisposed to just one programming language.

Unless I've managed to get this all mixed up, Lisp has dynamic binding, but not dynamic scope. That is, a procedure invocation is always evaluated in the environment in which the procedure was defined, not the environment in which the invocation occurs. Where it makes a difference is when the procedure refers to non-local variables. So, e.g. (this is Scheme, not Lisp):

(define foo
  (let ((a 1))
    (lambda (x) (+ x a))))

(let ((a 2))
  (foo 5))

would return 6, not 7, because the invocation of "foo" sees the "a" bound in the first line's "let", not the second, since that's the environment in which the "lambda" was evaluated. Once I was writing a Scheme interpreter (in Java, by the way) and I noticed that with a one-word change I could select between dynamic and lexical scope, by changing which environment to extend when binding the arguments of an application.
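For what it's worth, the same experiment translates directly into Python, which is also lexically scoped by default; this is just an illustrative sketch of the Scheme example above, nothing MAJC-specific.

```python
def make_foo():
    a = 1
    def foo(x):
        # 'a' resolves in the environment where foo was *defined*
        # (a = 1), not the environment where it is *called*.
        return x + a
    return foo

foo = make_foo()

a = 2  # a different 'a' at the call site
print(foo(5))  # 6 under lexical scope; dynamic scope would give 7
```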

That said, I agree that dynamic binding (which I assume is what you meant) makes Lisp incredibly powerful. In fact, it makes nearly all other languages (including Java) seem downright primitive. I mean, imagine actually having to recompile a program each time you want to test a change! In Lisp, you don't even always have to stop the application to apply a patch, let alone rebuild it. Just re-evaluate the definition of the procedure that is changed and code that calls it will seamlessly see the new version. Since symbols are bound dynamically, there's nothing to re-link.
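A rough analogue of that live-patching in Python, which also looks up global names at call time (a sketch of the late-binding idea, not of a real Lisp image):

```python
def greet():
    return "version 1"

def caller():
    # 'greet' is looked up at call time, so a redefinition
    # is seen by existing callers with no relinking.
    return greet()

print(caller())  # version 1

def greet():  # re-evaluate the definition, as in Lisp
    return "version 2"

print(caller())  # version 2
```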

The major argument against Lisp has always been performance, but with modern hardware that's less of an issue -- to be fair, compare it to Java, not C. Besides, with modern compiler technology, the difference is not as great: I've actually seen a piece of Lisp code run significantly faster than the exactly-equivalent C code.

Now consider the fact that things like maintainability and availability are becoming more important than raw performance. I would think that the ability to apply a patch to, say, an e-commerce server without having to bring the system down, even for a minute would be of a lot of interest to the people running those systems.

For quite some time now, we've all watched the worldwide criticism of specialized hardware that implements a more abstract instruction set: Lisp, Java, Smalltalk (not sure if the latter was actually turned into hardware), etc. Why is the criticism so harsh? I've not yet really seen anyone GIVE IT A CHANCE before discarding it as a toy.

First, a disclaimer: I happen to think Java is the best language (ok, toolkit, platform, etc.) that's come around in a long time for general-purpose programming (NOT for operating systems, but hear me out here...). Like it or not, the vast majority (i'd venture a guess at 90%) of software written is NOT (and need not be) of operating-system calibre in terms of robustness, quality, performance, maintainability, etc. In many cases, the life expectancy of the software is far too short (because needs, requirements, etc. change very quickly) to warrant the additional time spent in development. Now, remember, i'm a purist at heart, but i do have a pragmatic side too. Occasionally, the costs just don't justify the benefits.

Again, like it or not, i have worked with a great many people who are under too much pressure, lack the skills, or simply don't care enough about the quality of their work to do a good enough job with an "easy" language, let alone one that lets them shoot their foot even more effectively... Anyone who truly thinks that Java is "too slow" on modern hardware with modern dynamic compilation technologies really does need to do a bit more experimentation on their own. There are few problems that i've needed to solve in the last few years that i couldn't *easily* solve with Java, and never did i think that the quality or performance suffered (especially now with heuristic compilers). Remember now, i wasn't building 30,000-user systems, maybe 3,000. Is it the best tool for *every* job? Hell no. Does it solve some things VERY effectively? Absolutely.
Would i still write any software requiring the utmost performance in C or C++? Hell yes. As history has taught me, profiling my code shows that 90% of my time is spent in 5% of the software. Again, what percentage of the software i've written has requirements demanding utmost performance? Less than 5 percent.

Now, i'm biased, that's clear. But, seeing that in the first 3 posts not one constructive thing was said, i felt it my duty to *try* and present a more pragmatic opinion... i, personally, would LOVE to take a shot at using a higher-level-of-abstraction instruction set, just to see for myself whether or not it's of utility. i don't have the experience with them to either condemn or praise them. i wish the same humility were infectious. As always, my opinions are mine alone, i speak only for myself, and i apologize if i offend. Peter

I'm still waiting for OO to become fashionable. The dominant paradigm still seems to be to write an application that wraps itself around some data in a file somewhere and uses it to configure the app somewhat, using a slightly different app for each type of data. So much for data-driven.

When I receive my data as an object that I can query for its fields, because the app that generated it created it that way, then I'll be impressed. Till then, well, how many of you are writing 20 different scripts to parse syslog 20 different ways?
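As a minimal sketch of what "receiving data as a queryable object" might look like (the LogRecord class and its fields are hypothetical, purely for illustration):

```python
from dataclasses import dataclass, fields

@dataclass
class LogRecord:
    timestamp: str
    host: str
    message: str

rec = LogRecord("1999-08-02T09:33", "web1", "httpd started")

# Query the object for its fields instead of re-parsing raw text
# with a different script for each log format.
for f in fields(LogRecord):
    print(f.name, "=", getattr(rec, f.name))
```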

i only learned as much lisp as i *needed* to do my job (specifically, writing some modes in emacs).

if there's one thing that really intrigued me about it, it was that it is dynamically scoped, opposing just about every other language in common use throughout the world. this, unfortunately, is so confusing to the masses, while being WILDLY useful to those who know how to harness its power...;)

is it that feature that you can triangulate down to when you think about what *really* stands out in lisp?

again, it comes down to:

**I** believe i can learn any computer language in the world and be productive. It's my hobby. Functional languages are my toy right now.

on the other hand, the folks who are not "into" learning languages & the science of computing don't have the persistence i seem to. it's not that i don't *wish* they would, i just must pragmatically accept that they will not. we have different priorities, and that's a *good thing*.

so, seeing as software maintenance is so incredibly important to me (and plays a significant majority-role in software lifecycles), can i expect most software engineers to quickly acquaint themselves with the paradigms behind java? i feel fairly confident in saying yes, because the language is not *that* different from what the masses are accustomed to. i'm not certain i can say this about lisp, as much as it intrigues me.

Actually, Jini is open source... anyone can create a Jini device to use or sell. Java is a programming language... anyone can write a program in Java. It's like C++ or Delphi or any of the other programming languages. These are not "one company does everything" technologies.

By hardwiring aspects of the JVM, Java programs will run faster. This is nothing new; CISC and RISC chips all have various functions hardwired in.