An anonymous reader writes "MATLAB, an important package of mathematical software heavily used in industry and academia, has had support for 64-bit machines for several years now. However, the MATLAB developers still haven't gotten around to implementing even basic arithmetic operations for 64-bit integers. Attempting to add, divide, subtract, or multiply two 64-bit integers will result in an error message saying that the corresponding method does not exist. As one commentator put it, 'What is the point of having numerical data types that can't be manipulated?'" The post notes that the free MATLAB clone GNU Octave deals with 64-bit integers just fine.

The point is that very few engineers currently want or need this functionality; if they did, the MathWorks folks would surely be on to it. The native type is defined, abstract methods are waiting there to be implemented, and someone who needed it has implemented them and made the result available. Incidentally, that package has had 38 downloads since July, perhaps indicating the level of demand. From this thread [mathworks.com], it looks like the company is waiting for demand before implementing it themselves.

38 downloads may not indicate a lack of demand. The demand may exist but be hidden by confusion: many users simply assume 64-bit arithmetic is a standard part of the program.

When our university upgraded the maths lab computers to MATLAB R2008a, they installed the 64-bit version as well. The reasoning was straightforward: the computers are 64-bit and the option exists, so why not? At the time, though, the basic add-on packages weren't available for the 64-bit version, including the package containing the solve() function. Sure, they could have searched the website and found a few basic implementations of Newton-type solvers written for 64-bit, but their response to student complaints was to remove the 64-bit MATLAB and install the 32-bit version.

Much like how a pirated copy does not indicate a lost sale, the fact that few people "demand" the application does not mean there's no demand for the functionality in the 64-bit program. It may just mean that people have tried it, found it lacking, and dropped back to 32-bit. I'd be interested to see stats on how many people use 64-bit MATLAB on computers that natively support 64-bit instructions.

We are talking about 64-bit integers. MATLAB has 64-bit floating-point arithmetic, which means you can do exact integer arithmetic up to 2^53. I'd say MathWorks has a pretty good idea of the demand for 64-bit integers, and it is not that great; it's not as if implementing it would be a huge job for them, so they would surely do it if their customers wanted it.
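
A quick Python sketch of that 2^53 boundary: every integer up to 2^53 has an exact double representation, but just past it the gaps between representable doubles widen to 2, so round-tripping stops being lossless.

```python
# Doubles have a 53-bit significand, so integers are exact up to 2**53.
exact_limit = 2**53

# Up to the limit, round-tripping through float is lossless.
assert int(float(exact_limit)) == exact_limit
assert int(float(exact_limit - 1)) == exact_limit - 1

# One past the limit, the nearest double is 2**53 itself: precision is lost.
assert float(exact_limit + 1) == float(exact_limit)
```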

Using 64-bit integers instead of floats is a common trick in embedded C for control and signal processing on low-power processors. I have experience with four different embedded systems used in commercial products from three different companies I've worked with; three of the four used 64-bit integers for roundoff-sensitive calculations. I was a bit surprised that MATLAB can't handle this, but then I've seen the poor quality of the ostensibly production-ready code that comes out of their M2C converter: it had about ten times the code footprint and a fifth the speed of a minimally optimised C version of the same algorithm.
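
To illustrate why integer accumulators matter for roundoff-sensitive work, here is a small sketch (Python standing in for the embedded C): repeatedly summing a value like 0.1 drifts in binary floating point, while accumulating the same quantity as scaled integers stays exact.

```python
# Accumulate 0.1 a million times in binary floating point: error creeps in.
float_total = 0.0
for _ in range(1_000_000):
    float_total += 0.1
assert float_total != 100_000.0  # roundoff has accumulated

# The embedded-C trick: scale to integer units (tenths) and accumulate exactly.
int_total_tenths = 0
for _ in range(1_000_000):
    int_total_tenths += 1  # one tenth per step
assert int_total_tenths / 10 == 100_000.0
```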

Honestly, I don't know how anyone can justify paying for this when R (and even Octave, in this instance) is more capable. If the target platform requires C or asm code, then doing development in MATLAB is usually more trouble than it saves. The graphs are prettier, though.

CPU operations are limited to a certain number of bits. Programming languages like C/C++ perform their basic arithmetic at the machine level, so they inherit the same limitations. Those bounds are not a hard limit, though: they can be transcended either through library/template facilities at the C/C++ level, or with the basic operations of high-level (particularly object-oriented) languages such as Pike and Python.

I can tell you libgmp is not stuck with BCD. But the BCD aspect persists because some uses of extended precision are financial, and conversion to and from an external decimal format is frequent enough that it's easier/faster to do the arithmetic directly in decimal, even tightly squeezed into 4 bits per digit. This has been going on since early computers; the ancient IBM 1620 [wikipedia.org] could do it in hardware. As you can see from the code in the links I posted earlier, the choice of language can hide the fact that the underlying architecture has fixed-width arithmetic.
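
Python is a handy illustration of that last point: its built-in int is arbitrary precision (implemented over fixed-width machine words, much like libgmp's limbs), so nothing in the source betrays the hardware's word size.

```python
# Python ints grow past any machine word size transparently.
a = 2**64 + 1          # already wider than a 64-bit register
b = a * a              # exact, no overflow, no special library call
assert b == 2**128 + 2**65 + 1

# The same expression with uint64_t in C would silently wrap modulo 2**64.
wrapped = b % 2**64
assert wrapped == 1
```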

BTW, for fun, compare the speeds of those two programs, which are implementations of the same algorithm.

Given that many values used in such calculations are enormous, potentially thousands of bits long, neither 32 nor 64 bits is anywhere near sufficient. They are so long, in fact, that they can only be stored as "strings" of machine integers. So much for fixed-width data types!

The concern should be the amount of time required to complete a calculation, which is something MATLAB is very good at minimizing. I'd guess MATLAB is optimized for 32-bit. What is to be gained by rewriting everything for 64-bit?

If you're a physicist and you want to count the electrons in a few coulombs of charge, which is not a terribly unreasonable calculation, MATLAB would throw an error, since that particular count happens to require roughly a 64-bit integer.
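
Roughing out the numbers in Python (using the elementary charge, about 1.602e-19 C): the electron count for a few coulombs overflows the range where doubles are exact integers, and even exceeds signed int64.

```python
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs per electron

# Electrons in 3 coulombs of charge: roughly 1.87e19.
n_electrons = round(3 / ELEMENTARY_CHARGE)

# Far beyond the 2**53 range where doubles represent integers exactly...
assert n_electrons > 2**53

# ...and beyond signed 64-bit integer range as well.
assert n_electrons > 2**63 - 1
```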

If you're a physicist using MATLAB, then you are (a) using floating point arithmetic, not huge integers and (b) more likely to be using Mathematica than MATLAB in the first place. Huge integers are more useful in computer science, doing encryption and data processing and such, than in physical simulations. Says the EE/Physics guy with no background in CS.

I think you're right, and I see the same kind of thinking when I ask about 64-bit integers in R. The people who use R are statisticians who can't imagine why a double isn't close enough. The people who complain about it are the computer programmers who are trying to use 64-bit exact fields to merge two datasets etc.

A 64-bit integer can lose precision when converted to a 64-bit double, because the double has to spend some of its bits on the sign and exponent, leaving the rest for the mantissa. With 1 sign bit and 11 exponent bits, your 64-bit integer now has to be expressed as a 53-bit significand (52 stored bits plus an implicit leading one), scaled by some power of two. Near the top of the int64 range, around 2^10 distinct integers end up with the same double representation.
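
A Python demonstration of that loss: round-tripping the largest signed 64-bit integer through a double lands on a different number, and nearby integers collapse onto the same double.

```python
INT64_MAX = 2**63 - 1

# Converting to double rounds to the nearest representable value, 2**63.
assert int(float(INT64_MAX)) == 2**63
assert int(float(INT64_MAX)) != INT64_MAX

# Near the top of the range, representable doubles are about 1024 apart,
# so many distinct integers share one representation.
assert float(INT64_MAX) == float(INT64_MAX - 500)
```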

The reason you need the decimal datatype is not because "small numbers get lost in the noise." It's because paper-driven accounting was always done in decimal, and so to keep the numbers matching, the computer needs to round in the same way someone doing it on pencil and paper would do it. It's about compatible rounding and representation, not the size of "ulp" (Units of the Last Place) on any given calculation. A 32-bit decimal float is less precise and has less dynamic range than a 32-bit binary float.
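
Python's decimal module (a software decimal type, conceptually like those BCD formats) shows the compatible-rounding point: binary floats can't even represent 0.1 exactly, while decimal arithmetic rounds the way a pencil-and-paper accountant would.

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary floating point cannot represent 0.1 exactly, so cents drift.
assert 0.1 + 0.2 != 0.3

# Decimal arithmetic is exact for decimal inputs...
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")

# ...and rounds the way ledger arithmetic expects (half away from zero).
rounded = Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
assert rounded == Decimal("2.68")
```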

The relative precision is the same over all numbers, because the mantissa is a fixed size. The bits that fall off it might represent billions or billionths, but that depends on the exponent and so the same scaling factor applies to the number and the error.
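
Python's math.ulp (3.9+) makes the constant relative precision easy to check: the absolute ulp scales with the exponent, so the ratio ulp(x)/x is the same across magnitudes.

```python
import math

# The absolute ulp grows with magnitude...
assert math.ulp(2.0**40) > math.ulp(1.0)

# ...but the *relative* spacing is identical: 2**-52 for doubles.
assert math.ulp(1.0) / 1.0 == math.ulp(2.0**40) / 2.0**40 == 2.0**-52
```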

Except when it comes to packing more data into a limited-bandwidth data stream. It is often the case that bit fields are packed into large integers and then unpacked when analysis/reconstruction is done, for instruments where data bandwidth is limited (e.g. at the South Pole and on satellite missions). That said, most groups working in those environments do not use MATLAB.

Umm, you realize you can do math on values wider than 32 bits in MATLAB, just not using the 64-bit platform's ability to handle 64-bit datatypes natively. After all, I can do math on 64-bit values on an 8-bit microcontroller just fine; it merely takes more than a few instructions. And as stated before, this matters little because it is purely a performance issue, and MATLAB still offers the best performance in its class, even against tools that do have this feature.
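
For illustration, here is how a 64-bit add can be emulated with 32-bit operations (a Python sketch of the multi-word arithmetic a compiler emits for narrow targets): split each operand into 32-bit halves and propagate the carry by hand.

```python
MASK32 = 0xFFFFFFFF

def add64_via_32(a, b):
    """Add two 64-bit unsigned values using only 32-bit-wide operations."""
    lo = (a & MASK32) + (b & MASK32)          # low halves, may carry
    carry = lo >> 32
    hi = ((a >> 32) & MASK32) + ((b >> 32) & MASK32) + carry
    return ((hi & MASK32) << 32) | (lo & MASK32)  # wraps modulo 2**64

assert add64_via_32(2**63, 2**63 - 1) == 2**64 - 1
assert add64_via_32(2**64 - 1, 1) == 0  # wraparound, like the hardware
```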

Yes it is. People who do the kind of hardcore math that MATLAB is good at are the ones who actually need 64 bit computing.

Surprisingly, not all that often. People who work with very sensitive systems (chaotic ones in particular) or very precise data need 64-bit integer precision, but for the other 98% it's just not necessary. Anyone doing really advanced work is going to use a supercomputer, for obvious reasons.

MATLAB's largest audience is engineers, although applied mathematicians and physicists use it often, just not nearly in equal numbers with engineers (who also outnumber the others greatly). Given that engineers work w

Seems like a lot of effort. You can always use the C interface (which itself is weird, considering MATLAB's roots in Fortran...), but then you'd have to learn C. MATLAB is a tool for physicists and engineers, not computer scientists. They don't necessarily want to take the time to learn C, or they'd already have done so. Some do, of course, but usually what they produce is one-off functions for a specific goal, not entire libraries suitable for sharing.

Frankly, equally worrisome is that MATLAB doesn't appear to take advantage of GPGPU yet. The concept has been around for over half a decade, and I'd have expected the MAtrix LABoratory to jump on the bandwagon quicker than most. It's a game-changer in their core competency, after all.

Frankly, equally worrisome is that MATLAB doesn't appear to take advantage of GPGPU yet. The concept has been around for over half a decade, and I'd have expected the MAtrix LABoratory to jump on the bandwagon quicker than most. It's a game-changer in their core competency, after all.

I haven't looked at MATLAB+GPGPU recently, but back in the olden days before CUDA and OpenCL there were a handful of third-party MATLAB extensions that made use of GPGPU. Nothing official, but still plenty functional in their limited areas. The company's laziness with respect to GPGPU is no surprise (see my other rant in this story's discussion), and the fact that others have put together limited GPU-based extensions has probably further reduced the pressure on them to do anything in that area.

Frankly, equally worrisome is that MATLAB doesn't appear to take advantage of GPGPU yet. The concept has been around for over half a decade, and I'd have expected the MAtrix LABoratory to jump on the bandwagon quicker than most. It's a game-changer in their core competency, after all.

I guess it depends on the exact question you're asking. A Google search for "matlab gpgpu" shows that there are lots of ways to take advantage of GPGPU (NVIDIA's CUDA specifically) from within MATLAB: MATLAB plug-in for CUDA [nvidia.com]

You can always use the C interface (which itself is weird, considering MATLAB's roots in Fortran...)

The reason the C interface is weird is because MATLAB stores multidimensional arrays in column-major order, like Fortran. C, on the other hand, uses row-major order. However, if you work with linear algebra, then you'll appreciate the column-major layout, because it coincides with the order returned by the vec operator (which is used all the time in computational linear algebra, and stacks the columns of a matrix).
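
The index arithmetic behind the two layouts, sketched in Python (nrows/ncols are just illustrative names): column-major storage stacks columns, which is exactly the order the vec operator produces.

```python
def index_row_major(i, j, nrows, ncols):
    """Linear index of element (i, j) when rows are contiguous (C)."""
    return i * ncols + j

def index_col_major(i, j, nrows, ncols):
    """Linear index of element (i, j) when columns are contiguous
    (Fortran / MATLAB) -- the same order vec(A) produces."""
    return i + j * nrows

# A 2x3 matrix: column-major order walks down each column in turn,
# so vec stacks (0,0), (1,0), then (0,1), (1,1), then (0,2), (1,2).
order = sorted(
    [(i, j) for i in range(2) for j in range(3)],
    key=lambda ij: index_col_major(ij[0], ij[1], 2, 3),
)
assert order == [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2)]
```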

but then you'd have to learn C. MATLAB is a tool for physicists and engineers, not computer scientists. They don't necessarily want to take the time to learn C, or they'd already have done so. Some do, of course, but usually what they produce is one-off functions for a specific goal, not entire libraries suitable for sharing.

I work with digital signal processing and use MATLAB almost on a daily basis. The reason DSP engineers use MATLAB is not because they don't know or don't want to know C. In fact, a good DSP engineer must be very competent at writing clear and efficient C code, because that's what he needs to actually implement algorithms on hardware. Modern high performance DSPs are so complex that coding things in assembly is completely out of the question.

The reason MATLAB is so valuable is that it allows one to prototype things extremely fast with minimal performance loss (if you know what you're doing). Of course you won't have a MATLAB environment running on a DSP, so you'll eventually have to write the C code. But since most of my time is spent developing algorithms instead of actually implementing them, MATLAB lets me be much more productive.

It's best to design software for limits that are frankly absurd. Since I coined the phrase "absurd limit theory", let's delve a little bit into it.

A second divided into 2^64 increments gives steps shorter than the differential travel time of differing wavelengths of light over a Planck length: a smaller unit of time than matters to our current understanding. And 2^64 seconds is more time than the entire history of our Universe from beginning to end, even in the most ridiculous models.

By your logic, we wouldn't need any integer type longer than 2 bits. You could certainly design an integer arithmetic scheme on that basis, but I doubt you'd want to.

I think the argument is more that, in practice, 32 bits is a decent sweet spot for the changeover from native ints to arbitrary-precision bigints (whereas 2 bits is not a sweet spot). Are there that many cases where someone needs integers between 32 and 64 bits, but doesn't need to account for the possibility of >64-bit integers, and therefore wouldn't be better served by bigints anyway?

Sweet spot? Could it be that a majority of computers were 32-bit? Ya sure, 64-bit computing has been around a while, but mostly in specialized servers. Now that most new computers are x64 [wikipedia.org] compatible, it would make sense to optimize for 64-bit. The many integers between 32 and 64 bits would be processed much faster, and couldn't the bigint routines take advantage as well? I am not sure how bigints work, other than that they use strings of machine words for storage; I assume they use clever math to break the calculations up into word-sized chunks.
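
That guess is roughly right. A toy Python sketch of the idea: store a big number as a list of fixed-width "limbs" (base 2^16 here, purely for illustration) and do schoolbook addition limb by limb, carrying between words.

```python
BASE = 2**16  # each "limb" is one 16-bit machine word, for illustration

def to_limbs(n):
    """Split a non-negative int into little-endian base-2**16 limbs."""
    limbs = []
    while n:
        limbs.append(n % BASE)
        n //= BASE
    return limbs or [0]

def add_limbs(a, b):
    """Schoolbook addition: add limb pairs, propagating the carry."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % BASE)
        carry = s // BASE
    if carry:
        out.append(carry)
    return out

def from_limbs(limbs):
    return sum(d * BASE**i for i, d in enumerate(limbs))

x, y = 2**100 + 12345, 2**90 + 67890
assert from_limbs(add_limbs(to_limbs(x), to_limbs(y))) == x + y
```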

Numerical computations that are highly optimized for speed do not always allow for variable-sized numbers. The more you assume about a problem, the faster you can make the algorithm that solves it. I'm betting that many of MATLAB's optimized numerical algorithms use knowledge of the underlying data structure itself. It is a trade-off: speed vs. scalability/generality.

As someone who uses math quite a lot in academia, I can tell you that I've never noticed the missing operators. I just don't use 64-bit integers. The reason *I* upgraded to 64-bit Matlab is because I kept running up against memory constraints. 64-bit Matlab can allocate much larger arrays. I am sure there are places where it would be convenient to use really big integers but I find it hard to believe that this is really a big headache for anyone; the main improvement with the 64-bit version is a much bigger memory space.

That is one of the things that pushed me away from Matlab. I kept on running into memory bounds whenever I tried to do things "the Matlab way" (i.e., no loops, vectorize, etc.). It seems that Matlab liked to copy my arrays all the time when I was going in and out of functions. It seems ridiculous to me that I would have to do things "the bad way" in order to get my work done.

MATLAB isn't strongly typed, and by default variables are floating-point (I think 64-bit is the standard if type isn't specified). Makes sense for scientific programming. You need to go out of your way to use integer types in MATLAB, and the only reason I've ever had to do it is when trying to convert MATLAB scripts to C code to run on fixed-point processors. I do think that not supporting 64-bit integer operations is an oversight but I don't think it affects the vast majority of MATLAB users.

...and this is a good thing, when you consider that Matlab demotes float or double values to int when you mix them with ints. I was amazed when I discovered this. It's the only language I know with this evil behaviour.

There aren't many reasons you'd need integers in MATLAB. Some that come to mind: (1) if you need more digits of precision than float or double offer, and you don't need the dynamic range of the exponent; (2) if you need to manipulate the data as bits; (3) if they're more computationally efficient for the task.

Actually, if you are working with very large data sets you might need to index using 64 bits. A friend of mine who knows very little programming was using MATLAB to compute all eigenvalues of a large operator. It all worked fine until he started using his scripts on larger problems. At the time, a couple of years ago, I had to recompile Octave to use 64 bits as the default integer, and the program worked fine with the larger data sets. The 64-bit integer was available as a configure option, so the change was easy.

MATLAB does almost everything with doubles; the int and float formats are really only there for dealing with input/output to files. If I put A = 1 into the command line, it's stored in memory as a double. I use MATLAB most of my working day for signal processing algorithm design, and I don't think I've ever needed the precision of a 64-bit integer. Numbers bigger than a 32-bit integer can handle pop up from time to time, but never with more precision than a double provides.

For some reason, commercial software usually seems to lag worst on the 64-bit transition. Windows and OS X lagged Linux; Java and Flash were the last bits on my Linux systems to go 64-bit; and so on. They act as if 64-bit is a fad, and people will soon come to their senses and revert to 32.

Ya, imagine that: they acted like it actually took work and thought to go through a 15+-year-old architecture and make sure that every layer is 64-bit clean. And they actually thought that some of their developer time should go first to improving areas of their OSes that matter to 95%+ of their audiences...

With the exception of Intel's x86 instruction set hobbling their processors, and Windows only recognizing 3GiB of RAM (then subtract the address space for your video memory), or most

For some reason, commercial software usually seems to lag worst on the 64-bit transition. Windows and OS X lagged Linux; Java and Flash were the last bits on my Linux systems to go 64-bit; and so on. They act as if 64-bit is a fad, and people will soon come to their senses and revert to 32.

Linux is plenty commercial; I think you mean "consumer software". (Or at least "proprietary software", but I don't think correlation implies causation there.)
Windows server platforms got behind 64-bit more or less in step with its adoption in servers (even having ports to Itanium, back when that was a credible contender for the next 64-bit arch). Also, OS X's transition to 64-bit started with the kernel and worked its way up through the UNIX layer, allowing server apps to use more memory long before consumer apps could.

In order for a number to need the 64th bit, it must have a one in that most significant position... and in order to add two such numbers, you need a one in a 65th position... and there's your overflow error.
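
Simulated in Python (which has no fixed-width ints, so the 64-bit wrap is applied with a mask): adding two values with bit 63 set carries into a 65th bit, which fixed-width hardware simply discards.

```python
MASK64 = 2**64 - 1

a = 2**63          # bit 63 set
b = 2**63 + 5      # bit 63 set

true_sum = a + b             # needs 65 bits: 2**64 + 5
assert true_sum.bit_length() == 65

wrapped = true_sum & MASK64  # what a 64-bit register would keep
assert wrapped == 5          # the carry out of bit 63 is lost
```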

I'm not a MATLAB user, just someone who has had to troubleshoot problems with it for a variety of clients.

A while back, more than a few years now, MATLAB on HPUX was limited to about 1GB of memory. Any MATLAB code that needed more memory than that was shit out of luck - even on a 64-bit machine with 64GB of RAM. This was partly due to MATLAB only being available as a 32-bit binary for HPUX and partly due to MATLAB having been compiled and linked in the most naive way possible. After diagnosing the problem with a client's MATLAB code (they had a lab full of $2M computers and couldn't run this software that only needed a couple of GB of data), I wrote a short explanation of the compile and link flags necessary to enable any process to access at least 2GB of RAM with practically no impact and 3GB with only minimal impact. In either case, no code changes necessary whatsoever.

MATLAB's customer support group responded with a categorical denial that it was even possible to do - that HPUX architecturally limited all 32-bit processes to 1GB of addressable memory. While a customer-specific test release would have been the ideal response, I was really only expecting them to open a feature request and get the next release built the right way. But they wouldn't even give my client that (despite them having an expensive support contract) - just a flat out denial of reality instead. The solution for my client was ultimately to rewrite their software in C and link it with the right flags to get access to 3GB of memory.

So, given just how strong their disinterest was in even trying to make their software work for big boys doing scientific computing, I'm not surprised to hear that all these years later they still haven't even bothered to implement native 64-bit math. They are entrenched and there just isn't enough competition to make them lift a finger.

I guess no one got around to porting MATLAB to the Alpha architecture.

ISTR that there was a lot of discussion about 64-bit floating point numbers when Alpha was first announced, because some folks wanted a certain number of bits reserved for the exponent, and others wanted a different number of bits. Happily, all that got straightened out, and I don't think Microsoft was involved in that discussion. It certainly kept the DEC FORTRAN compiler team up at night wondering which "standard" would prevail.

a) The data types in MATLAB have absolutely nothing to do with the processor you are running on, and that is good: all my MATLAB programs run on all my machines.
b) Even the 8087 supported 64-bit integers, so no, I don't know why MATLAB hasn't had this for a long time.
c) Usually the MathWorks people care more about compatibility with the toolboxes. As long as a new feature collides with one of the important toolboxes, they'd rather not introduce it. E.g. if the signal processing toolbox fucks up when being

The summary mentioned Octave as an alternative to MATLAB. There is also Scilab (which has some more C-like features).

Recently I have simply been using Python. Start the IPython (interactive Python) shell and load SciPy (from scipy import *) and you have a very nice calculating environment. The SciPy arrays are a joy to work with compared with what I remember of MATLAB. If you're working with equal-size 1-D arrays, they can be used without modification in normal mathematical expressions, so a lot of my code no longer involves iterating with for loops.
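
A small example of that loop-free style (assuming NumPy is installed; these arrays come from NumPy, which SciPy builds on):

```python
import numpy as np

# Equal-size 1-D arrays participate directly in arithmetic expressions:
# each operation applies elementwise, with no explicit for loop.
t = np.array([0.0, 1.0, 2.0, 3.0])
v0, a = 5.0, -9.8

position = v0 * t + 0.5 * a * t**2   # one expression, whole array

assert position.shape == (4,)
assert position[0] == 0.0
assert abs(position[1] - 0.1) < 1e-12
```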

There is a graphing library (pylab, part of matplotlib) with MATLAB-like syntax. If you start IPython with the -pylab flag, it will show plots the same way MATLAB does. There is also Easyviz, which I believe also uses MATLAB-like syntax but interfaces with a number of standard graphing programs (like Gnuplot).

The sympy package for doing symbolic manipulations is also quite nice, IMHO.

Disclaimer: I only used Matlab casually for my undergraduate math classes.

Just another reason to switch to numeric python [sourceforge.net]. The more I use Matlab the less I find that I like it.

I don't have mod points, so allow me to second that.

The advantage of MATLAB for me was ease of development: it allows me to quickly get simple proof-of-concept code up and running.
If I want runtime speed, I'd use CLAPACK and the GNU Scientific Library; I can't imagine doing any very serious numeric code in anything else (not that my work was very numeric-heavy). With NumPy and SciPy, it is just as easy to do what MATLAB does, in a language that's actually fun to work with.

Sage is a free open-source mathematics software system licensed under the GPL. It combines the power of many existing open-source packages into a common Python-based interface. Mission: Creating a viable free open source alternative to Magma, Maple, Mathematica and Matlab.

MATLAB costs a minimum of about $2000, more like $10K with a decent set of toolboxes, and they charge 20% per year for "maintenance" (though thankfully you don't have to buy a maintenance contract to use the software).
And for all of this, they can't be bothered to support 64-bit integers? I'd be asking very pointed questions about why not, if I had a license.

While this is not implemented natively (or at least not until you use the fixed-point tools to auto-generate some code for a 64-bit machine), you can do a lot of fixed-point math with the Fixed-Point Toolbox.
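
To give a flavour of what fixed-point arithmetic looks like (a Python sketch of a generic Q16.16 format, not MathWorks' actual fi API): values are stored as scaled integers, and a multiply needs a right shift to renormalize.

```python
FRAC_BITS = 16                    # Q16.16: 16 integer bits, 16 fraction bits
ONE = 1 << FRAC_BITS

def to_fixed(x):
    return round(x * ONE)         # real -> scaled integer

def fixed_mul(a, b):
    # The raw product carries 32 fraction bits; shift back down to 16.
    return (a * b) >> FRAC_BITS

def to_float(a):
    return a / ONE

a = to_fixed(1.5)
b = to_fixed(2.25)
assert to_float(fixed_mul(a, b)) == 3.375   # 1.5 * 2.25, exact in Q16.16
```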

1) This is true for 64-bit INTEGERS. The default data type for MATLAB is a 64-bit float, and has been forever.

2) This is a design decision by MATLAB's designers. You don't have to declare or type variables in MATLAB: you just set a = 5 and a new variable "a" is created. You set a(2) = 3, and now a grows into a 1-d array.

It's a handy feature and a core aspect of MATLAB's ease-of-use design, but to do this, you need to have a default data type.

64-bit float is the best choice: you can represent any integer up to 9,007,199,254,740,992 (2^53) without error. For practical purposes, this means MATLAB will work fine for any real-world integer counting task: it only fails if you're interested in cryptography, primes, or other discrete-math tasks, in which case you're not using MATLAB anyway.

Being the guy who implemented proper 64-bit arithmetic support in Octave 3.2, I can maybe share some interesting points.
MATLAB's design choice of double as the default type is both a blessing and a curse. Usually the blessing strikes you first (I always disliked that 1/2 is 0 in C++ and Python; finally Python 3 changed that), but you start to feel the curse when diving deeper, and integer arithmetic (which, I agree, is far less used than floating point) is a perfect example.
Initially, Matlab probably had no integers.
Given that double is the default, MATLAB's creators decided to make the integers "intrusive", in the sense that integers combined with reals result in integers, not reals, contrary to most other languages. The motivation is probably so that you can write things like a + 1 or 2*a without the result silently converting to double. Hence, when I is an integer, D is a double and OP is any operator, I OP D behaves like int (double (I) OP D). Except that things like a + 1 seem to be optimized (something Octave currently lacks, but it shouldn't be hard to do).
int64 is where things start to get messy, because not all 64-bit integers can be exactly represented in double. So, using the simple formula above, 0.5 * i64 could occasionally produce something other than i64 / 2, which is highly undesirable.
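
The discrepancy is easy to reproduce in Python, whose float multiplication behaves like the naive double-based formula (round() standing in for the conversion back to integer):

```python
i64 = 2**63 - 1          # INT64_MAX: not exactly representable as a double

naive = round(0.5 * float(i64))   # what int(double(I) * 0.5) would compute
exact = i64 // 2                  # the correctly rounded integer result

assert naive == 2**62             # double(i64) rounded up to 2**63 first
assert exact == 2**62 - 1
assert naive != exact             # off by one: exactly the messy case
```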
In order to do the "right thing", Octave chooses one of two options: first, if it discovers that "long double" is at least 80 bits wide (so that it can hold any 64-bit integer exactly), it uses that to do the operations. If not, it employs custom code to emulate the operation as if it were performed with enough precision; that code manipulates the mantissa and exponent of the double directly and is much slower.
Although it was kind of fun to implement, it really is a lot of work for little payoff, which may partially explain MathWorks' attitude. Unlike Octave, MathWorks doesn't need to aim for source portability (they just distribute binaries), so maybe they're waiting for proper long double support in all the compilers they use, after which they can take the simple approach. Or maybe they're waiting for some important future design change.
When I implemented this, I was fully aware that it's not a killer feature, yet I thought it might make Octave more interesting to some MATLAB users. So I'm glad someone noticed. :)
In any case, I suppose at some point Matlab will support this as well.

I am developing HornetsEye [demon.co.uk], a Ruby extension for doing computer vision. The problem with supporting various types is that you end up with a lot of possible combinations when doing computations. Say you want to support arrays of 8-, 16-, 32-, and 64-bit integers (signed and unsigned) as well as 32-bit and 64-bit floating point: that's 10 × 10 = 100 possible combinations of types just for element-wise addition of two arrays. If speed is not an issue, however, you can just use Ruby's dynamic typing.

SciPy/NumPy, R, and Octave are all perfectly good alternatives to MATLAB these days for most work. But there are a lot of people who rely on MATLAB-specific toolboxes. I look forward to the day when proprietary math and stats packages take their place in the bitbucket of computing history, but we're not quite there yet.

MATLAB is often very much faster than SciPy/NumPy at the moment, but the latter programs are very much more capable (and, the vast majority of the time, free). My colleagues use MATLAB; I'm perfectly happy producing better results in Python.

Heh, to each his own, I guess. I mostly work in R these days, and while I admit it's quirky, once you get used to its quirks it's quite a useful language. Overall, I'd rather be programming in Python than just about anything else, but in my lines of work (bioinformatics and biostatistics) R provides the best overall combination of features and usability. YMMV, and obviously does in this particular case.

What I find odd about R/S/S-PLUS is that the ones who tended to learn it best were the beginners who had never programmed in another language or had limited exposure. People like me, who have been exposed to many mainstream languages, find it frustrating.

The professor who taught the course where I had to use S+ told me something along the lines: "It is a language designed by statisticians, of course it will behave randomly!"

Both MATLAB and SciPy/NumPy can use the same BLAS/ATLAS backend, so a well-written MATLAB program will be comparable in speed to a well-written Python program. However, Python beats MATLAB on memory usage. MATLAB does have an extensively documented language with a very good help system.

In my personal opinion, MATLAB is too awkward a language for anything besides math. For example, it made things difficult when I needed to do string manipulations to figure out w

"Freetard knockoff?" R overtook S-PLUS several years ago in terms of importance among statisticians. Indeed, it's not just in academia where this freetard knockoff has taken over; I work as a quant at an algorithmic trading hedge fund and we dumped S-PLUS in favour of R four or five years ago.

Sometimes, just sometimes, the open source implementation actually is better.

Don't know what your scientific language of choice is, but I have compared MATLAB programs to FORTRAN programs and the difference in speed was negligible. A properly written MATLAB function can be quite fast.

For a program like Octave, having no GUI is very forgivable. There is really no way to work with such a system outside of prompt commands; even MATLAB is very prompt-based.

What is unforgivable in Octave's case is its graphing capability. Octave uses Gnuplot for drawing, which basically means it is stuck in the 1990s when it comes to making plots. 3D plots are slow, difficult and complicated things to create. Animations are out of the question. 99% of the time, you're better off exporting to PNG (itself a nightmare) and animating from those. 3D data is all but ungraphable on Linux systems anyway, so I suppose Octave is not alone here.

3D data is all but ungraphable on Linux systems anyway, so I suppose Octave is not alone here.

As I recall, MATLAB has a Linux port. As does Maple, Mathematica, et cetera. And Mayavi [wikipedia.org] is an open source program capable of excellent 3D graphics that works with Python, and therefore SciPy.

So what you really mean is that 3D data graphing is inadequate with Octave and gnuplot on any system. 3D data is perfectly graphable in Linux.

That's funny. I am a researcher and work with math and plotting software on a professional basis, and even when I need MATLAB to do the work (e.g. if I have to use nlinfit), I always prefer to export the data to .mat and plot in Octave. Gnuplot's output generally looks better when exported to EPS/PDF.

Gnuplot does not allow GUI editing, and that's a big plus, because I am forced, every time, to write a script. I know that if I don't write it, I will miss it later when I want to change something (it always happens). Also, it is much easier in Octave to specify a font (-F:Palatino, for example) than in MATLAB. Possibly not at the top of your list of priorities, but when I wrote my PhD thesis I wanted everything in the same font, and MATLAB plots require you to edit the EPS source.

3D plots are slow, difficult and complicated things to create.

Curious. I just published an article with several 3D plots (which I usually eschew), and it was not really more difficult to get things done in Octave than in Matlab.

3D data is all but ungraphable on Linux systems anyway

I call bullshit; you never really tried. Have a look at matplotlib [sourceforge.net]. And, that aside, MATLAB is available on Linux too.