i_r_sensitive writes "NetworkWorldFusion has an article on the interaction between multi-core processors and software licensed and charged on a per-processor basis. It will be interesting to see how (or whether) Oracle and others using this pricing model react. Can multi-core processors put the final nail in per-processor licensing?"

If the efforts of other corporations bent on protecting their intellectual property (the RIAA, for example) are any indication, per-processor licensing will simply move to per-core licensing. If the RIAA can force you to pay multiple times for the same song (which you, unfortunately, cannot move between your preferred media), then it stands to reason that software companies bent on collecting money would make you pay multiple times for one processor. On the other hand, these are somewhat different issues: usage of music would be governed by fair use (in theory), while usage of software (at least in terms of per-processor licensing) is governed by the EULA or another contract between the corporation and the customer.

As long as IBM is making mainframes there will be per-processor fees... and mainframes have been around for 40 years, so I see at least another 40. Heck, now they even charge different amounts for a processor depending on what you are going to run on it.

Businesses charge the maximum they can, for maximum total profit: "what the market will bear". Per-processor prices are just a way to negotiate how much money the customer can make from the software, and therefore how much of their revenue is available to pay the software supplier. Just as an employee negotiating their income is negotiating for a share of the employer's revenue to which their work contributes. I'd like to see a software licensing model that treats the software's work as automated labor and negotiates accordingly, like some kind of profit sharing. People don't get paid up front; why should the software company? That allows a timeframe for a "test drive" during which both parties can get benchmarks on the actual value of the software.

Perhaps a compromise will result. Eventually a 2-CPU license could entirely replace the single-CPU license. At that stage licenses could be bundled as 2-CPU, 4-CPU, etc. As multicores become the norm, 1-CPU licenses should naturally phase out.

This would allow companies to keep their per-core licensing scheme. Customers would get the feeling of a deal by getting a multicore license. Perhaps the market would drive the cost of a 2-CPU license down to what a single-CPU license is worth today.

I think it is interesting that Windows running on a 2-CPU machine requires a 2-CPU license, but, say, 5 instances of VMware running on a single CPU, each hosting an instance of Windows, require five licenses (six if the instances of VMware are themselves running on Windows).

Also, what if there were a VMware-like program that simulated an SMP machine? Would running Windows on it require a multiple-CPU license, even if the program emulating the SMP machine was itself running on a single CPU?
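The counting rule described above is simple enough to write down. A minimal sketch, using the per-instance rule exactly as the comment states it (one license per guest OS, plus one if the host also runs Windows); this is illustrative only, not Microsoft's actual terms:

```python
def windows_licenses(vm_guests: int, host_is_windows: bool) -> int:
    """Count required Windows licenses under the per-instance rule
    described above: one per guest instance, plus one for the host
    if it also runs Windows. Illustrative only."""
    return vm_guests + (1 if host_is_windows else 0)

print(windows_licenses(5, False))  # 5 guests on a non-Windows host -> 5
print(windows_licenses(5, True))   # same guests on a Windows host -> 6
```

Note that the CPU count never appears: under this rule, five instances on one CPU cost more than one instance on two CPUs.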

I am sure that newly licensed software will explicitly state whether "processor" means physical chips or cores, but remember: companies exist to make money. By licensing per core instead of per physical chip, they make more money. The software is the same no matter how many chips; only the price varies.

The real issue is how current licenses handle multiple cores per chip. This may wind up in the courts, or licensees may wind up being extorted for extra money they probably do not owe.

Despite being dead, BSD scales well with SMP and runs SMP apps very well, plus it is free. I know what license I will use...

We shouldn't forget that competitive products can also bring down the price. There have been a number of price battles between DB2 and Oracle, for instance, so all we need is a competitor to significantly undercut Oracle on per-processor licensing and have customers switch to a different database platform.

Losing money normally gets a company's attention; it suggests that customers think the licensing is getting too expensive for them to consider Oracle.

I haven't looked into database pricing for a long time (ignoring "free" databases like MySQL), but from what I remember, Oracle was one of the more expensive ones. Is that still the case?

The Altix 350 incorporates the same high-performance shared-memory SGI® NUMAflex(TM) architecture and optimized Linux tools originally implemented in the award winning Altix 3000. It supports up to 16 processors in a single system image, and features the industry leading 6.4GB/second SGI® NUMAlink(TM) interconnect.

Nah, the biggest thing keeping businesses from running Home Edition is the fact that it cannot join a domain. This isn't an issue for small businesses, but neither is the lack of multi-CPU support. BTW, there are basically zero games that take real advantage of a second CPU; the reasons are varied but basically come down to the GPU being the limiting factor, multi-threaded code being harder to write and debug, and, finally, a lack of demand.

Oracle Licensing is like mountain weather... if you don't like it, wait 10 minutes and it'll change.

Seriously, though, Oracle changes their licensing more than any other software company I've ever dealt with.

I won't be surprised to see their licensing change after they get some push-back from their customers.

The other thing they DO have a history of, though, is NOT helping customers out when it comes to a license change. I've seen customers sign a deal on a Monday, only to have new pricing come out on the Tuesday. If they'd waited a single day, their software licensing would have cost around half of what they paid.

I can't remember exactly, but back when I was working as an IBM mainframe software engineer, I had the feeling that IBM and CA, who provided various software for our mainframes, charged for some software based on the MIPS (millions of instructions per second) rating of the virtual machine the software was running on.
Why don't software companies just do the same thing? Establish a performance benchmark and charge based on that. That way you can use single-, dual-, or multi-core processors or multi-CPU machines and not have to worry about all this licensing drama. If your real machine or "virtual machine" is benchmarked at x MIPS, you pay y dollars; who cares what architecture you are running?
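The "x MIPS costs y dollars" idea boils down to a one-line pricing function. A minimal sketch; the $50-per-MIPS rate is invented purely for illustration:

```python
def mips_based_fee(benchmarked_mips: float,
                   dollars_per_mips: float = 50.0) -> float:
    """Charge purely on the measured throughput of the real or
    virtual machine, ignoring how many chips or cores deliver it.
    The rate is a made-up placeholder."""
    return benchmarked_mips * dollars_per_mips

# Two very different boxes that benchmark identically pay identically:
print(mips_based_fee(400))  # e.g. one fast dual-core chip -> 20000.0
print(mips_based_fee(400))  # e.g. four older single-core chips -> 20000.0
```

Under this scheme the whole chips-versus-cores argument disappears: the architecture never enters the formula.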

Both HP and IBM have had dual-core chips for a while now. There are a number of advantages to moving to dual-core processors. The most important is that they help improve performance without as much heat generation as two single-core processors. Another advantage is that the cores can share caches, which has some very distinct advantages in multiprocessor environments: by sharing a cache, processors can check it without interrupting each other. And by improving per-processor performance, server vendors can actually cut software costs on a per-processor basis, as fewer processors are required to perform the same workload.

The real issue for software licensing will come when virtualization becomes more widely used in the RISC and Intel space. How will software vendors charge for two tenths of a processor? This will be the real challenge from a cost perspective, as there will be a number of applications that really only require that much of a processor.

Multi-core, on the other hand, gives multiple independent physical processors that just happen to fit into one socket.

True, but I doubt that a multi-core chip will be on par with a comparable dual-CPU setup; you still need to get the heat away from that single chip. It's very possible you will only get about the same 15-30% speed boost you get from HT.

From what I understand, multi-core designs [ibm.com] put all cores on a single piece of silicon at the center of the package, just like uni-core CPUs.

I actually ran into per-processor licensing with database connector software on Linux. A Xeon shows up in Linux as two processors due to Hyper-Threading. Of course, Hyper-Threading is not as fast as two distinct CPUs either. It threw the salesman for a loop; he had no idea what the license would be. It turned out they were way overpriced anyway, and a FOSS driver worked fine.
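The ambiguity the salesman hit is visible right in /proc/cpuinfo, which lists one "processor" entry per logical CPU. A small sketch that counts logical versus physical processors; the cpuinfo excerpt below is a trimmed, hypothetical sample for a single hyperthreaded Xeon, not output from any particular machine:

```python
# Hypothetical, trimmed /proc/cpuinfo for one Xeon with HT enabled:
SAMPLE_CPUINFO = """\
processor\t: 0
physical id\t: 0
model name\t: Intel(R) Xeon(TM) CPU 2.80GHz

processor\t: 1
physical id\t: 0
model name\t: Intel(R) Xeon(TM) CPU 2.80GHz
"""

def count_cpus(cpuinfo: str) -> tuple[int, int]:
    """Return (logical, physical) CPU counts from cpuinfo text.
    Logical = "processor" entries; physical = distinct "physical id"s."""
    lines = cpuinfo.splitlines()
    logical = sum(1 for line in lines if line.startswith("processor"))
    physical = len({line.split(":")[1].strip()
                    for line in lines if line.startswith("physical id")})
    return logical, physical

print(count_cpus(SAMPLE_CPUINFO))  # (2, 1): two logical CPUs, one chip
```

Whether that box owes one license or two depends entirely on which of those two numbers the contract means by "processor".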

Oracle was licensing based on power units a while back. Any idea if they are still doing that? From what I understand, they basically benchmarked certain machines and priced the software based on the performance of the box rather than the pure number of CPUs. That solves the issue completely. Of course, we use MySQL and Postgres anyway, with a smattering of MS SQL Server (yeah, I know, but it IS a pretty good DB, and some apps need it).

I was recently involved in a conversation about the usefulness of dual-core machines as home machines. The typical home machine is only really giving focus to one CPU-intensive program at a time, max. Intel and AMD are obviously moving in that direction (and it doesn't stop at dual-core), and the reason is a little surprising. According to an In-Stat article published recently, Intel is doing it to overcome leakage current/power. As process technology improves, leakage power has become progressively worse. I do not understand how designing chips with two cores is supposed to help this: even when one core is not in operation, leakage still occurs (that's what makes it leakage).

You may be right about the one CPU-intensive program at a time, but it may be running more than one intensive task at a time. If a program is written with hyperthreading and/or SMP in mind, you can split it into multiple threads. For a game, you could have one thread handle AI and another handle whatever graphics and gameplay work the video card does not. When encoding video, one thread could handle the images and the other the sound. There are lots of times when a home system could use more than one processor. Most systems already have more than one processor; they just tend to be specialized. The GPU on your video board is one. The DSP in a good audio card is another. I really do not like the idea of dumping more load on the CPU; things like onboard audio and winmodems are what I consider to be bad ideas.
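The video/audio split described above can be sketched in a few lines. This is purely structural: the "work" is a stand-in for real codec calls, and note that CPython's GIL limits true parallelism for pure-Python computation, so this illustrates the thread layout rather than an actual SMP speedup:

```python
import threading

results = {}

def encode_frames():
    # Stand-in for image/frame encoding work on one thread.
    results["video"] = sum(i * i for i in range(100_000))

def encode_audio():
    # Stand-in for sound encoding work on a second thread.
    results["audio"] = sum(i for i in range(100_000))

video_thread = threading.Thread(target=encode_frames)
audio_thread = threading.Thread(target=encode_audio)
video_thread.start()
audio_thread.start()
video_thread.join()
audio_thread.join()

print(sorted(results))  # ['audio', 'video'] once both threads finish
```

On a real dual-core or hyperthreaded machine, an encoder written this way (in a language without a global lock, or using native codec libraries) can genuinely run both halves at once.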

The thing you have to realize is that in modern processors, what few execution units exist are starved already. Adding more doesn't really make that much difference. The performance problems come from the caches, and we already build the fastest L1 caches we can for the single processor case.

While your statement "I can get so many FP and integer units on chip; what's the best way I can feed instructions from any number of threads to maximize their usage?" is mostly correct, it really doesn't fully recognize just how hard the processor works to feed instructions into those execution units. A more accurate description would be: "I can get so many transistors on a chip; what's the best way I can maximize the number of instructions executed (amount of work done), by any number of threads, on those transistors?" Currently the best way is to have a few execution units, and a LOT of cache.

Getting back to your original point: in general, a processor is any number of threads that share an L1 cache. Whether that processor shares execution units with another one is really irrelevant, and probably wouldn't offer the performance benefits necessary to make the added complexity worth it.

There are designs for which this wouldn't apply, but they would be "throughput computing" designs with big, slow L1 caches that have *dismal* uniprocessor performance. With poor uniprocessor performance the "work done" per instruction executed starts to go down, so these designs have their own set of problems.