I’ve brought up the story of the crippled Intel processor a few times now. I brought it up in my recent post on the Portal 2 DLC and managed to thread-jack my own topic. So, I scored an own goal with that one. Nice.

If you missed it, here is the setup:

Back in the early days of PC processors, Intel used to use one factory to make all of their processors. Their expensive high-end processors were identical to the crap bargain stuff they sold, right up until the bargain ones rolled off the assembly line and they deliberately destroyed the math co-processor. It turns out it was cheaper to just make the good ones than to have two different factories. Then they would sell the crippled one to regular folks and the normal one to rich people. Some people were outraged at this. “Why did they ruin this processor!? It would have cost them NOTHING to give me the fully functional version!”

Some people complain that this is an incredibly stupid system, or that it is unfair to the consumer. When I found out about this in my twenties (back in the 90’s) it drove me crazy. I suddenly knew that the CPU in my bargain computer had, for a fleeting moment, been a deluxe powerhouse until some jackass ruined it. On purpose. As an engineer, I dislike this sort of destruction.

But it’s actually not stupid or anti-consumer. It’s just really, really counter-intuitive. It’s a wonderful system that fueled the manufacturing explosion that gave us all of these amazingly cheap computers. Here is how it works:

Warning: These numbers are entirely fabricated for the purposes of this demonstration. Please focus on the system and do not nit-pick the numbers. Thank you.

It takes a lot of money to design a chip and set up a production pipeline around it. Let’s say ten million bucks. That pays the eggheads to design it, someone to figure out how to construct it, and the money required to build the facilities to make the dang things. On top of that ten million bucks, each chip costs about $35 in parts and labor.

So if we only make one processor, we need to sell it for $10,000,035 if we want to break even. That one processor needs to recoup the entire set-up fee for the whole facility. But if we sell a million of them, then we can spread that ten million dollar set-up fee over a million units. Now we only need to charge $45 for each one to break even. I’m sure most of you are familiar with this idea. Economies of Scale are fairly common and apply to everything from cars to hamburgers.
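The break-even arithmetic above can be written out as a quick sanity check. This is just the post's made-up numbers in code; the class and method names are invented for the illustration.

```java
public class BreakEven {
    // Break-even price per chip: parts-and-labor cost plus the
    // one-time setup cost spread over every unit you expect to sell.
    static double breakEvenPrice(double setupCost, double partsAndLabor, long unitsSold) {
        return partsAndLabor + setupCost / unitsSold;
    }

    public static void main(String[] args) {
        // One lonely processor has to recoup the whole $10M setup fee.
        System.out.println(breakEvenPrice(10_000_000, 35, 1));         // $10,000,035
        // Spread over a million units, it drops to $45 apiece.
        System.out.println(breakEvenPrice(10_000_000, 35, 1_000_000)); // $45
    }
}
```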

(This gets more tricky when you realize that the producer doesn’t actually know how many they will sell. If they charge $45 each for processors and end up only selling half a million, then they are well and truly screwed and will go out of business. So, they need to speculate a bit. What is the minimum number we can expect to sell? Half a million? A quarter million? We need to set our price so that we can survive even if sales are lower than anticipated. It’s all speculation. If you over-estimate sales you’ll go out of business. If you under-estimate you’ll be charging too much and someone else will undercut you, which could also put you out of business. Have fun!)

Now, we’re going to assume you have perfect knowledge of the market. You know that there are a million people out there who will pay $50 for any damn processor. Let’s call these people Bargain Users. They just need something to make the computer go. You can sell your $45 chip to these people and make $5 apiece. That’s five million bucks in profit. You also know that there are about a hundred thousand people out there who will pay $100 for a processor, but these people want the best. Let’s call them Power Users. They’re simulation or graphics professionals and they want their computer to go as fast as possible. Time is money for them, and so they are willing to spend a lot of money to save themselves some time. If you sell them a processor for $100, you will make $55 each. That’s five and a half million dollars. There is ten and a half million bucks to be made here, but only if you can sell to both groups.
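Putting the two segments into code makes the ten-and-a-half-million ceiling explicit. Again, these are the post's fabricated numbers, and the names are mine, not anything from Intel's books.

```java
public class TwoMarkets {
    // Profit from one market segment: number of buyers times margin per chip.
    static long segmentProfit(long buyers, long price, long costPerChip) {
        return buyers * (price - costPerChip);
    }

    public static void main(String[] args) {
        long bargain = segmentProfit(1_000_000, 50, 45);  // $5.0M from Bargain Users
        long power   = segmentProfit(100_000, 100, 45);   // $5.5M from Power Users
        System.out.println(bargain + power);              // 10500000
    }
}
```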

But!

If you sell all processors for $100, those million bargain users won’t buy. That’s too much for them. On the other hand, those hundred thousand Power Users aren’t just going to give you $100 out of the goodness of their hearts. If you try to sell the same processor to both groups, the Power Users will buy the $45 version because they’re not idiots.

So, we need to make a less-powerful processor to sell to the bargain users. Let’s talk about some solutions:

1. Design a slow processor to sell to the bargain market.

Now, you could just make a less powerful processor, but the cost of re-configuring the production facility makes this kind of expensive. If you were selling clothing, you could use cheaper raw materials for the budget version. Cheap jackets are inexpensive polyester, and quality ones are a cotton/wool blend. But we’re making processors, which are all made from metal, silicon, and plastic. A processor made in 1990 has basically the same ingredients as one made in 2010. Likewise, the labor costs are the same. It costs the same to hire a guy to run the 1990 assembly line as it does to hire a guy to run the 2010 assembly line. (Ignoring inflation, obviously.)

The last thing you want to do is re-configure your assembly line. That gets expensive. You hammer out good processors. Then you re-configure the whole thing at the cost of thousands of dollars to make the weaker ones.

It’s like books. The same printing press can hammer out copies of Lord of the Rings or a similar-sized compilation of Smurfs / Trek crossover fanfiction. The cost isn’t in what you’re printing, but in how many you can print at once. Changing print jobs is the expense you want to avoid.

2. Design the factory to be easily switchable from one state to another.

This would increase your setup costs, which is your single largest and most dangerous expense. It makes the system more complex to run, which can lead to expensive Human Errors.

3. Choose between the two markets.

You’ve only got one factory, so you can only make one chip, which means you have to choose between the bargain users and the power users. It’s best to go for the power users in this case. They’re more profitable, since your margin with them is much better. They’re also less risky, because they’re more reliable as customers.

Obviously this leaves bargain users out in the cold and keeps computers as a niche product only available to the elite.

4. Turn some of the processors into an inferior product after production.

Take those powerful CPUs rolling off the line, break their math co-processors, and sell them to bargain users for $50. It turns out that this is the solution that gives the cheapest hardware to the greatest number of people. It is the most efficient way of serving both markets.

You can think of this as an alternate way of going about solution #1. You are producing lesser chips. This is simply the most cost-effective way of going about it, since it lets you use the existing facilities and production pipeline. Rather than design a less-powerful chip and configure the production line to make them, just take the good ones and convert them.

From the archives:

I haven’t noticed you touch on another factor when it comes to these processor manufacturer decisions. This factor is related to the fact that these manufacturing processes aren’t 100% completely effective. There are always processors with manufacturing defects, since the manufacturing involves absolutely mindbogglingly complex patterns being etched into tiny, tiny, tiny spaces with what can best be described as magic.

Instead of throwing out the defective units altogether, they simply cut out the defective bits and market the still-working bits under a different name and a cheaper price. For example, what ATI (AMD now) did with their various sub-tiers of graphics cards. Their HD5770 is a full-fledged mid-range GPU based on their Juniper (and Cypress) GPU design. Their HD5750 is simply a 5770 that didn’t pass full inspections on one of its SIMD cores. Instead of throwing away that defective 5770 completely, they separate the defective GPU bits (one SIMD core, for the 5750’s case), flash it with slightly different firmware, and market it as a cheaper video card. In this way, they minimize their total expenditures, fill out price ranges, and make victory out of failure.

I personally don’t think it’s a stupid way to do things, or a sneaky way to do things, so long as the intentionally-cut designs are sold at a cheaper price, are marketed so that consumers know it’s an intentionally gimped device, and come with the same quality guarantees as the higher-range cards.

This practice is called binning, and it’s very common in the hardware industry. It’s the first thing I thought of when I read the article. Another example (that’s pretty simple) is resistors. Say your factory makes 10kOhm resistors. Some of them will be within 1% of the 10k target. Some will be in the 5-10% off range, some will be more than 10% off, and some will be better sold as 15k or 5k resistors. Well, hopefully not for that last bit. Part of the manufacturing process is determining how far from the target resistance a given resistor is, and sticking it on the pile of similarly-off resistors. Then they can paint little golden bands around the ones within 1% and little silver bands around the remaining ones that are less than 5% off, and little copper bands around the remaining ones that are less than 10% off.
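The sorting step the comment describes is easy to sketch in code. Note that the band colors below follow the comment's own labelling (gold = 1%, silver = 5%, copper = 10%), which is illustrative rather than the real-world resistor color code, and the method names are invented.

```java
public class ResistorBinning {
    // Bin a measured resistor by how far it landed from the target value.
    static String bin(double measuredOhms, double targetOhms) {
        double error = Math.abs(measuredOhms - targetOhms) / targetOhms;
        if (error <= 0.01) return "gold";    // within 1% of target
        if (error <= 0.05) return "silver";  // within 5%
        if (error <= 0.10) return "copper";  // within 10%
        return "reject";                     // or re-label as a different nominal value
    }

    public static void main(String[] args) {
        System.out.println(bin(10_050, 10_000)); // gold
        System.out.println(bin(10_400, 10_000)); // silver
        System.out.println(bin(11_500, 10_000)); // reject
    }
}
```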

That reduces waste, which reduces costs, which increases profits. Profit is good!

Yeah, but fifteen years ago or so, most of the 486SX chips (== a 486DX without a math coprocessor, so you had to do floating-point in full software emulation mode) weren’t just DX chips whose coprocessors weren’t good enough. They were DX chips whose working coprocessors had been physically disabled.

The architecture of the 386 math coprocessors was pretty hilarious as well, since *no* 386 (or earlier) had one of them onboard. They all had an add-on chip that would monitor the instruction stream, watching for the x86 prefix for floating-point instructions, and do something I never understood, to tell the main CPU that they were handling them.

(And then the “486SX math coprocessor” chips came out, which fit into the other socket on the motherboard, disabled the primary CPU entirely (!!!), and operated as per normal, doing the full CPU instruction set, since they were full 486DX chips. But this setup wasn’t all that weird, if you didn’t know that the extra chip was doing far more than just floating-point instruction interception. Wheeee. Of course, they never did that particular kind of “upgrade” again that I know of, either. And there was no SX/DX difference in the 586/Pentium line. *And* it was probably far, far less expensive for Intel to sell a 486DX as a “math coprocessor” upgrade, than to build another entire chip.)

(Mind you, at some point the 486 DX2, DX4, etc. chips started showing up as well, which were double- or quadruple-clocked. Those were definitely bin-sorted.)

Price discrimination makes the world go round. It makes all the buyers better off; even the power users get a lower price, since the manufacturer gets to spread the cost over the much larger pool of bargain buyers.

From the manufacturer’s perspective, it is identical to price discrimination – make the same product, sell at different prices.

The only difference is that, instead of trying to get the rich people to just pay more, they penalize people for paying less. The concept of “make one product, charge people different prices for it” is price discrimination.

Funny you mention that. One of the recent innovations in the motherboard market has been the ability to re-enable disabled cores on dual- and triple-core processors. While it doesn’t always work or provide 100% performance and/or stability, it’s actually become a bit of a trend in the budget market to do basically what you’re describing. I have also heard of people flashing the firmware of higher-end video cards onto lower-end ones and having the full clock speed etc. unlocked, though I think this is significantly less successful (for reasons described in other comments here).

If you dig up the sites online that talk about how to do this, it isn’t always successful either. Sometimes it’s disabled for the purposes discussed here, and sometimes it’s disabled because it really had a defect.

Back when I was looking at GPUs, there was an entire discussion about which manufacturing batch on one particular card was actually good, and which wasn’t. I gave up on that and just bought a different (better) card, heh.

Remember the first (or was it the second?) set of Celeron processors? They were actual Pentium II processors clocked down from 450MHz to 250 or something, with some internal circuits disabled. Intel sold A LOT more of those after people realized they could be turned back into Pentium IIs again.
Similar to that, most of the first AMD Athlons could run at 900 MHz or more, since the yield was extremely good, but AMD was advertising a clock range from 500 to 1000 MHz (I think), so they just clocked them down and sold them cheaper. They also sold a lot more after that got out :)

But apart from those examples, I think the prices were mostly adjusted to match the yield of the process (not every individual processor has the same quality, and some are unusable, so a clock speed is assigned after measuring the response to some diagnostic signals), and beyond a certain safety margin a chip probably won’t take much higher speeds. These days these differences are rather small, so you no longer see a two-fold span in clock speed within a single type of processor. And AMD’s three-core pieces are actually four-core ones where one of the cores didn’t make it through production alive, usually not on purpose.
That way of doing it somehow seems nicer to me, although you’re of course right about this being a reasonable economic choice. As long as the competition is fierce, though (i.e. not in the nineties when Intel was virtually alone on the market), I would expect companies are less likely to mutilate their good chips because with today’s structure sizes the yield rates are low enough as it is. Lots of processors go directly from the factory to the dump because something went wrong with them.

What they do these days, though, is marketing lower-clocked processors as “low-energy” and charge more for them than for those with higher clock speeds. I seriously do not like that, no matter how economically useful it might be for the manufacturer.

No more than overclocking a chip. It’s effectively the same thing; the only difference is that the tools for OCing are better, because changing clocking is somewhat more universal and easier to work with than re-enabling features.

I think it depends on whether the manufacturer is selling you the hardware or the technology. If they are selling the hardware, then no, it wouldn’t be stealing. It would be no different from you buying a car and then modding the engine to run better. If they are selling the technology, then yes, it’s stealing. You paid for a certain chip. It doesn’t matter that you could mod it to do more; that’s not what you agreed to pay for. It would be like going to a movie, and then walking into another show once yours is over. They’re playing it anyway, right? It doesn’t cost them any more to show the movie to one more person. But that’s not the point. Regardless of how many movies are showing, you only paid to watch one. Likewise, regardless of the configuration, you only paid for a downgraded chip.

I bet if you asked them, the manufacturers would say they are selling the technology. And since it’s them doing the selling, they get to decide.

Back In The Day, before we hit the gigahertz limit, using more than one processor in a computer was a weird thing that only academics (for simulations) and businesses (for servers) wanted to do. Since “academics” and “large businesses” are synonymous with “people who have lots of money”, the computer industry evolved to relieve their burden. Commodity versions of Windows, as in every version before Windows NT, would only use one processor in a multiprocessor machine. XP Home would use one processor, but XP Professional would use two, despite being (more or less) the exact same software, etc.

There was also the Athlon XP/MP series of processors. They used the same silicon, but MP processors connected two of the pins together on the chip, which told the motherboard they were a MP processor, and, incidentally, were twice as expensive. Adventurous persons could fool server motherboards into thinking that an ordinary XP processor was a MP version by shorting those two pins together with some very fine wire, tweezers, and a steady hand. These persons also had to be unafraid of using “short together the pins” in the same context as “server motherboard and two brand new CPUs”.

I’ve never taken issue with this sort of thing and don’t see why I need to, because generally the pricing of computer components is already quite fair. Most manufacturers sell as low as they possibly can without going out of business – save for the price fixing that occasionally goes on, computers are actually very affordable, and companies like Intel and NVIDIA still aren’t so big and bloated that they’ve forgotten one major screw-up can kill them – besides, they’ve got smaller companies like AMD to constantly remind them where they could be if they put out a bad chip or make a mistake. Intel recently lost quite a bit of money by putting out a flawed motherboard chipset – the recall on that was definitely not good for their bottom line, and one can imagine how much bigger it would have been if it had been, say, their next big CPU line instead of a motherboard.

One also has to remember that it’s not quite as simple as hardware components being destroyed. In the graphics card market especially, manufacturing is never so good that 100% of the chips produced come out fully working, so they try their absolute best to get as close as possible, then set a reasonable spec that they can sell the GPUs at. But what about all those failures, the ones that only run at 80% of the speed, or have a manufacturing defect that results in missing features? You can test this by overclocking – even high-end chips have vastly different thresholds. The inevitability is that the lower-end and midrange GPU markets are usually built not out of deliberately destroyed chips, but out of high-end chips which never made it. Both occur, of course, but I imagine a lot of the lower-end chips Intel puts out are also those which failed during manufacturing.

Computer hardware and software is one of the few reasonably free markets left in the U.S.–and look at how much the price has come down. I’m just gobsmacked by how much more my computer can do now than it could 12 years ago when I got my first one, and THAT computer made our old family 386 bought 11 years before look like a dinosaur.

Show me another field with advancement like that. And yet people still say that these “shady” business practices need to be stopped. What shady business practices? Selling a product that people want to buy at a price they’re willing to pay? Radically improving every aspect of our lives with the stuff these machines can do, and improving it more and more every year?

I’m not sure the software market still exists as much of a free market. Microsoft’s market penetration is such that they can put out a deeply and obviously flawed product and still return a profit on the investment for no other reason than that they’re Microsoft.

Everyone that bought Windows Vista should have gotten Windows 7 for free, and, in a freer market, there would have been the market forces in place to ensure this.

That is as flawed an argument now as it was when 7 was first announced. The “flaws” in Vista were minor; most were actually MS giving consumers what they had been asking for (more security) and consumers complaining about change. Add in the five-year development cycle, hardware manufacturers who rested on their laurels instead of working on drivers after Vista was announced, the mess MS made with the Vista Capable/Compatible logo certification, and overzealous bloggers creating an echo chamber, and the so-called Vista fiasco turns out to be blown out of proportion.

Vista performed satisfactorily for me from release to the time I upgraded to 7.

In the “freer” market you envision nothing would have changed. People who bought Vista would still have had to buy 7 if they wanted it and rightfully so. A free market does not mean that you get free upgrades if you aren’t satisfied with a product, it means that you are free to take your business elsewhere if you are not satisfied. It cost MS money to employ the developers who built 7 and they deserve to make that money back by selling it. If they want to be a charity and give the product of their investment away for free then they can do that as well.

Intel did not upgrade to Windows Vista specifically because Vista would have broken vital programs Intel used.

Don’t use your experience as a low-end user (and all home users are) to justify the OS.

In a freer market, Microsoft might have charged, but it would have been a bad business decision. Had they actually had viable competition in the market, the dissatisfaction of their customers – especially corporate customers – would have been well worth the cost of the “free” upgrade.

This mentioning of “freer” markets confuses me. Isn’t the ultimate end of a free market a near-monopoly? That’s the goal of every company – turn complementary products into commodities, and dominate your industry.

You still really don’t have any idea what the “Free Market” is. Intel not upgrading because Vista would have “broken” Intel’s apps, although interesting, is completely without value as a debate point.

I am not a “low-end user” so your charms won’t work on me…my company is still running XP for the same reason as Intel…but that sir isn’t MS fault in the least and not one that they are responsible for correcting with “Free Upgrades”. That is the 3rd party developers fault. I have at least 10 companies that we use software from that are dragging their feet in upgrading their software to work on MS new OS.

I suggest, sir, that you check out the Ludwig von Mises Institute for a lesson in the “Free Market”. And in the future leave your obvious disdain for MS at the door.

I was highly amused recently at the rate of hard drive space increase. We just built a file server with 77TB out of COTS parts. We figured out it was only about 4-5 years ago that you wouldn’t have been able to easily buy a drive large enough to hold our array’s inode table.

In a truly free market, the system Shamus described wouldn’t exist- somebody would copy Intel’s design, sell the better chip for $50, and capture both markets. The only reason this doesn’t happen is that Intel’s chip design is patented, and patents are a form of regulation.

Actually, it was and is possible to legally copy Intel’s design and make your own chips. Reverse-engineering was perfectly legal in this time period (with some minor technicalities that were rather amusing*) and it was done from time to time. The problem is, by the time you reverse-engineered Intel’s design, they would be more than halfway to releasing the new one. By the time you got your clone chip into production, it would be nearly obsolete.

The problem is that chips are so dang complex, if you have a team capable of reverse-engineering them, you also have a team capable of just designing your own. (Or you’re close.) I THINK this is what was going on with AMD in the 90’s, but my memory fails me on this point. I remember they were always a bit behind Intel in performance but beat them on price.

* The trick with reverse engineering is that it’s not legal to simply *copy* something wholesale. But it IS legal to figure out how something works and design a different machine to do the same thing. So some nerd sits down with a chip and documents exactly what it does in a mechanical sense. That nerd writes a big ol’ document that specifies exactly how the device behaves. Then they send the nerd away and bring in new ones. The new nerds look at the specs and design a machine that will behave in exactly the same way. Internally, their design might be completely different, but externally it behaves exactly the same. This was done by all the clone manufacturers, who reverse-engineered the BIOS for IBM PCs. Their work gave us the explosion of cheap computers in the 80’s that drove IBM out of the PC market.

This is called Clean Room Design or “Chinese wall reverse engineering”. It’s been challenged in the courts and survived multiple times. Compaq did it to IBM’s BIOS back in the day. It’s legally allowed and the reason we even have PCs in the first place.

“Most manufacturers sell as low as they possibly can without going out of business”.
I think for graphics chips that’s probably true. For PC processors: there was a time when AMD was actually making the better chips, by a large margin. Before and after that, Intel could (and still can) command much higher prices, just because they have the brand name. It’s even part of the game: if it’s more expensive, it must be better! At least since 2000 they need to watch out a bit for what they’re doing. Before the Athlon they didn’t actually have competition, at least on the PC market.

Interesting situation. I must say I prefer that they chose that method over the others mentioned, but people complaining about it is odd: you’re paying for what they hand over at purchase, not for what they made and then destroyed.

I can’t speak for others, but for me it’s a case of “it just bugs me”. It’s like if an electronics store sells headsets for 20 USD and headphones for 10 USD. And when you go to buy the headphones, the clerk takes a headset, breaks the microphone off, and then sells it to you for 10 USD. Sure, you’re getting what was promised (a device that outputs audio), but it just feels wrong.

Another facet to this is that Intel needs to be smart enough to design a component that the rest of the processor can easily run without, yet makes it twice as good (in this example, it’s worth twice as much money, so it should be roughly twice as good).

I wonder how it was originally designed. When you’re concerned with speed, things typically get very complicated and interwoven, yet this processor had enough modularization that they could “break” a component and still have a functioning processor.

This raises other questions: Would the chip have been better if they hadn’t modularized the math co-processor? Did they spend additional up-front time (and therefore money) developing a product that they could produce and then cripple post-production? If so, how much cheaper was this solution than the others?

Yes, processors pretty much HAVE to be designed modularly. Otherwise, they’d be far too complex to allocate the design to dozens of different teams. There is no one guru who knows how everything works.

In the case of the math co-processor, no, it doesn’t take special design. The math co-processor is a Floating Point Unit (FPU); an FPU can process only floating-point numbers and is useless for integer and logic ops, and the ALU (arithmetic logic unit) can function entirely without the FPU.

This is my guess of how it works. I’ve taken a few EE classes in the subject, so I think I have a decent grasp on it.

So, say you have a chip with three modules: one that does certain functions really well on certain data values (like the math co-processor), a control module, and a generalized arithmetic module. The control module gets a signal from each of the two math modules saying whether it’s ready to process. When you give the control module specific data values, it checks which modules it can send data to. When you rip out the specialized module, it always comes back as not ready, so the data always flows through the general math module and doesn’t gain any of the extra speed from the specialized one.

First of all, when we’re talking about the math co-processor we’re talking about the FPU, so if you don’t have the math co-proc then you can’t run anything that needs FP ops. So part of how it’s prevented is simply not running programs that you can’t run.

As for the question of how the CPU knows this chip no longer exists: basically you have a boolean implemented in hardware. When the math co-processor gets destroyed, you simply tie the value to ground, literally connect it to the ground plane (an oversimplification, but good enough to explain the point). When you look at the opcode (the opcode tells the CPU how to interpret the rest of the instruction: 2 registers, 1 register and an immediate, a jump instruction, etc.), you check the FP flag of the opcode (assuming a fixed-length instruction format for ease of explanation), and if the math co-processor value is low, you send an interrupt which halts execution.

In the case of other features where it isn’t necessarily a “if you don’t have feature X you can’t run this program,” scerro is pretty much dead on, if the instruction has some special pathway for it (like MMX) then you check if that exists during instruction decode.

To use your example, the CPU basically just has:

public static void main(String[] args) {
    if (doStuffExists) {  // hardware flag, tied low when the co-processor is destroyed
        doStuff();
    }
}

Sure you can run stuff that needs FP ops. You just have to have something *else* (which, back in the days of 16-bit, DOS, and win 3.x where the 486 ruled, was built into every program) that could do floating point calculations. Which was, since it was built on integer operations with some pre- and post-operation scaling and lots and lots of crazy to handle rounding and precision, much, much slower.

(IEEE floating-point has either 32-bit or 64-bit numbers. (Or 80-bit, in the case of one of the x87 extensions.) Doing any kind of math on those on a 16-bit instruction set like most programs used in the days of the 486, is going to be several instructions at best, and hundreds at worst.)
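A crude illustration of that scaled-integer approach: represent each number as an integer times a fixed power of two, and re-scale after multiplying. Real x87-era emulators handled sign, exponent, rounding, and precision, none of which appears here; this is only a sketch of the "pre- and post-operation scaling" idea, with invented names.

```java
public class FixedPointSketch {
    static final int FRACTION_BITS = 8;           // 8 fractional bits, chosen arbitrarily
    static final int SCALE = 1 << FRACTION_BITS;  // 256

    static int toFixed(double x)   { return (int) Math.round(x * SCALE); }
    static double fromFixed(int f) { return (double) f / SCALE; }

    // Multiplying two scaled values doubles the fractional bits,
    // so shift right afterwards to re-normalize.
    static int mul(int a, int b) { return (a * b) >> FRACTION_BITS; }

    public static void main(String[] args) {
        int a = toFixed(1.5);   // 384
        int b = toFixed(2.25);  // 576
        System.out.println(fromFixed(mul(a, b))); // 3.375
    }
}
```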

Real 32-bit OSes might have provided a machine-wide floating-point emulator, but I don’t think so. (Not sure why not. Maybe the coprocessors — or 586/Pentium CPUs — were widespread enough by the time they got big enough that it wasn’t worth spending code size on them.)

It doesn’t need to be twice as good. Those who need the power will gladly pay twice the price for less than that.

Server processors command _much_ higher prices although they’re essentially identical to consumer processors, with just a little better quality testing. And then plus 50% or more if you want to be able to use it on a two-way board. Again plus 50% for four-way and so on. And it still makes sense to buy them because it’s just cheaper than buying two motherboards, connect them with fastfastfast interconnect and still not be able to share the RAM between processors.

Same with speed. If you’re building a computing cluster you will pay a lot of money to get the same computational power out of fewer processors, because then you’ll need fewer server racks, less power, fewer network connections (cluster interconnects are horribly expensive!), less RAM (server RAM is horribly expensive!) and so on… Similar for processors in a professional environment. Time is money. If your highly paid engineer/designer/whatnot waits for an hour in a year because the computer is slow, or gets in a bad mood because it’s not going fluently and then has to do something to make it go faster, that can quickly add up to a lot.

Same with reliability. If you have to spend an hour at home because something on your computer doesn’t work, well, so be it; at least it was a bargain. But at work: you have an employee who can’t work but must be paid, in addition to someone else to fix the damn thing who must be paid, too. There’s lots of money spent to avoid that. Not saying that all of that money will always actually go to achieving the goal; I think often it’s just demanded “because we can”.

Basically, if you want to hire good developers, you should never skimp on hardware. Software engineers get paid more per month than you can feasibly spend on a PC. And believe me, it's just not worth saving $200 on a cheap CPU if the dev then has to wait an extra 5 minutes per day for his code to compile.

Yes but a large majority of end-users wouldn’t notice the difference. Does the typical user notice the difference between 2GB and 2.5GB?

Intel is not really hurting anyone by giving them inferior chips while still making money. Intel has also been very good at re-investing in the company. They take their profits and research better chips to bring better technology the next time around. Then the "bad" chips in 5 years are as good as the "best" chips now. If they didn't take this money, they wouldn't have the resources to improve computing at this rate.

Actually, if you run the numbers in my example, you'll see they would go out of business. They made $5 million from bargain users and $5.5 million from power users. If everyone pays $50, then they only make five and a half million. It cost $10 million to set up the factory.

No, your $5 million profit from bargain users is after you subtracted the $10 million to build the factory: $35 per processor for parts and labour, plus $10 per processor ($10 million spread over 1 million chips) for the factory, leaves $5 per processor profit.

You're right there, but given that the numbers were completely made up, it doesn't really matter to the argument. Just say that the manufacturing cost is $45 and the maths works out exactly as Shamus put it: charging $50 for the good chip gives you $5.5M from the bargain users, plus $0.5M from the power users, for a total margin of $6M against the $10M start-up costs.
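That arithmetic can be sketched in a few lines of Python. All the figures are the thread's made-up numbers (roughly 1.1M bargain users, 100k power users, $45 per chip to make, $10M for the factory); the $100 power-user price is my own assumption for illustration.

```python
# All figures are the thread's made-up numbers, not real Intel economics.
FACTORY_COST = 10_000_000   # one-time cost to design the chip and build the fab
CHIP_COST = 45              # parts and labour per chip
BARGAIN_USERS = 1_100_000
POWER_USERS = 100_000

def profit(bargain_price, power_price):
    """Total margin over all chips sold, minus the up-front factory cost."""
    margin = ((bargain_price - CHIP_COST) * BARGAIN_USERS
              + (power_price - CHIP_COST) * POWER_USERS)
    return margin - FACTORY_COST

# Segmented pricing: cripple the bargain chips, charge power users a premium.
print(profit(50, 100))   # 5.5M + 5.5M - 10M = 1,000,000 -- the factory pays for itself

# Flat pricing: everyone pays $50 for the full chip.
print(profit(50, 50))    # 5.5M + 0.5M - 10M = -4,000,000 -- out of business
```

With a single price, the whole $10M expansion burden has to be recovered from that one thin margin, which is exactly Shamus's point.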

I was forgetting how I’d set that up. The ten million they need to make is for the NEXT factory, for the NEXT gen stuff. It’s not like one factory carried Intel from 1990 to 2010. You need to update your facilities and expand capacity, or you stagnate.

The point is, $50 from everyone would move some of that expansion burden from the power users to the bargain ones. Either in slower progress or higher prices. Or both.

Even if we didn't go out of business, we wouldn't have the capital needed to make the next, better processor. So either we'd all stagnate and stop at this level (at which point no one would buy NEW processors unless theirs broke, and we'd go out of business) or, more believably, someone else would come along WITH the capital, make a more powerful processor, and PUT us out of business. They'd then follow your business model and share our fate.

See, in the end, you (as the consumer) WANT the company to turn a profit so they can use that money to make more goods for us. This way you have better stuff later.

Edit: Should really learn to read EVERY post in a chain before replying… Carry on…

Yes, and your grocery store could offer everything for a penny. But they don’t, because then they wouldn’t make back their investment in what they paid to get that food in the first place.

It costs a certain amount to make these processors. That’s a given. They could produce less, go after the power users and raise the price somewhat to compensate for making less units. Or, they could offer you a deal: The crippled processor for half the cost.

As a thought experiment: Let’s say the production line has an issue where a proportion of the processors come out with this defect. They could junk these processors, but instead they sell it to bargain users for half price. Is that the same, better or worse?
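That thought experiment is easy to put in numbers. This quick sketch uses figures I've invented for illustration: 100 chips per batch, a 20% defect rate in the co-processor, $100 for a good chip, $50 for a crippled one, $35 to make either.

```python
def batch_profit(chips, defect_rate, good_price, bargain_price, unit_cost):
    """Profit per batch if flawed chips are junked vs. sold as bargain parts."""
    good = round(chips * (1 - defect_rate))
    flawed = chips - good
    cost = chips * unit_cost                  # every chip costs the same to make
    junked = good * good_price - cost         # flawed chips go straight in the bin
    binned = junked + flawed * bargain_price  # flawed chips sold at half price
    return junked, binned

# 100 chips, 20% come out with a bad co-processor:
print(batch_profit(100, 0.2, 100, 50, 35))   # (4500, 5500) -- binning wins
```

Any price above zero for the flawed chips is pure upside, since their manufacturing cost was already sunk either way.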

The general idea is that either you use price differentiation like this and charge significantly more for the "premium" model, or you don't, and charge everyone just that little bit extra to make up for it.

Excellent piece, Shamus! Rather than attempt to explain this to people myself, I can now link to this article and set them straight. And as RPharazon mentioned, manufacturing yields are very important here.

I know you were seeking to avoid too much complexity, but it should be noted that selling "disabled", lower-end processors gives the company a way to deal with chips that come out flawed but still functional, and that may, for some reason, not test into that high-end category. Rather than throwing away any chips that have a math co-processor flaw or can't hit the target speed, they can sell them cheaper to people who want cheaper chips. It reduces production waste.

So in other words, not only is this process more profitable, but also more green.

That is very impressive for a model. Can you imagine if they kept the same production process but had to toss out the faulty chips? We would've completely mined out all the silicon and other fine metals that go into them (okay, maybe not entirely, but much closer than under the current model).

It should be noted that this only works because they can produce beyond the 100k chips the normal market will absorb, and because they can still make a profit by selling the lower chip. If they could only make 100k chips (or more specifically, n normal chips for the n people who want to buy them), or if it cost more to manufacture the inferior good than it would sell for, then they wouldn't bother making the inferior good at all.

A couple of other things: the normal chip still has to be a good value, and it’s not just about what people are willing to pay for it, either. (That is, some of those 100k Power Users will gladly pay more for the chip; it’s a balancing act of getting the most users at the highest price.)

Generally it gets cheaper per unit to make more of a product, since as mentioned above you split the overhead costs over a larger number of units (assuming you sell each unit for more than you paid to make it, of course). The only reason to limit production is either a lack of resources (a whole other problem entirely) or to avoid flooding the market with more product than you expect to sell. If you plan to sell to the low-end market too, you would increase production to compensate.

Hm, I seem to remember they did something similar about a decade ago with a certain model of ATI video card. If you got the cheaper version, you could alter it in a way (not sure about the technicalities) so it would be the same as the high-end model. Of course there was about a 50% chance of ruining your video card even if you did it properly, but I reckon the reason it sometimes worked is that the manufacturing process was the same.

Yes, RPharazon explained that above. The thing is, you could buy a lower level card, re-flash it and have essentially a top of the line card for a fraction of the price, but the factory had already tested it and determined it wouldn’t be reliable in that configuration. But, it might take 6 – 12 months to burn itself out, and in that time they’ve likely come out with a new top of the line. So, for people who were going to upgrade that soon anyway, and willing to risk it, it’s a good deal. Your average consumer, on the other hand, doesn’t want catastrophic obsolescence designed into their products.

Not necessarily – while many of the cards were faulty higher-end cards, there (usually) aren’t enough faults in the production process to meet demand for the cheaper cards. So you take some of your surplus working-fine ones, and install the lower-end firmware on them so you can sell them at that lower price.

If you happened to get one of those working-fine-but-crippled ones, a firmware hack gets you a free upgrade. If you get an actually faulty one, it doesn’t.

I even remember that somewhere in the Pentium 1 era, Intel sold the same processor as a 133MHz, 150MHz and 200MHz(*) CPU. The only difference was the logo.
So if you bought a budget CPU and told your BIOS to treat it as a 200MHz CPU, it was free speed for the people.

(*) Like all figures mentioned on this page, mine are purely made up and solely exist for the description of this practice, so I could be off by ±half a decade of CPU-generations.

That was also a case of factory screening. (Using your same numbers:) The 200MHz ones Intel was willing to guarantee would run at that speed. The 150MHz ones they were willing to guarantee at 150MHz, but not 200MHz. The 133MHz models weren't guaranteed to run any faster than that. Many would survive higher speeds, but if you burned them out by trying, Intel basically said, "Tough toenails."

I’m amazed that you posted about this, given your staunch stand against getting into arguments about politics and, by extension, economics. Basically, this kind of issue is a political/economic one which goes to the heart of questions about whether our current system works very well. The question you’re implicitly posing when you explain something bizarre and stupid by basically saying “Well, it may seem moronic but actually the market system makes it necessary” is:

Are markets, and/or is capitalism efficient?

As I say, I’m somewhat flabbergasted you would post on this kind of topic. So I apologize for the remainder of my post, but you did ask.

The answer suggested here is, well, apparently not if it results in doing something more expensive (presumably it costs some nonzero amount to burn out all those math coprocessors) to create the result of poorer equipment. And this is hardly unique–to the contrary, I believe it’s fairly common. I knew a guy who worked for a company that made electronic security systems, and they basically made one model and then knocked functions out of the lower-end ones. And of course in the software business, once you’ve made a piece of software, distribution is essentially free; all different versions, basic, deluxe, pro, whatever, are restrictions on utility for the purposes of marketing, marketing which is necessary if you’re going to structure the economy the way we do.
Arguably it’s a form of externality, and corporate production is rife with externalities–that’s the source of a huge proportion of capitalist profit: Not adding value, but shoving off extra costs in one way or another, whether through pollution (externality on the general community), not abating workplace hazards (externality on dead and injured workers), lobbying governments for subsidies (externality on taxpayers) or whatever.

Open Source software is an indication that there are other ways of doing it. Obviously, this kind of issue does not turn up with open source software–the software is the software, you get the whole thing. Sure, people may develop versions with simpler featuresets either to run leaner on less powerful hardware or because they find some of the extras are not worth the complications they bring. But there’s always a choice.
A similar but distinct situation: In places where broadband fibre is laid down as a government service, you don’t get people trying to throttle back your internet speeds just so that they can upsell you if you have the money to pay for faster. It costs what it costs and that’s accounted for in your taxes–there are no extra expenses for marketing, software needed to mess with your internet speeds, profits for shareholders or yachts for executives. For this sort of thing as a rule “socialist” production is more efficient than for-profit production. That’s why all modern successful economies are mixed–giving up government involvement in the economy messes it up. The United States is starting to become a case study in this. On the other hand, for many things “socialist” production isn’t efficient–or at least, centralized, “command-driven” socialist production isn’t efficient.
Modern socialists mostly tend not to be as centralist as the old kind; rather they’ve been strongly influenced by social anarchists. It’s hard to say whether social but decentralized, locally-controlled production would be inefficient because its track record is very short. But you can be pretty sure it wouldn’t be deliberately turning good equipment into crap before letting the nonwealthy have it.

“For this sort of thing as a rule “socialist” production is more efficient than for-profit production.”

This is 100% pure unadulterated bullshit. “Socialist” production never invents this stuff to begin with. So without capitalists for the government service people to leech off of, there would never be such a thing as a fiber network for them to install and claim to be more efficient at.

Apologists for socialism always forget that part of the equation, but it’s the part you CANNOT LEAVE OUT. This is why basically 100% of all new drug research and development in the ENTIRE WORLD takes place in the U.S.: because we dirty capitalists (hah, if you can even apply that term to the drug market in the U.S.–we’re only better compared to the utter crap everywhere else) actually let pharmaceutical companies make SOME money–assuming they can get their drug FDA-approved before the patent runs out and it goes generic.

Drugs in particular are goofy as hell in the way they’re brought to market, because it’s often very difficult to get new drugs through the FDA, so what a lot of companies do (particularly with cancer drugs, where the margins are tiny because a new drug may only offer, say, a 5% improvement over existing drugs) is take their new drugs to Europe (or Japan), where it’s often much easier to get them approved, sell them AT A LOSS due to the government-enforced price-fixing, and use the fact that X drug sells in Europe/Japan as a lever to get the FDA to approve it so they can sell it in the U.S. and actually make a profit.

People complain about big pharma, but the primary reason we HAVE big pharma is because it is IMPOSSIBLE to make money off new pharmaceutical research without an ENORMOUS company that already has many, many profitable product lines going. Even so many of them rely hugely on the (relatively) inexpensive research they get out of universities, because university researchers are subsidized by the government. All the pharma companies have to pay for is some lab equipment and hire the people the universities pump out.

And this of course leads to all sorts of lovely secondary and tertiary results. It’s in the university’s best interest to overproduce people with degrees. It costs them basically nothing, and it keeps the government/corporate money coming in. This situation is endemic in the U.S. Those over-credentialed people wind up competing for very few jobs, which drives salaries in those jobs way, way down.

I’ll happily take capitalists with yachts over that kind of “efficiency”.

In all fairness, if you want to be strictly capitalist there’d be a heck of a lot of corporate espionage and megacorp shenanigans going on.

When a new drug is produced, all their competitors basically got the R&D done on that product for free. If the original producers don’t have some way to cover their start-up costs, there would never be any sort of invention occurring.

The problem with drug manufacturing is that the majority of the start-up costs are in research and development. Once that is done, the actual manufacturing costs are minimal. Generic drugs are a lot cheaper since their manufacturers do not have to worry about recouping R&D costs (because someone else did that).

The Patent system means that research companies can make a profit from their R&D which would not be possible if anyone else could freely copy their drugs.

This is why basically 100% of all new drug research and development in the ENTIRE WORLD takes place in the U.S.

and this:

sell them AT A LOSS due to the government-enforced price-fixing

irritated me. While I certainly can't speak for "the ENTIRE WORLD", I can assure you a) that there is drug research in Germany, and b) that drug prices in Germany are just about the highest in the ENTIRE WO… well, in Western Europe, due to government-enforced price-fixing.

You see, when you spend 10 million on a new drug and 50 million on the accompanying media campaign, another million or so spent on bribes (sorry, lobby work) is well invested indeed. So there is no need to whine about "government-enforced price-fixing" hurtin' them poor widdle pharma companies.

This is why basically 100% of all new drug research and development in the ENTIRE WORLD takes place in the U.S.

Yeah, that line annoyed me as well.

The top 10 pharmaceutical companies (in terms of brand recognition, as well as amount spent on R&D) are spread evenly between the USA and Western Europe (Germany, UK & Switzerland, mainly). The amount of money spent is split pretty evenly, too.

The difference between the US and Europe is that the US focuses nearly all of its medical R&D on pharmaceutical solutions, whereas Europe puts a significant amount of funding into deliberately non-pharmaceutical answers.

This is for reasons of socialism, albeit indirectly. Most EU countries have ('socialist') state-funded or heavily state-subsidised healthcare, so it's in their interest to encourage solutions (where available) that cure patients outright; this often means subsidising the research.

The US (‘capitalist’) approach to healthcare means that patients are paying for their treatment themselves, and it’s very much in the interest of a pharmaceutical company to produce medication that alleviates the symptoms of a condition – but only while you’re taking the medication (ideally so that you have to take – and pay for – the medication for the rest of your life).

Because buying a drug every day is much more expensive than buying a machine once for use on many patients (even if the machinery is incredibly expensive).

Without government intervention (the US government doesn’t generally pay for the medication of its citizens, and so has little reason to heavily subsidise other forms of treatment), medical research companies in the US quickly find that pharmaceuticals are by far the most profitable way to go.

Also, for the pro-US medical research group: as I recall, a significant amount of medical research money in the US is spent on IMITATING drugs that have already been created, which isn't that socially beneficial compared to new treatments, but can be incredibly beneficial to the company.

I am no commie but saying that socialism never invents anything is also patently absurd bullshit.

Sputnik is the obvious example but if memory serves me correctly you also would have to include hyperbaric welding, pressure suits, radio navigation and I am sure many others. There is no doubt that capitalism is more efficient at innovation but saying socialist systems never invent anything is just wrong. At least outside of Ayn Rand novels.

Okay, you know what: I've had it with all the absolutist extremes to the right and to the left over here. Ladies and gentlemen, here is a newsflash:

Political viewpoints do not invent stuff, people invent stuff.

There might be more efficient ways to spur innovation, but those certainly are not predetermined by one type of society or the other; any kind of society can create mechanisms through which it betters or worsens itself and scientific advancement. Just throwing random words and anecdotal facts into the air and saying that one political opinion is the only road to salvation and economic-social stability is, and always will be, bull.

And if you wish to respond to this now, I hope you have a high sample of data with respective correlation indexes and assurances of proper independence of tests, which can be fitted on a nice graph with a continuous adjustment, just so that everyone can see if it really is as obvious as any of you might say. Simply coming along and crying that one's way leads to perdition and the other's to utopia because you 'HAVE A REALLY HIGH SCIENCE' (sorry Mumbles) does not make it a fact.

In retrospect, I should have taken a break from the internet before I had this reaction; but hey, what’s done is done.

While I don’t have any fancy charts, I will respond simply to state that we are in agreement. My comment was also borne from the frustration of hearing too many arguments from a position of extreme polarization.

Clarity ain’t been my strong suit today, so I will chalk this up to another example of my lack of articulation.

Shamus, you’d already gone there. I guess you didn’t notice, but that’s not my fault. You said,
“But it’s actually not stupid or anti-consumer. It’s just really, really counter-intuitive. It’s a wonderful system”
But it is stupid and anti-consumer. It just happens to be forced on us by the particular economic system we’re currently indulging in, but that doesn’t make it wonderful. However, if I don’t want to accept your opinion that it’s wonderful, I have to make the point that your givens are not in fact given.

To put it a different way, it’s true your post isn’t a polemic for a particular economic view, but it asks a question that can only be seriously explored by looking at different economic views, and then assumes one of them. You’re like a fish talking about how moving around happens and being annoyed that I brought water and its potential absence into the discussion.

That’s like saying that because I talked about the rain it was okay to start a global warming debate.

“But it is stupid and anti-consumer.”

Explain how you would provide better service to customers. Don’t go all abstract. You don’t need to discuss economic theory. Just look at this one case. You own the company. What would you do differently?

not saying it should change in any way, but if it did change, how about running every chip longer?
yesterday’s top range are today’s bargain chips style of thing.
when you introduce a new whatever, you don’t stop producing or selling the older one, you just lower the price.
this has downsides of course, flexibility, rate of improvement, stuff like that. but it could be a viable alternative.
to a certain extent, they’re already doing this though, it’s not really a case of either-or.

If you run the chip longer and you get increased demand for your budget chips, you have to go back and re-tool your production line, which is a significant cost. If, on the other hand, you just make the best chips and knock a bit off some of them, you can produce more chips overall, since there isn't downtime while you shuffle the production line around.
And the more chips you make, the less you need to charge per chip to turn a profit. The less you need to charge per chip, the more chips you can sell.
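That volume effect is easy to see: the break-even price per chip is just the unit cost plus the fixed costs spread over however many chips you sell (figures again taken from the thread's made-up example, not real economics).

```python
def break_even_price(fixed_cost, unit_cost, units):
    """Lowest per-chip price that recoups both fixed and per-unit costs."""
    return unit_cost + fixed_cost / units

# $10M factory, $35 per chip in parts and labour:
print(break_even_price(10_000_000, 35, 1_000_000))  # 45.0
print(break_even_price(10_000_000, 35, 2_000_000))  # 40.0 -- double the volume, cheaper chips
```

Every extra chip sold shaves a little more off the price needed to cover the factory, which is the virtuous circle described above.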

As a final question related to that: Why, as a consumer, would you want to pay more money for an inferior product?

Huh?
I guess you didn’t read my post because you were annoyed that I would ‘go there’. I quite freely conceded that given a stupidly organized economic system, stupid results like this do in fact follow as the way to do things. Given that stance, how can I possibly then turn around and claim that as an individual firm I could do things differently?

Actually, if you don’t like the idea of government-provided broadband being throttled to provide price differentiation, then DON’T come to Australia after they roll out the much vaunted National Broadband Network.

The socialism is worse, because then you don’t have a choice when it comes down to how much you’re paying.

You see, when they lay down that line, it's gonna cost either the government or the company nearly the same (the company probably less, since having to compete with other companies means they'll find the cheaper solution). Yet when it comes down to it, that cost is going to come back at you. The thing with the government is that they're not going to limit your speeds, but they're going to charge you in full through taxes every single time. You have no choice if you want to pay less. Whereas with a company, you have a choice to pay x for y, or x+n for y+f(n). Hey, and if you don't want it at all, no sweat, you don't have to pay anything. You have no choice if the government is in control. And hey, they really don't care if you don't use it; they want the taxes anyway.

As for software, free software can’t compete with the features of priced software. Sure, GIMP is amazing for small projects, but it doesn’t have the features, nor the power, stability, support, or efficiency of Photoshop.

1. Why don’t people who argue for socialism ever use the word “proletariat” anymore? I mean, c’mon, it’s a really nice-sounding word! It’d be doing the world a favor to hear people saying this and that about the proletariat.

2. Don’t argue about economic systems as if there’s a real sort of choice between systems. It never ends well, and it labors under the false assertion that there was a conscious decision in the first place. The way things are now is just the way that things ended up happening in. There’s no such thing as a purely capitalist or socialist state anyways. You’d be better off trying to solve smaller, individual problems rather than overhauling the entire order of the world besides.

“Some people complain that this is an incredibly stupid system, or that it is unfair to the consumer.”

This mindset frustrates me to no end because it shows complete ignorance of the most basic fundamentals of economics: a business' ultimate goal is theft. It wants to take your money and give nothing in return. Obviously it'll never actually attain this goal, so it's all about finding how much they can take from the consumer with as little effort (i.e. money) on their part.

The flip side of this is of course that the consumer's goal is the exact same thing: to obtain/use the business' product/service for free. So in the end, I don't know why I'm frustrated by such comments, when it's a consumer's 'job' to do everything they can to create an advantage for themselves.

I agree that your principle is correct in theory, but reality is more complicated.

I think both a business's and a consumer's ultimate goal is not to maximize profit, but to satisfy the principals' aims. For example, my business is putting on music festivals. There are many things I could do that would greatly increase my profit margin, but I don't, because I don't feel it would be fair to my customers. My aim is not to get rich, but to put on enjoyable festivals. I am not alone in this way of thinking. Fellow Lexingtonian and beer lover Drew Curtis, who runs Fark, only pays himself a salary of $60,000 a year despite Fark making over a million a year. He says that he just doesn't need more than $60,000 a year.

This also works on the consumer side as well. You said that a consumer's ultimate goal is "To obtain/use the business' product/service for free." In reality this isn't always true. Take this website. Nobody has to give Shamus a dime, but many people have donated money because they valued the content at more than $0.00, even though they could "purchase" said content for nothing.

Economics is truly fascinating and I have learned enough to know that even the “iron-clad” laws are never really that simple.

1) Bigger companies are less likely to be run solely for serving customers, as they need money to survive and improve.

2) Catering to the customers seems like a solely benevolent act, but it is also a legitimate way to increase revenue (happy customers keep coming back and promote your products) and boost reputation. Feeling good about what you do is gravy, albeit very delicious gravy that some value more than others.

I agree. I would quibble with the idea that large companies "must" concentrate more on profit than smaller companies (I know you didn't state that, but I have heard it before). Large companies and corporations do tend to value profit over everything else, but typically that is because they have shareholders as their principals, and the shareholders' aim is profit.

A good example of a fairly large company that places a value on things higher than profit is Chick-fil-A. If revenue were their sole concern they would be open on Sundays. But the founder and the current board of directors believe their employees should be home with family on Sundays. I respect the choice, even though I have cursed it after pulling into the parking lot on a Sunday.

Good description of Market Segmentation. Every industry does this, including game publishers. Release a title at 60 bucks until all the die-hard fans buy it, then drop it to 40 until the middle crowd gets it, then drop it to 20 to get the bargain hunters. Some sales at less profit are better than no sales at all.

I can’t agree more with everything you’ve said on this whole topic. So instead I’m going to thread-jack with a comment on human nature. (Sorry.)

When I found out about this in my twenties (back in the 90′s) it drove me crazy. I suddenly knew that the CPU in my bargain computer had, for a fleeting moment, been a deluxe powerhouse until some jackass ruined it. On purpose. As an engineer, I dislike this sort of destruction.

This helps to explain your heated tone on this topic: I often find I’m least temperate and understanding when arguing against a position that I used to hold but have since abandoned. It seems that the acquisition of wisdom often also includes more than a grain of disdain for the person you once were.

Might be the amount of hyperbole and over-the-top stand-ins, perhaps. I can tell (or at least I think I can) when you are biased/passionate about something: you use relatively more hyperbolic comparisons than usual, and the stand-ins for real-world figures (e.g. a million bucks, etc.) are obviously unrealistic. Together, I don't know, it may come across as an intentional parody of sorts?

I can’t say I’ve ever taken offence at your posts, nor have I found them heated – but it does show when you have emotions about the subject vs. cold-blooded analysis.

Edit: also, probably a fair share of people know you mainly from Spoiler Warning, and if it was the Fallout season, then they have a mental image of a particularly argumentative you in mind. And you know how it is: the mental image gets applied to all of your work, especially written work.

Apologies for taking up the space, but something about this bothered me.

My first thought: being the least bit passionate about any subject nowadays, especially in a text format, is more easily interpreted as aggression, I suppose.

As an example: video game reviews. 95% are glowing, glad-handing, greasy-palmed nonsense that doesn't address anything relevant beyond the press packet guidelines and feature list. Then every once in a while, if you scrounge hard enough, you find a place like this, where games like Fallout 3 or New Vegas can be simultaneously loved for what they are and criticized for their absurdities and flaws.

Or Yahtzee for example – I’m sure the majority of people who manage to stumble onto his videos or commentaries at first blush believe him to be nothing more than an unfair critic, targeting weak spots and ignoring the strengths, trying to get a reaction to garner traffic…

But that’s not it at all. It makes me uncomfortable to say, but passion for a subject, or showing any emotion (apart from dismissive derision) is really looked down upon or seen as a sign of weakness. At least in my experience. Someone is interested in something, and immediately ten random people want to call them a “nerd” or a “fag” and go back to whatever the hell compartmentalized world they’ve made themselves comfortable with over the years.

I’d say don’t worry about it, just enjoy the people who realize that you enjoy speaking about these things and this world and hope the others realize you’re not just being angry for the sake of itself.

I have to admit though I really don’t know, maybe I’m just being old and bitter. Either way, love what you’ve done with the place.

That is a topic close to my heart. I am often frustrated with how enthusiasm is often cast in a negative light, at least in my experience.

However, I think there are reasons for that, which are worth considering.

For one, it's hard to become aware of your own biases. Many opinions are thought of and presented as objective facts because of that, which ruins any possibility of discussion. It can also blind you to a more well-rounded perspective on a subject, causing you to ignore flaws or strengths.

Secondly, passion is not known for encouraging restraint. It’s easy to lose oneself to one’s passions and display the sort of extremist behavior you often find on the Internet, and generally behave like an idiot, often without realizing it.

While I am pro-passion and pro-enthusiasm, not everyone is capable of maintaining composure. And then you have people behaving like idiots, and that impression sticks.

I’ve often done and said things out of passion that I came to regret later, and will probably do so again. Like Shamus said, there is no idiot more hated than the idiot you used to be.

Yeah, total agreement. It’s a sticky wicket finding that balance. I do wish it were a bit easier for people in general to maintain, rather than falling by the wayside into the total apathy or blind zeal that I see in so many people a few years older.

It’d be a relief to get into all of that false dichotomy whatnot and true passion for a subject versus a lustful desire for an emotional outburst and on and on…

Far too gray for me though, I’m already in over my head trying to think on it.

Just hoping Shamus doesn’t go watering down articles over a couple griefers. It’s happened to too many and left a vapid expanse in the one place free thought is supposed to rule. It’s one of the few shames of careful consideration, that weed of doubt he mentioned.

I didn’t take any sort of heated tone from this. In fact, I probably picked up a bit of an interested and happy-to-explain sort of tone.

When I write posts on the internet, I sometimes want to preface the post with the mood it was written in; stuff gets misunderstood so often because it gets filtered through the mood the person reading the post is in.

Ehhh, I don’t know if one of my posts is evidence of the ‘heated tone,’ but here’s my thought.

To me, it’s partly you and partly everyone else. It seems like the blog and the community have grown a lot of late, but at the same time they have become much more homogeneous. Further, many of the recent articles seem to fall along the lines of “this is why X is bad” or “this is why Y is good,” with less focus on the pros and cons of different ideologies. These two things combined form an almost overwhelming avalanche of like-minded rhetoric, with comparatively little room for discussion.

For example, in this article in particular, you are talking about quantity savings in manufacturing, and the advantages of disabling a high-end product, because of prohibitive start-up costs in running a separate line for low-end products. But you neglected to mention that, in doing what Intel did, they made their per-part costs higher (I’ll say more on that below), which may or may not balance out the up-front cost of running another line. This makes the post sound more like “breaking stuff so you can target more than one market with the same line is good, you just have to think about it first,” and less like “breaking stuff is a practice that has a benefit which may or may not outweigh its costs,” and makes the idea itself come across as more impassioned and less objective than it really is. And in my scanning (I might have missed some stuff) I have only seen a handful of posts that give any argument to contradict this idea.

Now, with all this said, I don’t know if there’s anything really to be done about it. I think the vast majority of what I’m seeing isn’t really a change in the content you are creating, but rather a change in the context you are releasing it into. But it may or may not be a source of the comments.

Shamus, this is basically the same thing as price discrimination, and it works the same way as decreasing game prices over time.

The basic idea is to use differentiation to sell the same basic product at lower prices in an effort to capture the entirety of the demand for the product as profit. From the people who desperately want the product and will pay a fortune for it to the people who feel mostly indifferent towards the product, and who are more frugal with their money.
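To make that concrete, here is a minimal sketch of the arithmetic, reusing the article’s own fabricated figures ($10M fixed cost, $35 per chip) plus two invented customer segments; the segment sizes and willingness-to-pay numbers are assumptions for illustration only:

```python
# Toy illustration of price discrimination (all numbers fabricated,
# in the spirit of the article's own disclaimer).
FIXED_COST = 10_000_000   # design + factory, from the article's example
UNIT_COST = 35            # parts and labor per chip

# Two hypothetical segments: enthusiasts who will pay up to $500,
# and budget buyers who will pay up to $100.
enthusiasts = 50_000
budget_buyers = 450_000

# Strategy A: one product, one price. Price at $100 so everyone buys.
profit_single = (enthusiasts + budget_buyers) * (100 - UNIT_COST) - FIXED_COST

# Strategy B: same chip off the same line, but the budget version has its
# co-processor disabled, so each segment pays what it is willing to pay.
profit_segmented = (enthusiasts * (500 - UNIT_COST)
                   + budget_buyers * (100 - UNIT_COST)) - FIXED_COST

print(f"single price: ${profit_single:,}")   # $22,500,000
print(f"segmented:    ${profit_segmented:,}")  # $42,500,000
```

The segmented strategy captures the enthusiasts’ extra willingness to pay without losing a single budget sale, which is the whole point of the differentiation.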

That actually is an interesting comparison. Shamus describes this process of damaging chips as counter-intuitive, and I think that’s the biggest reason for the objections to this practice. Going back to physics, if we compare classical mechanics with quantum mechanics, there are plenty of hard problems in the former and easy problems in the latter, but quantum mechanics is still harder to grasp because it is so counter-intuitive compared to the world we are used to.

I think it’s just natural for us to react against what we find counter-intuitive, even when it makes sense on close inspection.

Pretty much. When I was reading up on Special Relativity, the hardest part was convincing my brain that the whole thing with the absolute speed of light was true. Newtonian physics isn’t exempt from this, however. Even the notion that an object in motion stays in motion is unintuitive the first time you encounter it, because we’re always seeing objects slow down due to subtle forces.

Similarly, some part of me just won’t accept that disabling features is the best course of action for everyone, even though that feeling has no logic to back it up.

There are parallels to this practice in the software industry, with the way software “packages” are sold containing more or less features based on price.

All of these features exist. They have been coded and probably tested. Some of them only exist in more “premium” packages while missing from more “basic” ones. Aren’t these basic packages essentially a premium package stripped of existing features, in a similar way to how the cheaper CPU is actually the more expensive CPU stripped of the existing math co-processor?

The practice might be counter-intuitive for hardware, where stripped components are tangible things being disabled or destroyed, but it’s a common business model for software, where components are intangible, and they both exist for the same reason of market segmentation.

Is applying this practice in hardware more wrong than applying it to software, just because one is tangible and the other isn’t?

Well, the actual, physical thing that is disabled or destroyed on the hardware piece cost money and materials to make. The extra bit in premium software costs nothing but disk space or extra server load (which actually isn’t nothing), while with hardware we have a physical component made of silicon, which cost money, now made useless.

Imagine, if you will, a software guy deleting some files for the ‘basic’ package, while the hardware guy knocks off a piece of metal on a motherboard. The latter is actually destroying something tangible and valuable, while the former is just deleting a copy of some data. Data isn’t really inherently valuable as long as the original still exists.

OF COURSE this difference is paltry in effect. The cost of the disabled physical component is probably minuscule in comparison to everything else. As this article illustrates, the setup costs are what matter. Not to mention, due to the nature of hardware, the cost for the hardware without that particular component might just be the same.

I just believe this is what makes the distinction in our human heads. We can relate to something physical being destroyed, while data is odd, abstract and replenishable. Which is why you don’t see people up in arms about a “Windows 7 basic”.

“Well, the actual, physical thing that is disabled or destroyed on the hardware piece cost money and materials to make.”

That’s actually not really true in the case of CPUs. You “pay” per square millimeter of core, but how many transistors you etch per area really does not make a difference in production cost. It only makes a difference in development costs, and printer quality, so to speak. A sheet of paper does not get more expensive if you print more or less onto it.
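A rough sketch of this “pay per area” point, using a standard dies-per-wafer approximation; the wafer cost and die sizes here are invented for illustration, not real fab numbers:

```python
import math

# Fab cost is per wafer, so per-die cost is driven by die area,
# not by how many transistors are etched into that area.
WAFER_COST = 5000.0        # hypothetical cost to process one wafer
WAFER_DIAMETER_MM = 300.0  # a common wafer size

def dies_per_wafer(die_area_mm2: float) -> int:
    """Classic approximation: usable wafer area minus an edge-loss term."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

for area in (100, 200):
    n = dies_per_wafer(area)
    print(f"{area} mm^2 die: ~{n} dies/wafer, ~${WAFER_COST / n:.2f} each")
```

Halving the die area roughly doubles the dies per wafer and halves the per-die cost, regardless of transistor count, which is why a smaller dedicated low-end die is cheaper per part even though disabling features on a big die is cheaper overall once setup costs are included.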

Some years ago, a Canadian mineral company had a process for refining nickel ore to a purity (and at a low cost) that other companies couldn’t match. Rather than take over the market and have to deal with anti-competition regulation, they simply took the average price from their competitors and used that price for their own goods. They also sold the same refined nickel metal to research laboratories at a premium, as the laboratories required a higher purity level of the metal.

After a while, the labs caught on that the company’s regular product was the same high grade as their premium nickel, so sales of the premium product plummeted.

In order to resolve this, the company modified the refinement process so that as the pure molten nickel ran down a channel, it was split into two streams. On one of the streams, they paid a local kid $17/hour (a very good wage at the time) to scoop some impure metal nuggets out of a bucket and throw them into the stream of molten nickel. There was a small device next to the kid that rang a bell at random intervals to let him know when it was time to throw in another scoop of impurities.

Thus, the company was able to continue to earn additional profit on their premium nickel without losing their mass market customers.

This is dangerous (the deliberate destruction / corruption of goods) because it creates a disconnect between the product, its value and its production cost – same as the stock market I guess. Imagine if Coca Cola produced the regular product and the “Coca Cola Pure” brand, the only difference in these two being that the regular one deliberately has dirt thrown into it, and “because” of that, a lower price.

Does it though? And why should price be related to production cost? Should I be able to buy a Ferrari new for $30,000 because that’s all it cost to make? If it were, books and music and movies and software would all have to have much lower prices, since producing one more of them in a production run is incredibly cheap.

By damaging some of their chips, though, they’re able to give power users a good high-powered chip AND budget users a good low-powered chip off the same production run. There’s no deception; the processors will all do what they say on the box, so why does it matter that the processor I bought had the POTENTIAL to have been faster? It also pushes down the price for the power users, since they don’t have to cover the company’s entire expenses at their price point.

I think your example shows that there should be room for more firms providing this product… inefficient practices like these, I feel, are the result of monopolies and price fixing between corporations; both are pretty illegal but happen all the time. I’m not convinced people shouldn’t get upset over this, for that reason. I feel like any time a firm intentionally de-values its product, it’s because it’s the only one getting to play, or it has agreed with all of the other firms that they will play with “special” rules.

No not at all. I’m not mad about what they do to their product. I’d be more likely to get upset there isn’t enough competition in the market to prevent these sorts of practices. What I want is for more firms to exist to keep the existing ones honest. Whether or not the infrastructure exists I don’t really know. But still, it is good to scrutinize firms that engage in these practices as they are often a symptom of anti-competitive behavior on the part of these organizations.

Is it anti-competitive though? Does being able to sell some of your less good and intentionally damaged products constitute an anti-competitive move; or is it just a competitive move to service multiple markets? In this case, by taking a small step in the design process they can make 2 different processors aimed at different markets and sell to more people.

No no, what I’m saying is that the fact that they can do this makes me suspect anticompetitive practices are going on. Clearly if another firm existed to compete it could undercut the other simply by not destroying its products and providing a superior service. I’ll admit that I don’t know that much about the technology industry, but the fact that this is a profitable practice seems suspicious is all. The act of neutering their processors isn’t what is anti-competitive but it seems to me that only through a monopoly or price fixing can a firm intentionally reduce the quality of their services at their own cost and profit from it.

What makes me think this is the fact that there are basically two major chip manufacturers in the United States, Intel and AMD. If other competitive firms existed that possessed similar infrastructure it seems unlikely to me that intentionally damaging valuable chips would be a profitable practice.

I will conclude with the acknowledgement that I’m not an expert on this industry. Shamus is correct that getting mad at corporations for making profitable decisions in their market is silly. What isn’t silly is examining the state of the market in which they operate. I wasn’t really trying to refute him, but just pointing out that the fact that the practice makes sense might be a symptom of a greater problem in the industry.

Short introduction to basic microeconomic pricing theory, but it is almost written like it’s some amazing discovery rather than something taught in every micro class over the past 100 years (or more, for all I know). Why not finish it off with some supply and demand graphs and some discussion of deadweight loss?

Although, judging from the comments, it does seem shocking and new to some people with a poor grasp of economics.

Most people never get a good economics education. Or even a poor one. Between high school, undergrad, and grad school, I only had a single required economics class. And that was half a year in high school, which barely covered the basics (and was ten years ago). If I didn’t have a natural interest and an economic minded father, I’d be just as clueless as the next guy.

Not everyone takes Microeconomics. I actually remember back in high school econ when a former student came back to guest lecture us after becoming a banker and he flat out told us that “Microeconomics is useless.” He went on to qualify that, but that was one of the first things he said.

To corroborate what others are saying, econ is not emphasized, at least not in California. I too only ever took a single semester in High School, and I’m not sure we even covered Micro. Shamus’ article was fascinating to me, as the logic did seem initially counterintuitive. There’s absolutely no need to act so superior. Let us have fun learning and let Shamus have fun sharing with us.

It’s not price discrimination because the chips are different by the time they leave the factory.

You are misunderstanding price discrimination. It’s always viewed from the seller’s perspective, where the products are identical. From the customer’s point of view, however, the “extended” products are different, and that justifies different prices. A typical example of price discrimination is a sale. A customer buying in a sale has to pay the additional cost of being there at the right time. Although the thing he buys is the same, the “package” he gets for his money is different from the “package” of the customer whose time is more important and who doesn’t wait until the sale.
edit: Sorry, wanted to react on (2), but I have no experience with this system.

One of the more notable cases of the public re-enabling features was with Linksys and their WRT54 router. The hardware was the same for all versions but the firmware disabled many features on the cheaper (g) model. Then it was discovered that the code was built on top of Linux, and since the Linux license requires the source code be open, it was quickly discovered how to re-enable the lost options. This gave rise to the DD-WRT firmware.

Anyway, great writeup Shamus. I’d suggest you’re eligible to join the ranks of the economics bloggers, except these days you’ll find more maturity in the gaming community.

Interesting… I never knew that Intel had been doing this! On the face of it, the pragmatist/conservationist in me is screaming in agony at the thought of all those ‘wasted’ good processors, but after reading your article, I can understand why they’re doing it. I’m now thinking of it as simply removing an extra component from the product as opposed to ‘destroying it’. Ahhh… Urge to kill fading…

And that is that it’s possible to sell a chip for a higher price initially, and thus recover the research etc. costs and maybe even some nice profit.
And then step down the price.

That way those that can afford it will get the latest and greatest “now”,
and those that cannot afford it will have to wait until the chip becomes the second greatest instead.

Binning due to production issues I can understand.
But disabling stuff makes no sense; the disabling itself actually costs more than not disabling.

At times binned chips have sold so well that “higher” chips have had stuff disabled to meet the demand.
I find that illogical, as at that point they are selling well enough that they could replace those with the higher chips and just reduce the cost to maybe a midway point instead.

I never end up buying latest gen, I intentionally buy previous gen as that is much cheaper than buying latest gen all the time.

While those who really want it cheap can just buy 2 gens older.

The industry currently has 1 gen per year.
And each 6 months a refactoring of that gen.
I rarely change hardware more often than two years, sometimes even longer depending on the PC component. (CPU vs Mem vs MB vs Soundcard vs GFX card)

So if they did the following it would make a lot of sense:

New Year 2011, AMD comes out with a new gen, for example.
Then in the summer they come out with the improved version, and the old version gets a price reduction.
New Year 2012, AMD comes out with a new gen.
The old version of the 1st gen is no longer in the market (sold out by now).
The price of the improved first gen is reduced to as much as half the price of the new 2nd gen.

Now rinse and repeat.

Any “binned” or flawed chips that can still be used if damaged cores are disabled etc. could just fill out the really low-budget end.
And if demand is too high then simply continuing with 2 gens older to meet the demand should work.

So you essentially end up with 3 tiers.
1st = Current aka Next gen
2nd = Previous gen or Prev gen improved
3rd = 2 gens back or 2 gens back improved or “binned” previous or current gen.

Some might recognize these as Enthusiast, High End and “the rest”.

But what I’m talking about is High, Mid, Low actually instead.
And consistently so.

As we have heard news and seen tests now and again where a “new” mid- or low-end part is released that performs no differently than, or in some cases worse than, last gen.

I also think the 6 month cycle is kinda insane. 12 month cycles would allow software developers to utilize the hardware much better.
We are seeing this with the PS3 and Xbox 360 currently. And we saw it with the PS2, where the latest games looked as good as or better than the first PS3 games.

One thing is for sure, it doesn’t always make sense what the hardware makers do.

My advice further above would actually help speed up hardware renewal. Unlike currently, when someone buys the bleeding-edge hardware and is then “stuck” with it for many years.

So instead simply make the last gen the new mid range, and 2 gens old the new low end, and leave the high end the current/next gen.

What I think is woefully missing here is the effect that disabling a feature on an already-made product has on the per-item-produced cost. When Intel takes a chip, disables a math coprocessor, and sells it cheap, that chip costs more to produce than if they had just built it without the coprocessor to begin with. Intel is taking a silicon wafer, chopping it up into chips, and selling the chips. Having a coprocessor that they fry means there’s space on the silicon being wasted that could have gone to making more chips. People are being paid more, materials are being bought, and time is being added, all to produce fewer than the optimal number of chips.

My company disables features on the electronic products we make all the time. We sell circuits to industries which have a lot of customizations and optional components. Odds are, if one of our customers buys a cheap version of our product, what they’re getting is the premium version with a few jumpers soldered to disable the extra stuff. The overall effect is each of our cheap boards are probably a good 5 to 10 dollars more expensive to produce than what they would be if we didn’t do the disabling thing, but revamping our production to create custom PCBs on the spot costs somewhere in the millions of dollars, and we only sell a few thousand units per year. Not worth it when our customers are industry.

Intel, on the other hand, is producing consumer goods. They can reasonably expect sales in the millions. Following your numbers, if building a dedicated line for low-end chips costs a million dollars but increases the yield by 5% because they get more chips per wafer, they should build the new line because it wastes less. When I first heard about the coprocessor-disabling thing, I thought it was completely batty because of this. The sheer quantity of waste that comes from gimping a high-end product to produce a low-end one, especially when that means they’re probably destroying more co-processors than they’re selling (assuming there are more low-end users than high-end users), is frankly mind-boggling to me.
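A back-of-envelope version of that trade-off, using the hypothetical figures above ($1M for a dedicated line, 5% more chips per wafer) together with the article’s fabricated $35 per-chip cost; all numbers are illustrative assumptions:

```python
# Break-even sketch: a dedicated low-end line costs money up front but
# yields more chips per wafer (the true low-end die is smaller).
LINE_COST = 1_000_000   # hypothetical cost of the dedicated line
UNIT_COST = 35          # per-chip parts and labor, from the article
YIELD_GAIN = 0.05       # 5% more chips for the same wafer spend

def break_even_units(line_cost: float, unit_cost: float, yield_gain: float) -> float:
    """Number of low-end chips at which per-chip savings repay the new line."""
    # With 5% more chips per wafer, effective per-chip cost drops from
    # unit_cost to unit_cost / (1 + yield_gain).
    savings_per_chip = unit_cost * yield_gain / (1 + yield_gain)
    return line_cost / savings_per_chip

units = break_even_units(LINE_COST, UNIT_COST, YIELD_GAIN)
print(f"break even after ~{units:,.0f} low-end chips")  # ~600,000
```

Under these made-up numbers the dedicated line pays for itself after roughly 600,000 low-end chips, which supports the commenter’s point that at consumer-goods volumes the calculus can flip in favor of a separate line.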

I understand this from the industrial side but it still makes me feel cheated as a consumer.

Although one thing that makes it easier to swallow is the difference between the facility and the materials. Silicon and aluminum are very abundant, and small amounts are used to make a chip. But the plant itself cost billions, and requires a large amount of highly skilled labor to operate. The materials in the chip cost pennies; the factory is orders of magnitude more costly. Looking at it this way, it is easier to swallow.

Not this ever again. This has been the most divisive thread since… I can’t remember. I think our thread on Objectivism was more friendly than this. (Although, I prefaced that one with strong cautionary words and not this one. I suppose that would have helped.)

Comments are closed. Sorry if I cut you off mid-conversation, but this is too rude and angry and I don’t want to officiate it. Let’s go talk about something more fun.