
An anonymous reader writes "As AMD's Barcelona approaches, the price war between AMD and Intel continues. To spice things up a bit this week, Intel is throwing into the ring a number of new processors, refreshing the Core2 line-up. HEXUS reviews the high-end QX6850 and mid-range E6750: 'Now is a golden time for anyone looking to buy a new CPU, whether Intel or AMD. The latest round of price cuts means you can now get an incredible level of processing performance for little more than £100. But if your need to buy is not urgent, remember that Intel and its big rival are each promising new processors before the end of the year — AMD with K10 quad-core and Intel with 45nm Penryn-derived CPUs.'"

Unlikely to be true. Motherboard vendors don't release new BIOS versions just for microcode updates, only for bug fixes -- and then often only for the first couple of months of the motherboard's life... and even then they may not actually include the microcode updates (e.g. the latest BIOS for the mobo on my big server was released in May and includes no such updates).

Unasked, unanswered, uninteresting question. It has bugs, and so has every consumer CPU since before the infamous Pentium floating-point bug, because as they fix some, they introduce new ones. Most of those are worked around in the BIOS or in basic OS routines, and the Core 2 processors are neither worse nor better than the rest (AMD or Intel). I'm happy to keep AMD around for competition, but this is just FUD against Intel.

"It has bugs, and so's every consumer CPU since before the infamous Pentium floating point bug"

So Intel's "professional" CPUs don't have bugs? Or do you mean that all of Intel's CPUs are "consumer" CPUs? Because server CPUs are affected by the bugs as well. I guess they are "consumer servers".

So, I suppose you will blame the BIOS or the OS or anything _but_ _your_choice_ of CPUs when the security-related bugs that promise to allow any script kid to compromise your servers in unprecedented ways are exploited.

For me, choosing any CPU that has known security bugs to be used on any connected computer is reason enough to be fired.

"So, I suppose you will blame the BIOS or the OS or anything _but_ _your_choice_ of CPUs when the security-related bugs that promise to allow any script kid to compromise your servers in unprecedented ways are exploited. For me, choosing any CPU that has known security bugs to be used on any connected computer is reason enough to be fired."

What security bugs? I don't know where people get the idea that there were security bugs in the errata Intel released. Theo said that out of 50 bugs "2-3" were "potentially exploitable", but as far as I know no-one has given so much as a proof of concept.

Saying that these bugs "allow any script kid to compromise your servers in unprecedented ways" is totally over the top.

No-one has shown that any of the bugs contain any sort of vulnerability,

no-one has shown that any of the hypothetical vulnerabilities allow remote code execution,

no-one has shown that any of the hypothetical remote code execution vulnerabilities could be exploited in realistic scenarios,

certainly nothing has been made available to script kids,

and I don't even know what "in unprecedented ways" means in this context.

It is just FUD, until someone can actually point out a realistic code execution vulnerability, or even a PoC, even one that could be exploited in unrealistic scenarios, even a DoS, an idea, anything!

Maybe Theo was just wise enough, for once, to keep quiet, at least temporarily, about how to exploit a processor bug for which no fix or workaround exists and avoid handing it on a plate to skript kiddies and hackers-for-hire?

Just because he didn't demonstrate an exploit doesn't mean it can't be done. If you're serious about security then his comments ought to set your paranoia triggers off. Theo's been (obnoxiously) right a lot more often than his detractors have.

"Maybe Theo was just wise enough, for once, to keep quiet, at least temporarily, about how to exploit a processor bug for which no fix or workaround exists and avoid handing it on a plate to skript kiddies and hackers-for-hire? Just because he didn't demonstrate an exploit doesn't mean it can't be done. If you're serious about security then his comments ought to set your paranoia triggers off. Theo's been (obnoxiously) right a lot more often than his detractors have."

You're saying the most well known advocate of full disclosure isn't disclosing vulnerabilities? This is really clutching at straws.

Nope, I'm saying that the most well known advocate of full disclosure, who also happens to have a very good and visionary record in preventative security practices even down to the O/S kernel level, disclosed that he believes that there is a security vulnerability with some of the unfixed microcode bugs. He probably feels that he has better things to do with his limited time than to turn it into a working exploit. He has let people know there is likely an issue and leaves it to other security researchers, w

"For me, choosing any CPU that has known security bugs to be used on any connected computer is reason enough to be fired."

Congratulations, you just fired every sysadmin in the world. I hate to break this to you, but all modern processors have lots of bugs. They are usually subtle, and they can usually be worked around in one way or another, but they all have them. Expecting a modern processor, with hugely complicated microcode, to be bug-free is like saying you could write a full-scale, bug-free operating system: the costs of doing so would be astronomical, so pretty much no-one does.

A floating point bug is one thing. A bug that creates an exploitable security hole is something very different.

And no, I seriously doubt that every modern processor has a known security problem in its hardware. For me, the Intel processors mentioned in the errata Theo made some noise about a couple of days ago are out of the question for any critical activities for now.

"Expecting a modern processor, with hugely complicated microcode, to be bug-free is like saying you could write a full-scale, bug-free operating system: the costs of doing so would be astronomical, so pretty much no-one does."Still, both goals are worthy. Although for practical reasons, perhaps only the bug-free OS is feasible.

We live in an era of persistent internet connections, large numbers of attackers attacking computers attached to those connections, and money to be made in compromising systems. We ar

Some of the bugs will be fixed, others won't. Every CPU has bugs; it's just a fact of life. These things are designed by humans, so it's going to happen. CPU errata happen with Intel (this is the Core 2 link) [intel.com] and AMD [amd.com]. None of this is a major threat to most users, and they get worked around by most people pretty quickly. Microsoft have released fixes for the Core 2 issue, as have Apple. I don't know whether there has been an update to the Linux kernel for these yet, but I am sure they would get backported by your distribution.

There are notes here [realworldtech.com] and here [dailytech.com] regarding the Core 2 bugs; I think one of these might even have become a Slashdot article at one point. Both links refer to Linus's comment that the bugs are "totally insignificant", and given that he worked for Transmeta and knows a lot more about how the industry works, I'd put a fair bit of faith in his statement.

As another poster said, keep up to date on your BIOS revs, as CPU microcode does have fixes for this stuff too.
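For what it's worth, on Linux you can check which microcode revision each core reports by looking at /proc/cpuinfo. A quick sketch of the parsing (the `microcode` field name is what newer kernels expose, and the sample text here is made up for illustration; the definitive source is always the vendor's errata/specification update documents):

```python
# Sketch: pull the per-core microcode revision out of /proc/cpuinfo text.
# The "microcode" field and the sample below are illustrative assumptions,
# not a substitute for reading Intel's/AMD's own errata documents.

def microcode_revisions(cpuinfo_text):
    """Return a list of microcode revision strings, one per logical CPU."""
    revs = []
    for line in cpuinfo_text.splitlines():
        if line.startswith("microcode"):
            # lines look like: "microcode\t: 0x6b"
            revs.append(line.split(":", 1)[1].strip())
    return revs

sample = "processor\t: 0\nmicrocode\t: 0x6b\nprocessor\t: 1\nmicrocode\t: 0x6b\n"
print(microcode_revisions(sample))  # ['0x6b', '0x6b']
```

On a real box you'd pass in `open("/proc/cpuinfo").read()` instead of the sample text; if all cores don't report the same revision after a BIOS update, the update probably didn't include new microcode.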

Yeah, "early adopters" and all that...BTW, is anyone else peeved at the notation "333 MHz (1333 MHz QDR)", as it was over at Hexus? I mean, the bus speed -- the data speed -- *is* really 1333 MHz, it's quite incidental that it is based on a 333 MHz source clock (using the "QDR" method of two signals half a phase out of sync and encoding at every falling and rising wawe edge, thus at four slots per clock tick). At the least it should be "1333 MHz (333 MHz QDR)" -- do these guys understand the tech they are w

Cell can run rings around Intel on floating-point or integer *vector operations*. Not on anything else. And, in practice, the development time is so disproportionate that it's not worth it except for hobbyists or supercomputing apps.

I'm too lazy to look up the reference, but I thought someone was working on this... possibly Intel. Or maybe that was something else. I don't remember it being like a CPU that had a socket for another CPU on top of it; I think it was more on the nano-scale... Hmm, now I'm trying to think what they were using that technology for then...

Just as a point of interest, when I was looking for new components around a fortnight ago, suppliers were already listing high-end chips in the forthcoming E6x50 series at lower prices than even the mid-range chips from the older E6x00 range. The E6600 has been near the sweet spot on the price/performance curve for quite a while now, so if you're looking for a cheap upgrade, it looks like they'll be practically giving away E6600s and E6700s for as long as they last.

I've never equated $222.90 [newegg.com] to "giving away" before... but in comparison to $999 for the QX6850, it does seem like a steal. Especially since I can't find the QX6850 on sale anywhere yet...

I see no signs of price cuts yet for E6x00 chips, but I also see no-one actually shipping E6x50 chips yet.

However, I just checked several popular UK components web sites, and it's common to find (for example) an E6750 on pre-order at near enough the same price as an E6600 to buy today. Prices for the E6850 pre-order vs. E6700 shipping today are similarly close. To me, that suggests a big price drop for the E6x00 chips is coming.

It will be close to that even if it isn't exactly that. Frankly, the notion that in a couple of months I might spend just under 300 dollars and get a quad-core CPU is... amazing. Or to put it another way: in just a month you will get a 100% increase in the number of cores for a 50% increase in price.

IIRC virtualization is disabled on the E21xx, while those Brisbane cores do have it. I have a 1.9GHz part that I bought for $65 much earlier this year, so the Brisbane being cheap is not a nine-days-ago thing. And of course a more expensive chip outperforms a cheaper one. Also, your article from Tom's shows the AMDs besting the Intels on power efficiency at idle (which is what really matters in the real world). You get what you pay for, but if you're looking to spend $125 on a mobo+CPU then your choi

Yes, this low-end dual-core is half the price, but not half the performance. Therefore, a real bargain.

For the most part, the current Intel/AMD CPUs within the $0-$300 price range all provide nearly equivalent performance for a given cost. That is, a $150 AMD chip will perform very similarly to a $150 Intel chip.

Personally, we still buy AMD Athlon64 X2s in the $100-$125 range for our desktops. They were first to market with affordable 64-bit dual-core chips, and we prefer to keep our systems as homogene

I'm in the market to build my own PC. I have always been an AMD fan (purely because of 64-bit support), but have been annoyed in the past at some software (such as codecs -- this was before Automatix2 for my Ubuntu box) not being available. I'm thinking an Intel quad-core or the AMD Athlon 64 6000+ (dual-core), but am tempted to wait a little longer -- especially if AMD open the ATI GPU drivers -- and now I'm tempted to wait for these new chips! Choice... choice... choice...

Nope. A marketing guy said something about working on better Linux drivers at one point, and a blogger mistakenly reported this as "ATI is open-sourcing their drivers". Slashdot and Digg jumped all over the blog, no one RTFA, and now we have a horde of misinformed people like yourself who are sure AMD said they were open-sourcing the ATI drivers. They have NEVER said that, and it is highly unlikely that they ever will.

I think that, with all this "pushing another processor into the market", they are making people wait longer before they buy. I'm looking for a new laptop, but sincerely, I'm still waiting so I won't fall for the same Core Duo / Core 2 Duo trick. I bet lots of people are thinking the same way. So I'll wait at least until the T7000 series gets cheaper (and also that 2GB DRAM).

If you're a big Japanese OEM, you can get some SH7785 (600MHz SH4a) chips, which apparently perform quite well for their low power consumption. However, outside of Japan it's apparently damn near impossible to get newer SH chips; we've tried and failed to even get a roadmap from Renesas Europe.

Renesas is refocusing on multi-core chips instead of higher clocks (IIRC the SH-X3 is a quad-core design and already runs Linux).

Just amazed at the craze for the latest and greatest. Here I am running 4 terminals, Thunderbird, Firefox with around 20 tabs, a P2P client, frequently an instance of mplayer, and OpenOffice. Just the average user. And this runs on OpenSolaris and Debian respectively, and the processor load hovers between 10 and 65%. On a Sempron 3000. With 0% swap use. Okay, at compiling (e.g. mplayer) the thing sucks. But what percentage of users are developers? And how many are die-hard gamers? And then, QX6850 and E6750 sure

Go scroll some text and stop bitching. Seriously though, I don't get why people like to hate on new processor developments. Does everyone need them? No, surely not, but they allow for great things, including pushing down the price of faster hardware. Do you like your nice high-res 2D interface? Like the fact that you can easily run 5+ different apps at the same time? Well then, this is the kind of thing you have to thank for it. I remember a time not long ago, less than 2 decades, when you couldn't do that. I

Sure, not everyone cares about this. As a gamer with a PC on the high side of "affordable," I'm not even sure I care that much. But it is interesting. It's nice to know about the new developments, whether or not I'm going to use them anytime soon.

This is not a "race to the bottom" posting to brag about who is on the smallest/slowest machine. I do all my day-to-day web surfing from an OpenBSD P IV 1.5 GHz box running the latest distro. Flash works fine, Acrobat works fine, and no worries about the bajillion signed-but-have-root ActiveX controls, Acrobat overflow conditions, GIFs/JPGs/etc. with spyware/backdoor payloads, and so on. Really, there's no compelling reason to upgrade. But assuming I did, the quantum leap in power to a recent Intel or AMD proc would

Anandtech [anandtech.com] has a pretty good article about these releases and also about the price cuts. This is looking great for me when I build a new computer in a few months (on which I'm planning to spend ...); these prices make the $150 chip from two years ago look pathetic. Oh well.

Of course, I'll need to figure out AMD vs. Intel. I just wish Intel had a better bus design. AMD has a good bus (HT) and Intel has the best chips right now. Maybe if they merged...

The "bus design" is irrelevent. If someone offers me a car that goes 0-60 in 3 seconds and gets 45mpg, or a car that goes 0-60 in 3.4 seconds and gets 40mpg but has a super new carbon fiber uber efficient torquenator, I'll take the first car - thanks!

The bus only makes a difference in a few specialized cases. If you're buying a 4-socket server or doing work where memory bandwidth is the number-one consideration, then you'll have to take it into account. Otherwise, it's just a small factor.

But the AMD X2s in the office have the unsynced-TSC problem (which causes things like time appearing to go backwards, a.k.a. non-monotonic time, which can break programs). Sure, in theory you're not supposed to assume the TSCs are in sync. But in practice, on consumer-grade motherboards there's not much choice -- often you don't get things like HPET, or it's broken. Plus, if your TSCs are synced they are a better choice; the other timing methods are actually quite crappy [1].

So the workaround I use at work is to never let the cores idle and always run them at full speed: boot Linux with idle=poll.

Ironically, the AMD X2s supposedly use less power than the Core 2 Duos while idle...

Apparently AMD say they're going to fix the TSC stuff, but though it's been quite a while since they said that, AFAIK it hasn't been fixed. So if I had to buy a CPU today for a desktop computer, it'd be a Core 2 Duo. The alleged Core 2 Duo security bugs don't appear to be getting exploited by hackers all the time, whereas this AMD X2 TSC problem is always there.

I believe there are Windows gamers who are having problems with their AMD X2s and end up running the game/app on only one core, and it's probably due to this TSC problem. Yeah, the programmers shouldn't use the TSC, etc., etc. But really, what are their choices? See [1].

[1] Why can't the CPU + hardware + OS people get together and come up with something good for something as basic as time keeping?
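To be fair, the answer the OS people do offer for application code is "use a monotonic clock instead of reading the cycle counter yourself". A minimal sketch (using Python's time.monotonic, which wraps the OS monotonic clock, e.g. CLOCK_MONOTONIC on Linux; this is a present-day convenience, not something most 2007-era apps had available):

```python
# Sketch: wall-clock time (time.time) can jump backwards -- NTP steps,
# or gettimeofday backed by unsynced per-core TSCs -- which is exactly
# the non-monotonic-time problem described above. The OS monotonic
# clock is guaranteed never to go backwards, so use it for intervals.

import time

readings = [time.monotonic() for _ in range(1000)]

# Every successive reading must be >= the previous one.
assert all(b >= a for a, b in zip(readings, readings[1:]))
print("monotonic clock never went backwards across", len(readings), "reads")
```

The catch, as the parent notes, is that the kernel's fallback timing sources behind that clock (PIT, ACPI PM timer) can be much slower to read than a synced TSC.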

AFAIK idle=poll doesn't come with a performance penalty; in fact it might be very, very slightly faster. Basically the CPU just keeps polling for something to do, rather than taking a nap until there's something to do.

Running a distributed computing project _instead_ of using idle=poll probably won't help, since it is unlikely to guarantee 100% that your CPU will never HLT.

BUT, doing idle=poll AND running a distributed computing project could make sense if you cared about not wasting compute cycles - basically w

My main question, before they answered it, was whether the Core 2 Quad processors are being choked by insufficient bandwidth. They measured the difference between a 1066MHz and a 1333MHz FSB, and performance barely increased. This brings me back to the following observations:

1. The processor is not bandwidth-limited, and merging the 4 cores onto a single die would not yield much performance benefit. This brings back to the

You are right on most points, but you have point 1 backwards. The fact that the processors are not bandwidth-limited means it actually makes sense to put 4 cores on the same die. The common complaint about Intel's current quad-core designs is that there isn't enough FSB bandwidth to feed 4 cores; Anandtech's data actually disproves that claim. That said, it is still true that there is not much performance benefit to having 4 cores over 2, but the reason for that is very few users are doing the sort of wo
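The bandwidth argument is easy to put rough numbers on. A back-of-envelope sketch using nominal peak FSB rates (these are theoretical peaks shared by all four cores, not measured figures):

```python
# Naive per-core share of a shared front-side bus for a quad-core part.
# The point: going 1066 -> 1333 MT/s only adds ~25% headroom, so if that
# extra headroom doesn't show up as performance, the cores weren't
# saturating the bus in the first place.

bus_width_bytes = 8  # 64-bit FSB
results = {}
for mt_s in (1066, 1333):
    peak_mb_s = mt_s * bus_width_bytes
    results[mt_s] = peak_mb_s / 4  # split evenly across 4 cores
    print(mt_s, "MT/s FSB:", peak_mb_s, "MB/s peak,",
          results[mt_s], "MB/s per core")
```

Of course real sharing isn't an even four-way split, and cache hits never touch the bus at all, which is exactly why typical desktop workloads don't saturate it.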

"this brings back the argument of who is better, the native quad-core vs. 2 x dual-core (though from an engineering standpoint, the native quad-core will be better"

From an economic angle, the MCM (multi-chip module) is better than an integrated quad-core, since fewer cores need to be discarded. If a defect is detected in one core out of four in an integrated quad-core, the entire chip is discarded, while with an MCM only one of the two dual-core dies is discarded.

The higher rate of discarded processors with integrated quad-cores would translate to higher prices.
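The yield argument can be sketched with a toy model. Assume each core independently comes out defect-free with probability p (the 0.9 here is a made-up number purely for illustration; real yield models work from defect density per unit die area):

```python
# Toy yield model for MCM vs. monolithic quad-core.
# Monolithic: all 4 cores on one die must be good, or the die is scrapped.
# MCM: dual-core dies are tested separately, so a defect only scraps 2
# cores, and good dual-core dies can then be paired into a quad package.

p = 0.9  # per-core "good" probability -- illustrative assumption only

monolithic_yield = p ** 4   # all 4 cores good at once
dual_die_yield = p ** 2     # only 2 cores must be good per die

print(f"monolithic quad die yield: {monolithic_yield:.4f}")
print(f"dual-core die yield:       {dual_die_yield:.4f}")
```

Under these assumptions roughly 81% of dual-core dies are usable versus about 66% of monolithic quad dies, which is the higher-discard-rate, higher-price effect described above (ignoring the MCM's extra packaging cost).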

For anyone buying a new system (like myself), it's a great time in terms of CPU pricing. With Intel's price cuts, you can get a quad-core chip in the $300 range!

Add to that incredibly low memory prices and incredibly low HDD prices and you can piece together something fast and cheap with little cash.

Unfortunately, the mid-range graphics market for DX10 parts isn't up to par with the rest of the parts. There is a void between $125 and $260. The GeForce 8600GT is the $125 part, which is OK, and the 8800