Wasn't Thuban supposed to be AM2+/AM3 and therefore DDR2-capable as well? (In the slide it only says DDR3, whereas Phenom II says DDR2/DDR3.) If not, what a disappointment (for me at least)... I was going to hold off on a CPU upgrade until it came out.

I saw good-resolution pics of the so-called lunar module. Heck, that's quite a piece of garbage, with badly joined metallic and literally hammered plates. Seriously, you think that this piece of metallic junk actually landed on the moon..??

If AMD can pump out cheap low-power desktop and mobile chips in volume, they'll be just fine.

There's a saying in business - "He who lives by price dies by price".

Being able to compete only on price is death, especially in an industry that requires the kind of funding that developing CPUs requires.


I think AMD is taking a risk with this design, but I think it's something they have to do. They aren't like Intel, where they can run several parallel R&D programs. They have to pick something and hope it pans out the way they think.

__________________
"I contend we are both atheists, I just believe in one fewer god than you do. When you understand why you dismiss all the other possible gods, you will understand why I dismiss yours."
-Stephen F Roberts
Heat: FallOutBoy525

This seems like a high-risk/high-reward type move. I'm actually glad they acted instead of just staying the course for once. With nV having been trying to drum up GPGPU, and now both an open standard (OpenCL) and a mainstream API worth caring about (DirectCompute), I can see this sector really take off. Also, getting Bobcat into netbooks and perhaps high-end smartphones (depending on where that wattage range hits) could be a huuuuuuuuuge source of income.

Well, I guess the smallest functional unit is one "Bulldozer" module, which is composed of two smaller cores and seems to be, in reality, 1.5 cores...

Yeah it is confusing.

I guess they want to say "Hey, we have HT too," but their HT is more hardware-intensive, and it sounds a lot better to say you have two linked cores than 1.5 cores, or a core that can run two threads.

From "4/8" I read (4 cores/8 threads) or (8 cores/16 threads).

On the other hand, 4/8 is different from "up to 6". Fucking slides.

Naw, I don't think they have any plans to ever say "hey, we have Hyper-Threading too." In fact, they have said on the record that they don't plan on having anything similar to Hyper-Threading, as it shows performance losses in some cases. Not really worth the real estate.

It SEEMS as though FP on the CPU is being deprioritized with the advent of on-die GPUs and whatnot. Ironically, it's really Intel doing the Fusion idea first; the push for CPU-assisted vertex processing on their IGPs is a case in point. It's just that no one cares currently. It's only a matter of time before CPUs start accessing GPU resources.

The threading problem is more complex, but Intel is currently pushing it with Hyper-Threading, so by the Bulldozer timeframe there might be more threaded apps ready.

Quote:

Naw, I don't think they have any plans to ever say hey we have hyperthreading too. In fact, they have spoken out on the record saying they don't plan on having anything similar to hyperthreading, as it shows performance losses in some cases. Not really worth the real estate.

A single Bulldozer core will appear to the OS as two cores, just like a Hyper-Threaded Core i7. The difference is that AMD is duplicating more hardware in enabling per-core multithreading. The integer resources are all doubled, including the schedulers and d-caches. It's only the FP resources that are shared between the threads. The benefit is you get much better multithreaded integer performance, the downside is a larger core.
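To make the CMT-vs-SMT distinction above concrete, here is a minimal sketch (all numbers illustrative, not vendor-confirmed): both designs expose the same count of logical CPUs to the OS, but they differ in how many full sets of integer hardware back those logical CPUs.

```python
# Sketch: OS-visible logical CPUs vs backing integer hardware.
# CMT (Bulldozer-style): each module holds 2 full integer cores, 1 shared FPU.
# SMT (Hyper-Threading): each core runs 2 threads on ONE set of integer units.

def logical_cpus(physical_units, threads_per_unit):
    """Logical CPUs the OS enumerates for a given number of modules/cores."""
    return physical_units * threads_per_unit

# A hypothetical 4-module Bulldozer chip:
bd_logical = logical_cpus(4, 2)   # 8 logical CPUs
bd_int_cores = 4 * 2              # 8 dedicated integer cores behind them

# A hypothetical 4-core SMT chip:
smt_logical = logical_cpus(4, 2)  # also 8 logical CPUs
smt_int_cores = 4                 # but only 4 sets of integer resources

print(bd_logical, bd_int_cores)   # 8 8
print(smt_logical, smt_int_cores) # 8 4
```

Same Task Manager picture in both cases; the difference only shows up under multithreaded integer load, which is the trade-off the post above describes.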

When I read the above, even if they don't want to say HT, they want to say "look, you have loads of CPU thingies in Task Manager, isn't that pretty?"

No, because current GPUs are only fast at extremely parallel applications that use a rather limited instruction set. That's why I mentioned general-purpose floating point.
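A toy illustration of what "extremely parallel" means here (a sketch, not tied to any real GPU API): in a vector add, every element is independent, so thousands of GPU threads can each take one index; in a dependent chain, each step needs the previous result, so no amount of parallel hardware helps.

```python
def vector_add(a, b):
    # GPU-friendly: result[i] never depends on result[i-1],
    # so all iterations could run simultaneously.
    return [x + y for x, y in zip(a, b)]

def linked_walk(next_idx, start, steps):
    # GPU-hostile: each step depends on the previous one,
    # forming a serial chain that cannot be parallelized.
    i = start
    for _ in range(steps):
        i = next_idx[i]
    return i

print(vector_add([1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
print(linked_walk([1, 2, 0], 0, 4))         # 1
```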

Even if they manage to pull off a miracle and somehow create the next evolution in computing, notice on the slide it says "Next-Generation Software Ecosystem". A new generation of development tools and end user applications aren't going to happen in two years. Maybe, just maybe, you could pull it off in five if you were Intel and owned the compiler.


Meh, it's not like the FPU is disappearing. 2x 128-bit FMACs amount to a 256-bit FPU that can do an ADD and a MUL each. In a single thread, it's effectively a 256-bit FPU, while in multi-thread it's probably limited by resource contention and bandwidth from exploiting the full 256 bits, so 128-bit might be enough.
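A back-of-envelope check of that FMAC claim (assumptions: double precision, one operation retired per pipe per cycle; these are illustrative numbers, not measured throughput): a fused multiply-add counts as 2 FLOPs per lane, so two 128-bit FMAC pipes match the peak of a 256-bit ADD pipe plus a 256-bit MUL pipe.

```python
# Peak FLOPs/cycle: 2x 128-bit FMAC vs 256-bit ADD + 256-bit MUL.
LANES_128 = 128 // 64   # 2 doubles per 128-bit vector
LANES_256 = 256 // 64   # 4 doubles per 256-bit vector

# Each FMAC lane does a multiply AND an add = 2 FLOPs.
fmac_flops = 2 * LANES_128 * 2                 # two pipes -> 8 FLOPs/cycle

# Separate ADD and MUL pipes do 1 FLOP per lane each.
add_mul_flops = LANES_256 * 1 + LANES_256 * 1  # -> 8 FLOPs/cycle

print(fmac_flops, add_mul_flops)  # 8 8
```

Same peak either way; the difference shows up when the workload is not a balanced mix of multiplies and adds, or when both threads of a module contend for the shared pipes.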

Quote:

No, because current GPUs are only fast at extremely parallel applications that use a rather limited instruction set.

Shouldn't the success of the future architecture then be dependent on future GPUs?

Rumors put the next ATI generation at the end of 2010, in plenty of time to pair with Bulldozer. Additionally, we still have to see what Fermi will do (although I don't think AMD is counting on what NVIDIA has to offer or not).

There's no way they'll be able to pair an end-of-2010 GPU with Llano, unless Llano is releasing sometime like, say, September. From what I heard, the GPU performance will be at Radeon 4700-ish levels, and it will feature a 5x00-derivative core.

Quote:

Meh, it's not like the FPU is disappearing. 2x 128-bit FMACs amount to a 256-bit FPU that can do an ADD and a MUL each. In a single thread, it's effectively a 256-bit FPU, while in multi-thread it's probably limited by resource contention and bandwidth from exploiting the full 256 bits, so 128-bit might be enough.

Yes, the FMAC is nice, but the CPU still seems out of whack between INT and FP units to me.


Quote:

There's no way they'll be able to pair an end-of-2010 GPU with Llano, unless Llano is releasing sometime like, say, September. From what I heard, the GPU performance will be at Radeon 4700-ish levels, and it will feature a 5x00-derivative core.

Sure, but Llano isn't a Bulldozer core, right? I was under the impression it uses Phenom II cores.

My point was that the perceived or real imbalance of the Bulldozer core might not be one if the 6xxx architecture solves the problems highlighted by Phynaz.

Wait... so now we are finding out that AMD's "the future is fusion" marketing strategy is really more like "back to the future," with CPUs being integer processors and the FPU being shoveled into math coprocessors?


Quote:

Wait... so now we are finding out that AMD's "the future is fusion" marketing strategy is really more like "back to the future," with CPUs being integer processors and the FPU being shoveled into math coprocessors?

Yeah, you are right. But the "heterogeneous computing" slide is explicitly about Fusion, which made me think you were naturally referring to Llano.

I might be misinterpreting your post, and if so I apologize, but won't Bulldozer CPUs also have a GPU on the same die, using Fusion as the controller?

High end would be an APU (Bulldozer CPU + GPU on the same die) plus a discrete GPU that can also be used for GPGPU if need be, while low end can just be the APU with no discrete GPU and/or a less powerful discrete GPU.

I have the impression Llano is basically a test bed for the integration of CPU+GPU on the same die (and probably a cheaper overall platform that can actually play current games), which will then be replaced by smaller Bulldozers a la i3/i5.

It will also be interesting to see where NVIDIA GPUs fit into this - whether the AMD APU will be able to use them, and/or whether the GPU on the APU will be able to augment an NVIDIA GPU.

Then we have the Intel side - it's a safe bet that the Intel CPU will be faster than the AMD CPU, but will the APU (what is the name for the Intel version? Sandy Bridge? Anything?) be the better buy overall? What will be the value of Larrabee, and how will NVIDIA GPUs work with an Intel APU?

2011 looks like it could be an interesting year. 2010, though, seems one-sided - again...

So when AMD says a Zambezi CPU will have 4/8 Bulldozer cores, does that mean it will only have 2/4 of these "tightly linked two-core modules"?

I believe it means that every two cores will share one cache, and that the shared-cache pairs will be connected to each other with Direct Connect architecture. So an 8-core will be 4 dual-cores directly connected to each other on the DCA bus...
This is similar to Intel's shared cache on the Core Duo, except that with its earlier dual-cores (Pentium D) Intel had to connect the two caches through an off-die FSB rather than directly on the die (much higher latency).

Quote:

A new generation of development tools and end user applications aren't going to happen in two years

They're talking about OpenCL... so I would disagree with your statement, as they are almost ready now and developers already have initial SDKs...

__________________
"Time flies like an arrow, Fruit Flies like a banana"