Improved hardware and software compatibility one of the top three goals of Windows 7 development

Microsoft is working hard to make things better for the launch of Windows 7 following the lukewarm reception of Windows Vista. Vista was plagued by early hardware and software incompatibility issues, which were among the main reasons enterprise customers refused to migrate from XP.

Among the improvements in Windows 7 is better support for Hyper-Threading, according to Microsoft's Bill Veghte. Veghte says that Microsoft has been working closely with Intel to beef up Windows 7's support for Hyper-Threading, an Intel technique that presents a single physical processor core to the operating system as two logical processors so that the core can work on two threads at once.

Veghte said at the Microsoft TechEd conference, "The work that we've done in Windows 7 in the scheduler and the core of the system to take full advantage of those capabilities, ultimately we think we can deliver a great and better experience for you. We need to make sure the ecosystem is really, really ready."

Veghte is keen to convey that Windows 7 won't suffer from the early problems that prevented Vista from making headway in enterprise environments. He says that Windows 7 is "very, very close" to achieving full compatibility with products from virtually all hardware and software makers.

Microsoft currently expects to finish Windows 7 by mid-August and offer a final version to consumers and businesses by the holiday shopping season. That is a key target for Microsoft, as a better operating system could woo consumers into buying new computers for the holidays. Better computer sales are certainly something that both Microsoft and computer makers need: Microsoft has admitted that Windows sales were down 16% in the most recent fiscal quarter.

Comments


Not necessarily. Remember, Hyper-Threading creates a second logical processor for each physical core. So each core essentially has another virtual core associated with it that has some registers, etc., of its own.

If you run two threads on two separate physical processors, they will perform better than if you run them on the same physical processor (ie using a logical processor).

For example, the OS might want to schedule busy threads on different physical cores rather than putting them on two virtual cores within a single HT physical core. By scheduling them on different cores they will run faster than if they are sharing the same core.
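The scheduling preference described above can be sketched in a few lines. This is a toy illustration only, not actual Windows scheduler code; the `pick_cpu` function, the idle set, and the sibling map are all invented for the example:

```python
# Toy sketch of the scheduling heuristic described above: given a set of
# idle logical CPUs and a sibling map (pairs of logical CPUs that share
# one physical core), prefer a logical CPU whose HT sibling is also
# idle -- i.e., an entirely free physical core.

def pick_cpu(idle, siblings):
    """Pick a logical CPU for a busy thread.

    idle     -- set of currently idle logical CPU ids
    siblings -- dict mapping each logical CPU id to its HT sibling
    """
    # First choice: a logical CPU on a fully idle physical core.
    for cpu in sorted(idle):
        if siblings[cpu] in idle:
            return cpu
    # Fallback: any idle logical CPU, even if its sibling is busy.
    return min(idle) if idle else None

# Example: logical CPUs 0/1 share one core, 2/3 share another.
siblings = {0: 1, 1: 0, 2: 3, 3: 2}
# CPU 1 is busy, so core (0,1) is half-loaded; core (2,3) is fully idle.
print(pick_cpu({0, 2, 3}, siblings))  # prefers 2, not 0
```

An HT-aware scheduler applies essentially this preference before falling back to a sibling of a busy core.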

From Windows XP (maybe SP1 and above, I think) the OS is aware of HT, so the scheduler should work well, but it can sometimes put work onto an HT thread when it should go on a real core that's free. I guess with Windows 7 they're making that work better.

The OS knows what processor is in the computer. The OS knows (or at least can know) whether HyperThreading is enabled or not. And the OS knows how the CPUs are numbered when hyperthreading is turned on versus off. So it is a trivial problem to determine which cores are virtual and which are physical.

As an example, without HT, a dual core processor may be exposed as CPU0 and CPU1, but with HT enabled, CPU0 may be exposed to the OS as CPU0 and CPU1, while the old CPU1 becomes CPU2 and CPU3. Since this will be a consistent and regular identification process, the OS can be programmed to know which physical cores become which logical cores, and adjust each CPU's workload accordingly.
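Under the consecutive-numbering scheme the comment above assumes (logical CPUs 2n and 2n+1 belong to physical core n), the mapping is trivial to compute. The functions below are hypothetical illustrations of that assumption, not how every BIOS or OS actually enumerates processors:

```python
# Illustration of the numbering scheme described above: with HT enabled,
# logical CPUs 2n and 2n+1 are assumed to map to physical core n.

def physical_core(logical_cpu, threads_per_core=2):
    """Map a logical CPU id to its physical core id."""
    return logical_cpu // threads_per_core

def ht_siblings(logical_cpu, threads_per_core=2):
    """Return all logical CPUs sharing this CPU's physical core."""
    first = physical_core(logical_cpu, threads_per_core) * threads_per_core
    return list(range(first, first + threads_per_core))

print(physical_core(3))  # logical CPU3 -> physical core 1
print(ht_siblings(2))    # CPU2 and CPU3 share a core -> [2, 3]
```

Real systems expose this topology explicitly (e.g., via CPUID on x86), so an OS does not need to rely on the numbering convention alone.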

HyperThreading is enabled in the hardware, not the software. I remember many people stating that my Pentium 4 2.8C wouldn't support HyperThreading in Windows 2000 Pro...but it did. Windows 2000 Pro saw two separate processors and I was able to assign affinity to either.

That's like saying you can just use any video card and get 100% optimization in every program. Just because the hardware supports it doesn't mean the software is going to use it well. This is one of the reasons why programs didn't see a jump in performance when moving from one to two cores, back when dual-core was new. The OS and hardware see two cores, but the program itself is still using just the one, like it always had. It's the same principle, except more complicated since it's not an actual core.

Back with the Pentium 4 and Windows 2000 I wasn't too concerned with the OS using multiple threads. It was all about applications using them and they did. Media encoding performed better with HyperThreading enabled - and it worked in Windows 2000 Pro simply because that OS supports multiple processors.

Windows 2000 is notorious for being a hyperthreading-unaware OS. In many cases, with multi-threaded applications, significant performance degradation has been observed with hyper-threading enabled under this OS. So it's generally strongly advised that hyper-threading be disabled on Windows 2000 machines.

Multi-core is of course different, and as Windows 2000 was designed for multi-CPU machines, it handles multi-core machines just fine. This is very different from treating a virtual CPU as a real CPU and thus incurring unnecessary performance penalties at times.

There's a difference between a CPU and a GPU. Not sure why you would even bother to mention one as they are drastically different.

Look, I don't care what you say. It was Windows 2000 and media encoding improved when Hyperthreading was enabled. There's nothing you can say or do to change what happened and really...Windows 2000 is a bit old to be debating over. I was just making a valid point.

There are no physical cores vs. logical cores; they're all logical cores. But Win7 will know which two logical cores belong to the same physical core, and thus can schedule threads for different physical cores. Or in power-save mode it can perhaps schedule on the same physical core, leaving the other cores in deep sleep until needed.

Actually you can have two threads running WITHIN the same data space; in fact, that is the DEFINITION of multi-THREADING. A process forks off a thread, or threads, and each thread exists within the same process space.

Multi-PROCESSING is different, because each PROCESS gets its own memory space; each process that is forked takes a copy of the memory space of the parent process and runs with it. Secondly, your argument is nullified by the fact that the article has nothing to do with IPC and simply refers to the logic that the kernel's scheduler may use when hyper-threading is enabled.
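The threads-share-memory point above is easy to demonstrate. A minimal Python sketch (the worker function and list are invented for the example; Python is used here purely for illustration):

```python
# Minimal demonstration of the point above: threads spawned within one
# process share that process's memory, so both workers below mutate the
# very same list object.

import threading

shared = []  # lives in the single process address space

def worker(tag):
    # Each thread writes into the SAME list -- no copying, no IPC.
    for i in range(3):
        shared.append((tag, i))

t1 = threading.Thread(target=worker, args=("a",))
t2 = threading.Thread(target=worker, args=("b",))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(shared))  # 6 -- both threads saw the same memory
# A forked *process*, by contrast, would get its own copy of `shared`,
# and its appends would be invisible here.
```

The interleaving of the two tags may vary from run to run, but all six appends land in one list, which is exactly what "same process space" means.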

This entire argument should have been ended when the Intel API for hyper threading was posted, but I suppose everyone else doesn't understand the barrier between software and hardware.

And no, SOFTWARE doesn't give a DAMN about what the hardware is. If it does, tell me this: why can't I assign data to my hard drive when I request space in memory (malloc())?

While the essence of what you say is right...you're just repeating what Intel wants you to.

What HyperThreading does is keep the processor pipeline filled with useful data as much as it can. In reality, it does NOTHING to emulate multiple cores. What an HT enabled processor has, in fact, is a few key components either duplicated (TLB, a 2nd Inst pointer) OR enlarged (register renaming/mapping hardware, inst window(?)) so as to keep the pipeline full (with data possibly from another thread) if the existing thread is either waiting on something or not making full use of the available resources.

So yes, while Intel's marketing department might want us to believe that HT is out of this world, realistically, it is something that just makes better use of the available resources. So THIS is the reason why having 2 physical processor cores is MUCH better than having one with HT enabled...for the right applications.

Having 2 physical cores beats 2 HT logical cores only when the cores have comparable performance, because the former actually has more physical cores. If the comparison is instead 2 physical cores with no HT versus 2 physical cores with HT making 4 virtual cores, the latter is generally better even when the cores are comparable in overall performance. That's because the latter can keep the processor busier, which helps when multiple applications run at the same time or when an application is multi-threaded. It's less helpful (or possibly a detriment) if you are only running a single application that is single threaded.

Here's a great illustration of why physical cores aren't always better. AMD likes to say physical cores are better than HT cores in its marketing campaign. The problem is that a two-socket Intel Nehalem-EP is faster than a four-socket AMD Shanghai server despite having half the physical cores. http://www.dailytech.com/Server+roundup+Intel+Neha...

I'd say you both are correct. I myself would like all processors to be equipped with HT because yes, adding more cores adds more performance, but it's still inefficient, and that's where HT is needed, to make the processor more efficient. So both adding more cores and adding HT capability are good. IIRC, adding HT capability to a current processor only adds 10% to the total transistor count anyway, so why not? :-)

I think you misunderstood the point. It wasn't about how many cores total there were or whether HT was enabled; for that it usually goes "the more, the better." The question was about where two threads get scheduled by the OS. When both a separate real core and a virtual CPU on the same physical core are available for the second thread, it is most often better to use the separate real core. And we are talking single-socket here.

Then there's also the fact that HT was introduced with NetBurst, an architecture that had trouble keeping its (very long) pipelines correctly fed, especially when the pipeline had to be flushed because of a miss. There HT made a lot of sense, as it was a means of masking those shortcomings for SOME coding patterns, while other patterns were largely unaffected or even saw a performance penalty.

In C2D architecture the original HT spec couldn't actually bring much to the table, that's why it took so long for it to reappear.

I think this is an important improvement, and am happy to see Microsoft focusing on it. The shortcomings of Jackson Technology were that it wasn't utilized properly. If the scheduler can utilize logical cores for what they are and not treat them like physical units (ie light background instructions in sequence with physically processed instructions) then it could show real improvements in system performance, unlike degrading performance like it did in its original Netburst implementation.

HT has always been a great concept without the proper (software) support.
