3 Answers

Hyper-threading is where your processor pretends to have 2 physical processor cores while actually having only 1, plus some extra hardware.

The point of hyper-threading is that, much of the time when you are executing code, parts of the processor are sitting idle. By including an extra set of CPU registers, the processor can act like it has two cores and thus use all parts of the processor in parallel. When the two cores both need the same component of the processor, one core ends up waiting, of course. This is why it cannot replace a dual-core or similar processor.
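As a rough analogy, here's a toy Python model of my own (the names and timing are illustrative, not how real hardware works): two logical threads funnel their instructions through one shared execution unit, so when both are ready in the same cycle, one has to wait.

```python
# Toy model of simultaneous multithreading (illustrative only, not a
# real CPU model). Two logical threads share one execution unit: each
# cycle, at most one instruction can issue, so when both threads are
# ready, one of them waits.

def run_smt(thread_a, thread_b):
    """Interleave two instruction streams through a single shared unit."""
    cycles = 0
    trace = []
    a, b = list(thread_a), list(thread_b)
    while a or b:
        cycles += 1
        # Alternate between the threads (simple round-robin), falling
        # back to whichever thread still has work.
        if a and (cycles % 2 == 1 or not b):
            trace.append(("T0", a.pop(0)))
        elif b:
            trace.append(("T1", b.pop(0)))
    return cycles, trace

cycles, trace = run_smt(["add", "mul", "load"], ["sub", "store"])
print(cycles)  # 5 instructions through one shared unit -> 5 cycles, not 3
```

A true dual-core would retire the two streams in parallel; here the shared unit serializes them, which is why hyper-threading can't match two independent cores.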

+1. I should add that "hyper-threading" is the name for Intel's implementation of SMT (simultaneous multithreading); e.g. the SPARC processor has a different form of SMT, implemented differently but with similar goals.
– sybreon Mar 22 '10 at 2:20

@Earlz are you suggesting that the processor runs two threads by dividing the core in two? Or does it just look like parallelism, while in reality hyper-threading causes the processor to switch from one thread to the other?
– Doopy Doo Oct 13 '12 at 11:17


@DoopyDoo neither. Basically the processor has two "execution contexts": two sets of registers and the other essential per-thread state. The difference between hyper-threading and a regular dual-core, though, is that some things are NOT duplicated. For instance, there may be only one ALU. So while a dual-core processor can add two separate pairs of numbers at the same time, a hyper-threading processor has to make one of the virtual cores wait its turn with the ALU. Of course, this is a simplified example.
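The ALU-sharing point can be sketched numerically (a toy model of my own, not real hardware timing): count the cycles needed for two independent additions when the ALU is shared versus duplicated.

```python
# Illustrative sketch (not real hardware): cycles needed for a batch of
# additions when the ALU is duplicated (true dual-core) versus shared
# between two virtual cores (hyper-threading).

def cycles_needed(num_adds, num_alus):
    # Each ALU can retire one addition per cycle; ceiling division.
    return -(-num_adds // num_alus)

print(cycles_needed(2, 2))  # dual-core: both adds finish in 1 cycle
print(cycles_needed(2, 1))  # hyper-threading: one virtual core waits -> 2 cycles
```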
– Earlz Oct 13 '12 at 18:05

Hyper-Threading is where two threads are able to run on one core. When a thread on the core in question stalls or is in a halt state, hyper-threading enables the core to work on a second thread instead.

Hyper-threading makes the OS think that the processor has double the number of cores, and often yields a performance improvement, but only in the region of 15-30% overall - though in some circumstances there may actually be a performance hit (up to roughly 20%).
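You can see the doubled count yourself: `os.cpu_count()` in Python reports the logical CPUs the OS sees, which includes hyper-threaded cores. (Getting the physical count portably needs a platform-specific source or a third-party package such as psutil, not shown here.)

```python
# The OS-visible core count includes hyper-threaded (logical) cores.
# On a hyper-threaded machine this is typically double the number of
# physical cores.
import os

logical = os.cpu_count()
print(f"logical CPUs visible to the OS: {logical}")
```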

Currently, most Atom chips and all i7 (and Xeon-equivalent chips) have hyper-threading, as did some older P4s. In the case of the Atoms, it's a desperate attempt to improve performance without increasing power consumption much; in the case of i7s, it differentiates them from the i5 range of chips.

95% correct. If a normal core = A + B, then a hyper-threading core is more like A + 2 × B. They CAN execute two threads simultaneously as long as both threads don't need A at the same time.
– Vincent Vancalbergh Jul 5 '13 at 15:08

To expand on what's already been said, hyperthreading means that a single CPU core can maintain two separate execution contexts and quickly switch between them, effectively emulating two cores at a hardware level.

You get a modest speed benefit for multi-threaded workloads when compared to a normal single core, but it is nowhere near the benefit of having two independent cores. In terms of performance, it's best to think of it as a small boost in multi-threaded performance over a single core rather than as anything approaching two cores. The size of the boost varies with the workload - indeed, for some workloads the improvement is quite decent.

The hyperthreaded core only has one main execution unit, but certain other parts of a CPU associated with readying instructions for processing and maintaining an execution state are duplicated.

Processor cores have an instruction pipeline - a queue of future instructions to be executed, that is constantly being updated, ready for the CPU to execute the instruction at the head of that queue. CPUs use these to optimise execution speed by looking at these future instructions and doing some simple, low-level pre-processing on them where possible (such optimisations include "out of order execution" and "branch prediction").
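To illustrate why the pipeline and branch prediction matter, here's a toy cost model of my own (the depth and rates are made up, not any specific CPU): a mispredicted branch forces the queued instructions to be thrown away and refetched, costing extra cycles.

```python
# Toy model (illustrative only) of branch misprediction cost: each
# mispredict flushes the pipeline, wasting roughly one cycle per
# pipeline stage.

PIPELINE_DEPTH = 5  # made-up depth, not any real CPU

def execution_cycles(instructions, mispredict_rate):
    # One cycle per instruction, plus a full pipeline flush per mispredict.
    flushes = int(instructions * mispredict_rate)
    return instructions + flushes * PIPELINE_DEPTH

print(execution_cycles(100, 0.0))  # perfect prediction: 100 cycles
print(execution_cycles(100, 0.1))  # 10 flushes: 100 + 10*5 = 150 cycles
```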

Hyperthreaded cores have dual instruction pipelines, and this - along with the second set of registers - is where the speed benefit for multithreaded workloads comes from. Switching between thread contexts does not throw out the pipeline or registers; the pipeline and registers for the other thread remain ready and "hot", so they can be switched to and used immediately.