At first, we had single-core CPUs: one core, clocked at a certain speed, delivering all of the machine's performance at that speed. Then came CPUs with multiple cores, where each core executes work independently. This multiplies the processing power available and improves a computing device's overall performance. But the human tendency is to always look out for even better. Software multithreading let programs spread work across those cores, yet an individual core still sat partly idle whenever its current thread stalled. Hyper-Threading, Intel's implementation of simultaneous multithreading (SMT), was first introduced in 2002 with Intel's Xeon processors to address exactly that: it keeps each core busy executing work from a second thread whenever the first cannot fully use the core's resources.
It first shipped on Intel's Xeon chips and then reached consumer processors with the Pentium 4. It is present in Intel's Itanium, Atom, and Core 'i' series of processors.
What is Hyper-Threading in computers?
It means negligible waiting time, or latency, when the CPU switches from one task to another. Each core can keep processing work continuously: while one thread is stalled, the core executes instructions from the other, so its execution units rarely sit idle.
With Hyper-Threading, Intel aims to improve the throughput of each physical core. A single core presents itself to the operating system as two logical processors and interleaves instructions from two threads, so the total time to get through a queue of tasks goes down, even though any individual task runs no faster than it would alone.
It builds directly on superscalar architecture, in which a single core has multiple execution units and can issue several independent instructions per cycle. However, the operating system must also be compatible: it must support SMT, or simultaneous multithreading, so that it schedules the extra logical processors sensibly.
Also, according to Intel, if your operating system does not support this functionality, you should simply disable Hyper-Threading.
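One way to see whether the operating system is actually exposing SMT is to check how many hardware threads share a physical core. The sketch below is Linux-specific and assumes the standard sysfs topology files; the `parse_cpu_list` helper is a hypothetical name introduced here for illustration.

```python
def parse_cpu_list(s):
    """Parse a Linux sysfs CPU list such as '0,4' or '0-3' into a sorted list of ints."""
    cpus = []
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return sorted(cpus)

def smt_enabled(cpu=0):
    """Return True if the given core shares execution resources with a sibling
    hardware thread (i.e. SMT/Hyper-Threading is active). Linux-only sketch."""
    path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
    try:
        with open(path) as f:
            return len(parse_cpu_list(f.read())) > 1
    except OSError:
        return False  # file absent: not Linux, or no topology information
```

On an SMT-enabled machine, `thread_siblings_list` for cpu0 typically reads something like `0,4`, meaning logical CPUs 0 and 4 are two hardware threads of the same physical core.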
Some of the advantages of Hyper-Threading are:
- Run demanding applications simultaneously while maintaining system responsiveness.
- Keep systems protected, efficient, and manageable while minimizing the impact on productivity.
- Provide headroom for future business growth and new solution capabilities.
In summary: if a machine packs boxes fed by a single conveyor belt, it must wait after packing one box until the belt delivers the next. Adding a second belt that feeds the machine while the first fetches another box would boost the packing rate. That is what Hyper-Threading enables for a single CPU core.
This article is largely false.
SMT does not allow for “multiple tasks to execute simultaneously”; it allows one core to process exactly two threads using the same resources. To use a highway analogy, it’s like having two on-ramps for the same one-lane highway. SMT is not technically a form of parallelism, but rather of concurrency.
SMT bringing performance to an “all-time high” is quite a stretch, and it absolutely does not “[…] bring down the execution time of a particular task”. It certainly can help crunch through multiple threads at a faster pace, but any given thread will run at the same speed or slower than it would without SMT. It can even slow the overall pace of your whole system if two threads compete too heavily over the same resources on the same core. Spin locks without a slow-down mechanism can cause a lot of trouble.
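The spin-lock point can be sketched concretely: a thread that busy-waits flat out hammers the execution units its sibling hardware thread needs, while backing off lets the core do useful work. The toy class below (a hypothetical name, not from any library) uses `Lock.acquire(blocking=False)` as its atomic test-and-set and sleeps with exponential backoff instead of spinning hot; a real program should just use a plain lock.

```python
import threading
import time

class BackoffSpinLock:
    """Toy spin lock with exponential backoff; illustrative sketch only."""
    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        delay = 1e-6
        while not self._lock.acquire(blocking=False):
            time.sleep(delay)              # yield instead of hammering shared resources
            delay = min(delay * 2, 1e-3)   # back off exponentially, capped at 1 ms

    def release(self):
        self._lock.release()
```

The `time.sleep` is the “slow-down mechanism” the comment refers to: on real hardware the same role is played by instructions like x86 `PAUSE`, which hint to an SMT core that the spinning thread can yield resources to its sibling.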
Also, SMT does not in any way keep a system secure; in fact, it’s rather notorious for introducing side-channel vulnerabilities. The threads share resources and clever methods can leak information about your “core buddy” that can be exploited.
I came here to say the same thing. Horrible article that seems to come straight out of a 2003 CNet article….
Yes, this article is horrible and gets more wrong than it gets right. Although, SMT does indeed let “multiple tasks execute simultaneously” which is indeed parallel execution. Each core has many execution units and they are shared among the two threads simultaneously. One ALU might be doing an ADD for one thread, while a different one might be doing a SUB for another. This creates more independent instructions and allows the core to keep the execution units more busy.
Thanks for the feedback. The article has been reviewed and edited.
I took issue with strictly defining it as parallelism because the gains are completely dependent on how well the threads interact. If the threads compete too heavily, there will be no parallelism, and it will in fact only be concurrent. In the case of independent cores, instructions are truly independent (to be fair, the memory system typically is not). I think we can agree that the parallelism gains from SMT are pretty insignificant compared to the gains from adding a full-fledged core to the system.
However, I did look it up since and according to the definitions, a true SMT implementation is pretty unambiguously a form of parallelism.