How does a multi-core processor work?

How a multi-core processor works

A multi-core processor is one that combines two or more independent processors into a single package, often on a single integrated circuit, so they can perform tasks in parallel. With only one core, a system can work on just one task at a time; only after completing that task can it move on to the next. With multiple cores, however, a system can work on several tasks at once, which is essential in today's multitasking environment. Multi-core microprocessors are now used in almost all personal computers, and the design is found across many application domains, including general-purpose, embedded, network, digital signal processing, and graphics processing units (GPUs).

Before diving into how multi-core processors work, it helps to know the backstory of processor technology. Before multi-core processors, manufacturers built computers with multiple separate CPUs, meaning a motherboard had more than one CPU socket. This design increased latency because of the longer communication lines: the motherboard had to split data between two completely separate locations in the computer, and that physical distance slowed processing down. Putting multiple cores on one chip not only shortens that distance but also lets the cores share the same resources, so they can work on heavy tasks simultaneously.

Later, when processors needed to become more powerful, manufacturers came up with the concept of hyper-threading. The idea came from Intel, which introduced it in 2002 on Xeon server processors and later brought it to Pentium 4 desktop processors. Hyper-threading is still used today, and for several generations it was a key difference between Intel's Core i5 and i7 chips. When a processor runs a task that needs little processing power, many of its execution resources sit idle and could be used for other work. Hyper-threading presents those unused resources to the operating system as an additional logical core, so a single physical core appears as two. When there isn't enough spare capacity to share between two programs, however, hyper-threading can actually be slightly slower than running them on a single plain core.
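You can see this effect from software: the operating system counts logical processors, not physical cores. As a minimal stdlib-only sketch (the exact count depends on your machine):

```python
import os

# os.cpu_count() reports *logical* processors. On a hyper-threaded
# machine this is typically twice the number of physical cores,
# because each physical core is presented to the OS as two.
logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```

On a quad-core chip with hyper-threading enabled, this prints 8 rather than 4.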

After various experiments, CPUs with multiple true cores were finally built. This means a single processor package contains more than one processing unit: a dual-core processor has two, a quad-core has four, and so on. From the 1980s until the 2000s, engineers raised processing speeds from several megahertz to several gigahertz, and companies like Intel and AMD did this by shrinking transistors, which freed up space on the chip while improving performance.

Now, let's find out how multi-core processors actually work.

First, the motherboard and the operating system need to recognize that the processor has multiple cores, because an older OS built for one core may not work well on a multi-core system. Windows 95, for example, supports neither hyper-threading nor multiple cores; installed on a multi-core system, it cannot utilize the hardware, and everything may run slower than on a single-core processor. All recent operating systems, such as Windows 7, 8, and 10, Apple's OS X 10.10, and Linux, support multi-core and multi-threaded processors.

In a single-core system, the operating system tells the motherboard that a process needs to run, and the motherboard passes the instruction to the processor. In a multi-core system, the operating system can tell the processor to do multiple things at once, with data moving from the hard drive or RAM to the processor through the motherboard.
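As a minimal sketch of this idea, the snippet below hands several independent tasks to a pool of workers, and the operating system decides how to schedule them (the pool size of 4 and the `square` function are illustrative assumptions, not a real OS interface):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # A stand-in for any independent unit of work.
    return n * n

# Submit four tasks at once; the OS schedules the worker threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, [1, 2, 3, 4]))

print(results)  # [1, 4, 9, 16]
```

Note that in CPython, threads share one interpreter lock, so for CPU-bound work you would use `ProcessPoolExecutor` instead to keep multiple cores busy at the same time.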

Recent processors also have multiple levels of cache memory that hold data for the processor's next operation(s). These caches can save a lot of processing time. The first level is the L1 cache. If the processor cannot find the data it needs in the L1 cache, it looks in the L2 cache, which is larger but slower. If the data isn't in L2 either, the processor continues down the line to L3 and L4, if it has them, and after that it looks in main memory. Usually each core has its own L1 cache, while a cache further down the hierarchy, such as L2 or L3 depending on the design, is shared between cores; the arrangement is completely different again in multi-processor systems. One of the main advantages of a shared cache is that it can be used to its fullest: if one core isn't using it, another can.
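The lookup order described above can be sketched as a toy model: check L1 first, then L2, then L3, and finally main memory. The cache names, contents, and the `find_data` helper are illustrative only, not real hardware behavior:

```python
# Each level holds a subset of what the slower levels below it hold.
CACHES = [
    ("L1", {"a": 1}),                     # smallest, fastest
    ("L2", {"a": 1, "b": 2}),             # larger, slower
    ("L3", {"a": 1, "b": 2, "c": 3}),     # larger still
]
MAIN_MEMORY = {"a": 1, "b": 2, "c": 3, "d": 4}

def find_data(key):
    """Return (value, level) for the first level where key is found."""
    for name, cache in CACHES:
        if key in cache:
            return cache[key], name
    # Cache miss at every level: fall back to main memory.
    return MAIN_MEMORY[key], "RAM"

print(find_data("b"))  # (2, 'L2')
print(find_data("d"))  # (4, 'RAM')
```

Real processors work with fixed-size cache lines and hardware replacement policies rather than key lookups, but the search order is the same.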


The multi-core concept greatly expands a processor's computing power. In the future, we will continue to see more cores per chip alongside increased clock speeds.
