With their launch in the early 2000s, multicore processors redefined what multitasking meant. Before their introduction, systems had only a single core, one processing unit that had to fetch and process data alone. Manufacturers experimented with building computers containing multiple CPUs to overcome this limitation, but multiple CPUs meant more than one CPU socket on the motherboard and more money spent on additional hardware, not to mention the increased latency created by the communication between these separate processors.
Multicore simplified things by placing more than one processing unit on the same chip, which not only shortened the distance data had to travel but also let the cores share resources when performing complex operations. In its early days, multicore was engineered specifically for enterprise customers, but as demand for complex multitasking grew in personal computers as well, it was marketed to all kinds of users.
Multicore changed the way multitasking was done on computers: you could now open multiple programs at once without suffering bottlenecks or slow processing.
Today multicore processors are found in desktops, laptops, tablets, and even smartphones. Not only do they give you the freedom to work with as many programs as you wish, they also raise the overall processing speed of the system. Below we highlight the basic changes multicore brought with its introduction.
A thread is simply a stream of instructions for the processor to execute. An application can create one or many threads depending on its needs. However, a single-core processor handles only one thread at a time while multitasking, meaning the system has to switch rapidly between threads to keep them all progressing. This is where multicore turned the game around: it handles more than one thread simultaneously, with each core executing a separate thread. This architecture greatly enhances the overall productivity and level of multitasking of the system.
Hyper-threading is another technique that mimics this pattern, but there is no second physical core in this scenario. Hyper-threading uses the otherwise idle resources within a single core to manage another thread, so it is generally slower than a true second core. It is still useful in cases where programs are single-threaded and were not written to take advantage of multiple cores.
Continuing this thought, another concept is multithreading: the ability of a program to split its work into multiple threads and fully benefit from the cores present.
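The idea above can be sketched in a few lines of Python: the work of summing several chunks of numbers is split across threads. (The chunking scheme and `chunk_sum` helper are illustrative, not from any particular application; note also that in CPython the GIL prevents pure-Python CPU-bound threads from running truly in parallel, so CPU-heavy programs typically use processes instead.)

```python
# A minimal multithreading sketch: split a job into chunks and hand
# each chunk to its own worker thread.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    # Each thread sums its own chunk independently.
    return sum(chunk)

data = list(range(1_000))
chunks = [data[i::4] for i in range(4)]  # split the work four ways

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(chunk_sum, chunks))

total = sum(partials)  # same result as sum(data)
```

On a multicore system each worker can, in principle, be scheduled on a different core; a single-core system would instead switch rapidly between these threads, exactly as described above.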
The foremost detail to keep in mind is that not all programs are written to support multithreading. Software compatibility is necessary to take full advantage of multiple cores, or the processor's efficiency goes unappreciated. Rather than buying a quad-core or octa-core processor for a program that uses only a single core, it can be better to opt for a dual-core processor with a higher base clock. This matters most where the system has to process complex workloads like graphics/video editing or gaming, rather than run a simple mail program or web browser.
Number of Cores
Many games show only a slight performance difference between two and four cores, so core count might not bring a great change with the games currently on the market. On the other hand, programs like video encoders and editors can benefit greatly from additional cores by running multiple threads on individual cores simultaneously. Thus multicore shows its full potential when used with a compatible program, specifically in situations where complex processing is required.
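The encoding example can be sketched with processes, which sidestep CPython's GIL and let each core genuinely work on its own slice of the job. Here `encode_frame` is a hypothetical stand-in for heavy per-frame work, not a real codec function.

```python
# A hedged sketch of CPU-bound work spread across cores with processes,
# the rough pattern a parallel video encoder follows: each worker
# processes its own frames independently.
from multiprocessing import Pool

def encode_frame(frame_id):
    # Placeholder for expensive per-frame computation.
    return frame_id * frame_id

if __name__ == "__main__":
    # With 4 worker processes, up to 4 frames are encoded at once on a
    # machine with 4 or more cores.
    with Pool(processes=4) as pool:
        results = pool.map(encode_frame, range(8))
    print(results)
```

The `if __name__ == "__main__"` guard is required by `multiprocessing` on platforms that spawn rather than fork worker processes.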
By a simple rule of thumb, the higher the clock speed, the faster the processor works. However, multiple cores are an exception to this rule: when several cores run data threads at once, thermal restrictions can keep the system from reaching its maximum clock speed. The extra heat dissipated prevents each core from running at its maximum individual speed.
A processor has multiple levels of cache memory that help it find data quickly when needed. The cache is usually broken down into three levels: L1 is the first place data is held; then comes L2, which usually has more capacity but is slower than L1; then L3; and after that the processor searches for the data in RAM, the system's main memory. A processor looks for data in this stepwise order. Different processors handle data differently; for instance, some duplicate data held in L1 into L2, which of course takes up space in L2. A multicore processor comes in handy here because each core usually has its own L1 while a lower cache level (L2 in some designs, L3 in most modern ones) is shared among the cores. This maximizes caching ability, as shared cache left unused by one core can be used by another.
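The stepwise lookup order can be modeled in a few lines. This is a toy sketch: the levels are plain dictionaries, and the addresses and contents are illustrative, not real hardware parameters.

```python
# Toy model of the cache hierarchy search described above: check L1,
# then L2, then L3, and finally fall back to main memory (RAM).
def lookup(address, l1, l2, l3, ram):
    for name, level in (("L1", l1), ("L2", l2), ("L3", l3)):
        if address in level:
            return name, level[address]   # hit at this level
    return "RAM", ram[address]            # slowest fallback

l1 = {0x10: "a"}
l2 = {0x10: "a", 0x20: "b"}   # some designs duplicate L1 data in L2
l3 = {0x30: "c"}
ram = {addr: "mem" for addr in range(0x100)}

print(lookup(0x10, l1, l2, l3, ram))  # found in L1
print(lookup(0x30, l1, l2, l3, ram))  # missed L1/L2, found in L3
print(lookup(0x40, l1, l2, l3, ram))  # missed every cache level
```

Each step down the hierarchy trades speed for capacity, which is why a hit in L1 is so much cheaper than a trip to RAM.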