Tue Mar 29 2022
Parallel computing and its advantages and disadvantages

Parallel computing is the use of two or more processors (cores or computers) in combination to solve a single problem: several processors execute or process parts of an application or computation simultaneously. Large computations are handled by dividing the workload among the processors, all of which work through their share at the same time. Most supercomputers operate on parallel computing principles.
This type of computing is also known as parallel processing. The primary objective of parallel computing is to increase the available computation power for faster application processing or task resolution.
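To make the idea concrete, here is a minimal Go sketch of dividing one computation (summing a large array) among one worker per core; the data size and the chunking scheme are arbitrary choices for illustration, not a prescribed method:

```go
// Parallel sum: the workload (summing a large slice) is divided
// among one worker per CPU core, all running at the same time.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	data := make([]float64, 10_000_000)
	for i := range data {
		data[i] = 1.0
	}

	workers := runtime.NumCPU() // one worker per available core
	partial := make([]float64, workers)
	chunk := (len(data) + workers - 1) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) { // each goroutine sums its own chunk
			defer wg.Done()
			lo := w * chunk
			hi := lo + chunk
			if hi > len(data) {
				hi = len(data)
			}
			for _, v := range data[lo:hi] {
				partial[w] += v
			}
		}(w)
	}
	wg.Wait()

	total := 0.0
	for _, p := range partial { // combine the per-worker results
		total += p
	}
	fmt.Printf("sum = %.0f using %d workers\n", total, workers)
}
```

Each worker writes only to its own slot of `partial`, so the goroutines never touch the same memory location and need no locking until the final combine step.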
Parallel computing infrastructure typically stands within a single facility, where many processors are installed in one or more servers that are connected together.
It is generally implemented in operational environments/scenarios that require massive computation or processing power.
In April 1958, S. Gill (Ferranti) discussed parallel programming and the need for branching and waiting. Also in 1958, IBM researchers John Cocke and Daniel Slotnick discussed the use of parallelism in numerical calculations for the first time. Burroughs Corporation introduced the D825 in 1962, a four-processor computer that accessed up to 16 memory modules through a crossbar switch.
Historically parallel computing was used for scientific computing and the simulation of scientific problems, particularly in the natural and engineering sciences, such as meteorology. This led to the design of parallel hardware and software, as well as high-performance computing.
To deal with the problem of power consumption and overheating, the major central processing unit (CPU or processor) manufacturers started to produce power-efficient processors with multiple cores. A core is the computing unit of the processor; in multi-core processors each core is independent and can access the same memory concurrently. Multi-core processors have brought parallel computing to desktop computers, and the parallelization of serial programs has thus become a mainstream programming task.
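As a small illustration of this shared-memory model, the Go sketch below has one goroutine per core updating the same memory location; the atomic increment stands in for whatever synchronization a real program would use:

```go
// Shared-memory parallelism: several goroutines (scheduled across
// the available cores) update the same memory location. Without
// synchronization the updates would race; atomic operations keep
// the shared counter consistent.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

func main() {
	var counter int64 // memory shared by every core
	workers := runtime.NumCPU()

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 100_000; i++ {
				atomic.AddInt64(&counter, 1) // safe concurrent access
			}
		}()
	}
	wg.Wait()

	fmt.Printf("%d workers incremented the shared counter to %d\n",
		workers, counter)
}
```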
In 2012 quad-core processors became standard for desktop computers, while servers moved to 10- and 12-core processors.
Parallel computers based on interconnect networks need some form of routing to enable the passing of messages between nodes that are not directly connected. In large multiprocessor machines, the medium used for communication between the processors is likely to be hierarchical.
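The Go sketch below is a toy model of such routing, assuming a ring of four nodes where a message addressed to a non-adjacent node is forwarded hop by hop through intermediate nodes; real interconnects and routing algorithms are far more sophisticated:

```go
// Message passing with routing: nodes are connected in a ring, so a
// message for a non-adjacent node is forwarded hop by hop through
// intermediate nodes, as on a real interconnection network.
package main

import "fmt"

type message struct {
	dest    int
	payload string
}

func main() {
	const n = 4
	links := make([]chan message, n) // links[i] feeds node i
	for i := range links {
		links[i] = make(chan message, 1)
	}
	done := make(chan struct{})

	for i := 0; i < n; i++ {
		go func(id int) { // each goroutine models one node
			for m := range links[id] {
				if m.dest == id {
					fmt.Printf("node %d received %q\n", id, m.payload)
					close(done)
					return
				}
				fmt.Printf("node %d forwarding to node %d\n", id, (id+1)%n)
				links[(id+1)%n] <- m // not the destination: route onward
			}
		}(i)
	}

	links[0] <- message{dest: 2, payload: "hello"} // node 0 -> node 2
	<-done
}
```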
Advantages
- Parallel computing saves time, allowing the execution of applications in shorter wall-clock time (see the timing sketch after this list).
- It can solve larger problems in a shorter amount of time.
- Compared to serial computing, parallel computing is much better suited for modeling, simulating and understanding complex, real-world phenomena.
- Throwing more resources at a task can shorten its time to completion, with potential cost savings, and parallel computers can be built from cheap, commodity components.
- Many problems are so large and/or complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory.
- You can do many things simultaneously by using multiple computing resources.
- It can use computing resources on a wide area network (WAN) or even on the Internet.
- Networked parallel resources also make it easier to share data and communicate over the Internet.
- It offers massive data storage and quick data computations.
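As a rough illustration of the wall-clock saving mentioned in the first point, this Go sketch runs the same batch of tasks serially and then in parallel and times both; busyWork is a made-up stand-in for any CPU-heavy, independent task, and the actual speedup depends on the machine:

```go
// Wall-clock comparison: the same work done serially and then split
// across all cores. On a multi-core machine the parallel version
// should finish in noticeably less wall-clock time.
package main

import (
	"fmt"
	"math"
	"runtime"
	"sync"
	"time"
)

// busyWork is a stand-in for a CPU-heavy, independent task.
func busyWork(n int) float64 {
	s := 0.0
	for i := 1; i <= n; i++ {
		s += math.Sqrt(float64(i))
	}
	return s
}

func main() {
	const tasks, size = 8, 5_000_000

	start := time.Now()
	for t := 0; t < tasks; t++ { // one task after another
		busyWork(size)
	}
	serial := time.Since(start)

	start = time.Now()
	var wg sync.WaitGroup
	for t := 0; t < tasks; t++ { // all tasks at once
		wg.Add(1)
		go func() {
			defer wg.Done()
			busyWork(size)
		}()
	}
	wg.Wait()
	parallel := time.Since(start)

	fmt.Printf("serial: %v  parallel: %v  (%d cores)\n",
		serial, parallel, runtime.NumCPU())
}
```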
Disadvantages
- Programming to target parallel architectures is more difficult than serial programming, but with proper understanding and practice you are good to go.
- Parallel computing lets you solve computationally and data-intensive problems on multi-core processors, but parallelizing an algorithm can sometimes change its behavior and give poorer results; for example, it can affect the convergence of an iterative algorithm.
- Parallelization incurs extra cost (i.e. increased execution time) due to data transfers, synchronization, communication, thread creation/destruction, etc. These costs can be quite large and may actually exceed the gains from parallelization, as the sketch after this list shows.
- Various code tweaks have to be made for different target architectures to improve performance.
- Clusters require better cooling technologies.
- Multi-core architectures consume a lot of power.
- Parallel solutions are harder to implement and harder to debug or prove correct, and they can perform worse than their serial counterparts due to communication and coordination overhead.
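To illustrate how overhead can wipe out the gains, this deliberately naive Go sketch spawns one goroutine per element of trivial work; on most machines the "parallel" loop comes out slower than the plain serial loop because goroutine creation and synchronization dominate the useful work:

```go
// Overhead can exceed the gains: when each task is tiny, the cost of
// creating goroutines and synchronizing dwarfs the useful work, and
// the "parallel" version ends up slower than the plain loop.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	const n = 100_000
	data := make([]int, n)

	start := time.Now()
	for i := range data { // trivial per-element work, done serially
		data[i] = i * i
	}
	serial := time.Since(start)

	start = time.Now()
	var wg sync.WaitGroup
	for i := range data { // one goroutine per element: pure overhead
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			data[i] = i * i
		}(i)
	}
	wg.Wait()
	parallel := time.Since(start)

	fmt.Printf("serial: %v  naive parallel: %v\n", serial, parallel)
}
```

The fix in practice is to give each worker a large enough chunk of work that the coordination cost is amortized, as in the chunked sum sketch earlier in this post.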