Parallel and serial processing describe whether a computer system can break computational tasks apart and run them on several processors or cores simultaneously, or whether it must complete tasks one at a time on a single processor core. Virtually all consumer computer processors were serial processors until mid-2005, when Intel introduced its first consumer dual-core processor. Several single-core processors can also work together on parallel workloads, either as a networked cluster of computers or as multiple processors on one motherboard.
Computers Are Multitasking Machines
A typical modern computer runs dozens to hundreds of tasks at any given time, yet each core works on only one process at a time. The processor constantly jumps between different processing "threads," or instruction streams, running several programs in overlapping time slices; this rapid switching creates the illusion of simultaneous execution, known as concurrency. The computer wastes processor cycles on every switch between jobs, so it doesn't run at peak efficiency when multitasking.
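The switching described above can be sketched in miniature: a single loop plays the role of the one core, round-robining between several task generators that stand in for instruction streams. The task names and step counts are illustrative, not from any real scheduler.

```python
# A minimal sketch of concurrency on one core: one loop ("the core")
# advances each task a little at a time, round-robin. Task names and
# step counts are made up for illustration.
from collections import deque

def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"

def run_concurrently(tasks):
    """Interleave tasks by running one step of each in turn (round-robin)."""
    queue = deque(tasks)
    trace = []
    while queue:
        current = queue.popleft()
        try:
            trace.append(next(current))  # run one "time slice"
            queue.append(current)        # context switch: back of the line
        except StopIteration:
            pass                         # task finished; drop it
    return trace

print(run_concurrently([task("A", 2), task("B", 2)]))
# → ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

No step of A or B runs at the same instant as another, yet both tasks make steady progress, which is exactly the illusion a multitasking core provides.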
Executing Tasks in Parallel
A parallel processing environment can complete tasks faster when programs are designed to take advantage of it. Serial programs line up all their instructions in a single sequence and interact with the processor through a single thread. Parallel programs break a task into independent parts that can be divided among several processor cores and reassembled once completed. With properly written code, parallel processors can multiply the processing power of similarly clocked serial processors. However, a serial processor with a higher clock speed can outperform parallel processors when running a single thread.
Serial Processing in Action
Programs written for serial processing use only one core at a time and handle tasks in sequential order. A serial processor working through concurrent jobs is a lot like a grocery store with a dozen open checkout lanes and a single cashier. The cashier, or CPU, runs from lane to lane, checking out a few items at a time before moving on to the next lane, with the goal of finishing all the orders at roughly the same time.
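The cashier analogy above can be put directly into code: a single loop (the lone cashier) visits each lane in turn, scanning a few items per visit, so every order inches forward together. The lane contents and items-per-visit count are invented for illustration.

```python
# One cashier (a single core) services many lanes by switching between
# them, a few items at a time. Lane contents are illustrative.
def one_cashier(lanes, items_per_visit=2):
    """Total each lane's order with a single cashier who keeps switching lanes."""
    lanes = [list(order) for order in lanes]  # copy so we don't modify the caller's lists
    totals = [0] * len(lanes)
    while any(lanes):                         # until every lane is empty
        for i, order in enumerate(lanes):     # run to the next lane
            for _ in range(min(items_per_visit, len(order))):
                totals[i] += order.pop(0)     # scan one item
    return totals

print(one_cashier([[1, 2, 3], [4, 5]]))  # → [6, 9]
```

All the orders finish at about the same time, but the total checkout time is no shorter than scanning every item in one long line; switching lanes adds overhead without adding throughput.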
Parallel Processing in Action
The idea behind parallel processing is that more cores working together lead to better performance. A parallel processor behaves like the same dozen checkout lanes staffed by more than one cashier. If a program is set up to take advantage of parallel processing, a customer can even break one order into smaller groups and use several checkout lanes at once.
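Extending the analogy in code: one large "order" is split into smaller groups, each rung up in its own lane (a separate process), and the lane subtotals are then added together. The item prices and lane count are made up; `concurrent.futures.ProcessPoolExecutor` is the stdlib pool used here.

```python
# Several cashiers (processes) each ring up part of one order, then the
# subtotals are combined. Prices and lane count are illustrative.
from concurrent.futures import ProcessPoolExecutor

def ring_up(items):
    return sum(items)  # one lane totals its share of the order

def checkout_in_parallel(order, lanes=3):
    groups = [order[i::lanes] for i in range(lanes)]  # split the cart across lanes
    with ProcessPoolExecutor(max_workers=lanes) as pool:
        subtotals = pool.map(ring_up, groups)
    return sum(subtotals)  # reassemble: grand total of all lanes

if __name__ == "__main__":
    order = [4, 2, 7, 3, 5, 1]
    print(checkout_in_parallel(order))  # → 22, the same total one lane would reach
```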
Parallel Processors Expand Possibilities
In 2007, Nvidia's CUDA platform opened the parallel processing power of graphics hardware to general-purpose computing. Graphics processing units use parallel processing on a scale that far outpaces serial processing when making large numbers of small calculations. While CPUs tend to have an easily countable number of cores, GPUs can have thousands of lower-powered cores that are better suited to running simpler simultaneous calculations. GPUs are most commonly used for graphics, but they can handle other workloads such as sorting and matrix algebra.
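The GPU style of work described above is data parallelism: the same tiny calculation is applied to every element position independently, so each of thousands of lightweight cores could handle one position. The sketch below imitates that pattern with stdlib processes standing in for GPU threads; it is not a real GPU API, and the vector values are invented.

```python
# GPU-style data parallelism in miniature: one simple operation (adding a
# pair of numbers) is applied at every position independently. Stdlib
# processes stand in for GPU threads; this sketches the pattern only.
from multiprocessing import Pool

def add_pair(pair):
    a, b = pair
    return a + b  # one simple calculation per "core"

def vector_add(xs, ys, workers=4):
    with Pool(workers) as pool:
        return pool.map(add_pair, list(zip(xs, ys)))

if __name__ == "__main__":
    print(vector_add([1, 2, 3], [10, 20, 30]))  # → [11, 22, 33]
```

On a real GPU this kind of elementwise operation runs across thousands of hardware threads at once, which is why simple, repetitive calculations like these suit GPUs so well.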