What is the difference between the terms concurrent and parallel execution? I've never quite been able to grasp the distinction.

The tag defines concurrency as a manner of running two processes simultaneously, but I thought parallelism was exactly the same thing, i.e. separate threads or processes that can potentially run on separate processors.

Also, if we consider something like asynchronous I/O, are we dealing with concurrency or parallelism?

In a nutshell: concurrent means many different operations happening at once; parallel means the same operation broken into small bits, all happening at once.
– Oded♦ Mar 15 '13 at 16:04


@Oded, I understand what the words mean, but I'm having trouble grokking the implications. Do you have a concrete example?
– blz Mar 15 '13 at 16:11


@Oded, I don't really agree with you; nowhere in their definitions (either in general or as applied to programming) do the notions of "concurrent" and "parallel" mention anything about the number or "size" of operations.
– Shivan Dragon Mar 15 '13 at 16:22

5 Answers

Concurrency means, essentially, that task A and task B both need to happen independently of each other, and A starts running, and then B starts before A is finished.

There are various ways of accomplishing concurrency. One of them is parallelism: having multiple CPUs working on the different tasks at the same time. But that's not the only way. Another is task switching, which works like this: task A runs up to a certain point, then the CPU working on it stops and switches over to task B, works on it for a while, and then switches back to task A. If the time slices are small enough, it may appear to the user that both things are being run in parallel, even though they're actually being processed in serial by a multitasking CPU.
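A minimal sketch of that task-switching idea, using Python threads (for CPU-bound work under CPython's GIL they are concurrent but not parallel, which makes the distinction visible); the task names and step counts are just placeholders:

```python
import threading

def task(name, steps):
    # Each step is a small chunk of work; the OS scheduler (and the GIL)
    # decides when to switch between the two threads.
    for i in range(steps):
        print(f"{name}: step {i}")

# Task A starts, and task B starts before A is finished: concurrency.
a = threading.Thread(target=task, args=("A", 5))
b = threading.Thread(target=task, args=("B", 5))
a.start()
b.start()
a.join()
b.join()
# On one core the steps are interleaved (time slicing); with multiple cores
# and no GIL they could genuinely run at the same time (parallelism).
```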

@blz: That's right. It's also how preemptive multitasking works. The main difference is that with async I/O, the program decides on its own to give up its time and let the CPU process something else, whereas with preemptive multitasking, if the running thread doesn't voluntarily give up the CPU within its allotted time, the OS preempts it.
– Mason Wheeler Mar 15 '13 at 16:37

Concurrent processing describes two tasks occurring asynchronously, meaning the order in which the tasks are executed is not predetermined. Two threads can run concurrently on the same processor core by interleaving executable instructions. For example, thread 1 runs for 10 ms, then thread 2 runs for 10 ms, and so on.

Parallel processing is a type of concurrent processing where more than one set of instructions is executing simultaneously. This could be multiple systems working on a common problem as in distributed computing, or multiple cores on the same system.
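A small sketch of that simultaneous execution on multiple cores, using Python's multiprocessing (the prime-counting workload is an arbitrary CPU-bound placeholder):

```python
from multiprocessing import Pool

def count_primes(limit):
    # Deliberately CPU-bound placeholder work.
    return sum(1 for n in range(2, limit)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Four chunks of the same problem, executing simultaneously
        # on separate cores: parallel processing.
        results = pool.map(count_primes, [20000, 20000, 20000, 20000])
    print(results)
```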

Parallelism is one way to implement concurrency, but it's not the only one. Another popular solution is interleaved processing (a.k.a. coroutines): split both tasks up into atomic steps, and switch back and forth between the two.
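Here is a hand-rolled sketch of that interleaved, coroutine-style switching using plain Python generators (no threads involved, just a tiny round-robin scheduler):

```python
def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # atomic step done; hand control back to the scheduler

def run_round_robin(tasks):
    # Keep switching back and forth until every task is exhausted.
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)
            tasks.append(current)
        except StopIteration:
            pass

run_round_robin([task("A", 3), task("B", 3)])
# Prints A: step 0, B: step 0, A: step 1, B: step 1, ...
```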

By far the best known example of non-parallel concurrency is how JavaScript works: there is only one thread, and any asynchronous callback has to wait until the previous chunk of code has finished executing. This is important to know, because it guarantees that any function you write is atomic - no callback can interrupt it until it returns. But it also means that "busy loops" won't work - you can't set a timeout and then loop until it fires, because the loop will prevent the timeout callback from executing.
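The same busy-loop trap can be reproduced in Python's single-threaded asyncio event loop, which behaves analogously to the JavaScript model described above (an analogy only, not JavaScript itself):

```python
import asyncio
import time

fired = False

def on_timeout():
    global fired
    fired = True

async def main():
    # Schedule a "callback" 10 ms from now, then busy-loop waiting for it.
    asyncio.get_running_loop().call_later(0.01, on_timeout)
    deadline = time.monotonic() + 0.5
    while not fired and time.monotonic() < deadline:
        pass                      # never yields, so the callback can't run
    print("fired inside the busy loop?", fired)  # False
    await asyncio.sleep(0.02)     # yield to the event loop for a moment
    print("fired after yielding?", fired)        # True

asyncio.run(main())
```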

You say that "concurrency means that two or more calculations happen within the same time frame, and there is usually some sort of dependency between them", but the user who wrote the accepted answer says "concurrency means, essentially, that task A and task B both need to happen independently of each other". So what is the conclusion?
– nbro Mar 13 at 12:07

Parallelism is a way to speed up processing. Whether you do matrix multiplication on a single core, on multiple cores, or even on the GPU, the outcome is the same (or else your program is broken). It doesn't add new functionality to a program, just speed.

Concurrency, on the other hand, is about things you couldn't do sequentially. For example, serving 3 different web pages at the same time to 3 clients while waiting for the next request. (Though you could simulate this to some degree through interleaving, as was done in the old days.)
Note that the behaviour of concurrent programs is nondeterministic. It is, for example, not clear which of the 3 clients will be completely served first. You could run quite a few tests and get a different result each time regarding the order in which the requests finish. The run-time system should guarantee that (a) all clients will be served and (b) within a reasonable amount of time.
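A toy model of those 3 clients in Python's asyncio, where a random delay stands in for network latency; the completion order differs from run to run, which is exactly the nondeterminism described above:

```python
import asyncio
import random

async def serve(client):
    # Random delay stands in for network/disk latency per request.
    await asyncio.sleep(random.uniform(0.01, 0.05))
    print(f"finished serving client {client}")

async def main():
    # All three requests are in flight at once; which finishes first
    # changes between runs.
    await asyncio.gather(*(serve(c) for c in (1, 2, 3)))

asyncio.run(main())
```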

Usually, the workhorse of a parallel computation isn't aware of, nor does it care about, parallelism. Concurrent tasks, on the other hand, often explicitly employ inter-process or inter-thread communication, such as blocking queues, synchronization, and locking mechanisms.
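For the inter-thread communication part, a minimal producer/consumer sketch with a blocking queue (Python's queue.Queue; the job payloads are placeholders):

```python
import queue
import threading

jobs = queue.Queue()          # blocking queue coordinating the two threads

def producer():
    for i in range(5):
        jobs.put(i)           # hand work to the consumer
    jobs.put(None)            # sentinel: no more work

def consumer():
    while True:
        item = jobs.get()     # blocks until the producer supplies something
        if item is None:
            break
        print(f"processed job {item}")

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```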