Technical vocabulary in the IT industry can be very confusing, and “Concurrency” and “Parallelism” are two such terms. Many developers think “concurrency and parallelism mean executing at the same time,” which is only half right, because there is one big difference:

Concurrency gives you a feel of parallelism, while parallelism, as the name implies, is actual parallelism.

A feel of parallelism means you execute multiple tasks on the same core, and the core switches context between the tasks to serve them. You can also term this time slicing / overlapping time periods, because your single core is just dedicating some time to one task and then some time to the other.

In order to achieve actual parallelism, we need dedicated cores, separate memory and so on.
We need MORE RESOURCES.

Let’s say we want to show a progress bar while some task completes. Now we really do not want to have a separate core allocated just to display the progress.

We do not want PERFORMANCE here; we want the end user to psychologically feel that both tasks are happening simultaneously.

We just want to beat the human eye, which cannot distinguish updates beyond roughly 100 FPS, and give an illusion of parallelism without stressing our computer's resources. But let’s say we want to process big Excel files with a million records; then yes, we would love to have actual parallelism to achieve performance.
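To make the progress-bar case concrete, here is a minimal C# sketch (the method name RunWithProgressAsync is made up): a single thread alternates between a slice of work and a progress report, and the updates come fast enough to feel simultaneous.

    using System;
    using System.Threading.Tasks;

    // One thread alternates between a slice of work and a progress report;
    // no extra core is dedicated to displaying progress.
    var progress = new Progress<int>(v => Console.Write($"\r{v}%"));
    await RunWithProgressAsync(progress);

    async Task RunWithProgressAsync(IProgress<int> p)
    {
        for (int percent = 1; percent <= 100; percent++)
        {
            await Task.Delay(20);   // stand-in for a slice of real work
            p.Report(percent);      // the display refreshes between slices
        }
    }

In a real UI you would pass something like new Progress<int>(v => progressBar.Value = v) instead of writing to the console.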

In order to achieve concurrency, we need to compose our application logic as independent units. For instance, let’s say you want to process employee data where you want to increment the salary by x% and the bonus by x%.

So you can decompose the application into logical units by following different designs:

[Figure: Design 3]

There can be many such designs and combinations. So when you say your application supports concurrency, your application should be composed of small independent units.

Now you take these units and run them on one core (concurrency) or on multiple cores (parallelism). So concurrency is about design, while with parallelism we talk more from the hardware perspective: 2 cores, 3 cores and so on.
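A minimal C# sketch of that choice (the unit bodies are hypothetical placeholders): the decomposition stays the same, and only the execution strategy changes.

    using System;
    using System.Threading.Tasks;

    // The small independent units from the employee example (placeholder bodies).
    Action incrementSalaries = () => Console.WriteLine("salaries updated");
    Action incrementBonuses  = () => Console.WriteLine("bonuses updated");

    // Hand both units to the scheduler: on a one-core machine they time-slice
    // (concurrency); on a multi-core machine they may truly overlap (parallelism).
    Task.WaitAll(Task.Run(incrementSalaries), Task.Run(incrementBonuses));

    // Explicitly request parallel execution across the available cores.
    Parallel.Invoke(incrementSalaries, incrementBonuses);

Either way, the design work of decomposing into independent units was done up front; the hardware question is answered last.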

If you try to run every piece of concurrent code in parallel, you starve your resources unnecessarily. So ask yourself if you want an illusion (concurrent) or performance (parallel).

Comments and Discussions

If the current thinking/usage is that concurrency includes multi-tasking ... and, particularly, if it includes the possibility of inter-process communication (I did, after all, suggest a duet was an example of concurrency) ... then I accept concurrency as more general than parallelism.

A background worker then is "concurrent" (with the UI main thread) while tasks running in parallel on multiple cores are ..uh, "parallel."

But I recently used the term "concurrent" in an article about applying machine learning models (e.g., CNTK) to large sets of data for classification purposes. There, the data set is partitioned, the model is cloned, and the number of logical processors is controlled. It seems to me that in such a situation, each processor is working with a copy of the model and a different slice of data, and so you have essentially different operations going on (and different results being obtained) at different times "inside" each processor. Although each processor is working on the same overall problem at the same time, they communicate when they are finished with their tasks (their own foreach loops) and the whole thing returns (with a filled ConcurrentBag). Somehow this (the whole Parallel.ForEach) still seems more "concurrent" than "parallel."
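In TPL terms, the shape I have in mind is roughly this sketch (Model, Clone and Classify are stand-ins here, not CNTK's actual API):

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Stand-ins for the real model and data.
    class Model
    {
        public Model Clone() => new Model();     // one copy per worker
        public int Classify(double[] item) => 0; // dummy classification
    }

    class Demo
    {
        static void Main()
        {
            var model   = new Model();
            var dataSet = new double[10_000][];
            for (int i = 0; i < dataSet.Length; i++) dataSet[i] = new double[4];

            var results = new ConcurrentBag<int>();
            Parallel.ForEach(
                dataSet,
                new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount },
                () => model.Clone(),                         // each worker gets its own copy
                (item, state, localModel) =>
                {
                    results.Add(localModel.Classify(item));  // own copy, different slice
                    return localModel;
                },
                localModel => { });                          // nothing to merge at the end

            Console.WriteLine(results.Count);
        }
    }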

Similarly, your typical map-reduce parallel computing paradigm sort of resembles "concurrency" in this sense. I mean, you have to wait until all the "parallel" processes (with different latencies) finish and return their data.
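That waiting step, in TPL terms, looks something like this sketch (partitions and CountMatches are made-up stand-ins):

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    // Hypothetical partitions of a larger data set.
    int[][] partitions = { new[] { 1, 2, 3 }, new[] { 4, 5 }, new[] { 6 } };
    int CountMatches(int[] slice) => slice.Count(n => n % 2 == 0);

    // "Map": start one worker per partition, each with its own latency.
    var tasks = partitions.Select(p => Task.Run(() => CountMatches(p))).ToArray();

    // "Reduce": nothing proceeds until every worker has finished and returned.
    int[] counts = await Task.WhenAll(tasks);
    Console.WriteLine(counts.Sum());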

It's a false debate, and the two words can have different meanings depending on the context in which they are used. Concurrency and parallelism can be used to "solve" different problems, and sometimes I think the two words can have the same meaning.

If we keep that perspective, your code and design will vary accordingly. Now coming to context: yes, that decides what should be chosen and when.

If your goal is PERFORMANCE, parallelism comes into play. You want to run rocket fast, so you create threads and run them on different cores, or even on different machines. Your design has things like web gardening, taking care of out-of-proc session variables, data partitioning, threads, TPL and so on.
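To make that concrete, here is a rough PLINQ sketch (the record shape is just a placeholder) that spreads CPU work across every core:

    using System;
    using System.Linq;

    // A million hypothetical salary records, crunched across all cores.
    var records = Enumerable.Range(0, 1_000_000).Select(i => (Id: i, Salary: 1000m));

    var updated = records
        .AsParallel()
        .WithDegreeOfParallelism(Environment.ProcessorCount)
        .Select(r => (r.Id, Salary: r.Salary * 1.05m))  // CPU work per record
        .ToList();

    Console.WriteLine(updated.Count);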

If your goal is making the application USABLE and NON-BLOCKING, concurrency should drive your design. Say you have some heavy computation running in the background, but the UI should not hang. You are not thinking about performance. Your code would have more async code; in C# we have the async and await keywords to achieve it, events and so on.
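A minimal sketch of that non-blocking style, assuming a WinForms-like event handler (HeavyComputation is a stand-in for the real work):

    // The UI thread is released at the await; the window keeps repainting
    // while the heavy work runs in the background.
    private async void ComputeButton_Click(object sender, EventArgs e)
    {
        statusLabel.Text = "Working...";
        long result = await Task.Run(() => HeavyComputation()); // hypothetical work
        statusLabel.Text = $"Done: {result}";                   // back on the UI thread
    }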

Both have different coding mindsets, and they end up with different designs. Many developers just use threads and TPL for achieving non-blocking behavior, which can be OVERDESIGN for just making an application usable.

I'm glad you chose to write about this "issue", but honestly, Shivprasad, I believe that I disagree with the position you take here.

"Concurrency" or "concurrent" literally means (to me) "at the same time." The only way that is possible is using multiple cores (whether inside a chip or distributed across computers distant from one another). Using multiple time slots on a single core is "multi-tasking" and does give the appearance of parallel processing, even though, strictly speaking, this is neither "parallel" nor "concurrent."

Is "concurrent" a more general term than "parallel?" I'm not too sure about that, but would accept it.

A distinction I sometimes make when using the term "concurrent" is when the programmer controls the situation (to some extent, i.e., by specifying MaxDegreeOfParallelism in a ParallelOptions parameter of a Parallel.ForEach statement). Typically, I think, this is used with partitioning to ensure that each core only runs one process thread, although load balancing and so forth may still result in several threads running on the same processor anyway.
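The combination I have in mind looks roughly like this sketch (data and Process are placeholders): range partitioning hands each worker one contiguous chunk, and ParallelOptions caps the worker count.

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    double[] data = new double[1_000_000];
    void Process(double value) { /* per-item work (placeholder) */ }

    var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };

    Parallel.ForEach(
        Partitioner.Create(0, data.Length),  // contiguous index ranges, one per worker
        options,
        range =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
                Process(data[i]);
        });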

In your literature review, you probably encountered this argument and perhaps the weight of community opinion is on your side. Maybe more readers will weigh in here with their opinions too.

I disagree with your disagreement.
A task can be composed of sub-tasks. Two tasks can run concurrently because some sub-tasks from task 1 will run, then some sub-tasks from task 2. The two tasks are running "at the same time" but the sub-tasks are not. Sub-tasks can be part of the design or just the hardware time-slicing.

Yes, a process can be big, multi-part, even contain its own parallel processes ... or be a very small and concise function of some sort. I watched the Rob Pike video mentioned in the literature review, which made almost literally the same or a very similar case as the author here. Pike mentions your point too. Both presenters associate "concurrency" with "program structure." But I still disagree somewhat with that argument. So what? If a "parallel" process is running on a machine with only one core ... either it just runs on the main thread (with no parallelism) or it multi-tasks using other thread time-slots (e.g., background methods to keep the UI active).

Calling that "concurrent" seems to me to be rather academic. Actually I would probably invert the entire argument and say that "parallel" makes more sense to mean "doing things together" and "concurrent" is even closer to the meaning of "doing things more or less at the same time" (both requiring two or more do'ers). And those "things" can be duplicate or completely different processes, even ones that can be awaited.

I might argue that (coordinated) teamwork - human or computer cores - is the essence of parallel operations, but if humans or PCs are doing some thing(s) at the same time, then they would be said to be performing concurrently.

That's a great point. A design done for concurrency is not really suitable for parallelism most of the time, and if it is, then it's just LUCK and a BONUS. I also disagree with Rob's point on this.

In concurrency, the individual units can communicate with each other; if the communication is too chatty, running these units in parallel can go against performance. A parallel execution should have NO communication with, and no dependency on, other units.
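A rough sketch of why chattiness hurts (numbers is just a placeholder array): the first loop makes every iteration communicate through one lock, so the "parallel" work largely serializes; the second keeps the workers independent and communicates only once at the end.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    long[] numbers = new long[1_000_000];

    // Chatty: every iteration takes the same lock, so workers mostly wait in line.
    long chattyTotal = 0;
    object gate = new object();
    Parallel.ForEach(numbers, n => { lock (gate) { chattyTotal += n; } });

    // Independent: each worker sums locally and communicates once at the end.
    long total = 0;
    Parallel.ForEach(
        numbers,
        () => 0L,                          // per-worker local sum
        (n, state, local) => local + n,    // no shared state touched here
        local => Interlocked.Add(ref total, local));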

asiwel wrote:

"planning" and "doing" seems to be confusing to just about everybody.

Yes, the planning-and-doing distinction is not a good one to keep in mind. The goals are different and the approaches are different, ending up with different code and syntax.

We agree to disagree. If you look at it just from the English dictionary perspective, it is the same thing. From a programming language perspective (due to history), the meanings are different.

asiwel wrote:

Using multiple time slots on a single core is "multi-tasking"

Multitasking was a term used more for operating systems in the early years of computing, and it is a synonym for concurrency.

asiwel wrote:

Is "concurrent" a more general term than "parallel?" I'm not too sure about that, but would accept it.

I slightly agree on this. But the main question here is what your expectation is: performance, or just making the application usable and non-blocking.

If you want to make your application non-blocking, like showing a progress bar, or keeping the UI from hanging while some heavy computation runs in the background, then concurrency should be the approach. Running the progress bar on one core and the UI on another would just complicate the code.

A different perspective on design and code comes into play when we look at both scenarios. I come from a C# background, and for concurrency my code all ends with async and await keywords; for parallelism I end up with tasks, web farming, web gardening, data partitioning approaches and so on.