Thread: a line of reasoning or train of thought that connects the parts in a sequence (as ideas or events)

What is the difference between Threads and Processes?

Actually, we can think of a process as one big thread. A process has an address space and one thread of execution (hence the “big thread” idea). If you create another process, it gets its own address space and its own thread of execution.
Threads, on the other hand, as mentioned before, share an address space among themselves. Although we might think of threads as separate processes, be aware that they share memory, which, if not planned for correctly, can lead to some undesirable side effects.

What can you use them for?

Why would we want a “smaller process” inside a process? Well, there are actually several reasons, so let’s look at a few:

You can use Threads to parallelize tasks

If you have a task that can run in parallel, i.e. its parts don’t depend on each other, you can benefit from running those parts in different threads.
Take for instance a list of items that have to be processed. Let’s say there is a huge amount of cans that you have to work through. You have to take each can, copy the text on its label down onto a piece of paper, and put that paper into a pile. Some of the cans have really long texts and others have short ones, so each can takes more or less time to copy.
This is an example of a task that can be parallelized. If more people are all reading from the same list, copying down the text and putting the paper on the pile, then the number of cans will go down faster than if you were doing it alone!

A task done by just one person will take way more time than if it is divided.
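The can-copying scenario can be sketched with a thread pool. Everything here is a hypothetical stand-in: the cans are just strings, and “copying the label by hand” is simulated by an upper-case transform.

```python
from concurrent.futures import ThreadPoolExecutor

# hypothetical stand-ins: each can is a string, and "copying the
# label" is simulated by an upper-case transform
cans = ["beans", "corn", "tomato soup with a very long label", "peas"]

def copy_label(can):
    return can.upper()

# four "people" work through the same list of cans at once;
# map() hands each can to whichever worker is free next
with ThreadPoolExecutor(max_workers=4) as pool:
    pile = list(pool.map(copy_label, cans))

print(pile)
```

`pool.map` keeps the results in the original order even though the workers may finish out of order.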

It doesn’t always have to be cans though

You can now substitute threads for the people and, for the cans, something like a text parser that copies relevant info into a file. You can even add another thread that saves the file every minute or so, so that you don’t lose your work.
If this process were single-threaded, then every time the user triggered the save operation, no info would be parsed and the whole process would take much longer.
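A minimal sketch of that parser-plus-autosave setup, with everything hypothetical: parsing is simulated by upper-casing lines, and “saving” just snapshots how much work has been done so far.

```python
import threading
import time

results = []          # the shared output being filled by the parsers
saves = []            # snapshots taken by the autosave thread
stop = threading.Event()

def parser(lines):
    # stand-in for real parsing work
    for line in lines:
        results.append(line.upper())

def autosaver(interval=0.02):
    # wake up periodically and snapshot the work done so far,
    # like an editor's autosave
    while not stop.wait(interval):
        saves.append(len(results))

saver = threading.Thread(target=autosaver, daemon=True)
saver.start()

workers = [threading.Thread(target=parser, args=(["a", "b", "c"],))
           for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

time.sleep(0.1)   # give the autosaver a couple of ticks
stop.set()
print(results)
```

The parsing and the saving overlap instead of blocking each other, which is exactly the point made above.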

One thread will take all the time needed to finish the tasks one after the other. Multiple threads will only take the time of the longest task.

Now the image above is mostly true: if a multi-core CPU is running these threads, then we see a scenario like that. On the other hand, if the machine has one core, there is a subtle switching that happens, and things would not turn out exactly like in the image.
Threads really shine on systems where there are multiple CPUs available. But more on that later!

Wouldn’t Processes work just as well?

Because threads all share the same memory space, it is possible to achieve something like the situation described above, i.e. they can all save into a shared file and they can all read from the same stream of incoming data (some problems may arise from this… we will talk about that soon).
Since processes don’t share a memory space, this would be impossible (or very complicated) to implement with processes as described.

Threads are also lightweight compared to processes. This makes it easier and faster to create them and to switch context between them. In many systems, creating a thread is 10–100 times faster than creating a process.[1]

The Classical Model

There are other models, notably the one used in UNIX systems. However, in this post, I will focus on the classical one. It is the theoretical model and the base for the other models, so it will suit you just fine to learn it!

Processes and Threads – A Symbiotic relation

So let us start with the big picture: you can imagine processes as ways of grouping related resources together: files, child processes, alarms, handles, data, etc. Because these resources are all bundled together in a process, they can all be managed and accessed more easily.

A process itself has a thread of execution (often shortened to just thread), which is where the execution of tasks happens. The thread has:

Program counter – keeps track of which instruction to execute next

Registers – where the current variables are stored

Stack – contains the execution history, with one frame for each procedure called and not yet returned

This thread, as well as any others, needs to run inside the context of a process. However, they are different concepts and can be treated separately: processes are used for grouping resources, and threads are the entities that get scheduled for execution on the CPU.

What Threads bring to the Model

With threads, we extend the process model to allow multiple executions in the same environment. This is called multithreading. The situation is analogous to having multiple processes running in parallel on one computer, but with one difference: the threads share resources, which eases the communication and sharing that happens among them and reduces the total communication overhead in an application.

Because of this resemblance to processes, threads are very often called lightweight processes.

On the left: 3 processes, each with one thread. On the right: 1 process with 3 threads.

Who has what

Just to make it clear, let’s write down what belongs to processes and what belongs to threads:

Process Stuff

Address Space
Global Variables
Open Files
Child Processes
Pending alarms
Signals and signal handlers
Accounting information

Thread Stuff

Program counter
Registers
Stack
State

As one can see, the threads have their own “stuff” that only they know about and take care of. However, because threads all live inside a process, they also have access to the process’s “stuff”, and there is where the problem lies…

Problems of thread misuse

Although threads may offer a variety of benefits, one should also be aware of the problems that may arise with their usage.

I am not saying you should never make use of them, but by being aware of the problems you can start to watch out for them in your implementations, and even test for the behavior in order to catch them before they get into production.

Many Threads

If a few threads offer some benefits, then adding more can only be even better right??? Right???

Wrooooooooong!

Of course, if you have many threads, they might start to put pressure on the system’s memory. There are ways around that too, for example by pooling them together. That, however, is not within the scope of this article!

There are problems that can arise when we use several threads together. These usually involve some situation where one thread expects something another thread does, or where one thread is faster than another. Here are the problems you are most likely to see:

Deadlocks

In this case, what happens is pretty simple: someone is “hogging” someone else’s stuff, and neither can carry on without the other!

But let’s explain this better. Suppose we have 2 threads: Thread_A and Thread_B. A deadlock occurs when neither thread can finish its work because they both depend on one another:

To finish its task, Thread_A needs the results that Thread_B is going to deliver, so it (Thread_A) can write to a file; let’s call it file_out.txt

Thread_A has acquired file_out.txt and is waiting on Thread_B to finish its computations

Thread_B is doing its work, BUT it needs to access file_out.txt … and it can’t, since Thread_A is using it

Now Thread_B is waiting to use the file, and Thread_A is not going to release it until it receives the values from Thread_B

No one is getting what they need… but everyone keeps waiting

Can you see the problem here?
There are different strategies to solve these situations: Locks, Semaphores, etc. But we will cover these topics in another post.

A famous thought experiment/problem in computer science is called the Dining Philosophers. Here is the Wikipedia explanation of the problem:

Five silent philosophers sit at a table around a bowl of spaghetti. A fork is placed between each pair of adjacent philosophers. (An alternative problem formulation uses rice and chopsticks instead of spaghetti and forks.)

Each philosopher must alternately think and eat. However, a philosopher can only eat spaghetti when he has both left and right forks. Each fork can be held by only one philosopher and so a philosopher can use the fork only if it’s not being used by another philosopher. After he finishes eating, he needs to put down both forks so they become available to others. A philosopher can grab the fork on his right or the one on his left as they become available, but can’t start eating before getting both of them.

Eating is not limited by the amount of spaghetti left: assume an infinite supply.

The problem is how to design a discipline of behavior (a concurrent algorithm) such that each philosopher won’t starve; i.e., can forever continue to alternate between eating and thinking assuming that any philosopher cannot know when others may want to eat or think.

If you’d like a nice little post about it, check out Austin Walter’s blog about it!
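One classic way out of the philosophers’ deadlock is to impose a global order on the forks: every philosopher picks up the lower-numbered fork first, so a circular wait can never form. This is only one of several known solutions, sketched here with locks as forks.

```python
import threading

N = 5
ROUNDS = 10
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # always pick up the lower-numbered fork first: a global
    # ordering on the locks makes a waiting cycle impossible
    first, second = sorted((left, right))
    for _ in range(ROUNDS):
        with forks[first]:
            with forks[second]:
                meals[i] += 1  # eat
        # both forks go back down here; the philosopher thinks

threads = [threading.Thread(target=philosopher, args=(i,))
           for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [10, 10, 10, 10, 10]
```

Every philosopher finishes all rounds: no one deadlocks and no one starves.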

Starvation

Now this one is all about never getting a chance to do what you want!

This case happens when a thread never gets access to a resource it needs in order to finish its work. It might be because another thread never releases the resource, or because, through some unfortunate timing, another thread always takes the resource before ours gets a chance.

Some thread is not getting anything

Livelock

This case is somewhat similar to the two above, but with different nuances. Unlike in the deadlock case, the threads don’t just wait around doing nothing. They usually keep doing something, but are stuck in some kind of loop they can’t get out of due to other factors. Here is an example:

We have two threads again: T_A and T_B, and both can communicate with each other. Now suppose they share one resource with other threads, and that both are programmed to give up the resource on request; this could be, for example, because they are secondary threads and there are other, more important tasks.

Now the livelock happens when T_A is using the resource and T_B asks for it. T_A will then stop using the resource and hand it over to T_B. T_A, however, will immediately poll for the resource again, and so T_B receives a request from another thread wanting the resource and gives it up. This happens again and again, and neither thread gets any work done.
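A toy, single-threaded simulation makes the pattern visible without any real concurrency: two overly polite workers each defer whenever the other also wants the resource, so both stay busy forever and neither finishes.

```python
import itertools

# toy simulation of two overly polite workers: each one gives way
# whenever the other also wants the shared resource
wants = {"A": True, "B": True}
done = {"A": False, "B": False}

for _, name in zip(range(20), itertools.cycle("AB")):
    other = "B" if name == "A" else "A"
    if wants[other]:
        # be polite and defer... but the other side is just as polite
        continue
    done[name] = True       # never reached: that's the livelock
    wants[name] = False

print(done)  # {'A': False, 'B': False} — lots of activity, no progress
```

Unlike a deadlock, both sides are constantly “doing something”; they just never get anywhere.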

Race Conditions

This is a tricky one. A race condition is what usually gets us developers scratching our heads trying to understand what is happening. It is also a “mean one”, since it is hard to debug: attaching a debugger affects the timing of the program. And if you need another reason to wish this never happens, here you go: the issue might only happen sometimes! Maybe it runs fine 2, 3 or even 1,000 times, but it will fail on the 1,004th.

This case happens when 2 or more threads access a critical section of the code at the same time. What might happen is that a value changes unexpectedly, and you end up with a value different from what you expected.
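The original post showed a small counter class at this point; here is a minimal stand-in, assuming a plain read-then-write increment. The two-step update is what opens the window for the race.

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        # read-modify-write in two steps: another thread can sneak in
        # between the read and the write, and its update gets lost
        current = self.value
        self.value = current + 1

counter = Counter()

def worker(times=100_000):
    for _ in range(times):
        counter.increment()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# with 4 threads doing 100,000 increments each we'd hope for 400,000,
# but lost updates often leave the total below that
print(counter.value)
```

Run it a few times: some runs may come out exactly right, others short by thousands. That inconsistency is precisely what makes race conditions so hard to catch.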

Now, if the counter was at, let’s say, 99 and you called the increment method, you would expect it to return 100, right? Yes. But what happens if several threads ask for the value at the same time? Since value + 1 is not an atomic operation, i.e. it does not happen in one step in the processor, this can lead to problems, best explained with an image showing the steps on each thread:

Two threads accessing the value with some unfortunate timing can lead to a race condition

So we end up with two threads that have the same count, although this should not be the case.

Final thoughts on Threads

We saw quite a bit with this post, but there is way more to find out.

I definitely recommend Deadlock Empire for getting a feeling for the problems of multithreading. It is a simple game that puts you in charge of the CPU: your job is to make the program fail by choosing which thread gets CPU time. It is quite easy to get the hang of the controls, and it gives a nice feel for these problems.

Please share with me anything that you liked or disliked about this post so I can improve. Hope to see you in the next one 🙂