Archive

It’s a typical exercise, but we always see it in theory; let’s bring it to practice. We’re developing with semaphores. As we saw here and here, semaphores, among other things, are used to block a process or thread which tries to access an exclusive resource that is already being used by another process. A typical example is a public toilet: if the door isn’t locked, you go in and lock the door; when you finish whatever you were doing there, you unlock the door and exit. In the same way, when a process wants to use a resource, if the semaphore is open, it closes it, uses the resource, and reopens the semaphore when it finishes.
But here we can create as many semaphores as we want, and we are free to do whatever we want with them, so we can create tons of situations.

But, what if we have three processes (P1, P2 and P3), and P1 is waiting for P2, P2 is waiting for P3 and P3 is waiting for P1? We’ll be waiting forever.

What I’m about to code is:

We have two processes (P1 and P2).

We have two resources (R1 and R2) which are exclusive (only one process can use each of them at a time).

P1 wants to access R1, so it closes its semaphore.

P2 wants to access R2, so it closes its semaphore.

P1 also wants to access R2, so it waits for its semaphore to open.

P2 also wants to access R1, so it waits for its semaphore to open.

As P1 is waiting for P2 to open R2’s semaphore, and P2 is waiting for P1 to open R1’s semaphore, both processes will wait indefinitely.

Don’t make a mess

Use the fewest semaphores possible: if we can do the same job with one semaphore, just use one. In some cases we can.

Use the resource if available, skip if not

We can use sem_trywait(): if the resource is busy, it returns an error without blocking the application. We only have to do it in one process, but this process will enter the critical section fewer times:

We’ve been practicing sharing variables between child processes, but when there are several processes trying to access a shared resource, we need a mutex to make it safe. This time we’ll implement the mutex with semaphores. These semaphores must also be shared variables to work properly.

First, think about semaphores as variables which can be 0 or 1. If the semaphore is 1, it’s open, and we close it (set it to 0) as we pass; if it is 0, we wait until it becomes 1. (It’s not like a while (semaphore==0); loop: the operating system deactivates the process and reactivates it when the semaphore opens, so the CPU can be used for anything else in the meantime.)
But let’s go further: a semaphore’s value can be any number, not just 0 or 1. If it’s positive and we want to pass, we decrement it and our process isn’t blocked; if it’s zero, our process blocks until someone increments it. So we can say a mutex is a semaphore restricted to the values 1 and 0, used to protect a resource.

To use semaphores we must keep in mind three basic functions (there are some more):

sem_init(semaphore, pshared, value): Initializes the semaphore with a known value; pshared must be 0 if we want it shared between the threads of one process, or nonzero if we want it shared between processes. In this case we will put a 1 here.

sem_post(semaphore): Increments the semaphore; it’s what we do to free the resource.

sem_wait(semaphore): Decrements the semaphore; if its value is zero, it blocks the process until someone increments it with sem_post(). We’ll use this to check whether the resource is locked.

In the next example, we’ll increment a number, but to make it a bit harder, it will be stored in a string, and each increment must be done by a different child process. The final value of x must be 20. I’ve also inserted some random waits to simulate a heavy process and provoke a race condition.

We can change the SEMAPHORES constant value from 1 to 0 to see how this program behaves in each case:

Each time a process wants to enter our critical section, it writes its process ID on screen, so we can see when a process is accessing the resource and detect whether two or more processes are accessing it simultaneously (which we don’t want). Remember, the final value of x must be 20; without semaphores we may or may not get this value, it isn’t under our control.

Another way of facing concurrency in the wonderful world of doing several tasks at the same time is using child processes with fork(). The main difference between fork and threads is that forked processes are full processes: they have their own memory for code, data and stack, while threads share code and data, and only have their own stack.

But what about sharing variables in forked processes? If we try to do it as simply as in the thread examples, it’s not going to work, because as they are different processes, they use different memory zones for their data. For example:

In our ideal world, the MAIN process would see the values written by CHILD and vice versa, but the value the MAIN process accesses is completely different from the value the CHILD accesses, even if we use malloc before calling fork(). fork() copies the values of all variables, but after that each process works on its own copy in physical memory.

To do that we have to work with shared memory: we have to use mmap instead of malloc:

In this example, the main program waits for a second (with Thread.sleep()) while the task is scheduled, so we will see the output running. After that second, the main thread stops, but the program doesn’t exit because the Timer is still active. If we write timer.cancel(), the task is cancelled too and the program exits.

The problem comes when resuming this task after a while, because we have to create the TimerTask object again; if we try to reuse the old TimerTask we created, we’ll get an exception.

But, as we must define the inner TimerTask class again, it’s interesting to create a separate derived class. Let’s go further: we’ll make a TimerTask derivative that cancels and resumes itself, so our TimerTask (MyTimerTask) must know the Timer object; this way it can also re-schedule the task at different frequencies:

But, as we’ve seen, we must re-create the task object when re-scheduling, so what about passing information from the old task to the new one? We may want that, for example, to know what happened before the new task was created. To do that, we can create a store class, whose attributes hold useful information between schedules, and pass this variable to our MyTimerTask constructor; to prevent external classes from modifying the values of our store class, this new overloaded constructor can be private.

In the next example, we just store an integer (we could store as many things as we want); with this object we count the times the task has been cancelled, and make it exit after 5 times:
MyTimerTaskInfo.java
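The listings themselves aren’t reproduced here, but a sketch of the whole idea could look like this (class and field names are my own, and it’s simplified to a single constructor instead of the private overload described above): MyTimerTask cancels itself every 3 runs and hands the MyTimerTaskInfo store object over to its replacement, until it has been cancelled 5 times.

```java
import java.util.Timer;
import java.util.TimerTask;

// A little store object that survives between schedules
class MyTimerTaskInfo {
    int timesCancelled = 0;
}

// A TimerTask that knows its Timer, so it can cancel and re-schedule itself
class MyTimerTask extends TimerTask {
    private final Timer timer;
    private final MyTimerTaskInfo info;
    private int ticks = 0;

    MyTimerTask(Timer timer, MyTimerTaskInfo info) {
        this.timer = timer;
        this.info = info;
    }

    @Override
    public void run() {
        System.out.println("tick " + ticks);
        if (++ticks == 3) {
            cancel();                       // this task object can never run again...
            info.timesCancelled++;
            if (info.timesCancelled < 5) {
                // ...so we create a fresh one, handing over the store object
                timer.schedule(new MyTimerTask(timer, info), 50, 20);
            } else {
                System.out.println("cancelled " + info.timesCancelled + " times, exiting");
                timer.cancel();             // stop the Timer thread too
            }
        }
    }
}

public class Main {
    public static void main(String[] args) {
        Timer timer = new Timer();
        timer.schedule(new MyTimerTask(timer, new MyTimerTaskInfo()), 0, 20);
    }
}
```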

A few days ago, we talked about building our first multi-thread programs using POSIX threads. But soon a need arises: sharing data between the main thread and the newly launched thread, at least to tell it what we want it to do.
We can use the last argument of pthread_create(): it’s a void pointer we can use for whatever we want, to pass any variable type. For example, an integer:


#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *newtask(void *_number)
{
  int number = *(int*)_number;
  printf("The number I was asked for: %d\n", number);

  pthread_exit(NULL);
}

int main()
{
  pthread_t thread;
  int number = 100;

  if (pthread_create(&thread, NULL, newtask, &number))
    {
      fprintf(stderr, "Could not create thread\n");
      exit(EXIT_FAILURE);
    }

  printf("Main process about to finish.\n");
  /* Last thing that main() should do */
  pthread_exit(NULL);
}

Remember to compile using pthread:

$ gcc -o simplepass simplepass.c -lpthread

This way, the thread can read the value we’ve passed it, as we can see if we run this little program: just after starting, newtask() extracts the number from the void pointer into an integer variable. Working a bit more on this code, we realize this variable can be bi-directional; in other words, the secondary thread can write a value and the main thread can read it. (We could use global variables instead, but I don’t like them much):


On older and slower computers, we may have to vary the value passed to usleep(); I’m just simulating a wait, which could be the time taken by additional computation. But using a sleep can be a problem: depending on the computer’s capabilities this value must change, and if we give it a high value to be compatible with more computers, it becomes a waste of time on modern ones. We can solve this in many ways, but this first one is a demo of what you shouldn’t do, an active wait:


In this case, we have a new variable (vars->ready); when it’s 1 it means the values have been set by the secondary thread, so the main one can read them. To wait for vars->ready, the main thread keeps asking for this value while it’s 0:


while(!vars->ready);

But it has a great disadvantage: the process eats CPU while the condition is not met. We can see it clearly by writing sleep(20) just before the secondary thread sets ready to 1 and watching a task manager: our process will use a big percentage of CPU for nothing, just waiting. And if the secondary thread is doing any kind of complex calculation, the main thread will be stealing CPU time from it, making it slower. So we should use another kind of solution, like semaphores, signals, mutexes, etc. But I’ll talk about them in another post.
Photo: OakleyOriginals (Flickr) CC-by

Nowadays it is very common to see dual-core or quad-core CPUs, but there are still systems with a single core.
In this post, I want to talk about the very basics of concurrency, a brief introduction to this world, so some descriptions will be oversimplified; the real world is a little more complex.

In multi-core or multi-CPU systems, the operating system (OS) distributes tasks among our CPUs, so we can run several tasks at the same time. But we were enjoying multitasking for decades before multi-core appeared, so how is that possible? A single-core system can only perform one task at a time, but the operating system switches CPU control between all the tasks so fast that it makes us think they are all running simultaneously. A single-core CPU usually has one execution thread, while a dual-core system has two execution threads, so it can truly perform two tasks at the same time. But don’t worry about how the work is distributed, because the operating system does it for you.

We’ve seen the OS is smart enough to take advantage of our CPUs or CPU cores, so does it make any sense for our application to use several cores, or to run two things at the same time?
Of course it does; let’s see some examples:

Imagine a GUI program performing a heavy task. While this task is running, we may want to give the user some control, for example to cancel the process or to carry on using our software. We can run the heavy task in one thread and the GUI in another.

A server which attends several clients simultaneously.

A program which does intense calculations; if we have a dual-core CPU, it can be up to twice as fast.

In the last case, we must think a little. Distributing tasks takes some time for the OS, so we must make sure the time we save by separating the process into several threads is bigger than the time the OS spends managing the new tasks, plus the time it takes to put the results of all the individual threads together.
We may think this is only worthwhile on multi-core processors, but the time taken by our tasks varies depending on what we are doing. Calculations take all the CPU time, but if we access external devices (keyboard, screen, hard disk, USB, etc.), or even a file, our process has to wait for the data, and while we are waiting the CPU is wasting time, so that time can be used to perform calculations for some other task; this is valid even on single-core CPUs. Sometimes running a task in two threads can squeeze our CPU better, with one thread using the CPU while the other is just waiting for data, and vice versa.

Ok, before playing with this a little we must know there are several ways to do it. We can create several processes, which is like running some programs at the same time, or we can create threads. The difference is that each process must store its own code, data and stack in RAM, among other things, while threads can share code and data, so only the stack and a little information about the execution status belong to each thread exclusively. We can also say a process starts with a single execution thread.

Let’s code a bit, we will use pthreads (POSIX threads):


#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>

void *newtask(void *null)
{
  printf("Hello world! I'm a new thread\n");
  sleep(1);
  printf("Goodbye world! I'm a thread about to die\n");

  pthread_exit(NULL);
}

int main()
{
  pthread_t thread;

  printf("Main process just started.\n");
  if (pthread_create(&thread, NULL, newtask, NULL))
    {
      fprintf(stderr, "Could not create thread\n");
      exit(EXIT_FAILURE);
    }

  printf("Main process about to finish.\n");
  /* Last thing that main() should do */
  pthread_exit(NULL);
}

This example can be compiled linking the pthread library, this way:

$ gcc -o onethread onethread.c -lpthread

We are just executing a program that creates another thread which performs a task: writing a couple of messages on screen with a short wait between them. We can see a result like this:

Main process just started.
Main process about to finish.
Hello world! I’m a new thread
[ wait for a second ]
Goodbye world! I’m a thread about to die

The first two messages belong to the main thread, one when it starts and the other when it ends, but before ending we invoke the other thread, which writes the other two messages. As we can see, each thread executes independently, but to see it more clearly, let’s look at the next code:

void *newtask(void *null)
{
  int i;

  for (i=0; i<10; ++i)
    {
      usleep(100);
      printf("THREAD: %d\n", i);
    }

  pthread_exit(NULL);
}

int main()
{
  pthread_t thread;
  int i;

  printf("Main process just started.\n");
  if (pthread_create(&thread, NULL, newtask, NULL))
    exit(EXIT_FAILURE);

  for (i=0; i<10; ++i)
    {
      usleep(100);
      printf(" MAIN: %d\n", i);
    }

  printf("Main process about to finish.\n");
  /* Last thing that main() should do */
  pthread_exit(NULL);
}

What we’ve done is make the main thread (MAIN) and the secondary thread (THREAD) write several messages on screen inside a for loop. They count from 0 to 9, waiting a little between each write (instead of doing heavier, time-eating tasks we simulate them with sleeps).