I don't do programming, but I saw this and it looked like something someone on the forum would find interesting.

Quote:

In Ruby and Python’s standard implementations (MRI and CPython, respectively), the languages make use of a Global Interpreter Lock (GIL). The GIL mechanism performs timeslicing and scheduling of threads. Here’s how it works: Each thread has exclusive access to the GIL for a bit of time, does some work, and releases it. Meanwhile, every other thread is on hold, waiting to get a chance to access the GIL. When the GIL is released, a random thread will get to access it and start running.

There are two major advantages to this system. The first is that you can write threaded code in these languages and run it, unmodified, on an operating system that does not natively support threading. The second is that, because only one thread runs at a time, there are no thread-safety issues, even when working with a non-thread-safe library.

There are some major downsides, though. The biggest is that multiple threads will never run at the same time. While your application may look like it is running in parallel, in reality only one thread is ever executing, and the process is just rapidly bouncing between threads doing different things. This brings us to the second issue: speed. You will see no speed advantage on multicore or multiprocessor machines, because only one thread runs at a time; if anything, you will see a slowdown from the context-switching costs.

The use of the GIL makes a threaded application a bad idea in many (if not most) cases. Fortunately, there are options. For one thing, the GIL is not mandated by the language specifications. There are some implementations that do not use the GIL (JRuby and IronRuby, for example). Also, you can easily fall back on the process model that Ruby and Python both support, using the traditional fork/join mechanisms. While it may not be ideal (or possible) to use a different implementation or write your application to rely upon forking, it is good that there are alternatives to make truly parallel programs possible in Ruby and Python.
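To see what the process-based fallback looks like in Python, here's a minimal sketch using the standard-library `multiprocessing` module (which wraps fork/join-style workers; `burn` is just a made-up CPU-bound job for illustration):

```python
import os
from multiprocessing import Pool

def burn(n):
    # CPU-bound, pure-Python work; each worker process has its own
    # interpreter and its own GIL, so these jobs run truly in parallel
    return sum(range(n))

if __name__ == "__main__":
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(burn, [1_000_000] * 4)
    print(results)
```

Unlike threads, the workers here don't share memory; arguments and results get pickled across process boundaries. That's the trade-off for sidestepping the GIL.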

So how does this tie in with the multi-threading Linux kernel? Or do they fight with each other?

It neither fights the Linux kernel nor ties into its threading.

Basically, to the Linux kernel it looks like one ordinary process, and the kernel runs it as it would any other: the process gets normal priority and is scheduled into time slices accordingly. (In CPython the threads are actually real OS threads, but the GIL ensures only one of them can execute Python bytecode at any moment.)

Within that running process (the Python interpreter), there is a second layer of scheduling. When there are multiple threads, a thread must hold the GIL to execute bytecode. The running thread periodically releases the GIL (the interpreter forces this at regular intervals), another thread acquires it and runs, and so on.
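In modern CPython (3.2 and later) that interval is time-based rather than a fixed count of bytecodes, and you can inspect or tune it. A quick sketch:

```python
import sys

# The interpreter asks the running thread to drop the GIL every
# "switch interval" seconds; the default is 5 milliseconds.
print(sys.getswitchinterval())   # 0.005 by default

# Shorten it for more frequent switching (at some context-switch cost).
# This doesn't change which thread the OS picks next, only how often
# the GIL is put up for grabs.
sys.setswitchinterval(0.001)
print(sys.getswitchinterval())
```

Note this only matters for CPU-bound threads; a thread that blocks on I/O releases the GIL immediately rather than waiting out its interval.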

As the article points out, there are no safety problems when passing data between "threads", because it is impossible for multiple "threads" to be accessing, or more importantly writing, the same data at the same time. The major downside is that nothing actually runs in parallel, so on multicore/multithreaded processors there won't be a performance boost over single-core/single-thread processors.
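Here's a small sketch of that safety property in CPython (the `worker` function is made up for illustration, but the behavior of `list.append` is a documented CPython detail: it's a single operation at the interpreter level, so the GIL keeps concurrent appends from corrupting the list):

```python
import threading

results = []  # shared, unlocked list

def worker(label):
    # list.append maps to one interpreter-level operation in CPython,
    # so even without a lock the GIL keeps appends from clobbering
    # each other. Compound operations like `x += 1` get no such
    # guarantee and still need a lock.
    for _ in range(1000):
        results.append(label)

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 4000 -- nothing lost, no lock needed
```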

This isn't to say that there are no benefits to this type of threading, though.
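For example, threads still help when the work is waiting rather than computing, because CPython releases the GIL around blocking calls. A sketch using `time.sleep` as a stand-in for blocking I/O (a real program would be waiting on sockets or files):

```python
import threading
import time

def fetch(seconds):
    # time.sleep releases the GIL while it waits, just as blocking
    # socket and file I/O do, so the other threads keep running
    time.sleep(seconds)

start = time.perf_counter()
threads = [threading.Thread(target=fetch, args=(0.2,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"{elapsed:.2f}s")  # close to 0.2s, not 5 x 0.2 = 1.0s
```

The five waits overlap, so the whole batch finishes in roughly the time of one. That's why GIL-bound languages still do fine for I/O-heavy work like web servers and scrapers.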
