Introduction

I'm afraid I am just one of those people who, unless I am doing something, gets bored. So now that I finally feel I have learnt the basics of WPF, it is time to turn my attention to other matters.

I have a long list of things that demand my attention (such as WCF/WF/CLR Via C# version 2 book), but I recently went for a new job (and got it, but turned it down in the end) which required me to know a lot about threading. Whilst I consider myself to be pretty good with threading, I thought: I'm OK at threading, but I could always be better. So, as a result, I have decided to dedicate myself to writing a series of articles on threading in .NET.

This series will undoubtedly owe much to the excellent Visual Basic .NET Threading Handbook that I bought, that is nicely filling the MSDN gaps for me and now you.

I suspect this series will range from simple to medium to advanced, and it will cover a lot of stuff that is already in MSDN, but I hope to give it my own spin too. So please forgive me if it does come across a bit MSDN-like.

I don't know the exact schedule, but it may end up being something like:

This article will be all about how to control the synchronization of different threads.

Wait Handles

One of the first steps one must go through when trying to understand how to get multiple threads to work together well, is how to sequence operations. For example, let's say we have the following problem:

We need to create an order

We need to save the order, but can't do this until we get an order number

We need to print the order, but only once it's saved in the database

Now, these may look like very simple tasks that don't need to be threaded at all, but for the sake of this demo, let's assume that each of these steps is a rather long operation involving many calls to a fictional database.

We can see from the brief above that we can't do step 2 until step 1 is done, and we can't do step 3 until step 2 is done. This is a dilemma. We could, of course, have all this in one monolithic code chunk, but this defeats the idea of concurrency that we are trying to use to make our application more responsive (remember, each of these fictional steps takes a long time).

So what can we do? We know we want to thread these three steps, but there are certainly some issues, as we cannot guarantee which thread will be started first, as we saw in Part 2. Mmm, this isn't good. Luckily, help is at hand.

There is an idea within .NET of a WaitHandle, which allows threads to wait on a particular WaitHandle and only proceed when that WaitHandle tells the waiting thread that it is OK to do so. This mechanism is known as signaling and waiting. When a thread is waiting on a WaitHandle, it is blocked until such a time as the WaitHandle is signaled, at which point the waiting thread is unblocked and can proceed with its work.

The way that I like to think about the general idea behind WaitHandles is something like a set of lights at a train crossing. While the barrier (the WaitHandle) is not signaled, we (the waiting threads) must wait for the barrier to be raised (signaled).

That's how I think about it any way. So what does .NET offer in the way of WaitHandles for us to use?

The following classes are provided for us:

System.Threading.WaitHandle

System.Threading.EventWaitHandle

System.Threading.Mutex

System.Threading.Semaphore

As can be seen in this hierarchy, System.Threading.WaitHandle is the base class for a bunch of derived classes. There are a few things that should first be explained before we dive into the specifics of these WaitHandle-derived classes.

Some of the important common methods (though not all of them) are as follows:

SignalAndWait

There are several overloads for this method, but the basic idea is that one System.Threading.WaitHandle will be signaled whilst another System.Threading.WaitHandle will be waited on to receive a signal.

WaitAll (Static method on WaitHandle)

There are several overloads for this method, but the basic idea is that an array of System.Threading.WaitHandles is passed into the WaitAll method, and all of these System.Threading.WaitHandles will be waited on to receive a signal.

WaitAny (Static method on WaitHandle)

There are several overloads for this method, but the basic idea is that an array of System.Threading.WaitHandles is passed into the WaitAny method, and the call waits until any one of these System.Threading.WaitHandles receives a signal (the return value tells you the index of the handle that was signaled).

WaitOne

There are several overloads for this method, but the basic idea is the current System.Threading.WaitHandle will be waited on to receive a signal.
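To make these methods a little more concrete, here is a minimal sketch (not part of the attached demo; the handle names and delays are made up for illustration) that exercises WaitAny, WaitAll and WaitOne against two ManualResetEvents:

using System;
using System.Threading;

class WaitHandleMethodsDemo
{
    static void Main(string[] args)
    {
        //two illustrative wait handles, both starting out non-signalled
        ManualResetEvent first = new ManualResetEvent(false);
        ManualResetEvent second = new ManualResetEvent(false);

        //signal them from a worker thread after short delays
        new Thread((ThreadStart)delegate
        {
            Thread.Sleep(1000);
            first.Set();
            Thread.Sleep(1000);
            second.Set();
        }).Start();

        //WaitAny returns the index of whichever handle is signalled first
        int index = WaitHandle.WaitAny(new WaitHandle[] { first, second });
        Console.WriteLine("WaitAny released by handle {0}", index);

        //WaitAll blocks until every handle in the array is signalled
        WaitHandle.WaitAll(new WaitHandle[] { first, second });
        Console.WriteLine("WaitAll released, both handles are now signalled");

        //WaitOne blocks on a single handle (it is already signalled here,
        //so this returns immediately)
        first.WaitOne();
        Console.WriteLine("WaitOne returned");
    }
}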

Let us now concentrate on System.Threading.EventWaitHandle.

EventWaitHandle

The EventWaitHandle is a WaitHandle, and it has two more specific classes that inherit from it, ManualResetEvent and AutoResetEvent, which are the ones used most commonly. As such, it is these two subclasses that I will spend time discussing. All you need to note is that an EventWaitHandle object is able to act just like either of its subclasses, depending on which of the two EventResetMode Enumeration values you pass when constructing a new EventWaitHandle object.
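For example, the following fragment (an illustrative sketch rather than code from the demo app) constructs one EventWaitHandle that behaves like an AutoResetEvent and one that behaves like a ManualResetEvent:

using System.Threading;

class EventWaitHandleStyles
{
    //behaves like an AutoResetEvent, starting out non-signalled
    static EventWaitHandle autoStyle =
        new EventWaitHandle(false, EventResetMode.AutoReset);

    //behaves like a ManualResetEvent, starting out non-signalled
    static EventWaitHandle manualStyle =
        new EventWaitHandle(false, EventResetMode.ManualReset);
}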

So let's now look a little bit further at the ManualResetEvent and AutoResetEvent objects, as these are more commonly used.

AutoResetEvent

From MSDN:

"A thread waits for a signal by calling WaitOne on the AutoResetEvent. If the AutoResetEvent is in the non-signaled state, the thread blocks, waiting for the thread that currently controls the resource to signal that the resource is available by calling Set.

Calling Set signals AutoResetEvent to release a waiting thread. AutoResetEvent remains signaled until a single waiting thread is released, and then automatically returns to the non-signaled state. If no threads are waiting, the state remains signaled indefinitely."

In layman's terms: when an AutoResetEvent is set to the signaled state, the first thread that stops blocking (stops waiting) on it causes the AutoResetEvent to be put back into the reset state, so any other thread that is waiting on the AutoResetEvent must wait for it to be signaled again.

Let's consider a small example where two threads are started. The first thread will run for a period of time and will then signal (by calling Set) an AutoResetEvent that started out in a non-signaled state; the second thread will wait for that AutoResetEvent to be signaled. The second thread will also wait on a second AutoResetEvent; the only difference being that the second AutoResetEvent starts out in the signaled state, so it will not need to be waited for.
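The listing for this example is not reproduced here; a reconstructed sketch along the lines described above (the attached demo's actual code may differ slightly, but ar1 and ar2 are the names referred to below) would be:

using System;
using System.Threading;

namespace AutoResetEventTest
{
    class Program
    {
        public static Thread T1;
        public static Thread T2;

        //This AutoResetEvent starts out non-signalled
        public static AutoResetEvent ar1 = new AutoResetEvent(false);

        //This AutoResetEvent starts out signalled
        public static AutoResetEvent ar2 = new AutoResetEvent(true);

        static void Main(string[] args)
        {
            T1 = new Thread((ThreadStart)delegate
            {
                Console.WriteLine(
                    "T1 is simulating some work by sleeping for 5 secs");
                //calling sleep to simulate some work
                Thread.Sleep(5000);
                Console.WriteLine(
                    "T1 is just about to set AutoResetEvent ar1");
                //alert the waiting thread
                ar1.Set();
            });

            T2 = new Thread((ThreadStart)delegate
            {
                //wait for ar1 to be signalled from the other thread
                Console.WriteLine(
                    "T2 starting to wait for AutoResetEvent ar1, at time {0}",
                    DateTime.Now.ToLongTimeString());
                ar1.WaitOne();
                Console.WriteLine(
                    "T2 finished waiting for AutoResetEvent ar1, at time {0}",
                    DateTime.Now.ToLongTimeString());

                //ar2 was constructed in the signalled state, so this
                //WaitOne returns immediately (and resets ar2)
                ar2.WaitOne();
                Console.WriteLine("T2 did not have to wait for AutoResetEvent ar2");
            });

            T1.Name = "T1";
            T2.Name = "T2";
            T1.Start();
            T2.Start();
            Console.ReadLine();
        }
    }
}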

This results in the following output, where it can be seen that T1 waits for 5 secs (simulated work) and T2 waits for the AutoResetEvent "ar1" to be put into signaled state, but doesn't have to wait for AutoResetEvent "ar2" as it's already in the signaled state when it is constructed.

ManualResetEvent

From MSDN:

"When a thread begins an activity that must complete before other threads proceed, it calls Reset to put ManualResetEvent in the non-signaled state. This thread can be thought of as controlling the ManualResetEvent. Threads that call WaitOne on the ManualResetEvent will block, awaiting the signal. When the controlling thread completes the activity, it calls Set to signal that the waiting threads can proceed. All waiting threads are released.

Once it has been signaled, ManualResetEvent remains signaled until it is manually reset. That is, calls to WaitOne return immediately."

In layman's terms: when a ManualResetEvent is set to the signaled state, all threads that were blocking (waiting) on it are allowed to proceed, and it stays that way until the ManualResetEvent is put back into the reset state.

Consider the following code snippet:

using System;
using System.Threading;

namespace ManualResetEventTest
{
    /// <summary>
    /// This simple class demonstrates the usage of a ManualResetEvent
    /// in 2 different scenarios, both in the non-signalled state and the
    /// signalled state
    /// </summary>
    class Program
    {
        public static Thread T1;
        public static Thread T2;
        public static Thread T3;

        //This ManualResetEvent starts out non-signalled
        public static ManualResetEvent mr1 = new ManualResetEvent(false);

        static void Main(string[] args)
        {
            T1 = new Thread((ThreadStart)delegate
            {
                Console.WriteLine(
                    "T1 is simulating some work by sleeping for 5 secs");
                //calling sleep to simulate some work
                Thread.Sleep(5000);
                Console.WriteLine(
                    "T1 is just about to set ManualResetEvent mr1");
                //alert waiting thread(s)
                mr1.Set();
            });

            T2 = new Thread((ThreadStart)delegate
            {
                //wait for ManualResetEvent mr1, this will wait for mr1
                //to be signalled from some other thread
                Console.WriteLine(
                    "T2 starting to wait for ManualResetEvent mr1, at time {0}",
                    DateTime.Now.ToLongTimeString());
                mr1.WaitOne();
                Console.WriteLine(
                    "T2 finished waiting for ManualResetEvent mr1, at time {0}",
                    DateTime.Now.ToLongTimeString());
            });

            T3 = new Thread((ThreadStart)delegate
            {
                //wait for ManualResetEvent mr1, this will wait for mr1
                //to be signalled from some other thread
                Console.WriteLine(
                    "T3 starting to wait for ManualResetEvent mr1, at time {0}",
                    DateTime.Now.ToLongTimeString());
                mr1.WaitOne();
                Console.WriteLine(
                    "T3 finished waiting for ManualResetEvent mr1, at time {0}",
                    DateTime.Now.ToLongTimeString());
            });

            T1.Name = "T1";
            T2.Name = "T2";
            T3.Name = "T3";
            T1.Start();
            T2.Start();
            T3.Start();
            Console.ReadLine();
        }
    }
}

Which results in the following screenshot:

It can be seen that three threads (T1-T3) are started, T2 and T3 are both waiting on the ManualResetEvent "mr1", which is only put into the signaled state within thread T1's code block. When T1 puts the ManualResetEvent "mr1" into the signaled state (by calling the Set() method), this allows waiting threads to proceed. As T2 and T3 are both waiting on the ManualResetEvent "mr1", and it is in the signaled state, both T2 and T3 proceed. The ManualResetEvent "mr1" is never reset, so both threads T2 and T3 are free to proceed to run their respective code blocks.

Back to the Original Problem

Recall the original problem:

We need to create an order

We need to save the order, but can't do this until we get an order number

We need to print the order, but only once it's saved in the database

This should now be pretty easy to solve. All we need are some WaitHandles that are used to control the execution order, whereby step 2 waits on a WaitHandle that step 1 signals, and step 3 waits on a WaitHandle that step 2 signals. Simple, huh? Shall we see some example code? Here is some; I simply chose to use AutoResetEvents:

using System;
using System.Threading;

namespace OrderSystem
{
    /// <summary>
    /// This simple class demonstrates the usage of AutoResetEvents
    /// to create some synchronized threads that will carry out the
    /// following
    /// -CreateOrder
    /// -SaveOrder
    /// -PrintOrder
    ///
    /// Where it is assumed that these 3 tasks MUST be executed in this
    /// order, and are interdependent
    /// </summary>
    public class Program
    {
        public static Thread CreateOrderThread;
        public static Thread SaveOrderThread;
        public static Thread PrintOrderThread;

        //These AutoResetEvents start out non-signalled
        public static AutoResetEvent ar1 = new AutoResetEvent(false);
        public static AutoResetEvent ar2 = new AutoResetEvent(false);

        static void Main(string[] args)
        {
            CreateOrderThread = new Thread((ThreadStart)delegate
            {
                Console.WriteLine(
                    "CreateOrderThread is creating the Order");
                //calling sleep to simulate some work
                Thread.Sleep(5000);
                //alert waiting thread(s)
                ar1.Set();
            });

            SaveOrderThread = new Thread((ThreadStart)delegate
            {
                //wait for AutoResetEvent ar1, this will wait for ar1
                //to be signalled from some other thread
                ar1.WaitOne();
                Console.WriteLine(
                    "SaveOrderThread is saving the Order");
                //calling sleep to simulate some work
                Thread.Sleep(5000);
                //alert waiting thread(s)
                ar2.Set();
            });

            PrintOrderThread = new Thread((ThreadStart)delegate
            {
                //wait for AutoResetEvent ar2, this will wait for ar2
                //to be signalled from some other thread
                ar2.WaitOne();
                Console.WriteLine(
                    "PrintOrderThread is printing the Order");
                //calling sleep to simulate some work
                Thread.Sleep(5000);
            });

            CreateOrderThread.Name = "CreateOrderThread";
            SaveOrderThread.Name = "SaveOrderThread";
            PrintOrderThread.Name = "PrintOrderThread";
            CreateOrderThread.Start();
            SaveOrderThread.Start();
            PrintOrderThread.Start();
            Console.ReadLine();
        }
    }
}

Which produces the following screenshot:

Semaphores

A Semaphore inherits from System.Threading.WaitHandle; as such, it has the WaitOne() method. You are also able to use the static System.Threading.WaitHandle methods WaitAny(), WaitAll(), and SignalAndWait() for more complex tasks.

I read something that described a Semaphore being like a nightclub. It has a certain capacity, enforced by a bouncer. When full, no more people can enter the club, until one person leaves the club, at which point one more person may enter the club.

Let's see a simple example, where the Semaphore is constructed with an initial count of 2 (so it can satisfy two concurrent requests straight away) and an overall maximum capacity of 5.
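The listing from the attached demo is not shown here; a minimal sketch of the idea (the worker method, sleep time, and number of threads are made up for illustration) might look like this:

using System;
using System.Threading;

namespace SemaphoreTest
{
    class SimpleSemaphoreDemo
    {
        //2 slots available to start with, maximum capacity of 5
        static Semaphore sem = new Semaphore(2, 5);

        static void Main(string[] args)
        {
            //start a handful of threads that all compete for the Semaphore
            for (int i = 1; i <= 5; i++)
            {
                new Thread(Worker).Start(i);
            }
            Console.ReadLine();
        }

        static void Worker(object id)
        {
            Console.WriteLine("Thread {0} waiting to enter the Semaphore", id);
            sem.WaitOne();
            Console.WriteLine("Thread {0} is in the Semaphore", id);

            //simulate some work
            Thread.Sleep(2000);

            Console.WriteLine("Thread {0} is releasing the Semaphore", id);
            sem.Release();
        }
    }
}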

This does show how a Semaphore can be used to limit the number of concurrent threads, but it's not a very useful example. I will now outline some partially completed code, where we use a Semaphore to limit the number of threads trying to access a database that has a limited number of connections available; the database can only accept a maximum of three concurrent connections. As I say, this code is incomplete and does not work in its current state; it's for demonstration purposes only, and is not part of the attached demo app.

using System;
using System.Threading;
using System.Data;
using System.Data.SqlClient;

namespace SemaphoreTest
{
    /// <summary>
    /// This example shows partially completed skeleton
    /// code for consuming a limited resource, such as a
    /// DB connection, using a Semaphore
    ///
    /// NOTE : THIS CODE WILL NOT RUN, IT IS INCOMPLETE
    /// DEMO ONLY CODE
    /// </summary>
    class RestrictedDBConnectionStringAccessUsingSemaphores
    {
        //3 requests may be satisfied concurrently, maximum capacity = 3,
        //which matches the 3 concurrent connections the DB will allow
        static Semaphore sem = new Semaphore(3, 3);

        static void Main(string[] args)
        {
            //start 4 new threads that all require a Database connection,
            //but as DB connections are limited to 3, we use a Semaphore
            //to ensure that the number of active connections will never
            //exceed the total allowable DB connections
            new Thread(RunCustomersThread).Start("ReadCustomersFromDB");
            new Thread(RunOrdersThread).Start("ReadOrdersFromDB");
            new Thread(RunProductsThread).Start("ReadProductsFromDB");
            new Thread(RunSuppliersThread).Start("ReadSuppliersFromDB");
            Console.ReadLine();
        }

        static void RunCustomersThread(object threadID)
        {
            //wait for the Semaphore
            sem.WaitOne();
            //we are now inside the MAX DB connection limit,
            //so proceed to use the DB
            using (new SqlConnection("<SOME_DB_CONNECT_STRING>"))
            {
                //do our business with the database
            }
            //Done with the DB, so release the Semaphore, which will
            //allow another thread into the Semaphore
            sem.Release();
        }

        static void RunOrdersThread(object threadID)
        {
            //wait for the Semaphore
            sem.WaitOne();
            //we are now inside the MAX DB connection limit,
            //so proceed to use the DB
            using (new SqlConnection("<SOME_DB_CONNECT_STRING>"))
            {
                //do our business with the database
            }
            //Done with the DB, so release the Semaphore, which will
            //allow another thread into the Semaphore
            sem.Release();
        }

        static void RunProductsThread(object threadID)
        {
            //wait for the Semaphore
            sem.WaitOne();
            //we are now inside the MAX DB connection limit,
            //so proceed to use the DB
            using (new SqlConnection("<SOME_DB_CONNECT_STRING>"))
            {
                //do our business with the database
            }
            //Done with the DB, so release the Semaphore, which will
            //allow another thread into the Semaphore
            sem.Release();
        }

        static void RunSuppliersThread(object threadID)
        {
            //wait for the Semaphore
            sem.WaitOne();
            //we are now inside the MAX DB connection limit,
            //so proceed to use the DB
            using (new SqlConnection("<SOME_DB_CONNECT_STRING>"))
            {
                //do our business with the database
            }
            //Done with the DB, so release the Semaphore, which will
            //allow another thread into the Semaphore
            sem.Release();
        }
    }
}

From this small example, it should be clear that the Semaphore only allows a maximum of three concurrent threads (which was set in the Semaphore constructor), so we can be sure that the number of database connections will also be kept within the limit.

Mutex

A Mutex works in much the same way as the lock statement (which we will look at in the Critical sections section below), so I won't harp on about it too much. The main advantage a Mutex has over the lock statement and the Monitor class is that it can work across multiple processes, providing a computer-wide lock rather than an application-wide one.

Sasha Goldshtein, the technical reviewer of this article, also pointed out a Terminal Services caveat: a named Mutex is not automatically visible across Terminal Services sessions; it only spans sessions if its name carries the "Global\" prefix.

Application Single Instance

One of the most common uses of a Mutex is, unsurprisingly, to ensure that only one instance of an application is actually running.

Let's see some code. This code ensures a single instance of an application. Any new instance will wait 5 seconds (in case the currently running instance is in the process of closing) before assuming there is already a previous instance of the application running, and exiting.
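The original listing is not reproduced here; a minimal sketch of the idea (the mutex name shown is just an example) might look like this:

using System;
using System.Threading;

namespace SingleInstanceApp
{
    class Program
    {
        //a named Mutex is visible to other processes on the machine;
        //the name used here is purely illustrative
        static Mutex appMutex = new Mutex(false, "MyCompany.MyApp.SingleInstance");

        static void Main(string[] args)
        {
            //wait up to 5 seconds to acquire the Mutex, in case a previous
            //instance is in the middle of shutting down
            if (!appMutex.WaitOne(TimeSpan.FromSeconds(5), false))
            {
                Console.WriteLine("Another instance is already running, exiting");
                return;
            }

            try
            {
                Console.WriteLine("This is the only running instance");
                Console.ReadLine();
            }
            finally
            {
                //let the next instance in
                appMutex.ReleaseMutex();
            }
        }
    }
}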

When we try and run another copy, we get the following (after a 5 second delay):

Critical sections A.K.A. Locks

Locking is a mechanism for ensuring that only one thread can enter a particular section of code at a time; such a locked section of code is known as a critical section. There are various ways to lock a section of code, which will be covered below.

But before we go on to do that, let's just look at why we might need these "critical sections" of code. Consider the bit of code below:
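Here is a reconstructed sketch of the sort of code being described (the field names item1 and item2 are assumed from the explanation that follows):

using System;

class ThreadUnsafe
{
    //fields shared between two threads (names assumed for illustration)
    static int item1 = 10;
    static int item2 = 5;

    static void Go()
    {
        //another thread could set item2 to 0 after this test has passed,
        //but before the division runs, causing a DivideByZeroException
        if (item2 != 0)
        {
            Console.WriteLine(item1 / item2);
        }
        item2 = 0;
    }
}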

This code is not thread safe, as it could be accessed by two different simultaneous threads. A thread could set item2 to 0, just at the point when another thread was doing the division, leading to a DivideByZeroException.

Now, this problem may only show itself once every 500 times you run this code, but that is the nature of threading issues; they only exhibit themselves once in a while, so they are very hard to find. You really need to think about all the eventualities before you write the code, and ensure you have the correct safeguards in place.

Luckily, we can fix this problem using locks (A.K.A. critical sections). Shown below is a revised version of the code above:
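Again as a reconstructed sketch (using the same assumed field names, plus the syncLock object referred to below), the revised, locked version looks like this:

using System;

class ThreadSafe
{
    //private object used purely for locking
    static readonly object syncLock = new object();

    static int item1 = 10;
    static int item2 = 5;

    static void Go()
    {
        //only one thread at a time can be inside this critical section,
        //so item2 cannot be changed half way through the check/divide
        lock (syncLock)
        {
            if (item2 != 0)
            {
                Console.WriteLine(item1 / item2);
            }
            item2 = 0;
        }
    }
}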

In this example, we introduce the first of the possible techniques for creating critical sections: the lock keyword. Only one thread can lock the synchronizing object (syncLock, in this case) at a time; any contending threads are blocked until the lock is released. Contending threads are held in a "ready queue" and are given access on a first come, first served basis.

Some people use lock(this) or lock(typeof(MyClass)) as the synchronization object. This is a bad idea, because both of these are publicly visible objects, so in theory an external piece of code could lock on them as well and interfere with your threads, creating a multitude of interesting problems. So it's best to always use a private synchronization object.

I now want to briefly talk about the different ways in which you can lock (create a critical section).

Lock keyword

We have already seen the first example, where we used the lock keyword, which we can use to lock on a particular object. This is a fairly common approach.

It's worth pointing out that the lock keyword is really just a shortcut for working with the Monitor class, as is shown below.

Monitor class

The System.Threading namespace contains a class called Monitor, which can do exactly the same job as the lock keyword. If we take the same thread-safe snippet we used above with the lock keyword, you can see that Monitor does the same job.
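A sketch of that equivalent Monitor-based version (again reconstructed, not the exact demo code) follows:

using System;
using System.Threading;

class ThreadSafeWithMonitor
{
    static readonly object syncLock = new object();

    static int item1 = 10;
    static int item2 = 5;

    static void Go()
    {
        //this is roughly what the compiler produces for lock(syncLock)
        Monitor.Enter(syncLock);
        try
        {
            if (item2 != 0)
            {
                Console.WriteLine(item1 / item2);
            }
            item2 = 0;
        }
        finally
        {
            //the finally block guarantees the lock is released,
            //even if the protected code throws
            Monitor.Exit(syncLock);
        }
    }
}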

You can also use System.Runtime.CompilerServices.MethodImplAttribute to mark a whole method as synchronized (one big critical section), as sketched below.
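A small illustrative example (the class and method names are made up) would be:

using System;
using System.Runtime.CompilerServices;

class SynchronizedCounter
{
    private int count = 0;

    //the whole method becomes a critical section; for an instance method
    //this is equivalent to locking on "this" for the duration of the call
    [MethodImpl(MethodImplOptions.Synchronized)]
    public void Increment()
    {
        count = count + 1;
        Console.WriteLine("Count is now {0}", count);
    }
}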

One thing to note is that if you lock a whole method, you are rather missing the chance for better concurrency, as performance will not be much better than a single-threaded version of the method. For this reason, you should try to keep critical sections tight, around just the fields that need to be kept safe across multiple threads. So try to lock only where you read/write those common fields.

Of course, there are some occasions where you may need to mark the entire method as a critical section, but that is your call. Just be aware that locks should generally be as fine-grained as possible.

Miscellaneous objects

There are a couple of extra classes and keywords that I feel are worth a mention; these are discussed further below:

Interlocked

"A statement is Atomic if it executes as a single indivisible instruction. Strict atomicity precludes any possible preemption. In C#, a simple read or assignment on a field of 32 bits or less is atomic (assuming a 32-bit CPU). Operations on larger fields are non-atomic, as are statements that combine more than one read/write operation."

One way to fix this may be to wrap the non-atomic operations within a lock statement. There is, however, a .NET class that provides a simpler/faster approach. This is the Interlocked class. Using Interlocked is safer than using a lock statement, as Interlocked can never block.

MSDN states:

"The methods of this class help protect against errors that can occur when the scheduler switches contexts while a thread is updating a variable that can be accessed by other threads, or when two threads are executing concurrently on separate processors. The members of this class do not throw exceptions."

CompareExchange(location1,value,comparand): If comparand and the value in location1 are equal, then value is stored in location1. Otherwise, no operation is performed. The compare and exchange operations are performed as an atomic operation. The return value of CompareExchange is the original value in location1, whether or not the exchange takes place.

Exchange(location1,value): Sets a location to a specified value and returns the original value, as an atomic operation.

These two Interlocked methods can be used for implementing lock-free (wait-free) algorithms and data structures that would otherwise have to be implemented using full kernel locks (such as Monitor, Mutex etc.).
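As a quick illustration (not part of the attached demo; the field names are made up), here is a minimal sketch of Increment, Exchange and CompareExchange in action:

using System;
using System.Threading;

class InterlockedDemo
{
    static int counter = 0;
    static int state = 0;

    static void Main(string[] args)
    {
        //atomically add one to counter and get the new value back
        int newValue = Interlocked.Increment(ref counter);
        Console.WriteLine("Counter is now {0}", newValue);

        //atomically swap in 5 and get the old value back
        int oldValue = Interlocked.Exchange(ref state, 5);
        Console.WriteLine("State was {0}, is now {1}", oldValue, state);

        //only store 10 into state if state currently equals 5;
        //the return value is whatever state held before the call
        int original = Interlocked.CompareExchange(ref state, 10, 5);
        Console.WriteLine("State was {0}, is now {1}", original, state);
    }
}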

Volatile

The volatile keyword can be used on a shared field. The volatile keyword indicates that a field can be modified in the program by something such as the Operating System, the hardware, or a concurrently executing thread.

MSDN states:

"The system always reads the current value of a volatile object at the point it is requested, even if the previous instruction asked for a value from the same object. Also, the value of the object is written immediately on assignment.

The volatile modifier is usually used for a field that is accessed by multiple threads without using the lock statement to serialize access. Using the volatile modifier ensures that one thread retrieves the most up-to-date value written by another thread."
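A typical use is a stop flag shared between threads; a minimal sketch (the class and field names are made up for illustration) is shown below:

using System.Threading;

class Worker
{
    //volatile ensures every read of stopRequested sees the latest value
    //written by any thread, rather than a stale cached copy
    private volatile bool stopRequested = false;

    public void RequestStop()
    {
        stopRequested = true;
    }

    public void DoWork()
    {
        while (!stopRequested)
        {
            //keep working until some other thread asks us to stop
            Thread.Sleep(100);
        }
    }
}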

ReaderWriterLockSlim

It is often the case that instances of a Type are thread safe in terms of read operations, but not for updates. Although this problem could be remedied with the use of the lock statement, it could become quite restrictive if there are many reads but not many writes. The ReaderWriterLockSlim class is designed to help out in this area.

The ReaderWriterLockSlim class (this is new to .NET 3.5) provides two kinds of locks: a read lock and a write lock. A write lock is exclusive, whereas a read lock is compatible with other read locks.

Put plainly, a thread holding a write lock blocks all other threads trying to obtain a read or write lock. If no thread currently holds a write lock, any number of threads may obtain a read lock.

The main methods that the ReaderWriterLockSlim class provides to facilitate these locks are as follows:

EnterReadLock (*)

ExitReadLock

EnterWriteLock (*)

ExitWriteLock

Where * indicates that there is also a safer "try" version of the method available, which supports a timeout.
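As a minimal sketch of how these methods are typically paired up (not part of the attached demo; the dictionary is just a stand-in for some shared state), consider the following:

using System.Collections.Generic;
using System.Threading;

class SafeLookup
{
    private static readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    private static readonly Dictionary<int, string> data = new Dictionary<int, string>();

    public static string Read(int key)
    {
        //many readers may hold the read lock at the same time
        rwLock.EnterReadLock();
        try
        {
            string value;
            data.TryGetValue(key, out value);
            return value;
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }

    public static void Write(int key, string value)
    {
        //the write lock is exclusive: it blocks all readers and writers
        rwLock.EnterWriteLock();
        try
        {
            data[key] = value;
        }
        finally
        {
            rwLock.ExitWriteLock();
        }
    }
}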

It should be noted that there is also a ReaderWriterLock class within the System.Threading namespace. This is what MSDN has to say about the difference between ReaderWriterLockSlim (.NET 3.5) and ReaderWriterLock (.NET 2.0):

ReaderWriterLockSlim is similar to ReaderWriterLock, but it has simplified rules for recursion and for upgrading and downgrading lock state. ReaderWriterLockSlim avoids many cases of potential deadlock. In addition, the performance of ReaderWriterLockSlim is significantly better than ReaderWriterLock. ReaderWriterLockSlim is recommended for all new development.

According to Sasha Goldshtein (Senior consultant and instructor, Sela Group), Jeffrey Richter (author of the famous CLR via C# book) has revised versions of the ReaderWriterLock which are supposed to provide better performance. Sasha also informed me that Vista adds yet another reader/writer lock as part of the OS.

I certainly picked the right technical reviewer, thanks Sasha.....which leads me nicely on to the real thanks below.

Special thanks

I would like to personally thank Sasha Goldshtein (Senior consultant and instructor, Sela Group) for his help with technically reviewing this article.

Sasha Goldshtein is someone that I got chatting with as a result of the last two articles in this series. He always seemed to be leaving comments about things I had got wrong, or posting questions that I didn't know the answers to. So I approached Sasha and asked him if he would mind being the technical reviewer for the rest of this series, which he kindly agreed to. So thanks Sasha, from Sacha (me).

We're done

Well, that's all I wanted to say this time. I just wanted to say, I am fully aware that this article borrows a lot of material from various sources; I do, however, feel that it could alert a potential threading newbie to classes/objects that they simply did not know to look up. For that reason, I still maintain there should be something useful in this article. Well, that was the idea anyway. I just hope you agree; if so, tell me, and leave a vote.

Next time

Next time, we will be looking at Thread Pools.

Could I just ask, if you liked this article, could you please vote for it? I thank you very much.

Comments

In the lock test, why are you checking if (num1 != 0) before you divide? You'll get an exception if you try to divide by zero, so you should have checked (num2 != 0)... unless I'm missing something here.

(It won't happen in this example, since the next thread resets num2 to 10 from 0 upon entering... but still.)

I'm more of a tinkerer programmer than anything else and have been doing so for about 2 years; however, I'd still rate myself as a novice. Lately, I've been dabbling in using more events to drive my applications, as well as threading.

To me, a lot of the stuff you read on sites like MSDN is very technical and uses other very technical terms and processes to explain these already more advanced topics, which leaves novice programmers more confused than when they first visited.

Additionally, when you do find a site or book that does break down concepts (for me it was delegates), many dumb them down TOO MUCH, thus leaving novice or learning programmers with the inevitable question of, "why use it and not simply call another method?"

So far, I've gotten to the third part of this series and I've learned so much that I have already put to use. For example: I used to use lock() for all my thread variables, but you introduced me to the Interlocked class, which seems to perform better in many cases. It's articles like this, and places like Martin's blog that I found when searching for how to create custom events, that really help people who haven't quite grasped the advanced concepts and technical terminology needed to understand a lot of what sites like MSDN have as explanations of not-so-advanced concepts.

So, thank you for this very insightful and easy to follow article that also gives us PRACTICAL, YET SIMPLE, examples of threading. I look forward to completing parts 4 and 5, and will likely bless my browser link bar with its very own place of honour.

What happens if, for some reason, the 'try' section does not execute correctly? 'finally' might then write incorrect results. Or, depending on various things, apparently 'finally' is not guaranteed to execute (intuitive understanding notwithstanding). What happens then?

Granted, this is not directly related to the threading issues under discussion, but it left me wondering...

Amazing article about threading. I have also been following your WPF articles; again, amazing.

I'm facing what is perhaps a novice problem, and I am guessing that you are the man who could help me find the right direction or solution; you obviously know a lot about threading and WPF.

I'm currently writing a WPF application where a small part of it has functionality just like Firefox's download manager, but in combination with your example in this article of having threads wait on other threads to complete (the Order example). In our case, we are uploading a file via the WebDAV protocol; only if this operation completes successfully should I call a web service to create the file's representation in the underlying database. The overall status, the file upload progress, the web service call, and the final outcome of the complete operation should be communicated back to the end user in a WPF status window (something like the Firefox download manager, Ctrl+J). It should be possible to execute the operations just described for several files simultaneously, and that's where I'm in real doubt about how to pull this off.

For the file upload, we use a 3rd party component from a company called IndependentSoft, WebDav.Net.

This component offers async upload and reports progress to the caller via a delegate. The delegate should then update a WPF progress bar to indicate the status. But if I have multiple tasks / progress bars ongoing (different threads), how would I know which WPF progress bar (task) to update? The WebDAV component reports both the local and remote filename via the EventArgs, and I could of course write some ugly code to work out which progress bar to update, but is that really the way to do it? I'm sure there is some more elegant way.

My guess is that the solution is perhaps more about how to structure my code / patterns than anything isolated to WPF / threading, but I still think it is valid for this article / threading; call it a real "world example".

To summarise:

Multiple threads are uploading files.
The component reports progress back via events.
If multiple threads / tasks are ongoing, how do I relate a progress notification to its WPF progress bar / task?

Believe me, I have now rewritten this very message about 100 times trying to make myself clearer. It's not easy to explain in words, and no offence if you have no clue what I'm trying to explain.

As regards this component, the 3rd party WebDav.Net component from IndependentSoft that you bought, I could not possibly say how to use it correctly.

But generally, if I were trying to do what you are up against from scratch, I would have some UserControl that has its own ProgressBar and its own BackgroundWorker thread that reports its own progress.

That is how I would have approached that part, and as for waiting for certain operations to complete and so on, that's WaitHandle-derived classes like ManualResetEvent/Semaphore/AutoResetEvent all the way.

Sorry I can not be more precise; it's just that you are dealing with a 3rd party component, so that is something I just can't help you with. You should talk to IndependentSoft about that.

As I say, if I were to write all this from square one, I would use the System.Net APIs and the threading APIs to do it myself.

Sacha Barber


I usually use a BackgroundWorker object to perform asynchronous operations along with a callback method.
Now I'm wondering if it's possible to make an asynchronous call in a method without introducing any callback method.
If so, what is the best way to do it? Thanks.

I'm designing a class which includes a function to return a WaitHandle (as an alternative to continuously polling a 'Ready' function). In circumstances where Ready is always going to be true (e.g. if an output-stream-like object were pointed at /dev/null), is there any predefined WaitHandle I could return to callers that would simply let them proceed? Presumably a caller shouldn't try to wait on a WaitHandle if Ready is false, but if the caller is too lazy to check Ready before using the WaitHandle, it shouldn't get blocked.

I am developing a website in which I am using auto-refresh on a page. In the page load, I am using DataBind with a DataGrid. The refresh script comes just after DataBind(); I put a Sleep(10000) in between them so that the user can see the data.
But the page is blank for the full 10000 ms, and only then gets loaded, after which it gets refreshed, so the user can't see the data grid. I tried changing the time interval, but it still isn't working.
Please give me some solution for pausing before the refresh script executes.