Yeah, I know tbb::task_scheduler_init does not work with tbb::pipeline the same way it works with tbb::flow::graph. And that the input is virtually infinite (not quite infinite: it actually ends after 2^32 iterations).

Though the question is about the access violation.

I've narrowed it down to the parallel mode on the input filter. Changing the mode to serial_in_order resolves the issue. The question remains, though: why does parallel mode give me an error? (The access violation arises from tbb::internal::input_buffer::sema_V().)

It's not a matter of pipeline vs. flow graph, I think. What do you "know"?

"inputThread" only processes 100 items before it exits, and "count" won't have overflowed before that time (try_process_item() wouldn't return item_not_available for an input filter).

If you don't understand why something is happening and are asking others for help, it's just good manners to remove any red herrings.

Other than that, I didn't see an obvious problem. With the difference between "parallel" and "serial_in_order", the possibility of an implementation problem looms large. Perhaps you should specify some relevant parameters (TBB version, environment).

(Clarification after #7) I didn't see an obvious incorrect use (because "parallel" is not currently forbidden and could theoretically be supported).

I provided the full experimental source code just to illustrate the use case. And yeah, I forgot about the input thread's finite cycle. It doesn't matter much, though, since the access violation appears before the second item reaches stage 1.

I'm getting the same behavior with TBB versions 4.2.u2 and 4.2.u3 (Visual C++ 2013, Win32 platform, in both Debug and Release configurations).

I think the documentation should explicitly mention whether "parallel" mode is supported for a thread_bound_filter, and whether it can be serviced by multiple threads (would that be useful?). Currently it only mentions "thread" in the singular, and I only see serial_in_order and serial_out_of_order being tested in test_pipeline_with_tbf.cpp in tbb42_20140122oss (4.2 update 3, the current stable release), but...

My guess is that "parallel" is not supported, that this should be stated, that the constructor should throw an exception for argument "parallel", and that the test should verify that it does.

Raf is correct: the fact that the pipeline stage is bound to a single thread means it is serial. He is also correct that we should have anticipated someone trying this and added an assertion. As it is, the structures allocated for the pipeline for a thread-bound filter assume that it is one of the two serial filter types, while some of the fields a serial filter needs are not allocated for a parallel filter.

I will add the assertion to the code. Thank you for catching this, Evgeniy, and for your suggestion, Raf.

Just curious (and it's too late/early here to analyse the source code): if a parallel stage follows a thread-bound stage, are items handed off to TBB threads, unlike the situation with a number of parallel stages following a normal serial stage?

Since a thread-bound stage supports only the serial modes, does it have to be the same thread for all items processed by the stage? Or could the items be processed by different threads? And if the latter is true, should the access be serialized?

Unless I'm highly mistaken, you can use multiple threads, but you currently have to serialise all accesses.

But if this means that you have a good use case for a really parallel thread_bound_filter, you should probably speak out right here and now. The evolution of TBB is partly user-driven, through stated user requirements or even contributed code. I would guess that such a stage/filter has to buy scalability by substituting a concurrent queue for the simpler queue used now that only supports a single consumer, but I don't immediately see why it wouldn't be doable. If there's a good use case.

The thought behind a thread-bound filter is that there is some resource (some kind of hardware) that requires its handling thread be "locked" to a particular spot, which the user has to do.

Raf, I seem to recall (it's been a couple years since I've worked in the code) that a TBF requires an "impedance-matching" buffer before any follow-on stage. Otherwise the buffers are not needed and not allocated for parallel stages. (Declaring a TBF parallel resulted in no buffer allocated for it, but the inference is the filter is serial and requires a buffer. We should catch this at an early point.) The follow-on stage is handled with tasks, and moderated by the token count.

If the resource is not bound to a particular piece of hardware you don't need a TBF; just use a "normal" filter. Of course if you find you really need a TBF which is parallel, that would be a motivation to examine TBF again. :)

Thanks. I've run some tests, and it looks like a thread_bound_filter can safely be serviced by different threads simultaneously, and does not even require access serialization (at least for its try_process_item() method). I'm not sure TBB actually guarantees this behavior, though.

As for our use case: we have a five-stage video analytics pipeline; each stage processes video frames in `serial_in_order` fashion. Since video frames are captured from a video input device or a network RTSP source at some predefined rate, we cannot use a normal filter (to my understanding it would impede concurrency, as the worker thread would spend most of its time waiting for a new frame). That's where, I thought, a thread-bound filter could come in handy: I'm using it to feed the pipeline with captured frames. Currently our use case does not require a parallel input filter, as we use separate pipelines for different video channels, the processing stages are serial, and the input filter does not perform significant work.

We do not make any guarantees that thread_bound_filters are thread-safe. Most of the accesses to the internals are controlled by a mutex, but we sometimes make assumptions when we know only one thread can execute a piece of code.

Let us know if you have any other problems, and thank you for the description of your application.