Answers

First, you need to understand how AdvanceTimeSettings.IncreasingStartTime works. It won't enqueue a CTI until it gets a start time that is ahead of the last one. If, for example, you have the following start times:

01:00:00
01:00:00
01:00:00
01:00:01

You won't get a CTI until the 01:00:01 item is actually enqueued, so those first three events will see a full second of latency. Depending on the frequency of inbound events, this can increase latency pretty significantly. What is your inbound/outbound event rate, by the way? If you use a null sink, do you see the latency disappear?
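The behavior above can be sketched as a simplified model in Python (this is just an illustration of the rule, not the StreamInsight API - the real engine also supports a delay, which is assumed to be zero here):

```python
from datetime import datetime, timedelta

def enqueue_with_increasing_start_time(start_times):
    """Simplified model of AdvanceTimeSettings.IncreasingStartTime:
    a CTI is emitted only when an event's start time is strictly
    ahead of the highest start time seen so far."""
    stream = []
    last_start = None
    for start in start_times:
        if last_start is not None and start > last_start:
            stream.append(("CTI", start))  # time finally advances here
        stream.append(("INSERT", start))
        if last_start is None or start > last_start:
            last_start = start
    return stream

t0 = datetime(2020, 1, 1, 1, 0, 0)
starts = [t0, t0, t0, t0 + timedelta(seconds=1)]
for kind, ts in enqueue_with_increasing_start_time(starts):
    print(kind, ts.time())
```

With the start times from the example, the three 01:00:00 inserts produce no CTI at all; the first CTI only appears once the 01:00:01 event arrives, which is exactly where the full second of latency comes from.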
Second, since you are measuring latency as the time it takes for an event to reach the output adapter/sink, you need to take into account any latency from that as well. One thing that you said ... that the latency increases as the application runs ... leads me to think that your output queue is getting backed up. This can happen if your output adapter/sink takes too long to process individual messages. StreamInsight will queue the events up and feed them to your sink as fast as it can process them, but if your processing for each event takes 100 ms and you have 40 events/second (which is pretty slow), you'll take 4 seconds to process 1 second of events. That will get your output queue backed up pretty quickly. You can check this by looking at the "StreamInsight Server: # Events in output queues" performance counter. This tells you the total number of events that have been "released" to output by the engine but are waiting on the sink/output adapter to actually process them. One strategy to handle this is to batch your output events and write/send whenever you receive a CTI - events are released to the output adapter/sink when there is a CTI anyway; they don't "trickle" in but come in "spurts" immediately followed by a CTI - and then do whatever processing you need to do on a separate thread and/or asynchronously. It is very important to keep the actual dequeue/OnNext operation as small and fast as possible, especially if you have large numbers of events.
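A rough sketch of that batch-on-CTI pattern (again Python to show the shape of it, not the StreamInsight sink API; `send_batch` is a hypothetical stand-in for whatever slow write/send your sink does):

```python
import queue
import threading

class BatchingSink:
    """Keep the hot path (on_next) cheap: buffer inserts and hand the
    whole batch to a background thread when a CTI arrives."""

    def __init__(self, send_batch):
        self._buffer = []
        self._batches = queue.Queue()
        self._send_batch = send_batch  # your (possibly slow) output call
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def on_next(self, event):
        # Do as little work as possible here.
        if event["kind"] == "INSERT":
            self._buffer.append(event)
        elif event["kind"] == "CTI":
            # Events arrive in a "spurt" followed by a CTI, so the CTI
            # is a natural batch boundary.
            self._batches.put(self._buffer)
            self._buffer = []

    def _drain(self):
        while True:
            batch = self._batches.get()
            self._send_batch(batch)  # slow I/O happens off the hot path

# Tiny demo: two inserts, then a CTI, which flushes a batch of 2.
sent = []
flushed = threading.Event()
sink = BatchingSink(lambda batch: (sent.append(len(batch)), flushed.set()))
for e in [{"kind": "INSERT"}, {"kind": "INSERT"}, {"kind": "CTI"}]:
    sink.on_next(e)
flushed.wait(timeout=2)
print(sent)
```

The point of the design is that `on_next` only appends to a list or hands off a reference; the 100 ms-per-message work happens on the worker thread, so the engine's output queue never waits on it.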

DevBiker (aka J Sawyer)
Microsoft MVP - Sql Server (StreamInsight)

If I answered your question, please mark as answer.
If my post was helpful, please mark as helpful.