This can be fine if only a few of these loops are run, but why is it so much slower when it runs on, let’s say, a million rows?

The same problem occurs with a database: if the SQL engine running the query doesn’t know anything about how to find the rows that match your WHERE clause, it will have to scan all rows.

In the case of our loop above, LINQ does the same: how would LINQ know which items match your lambda function until it has tried it on every item in your list?

To solve that issue we are going to use a lesser-known LINQ type: Lookup.

The goal is simple: we are going to use it to build an index out of our data, grouping it by a given key. Running this only once on our dataset fixes our problem: getting back the data subset for each loop iteration becomes near-instant with our Lookup.
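Here is a minimal sketch of the idea (Order, CustomerId and the collections are placeholder names, not taken from the original sample):

```csharp
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public int CustomerId { get; set; }
    public decimal Amount { get; set; }
}

public static class LookupDemo
{
    public static void Run(List<Order> orders, List<int> customerIds)
    {
        // Slow: Where() rescans the whole list on every iteration.
        foreach (var customerId in customerIds)
        {
            var slowSubset = orders.Where(o => o.CustomerId == customerId).ToList();
        }

        // Fast: build the index once with ToLookup (a single pass over the data),
        // then each key access returns its pre-built group without rescanning.
        var ordersByCustomer = orders.ToLookup(o => o.CustomerId);
        foreach (var customerId in customerIds)
        {
            var fastSubset = ordersByCustomer[customerId];
        }
    }
}
```

Note that ToLookup materializes the groups immediately, unlike GroupBy which is lazily evaluated; that is what makes it suitable for building the index once, up front.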

Here is the performance difference you can get, from our test app (output from our console app):

Because Cosmos DB now has a Table API that behaves exactly the same as a Storage Table, you just have to:

open the “Local and Attached” top root navigation node in Storage Explorer,

right-click “Storage Accounts” and select “Connect to Azure Storage”,

select “Use a connection string or a shared access signature URI” and follow the rest of the process to add your Cosmos DB table and use it as a Storage table (the connection string shape is sketched below)!
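For reference, a Cosmos DB Table API connection string typically looks like this (placeholders shown; the exact endpoint suffix can vary depending on your account):

```
DefaultEndpointsProtocol=https;AccountName=<your-account>;AccountKey=<your-key>;TableEndpoint=https://<your-account>.table.cosmosdb.azure.com:443/;
```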

This is a workaround to play with your Cosmos DB data in a simple way, without having to wait.

Still, Cosmos DB does not work the same way as traditional Table Storage, especially on import/export of large volumes of data: where Table Storage throttles query performance, Cosmos DB just cuts the connection straight away.

You can go here to read more details about WebJobs and how they work in Kudu.

VSTS configuration to build a WebJob

As part of our CI/CD strategy, we would like to deploy those jobs from VSTS.

VSTS does not yet have a predefined task to do so, but if we look again at where the files should be placed, we can actually push our artifacts to the proper location to make things work the way we need.

First let’s look at how we need to build our console app.

What we are going to do here is add a “Copy” task to copy our build output to the proper folder, using the same App Service folder structure:

What you can see here is that we are copying the output of the console app build from “$(Build.SourcesDirectory)/MyApp/MyAppConsoleApp/bin/Prod” to “$(build.artifactstagingdirectory)\App_Data\jobs\continuous\MyAppJob”.
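If you are defining the build in YAML rather than in the classic editor, the equivalent step would look something like this (a sketch using the standard “Copy Files” task; the paths are the ones from above):

```yaml
steps:
- task: CopyFiles@2
  displayName: 'Copy WebJob output to App_Data'
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)/MyApp/MyAppConsoleApp/bin/Prod'
    Contents: '**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)/App_Data/jobs/continuous/MyAppJob'
```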

Looks similar to what we saw earlier, right? 🙂

By doing so we are creating our build artifact with the required structure already in place, so we can then deploy our artifacts straight to the App Service root folder, as the WebJob folder will already be there.

This means that, when looking at the App Service deployment part of the release:

As you can see, nothing special is to be found here 🙂

TL;DR:

If you want to deploy your WebJob in your App Service:

In your build, create a “Copy” task which outputs the copy to your artifact directory: “$(build.artifactstagingdirectory)\App_Data\jobs\continuous\MyAppJob”,

I came across Hangfire a while ago and tried it out around a year ago, but did not have the time or the need to properly explore its job capabilities.

If you are reading this, it’s because you’re either looking to understand what Hangfire is, or how it can address some of your needs.

Hangfire is about running portions of your code as jobs, away from your main (server/web) process. It also adds the capability to run this code on a recurring basis (very convenient for putting in place simple update/cleaning/reminder/mailing jobs).
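A minimal sketch of both job kinds (the storage connection name and job bodies are placeholders; a Hangfire server must be running to process them):

```csharp
using System;
using Hangfire;

public static class JobsSetup
{
    public static void Configure()
    {
        // Placeholder storage: Hangfire persists job data in SQL Server here.
        GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireConnection");

        // Fire-and-forget: runs once, on a Hangfire server, away from the caller.
        BackgroundJob.Enqueue(() => Console.WriteLine("Fire-and-forget job"));

        // Recurring: runs on a CRON schedule, e.g. a daily cleanup.
        RecurringJob.AddOrUpdate(
            "daily-cleanup",
            () => Console.WriteLine("Recurring cleanup job"),
            Cron.Daily());
    }
}
```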

The most important thing you have to get, when willing to run a Hangfire job, is that your code has to be capable of giving itself a proper context:

no HttpContext.Current or similar objects: only what you give to your object at method call time matters (this is what gets serialized as JSON in the Hangfire back-end).

no complex object graph: if the class/service you are willing to instantiate has many dependencies (other object initializations or similar), please make sure everything is in proper order from the call you initiate with Hangfire, OR let your object initialize itself properly.

Bottom line: be context-friendly! If you have keys or IDs that identify the data you want to manipulate, pass those values on for serialization (as sketched below): they are simple to serialize and easier to maintain.
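For example (IMailService, SendWelcomeEmail and User are hypothetical names used for illustration):

```csharp
using Hangfire;

public class User
{
    public int Id { get; set; }
}

public interface IMailService
{
    void SendWelcomeEmail(int userId);
}

public static class ContextFriendlyExample
{
    public static void Schedule(User user)
    {
        // Avoid: enqueuing a call that captures a whole entity serializes
        // the full object graph into job storage, and it may be stale or
        // fail to deserialize by the time the job runs.

        // Prefer: pass only the identifier; the job reloads what it needs.
        BackgroundJob.Enqueue<IMailService>(m => m.SendWelcomeEmail(user.Id));
    }
}
```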

When digging into implementing Hangfire, you’ll see for yourself, going over the documentation, that almost everything you need has been thought through.

When willing to use an IoC container, make sure you use the proper Enqueue prototype; if you don’t, Hangfire will simply store the actual type (not the interface) that was used at job enqueuing time, which might work at first, but won’t switch to your new type if you later change the interface implementation registered in your IoC container:
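Here is a sketch of the difference (IMyService, MyService and DoWork are placeholder names; resolving the interface at execution time assumes a JobActivator wired to your container):

```csharp
using Hangfire;

public interface IMyService
{
    void DoWork(int id);
}

public class MyService : IMyService
{
    public void DoWork(int id) { /* ... */ }
}

public static class EnqueueExample
{
    public static void Run()
    {
        // Stores the concrete type (MyService) in the job payload:
        // re-registering IMyService to another implementation later
        // will NOT affect jobs enqueued this way.
        var service = new MyService();
        BackgroundJob.Enqueue(() => service.DoWork(42));

        // Stores the interface (IMyService): Hangfire resolves the
        // implementation through your container's JobActivator when
        // the job actually executes.
        BackgroundJob.Enqueue<IMyService>(s => s.DoWork(42));
    }
}
```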