But then comes the time when you actually need to use these logs, maybe to debug an issue, and that is when you realize the mess: here is an error log, but wait, what happened before it? Who was the user, what was the action, how long did it take? Someone please put me in context!

Well, there are existing solutions for that, sort of: you go back to the code and start adding correlation IDs, then apply some logging query magic with grouping and filtering on top of what might be an enormous volume of logs. Not so easy after all, especially after the fact, when you have a huge amount of logs and most of them are just noise.

What you really want is to remove all the noise but still have the full logs when you need them (in an error case, for example).

In this blog post I want to show a new approach to logging using a new framework called Story.

The Story framework (currently .NET only) collects information (logs and any bits of data) about the currently running code and puts it inside a context; when that context ends, a set of rules decides what to do with the collected information (the story).

Too abstract? Let's use an example:

Suppose we have a web application for posting questions. With this approach, every request runs under a new story context. Every log produced is added to that story, along with any relevant data: the user posting the question, the URL of the request, the response, and even the query we ran to insert the question into the database.

Then when the request ends and the story stops, a set of rules will decide what to do with it:

If an error occurred, persist the full story somewhere durable. If nothing special happened, send it to trace (or simply discard it).

This gives you full control over your logs/telemetry/analytics at coding time. The story you get when there was an error will have all the information you need, and, more importantly, the stories you don't need (usually most of them) won't distract you.

How cool is that?

These rules can also be updated on the fly, which opens up awesome scenarios. For example, if a user reports an issue, just add a rule that sends all of that user's stories to a new storage container and observe that container: no filtering needed, and you immediately see everything that user does.

Using Story

To use Story, you create/start a new story and run your code within the context of that story.

The created story is added to the ambient context (CallContext or HttpContext, depending on where you run), so anything running within that context can simply call Storytelling and add logs/data to the story.

We name the story ("MyAction" in this case) to help us later when we run rules on the stories.

You can also create nested stories, which are added as children of the current story in context (and become the current story in the context), giving us an execution graph when we later observe the story.
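To make this concrete, here is a minimal sketch of starting a named story and enriching it with logs and data. The specific member names used here (StartNew, Info, the Data dictionary, PostQuestion) are assumptions for illustration; check the Story framework's API for the exact signatures:

```csharp
// Hypothetical sketch; method names are assumptions, not the
// framework's confirmed API.
Storytelling.Factory.StartNew("MyAction", () =>
{
    // Anything running in this scope sees the story as its context.
    Storytelling.Info("Posting a new question");

    // Arbitrary data can be attached to the story as key/value pairs.
    Storytelling.Data["user"] = "someuser";
    Storytelling.Data["url"] = "/questions/new";

    PostQuestion(); // your actual application code
});
```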

The second part is setting the rules, which tell each story what to do when it ends (and begins). For example:

// Create the ruleset
var ruleset = new Ruleset&lt;IStory, IStoryHandler&gt;();

// Add a predicate rule: for any story, run the console handler,
// which prints the story to the console.
var consoleHandler = new ConsoleHandler("PrintToConsole", StoryFormatters.GetBasicStoryFormatter(LogSeverity.Debug));
ruleset.Rules.Add(
    new PredicateRule(
        story => true,          // matches every story
        story => consoleHandler));

// Set a basic factory that uses the ruleset as the default story factory
Storytelling.Factory = new BasicStoryFactory(ruleset);

You should initialize the factory once, at application startup.
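A rule doesn't have to match every story. Reusing the PredicateRule and handler types from the snippet above, a sketch of a rule that only fires for stories with a particular name might look like this (the Name property on the story is an assumption about the IStory interface):

```csharp
// Sketch: a rule that only fires for stories named "MyAction".
// Same PredicateRule/handler pattern as above; the Name property
// is an assumed member of IStory.
ruleset.Rules.Add(
    new PredicateRule(
        story => story.Name == "MyAction",   // predicate: which stories match
        story => consoleHandler));           // handler to run for matches
```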

Adding Story to Web Application

For a more realistic sample, I'll show how to add stories to an ASP.NET OWIN web application.

Finally, we need to initialize the story factory. We'll use FileBasedStoryFactory, which lets us update the ruleset on the fly. The ruleset itself is a class written in a file; to change it, we change the code in that file and copy it over to the server.
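Initializing the file-based factory might look like the following. The constructor argument (a path to the ruleset source file) is an assumption about FileBasedStoryFactory's signature, shown only to illustrate the idea:

```csharp
// Sketch: swap the basic factory for the file-based one so the
// ruleset can be updated on the fly. The constructor parameter
// (path to the ruleset code file) is an assumed signature.
Storytelling.Factory = new FileBasedStoryFactory(@"App_Data\Ruleset.cs");
```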

The ruleset code file is included in the project both as a source file (Compile) and as a content file (marked as "Copy Always"), so it gets the benefit of IntelliSense while still being used as a file external to the compiled project.

You'll notice we've added both a story handler that sends stories to the Trace and one that sends stories to Azure table storage.
To make the latter work, you'll need to add a connection string for your Azure storage account with the name "StoryTableStorage" (in web.config/app.config, or in the Azure portal settings if running on Azure).

If you use the Azure table storage story handler to persist your stories, and you deploy your web application with Azure Web Apps, you can install the Azure Websites Log Browser (blog post here), which now supports viewing your stories in a readable way.

The sample also has a Deploy to Azure button if you want to try it out in the cloud along with the log browser site extension; just make sure to set the Azure storage connection string when asked. To access the log browser, go to: https://{sitename}.scm.azurewebsites.net/websitelogs/.

Scheduled WebJobs have existed since the beginning of Azure WebJobs. They are the result of combining two different Azure resources: a triggered WebJob containing the script to run, and an Azure Scheduler job containing the schedule to use.

The Azure Scheduler job points at the triggered WebJob's invoke URL and makes a request to it on schedule.

There have been some difficulties with this approach, mainly around deploying the schedule, which convinced us to build another scheduler implementation into Kudu (the WebJobs host). It enables a scheduled WebJob to be deployed simply by adding a file alongside the WebJob binaries.

How to add a schedule to a triggered WebJob

To add a schedule to a triggered WebJob, add a file called settings.job with the following JSON content: {"schedule": "the schedule as a cron expression"}. This file should be at the root of the WebJob directory (next to the WebJob executable).

If you already have this file with other settings simply add the schedule property.
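For example, a settings.job that runs the WebJob every 15 minutes (the specific cron expression is just an illustration) would look like this:

```json
{
  "schedule": "0 */15 * * * *"
}
```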

The schedule is configured using a cron expression, which is a common way to write schedules.

CRON Expressions

There are many pages that can teach you how to write a cron expression; here I'll describe the main format used for scheduled WebJobs.
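The expression has six fields, one extra compared to the classic 5-field cron format, because it includes seconds:

```text
{second} {minute} {hour} {day} {month} {day of the week}
```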

Samples
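A few sample expressions in this 6-field format:

```text
0 */15 * * * *    # every 15 minutes
0 0 * * * *       # once an hour, at the start of the hour
0 30 9 * * 1-5    # at 9:30 AM, Monday through Friday
0 0 0 1 * *       # at midnight on the first day of every month
```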

You can find some more cron expression samples here, but note that they have only 5 fields; to use them, add a 0 as the first field (for the seconds).

Important - to use this way of scheduling WebJobs, you'll have to configure the website as Always On (just as you would for continuous WebJobs); otherwise the scm website will go down, and scheduling will stop until it is brought up again.

Debugging a schedule

To see the scheduler logs for a scheduled WebJob, use the get triggered WebJob API: go to https://{sitename}.scm.azurewebsites.net/api/triggeredwebjobs/{jobname} (remove the job name to see all triggered WebJobs).

Adding a schedule for an on demand WebJob in Visual Studio

If you have a Visual Studio Azure WebJob project, you add a schedule by authoring the settings.job file described above and adding it to the project.
In Solution Explorer, change the properties of that settings.job file and set Copy to output directory to Copy always.

This will make sure the file is in the root directory of the WebJob.

Changing (or setting/removing) the schedule of a triggered WebJob is just a matter of updating the schedule property of the settings.job file in the WebJob's directory (d:\home\site\wwwroot\App_Data\jobs\triggered\{jobname}). Whenever the file is updated, the change is picked up and the schedule changes accordingly.

This means you can deploy the schedule any way you wish, including by updating the file in your git repository.

Differences between the two scheduled WebJobs

There are pros and cons to each way of scheduling a WebJob, review them and choose which way to go.

Azure Scheduler

Pros

Doesn't require the website to be configured as Always On.

Supported by Visual Studio tooling and the current Azure portal.

Cons

Doesn't support continuous integration: to schedule or reschedule a job, you'll need access to your Azure account.

Loosely tied to the triggered WebJob: you cannot always tell that a WebJob has an Azure Scheduler job behind it.

Internal WebJob Scheduler

Pros

Supports continuous integration and any deployment mechanism available for Azure Web Apps as it is file based.

Supports the common cron expressions.

You can tell a WebJob is scheduled with a simple API call.

Cons

Requires Always On.

Not yet supported by tooling and portal (hopefully that will change).

Summary

To summarize, we've introduced a new way to schedule WebJobs that is continuous-deployment friendly. In some cases it won't be the right one to choose, but if the cons don't bother you, it is a simpler way for you to schedule triggered WebJobs.