Filip's Technical Blog

Tuesday, 6 February 2018

Lately we compared the performance of the Faster R-CNN algorithm on the different GPU types available with Azure N-series virtual machines. We wanted to check which GPU is best suited for object recognition training.

Wednesday, 17 January 2018

Lately I've been working on some advanced object recognition scenarios and was looking for the right tools to do the job. Obviously I thought about using some kind of deep learning solution, but I was afraid of its complexity. I'm not a data scientist and have very limited knowledge of neural networks. I needed a tool that I could use as a black box: feed some data into it and then consume the result, without even understanding what's going on under the hood.

I was advised by domain experts to check out the Microsoft Cognitive Toolkit (CNTK). The big advantage of CNTK is its rich documentation and the end-to-end examples available. In my scenario (object recognition) I found the following resources particularly useful:

This tutorial was my entry point. There you will find exact, step-by-step instructions on how to build an object recognition solution, together with sample data sets. It also provides some scientific background for those who want to learn how it all works.

The first two tutorials use an algorithm called Fast R-CNN. It is good, but CNTK also ships an improved version called Faster R-CNN, and this tutorial provides scripts that utilize it. Because it is based on the same sample dataset, you can easily compare results from both tutorials. In my experience, Faster R-CNN is not actually quicker to train, but it provides better recognition rates.

After going through these three resources you should be able to build an object recognition solution on your own, without any data-science background.

Wednesday, 19 October 2016

Usually you can easily delete an application from your Service Fabric cluster using the cluster explorer. However, there are times when this doesn't work - this may happen when one of the services breaks and can't heal itself.
In such a situation you can use the following PowerShell script to force the delete operation.
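A minimal sketch of such a script, assuming a cluster reachable at localhost:19000 and an application named fabric:/MyApp (both placeholders):

Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000

# -ForceRemove skips the graceful shutdown of the broken services;
# -Force suppresses the confirmation prompt
Remove-ServiceFabricApplication -ApplicationName "fabric:/MyApp" -ForceRemove -Force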

Monday, 28 September 2015

In this blog post I will show you how to version assemblies during a Visual Studio Online (VSO) vNext build.

Why version assemblies?

Versioning assemblies helps you detect which library version has been deployed to your environment. You can obviously do it manually for each of your projects, but that is a mundane and error-prone task. Instead, I'll show you how to easily automate the process.

As a result, each project library will be automatically versioned. The version number format will follow the recommended standard:

[Major version[.Minor version[.Build Number[.Revision]]]]

How to version assemblies?

By default the assembly version is defined in the AssemblyInfo.cs file, which can be found in the project's Properties folder.

[assembly: AssemblyVersion("1.0.0.0")]

If you have many projects it's much easier to keep the common assembly information in a single file shared by all of them. This way you only need to update a property once and it will be picked up by all projects.

To do that, create a separate file called CommonAssemblyInfo.cs and place it in the solution's root folder. Move the AssemblyVersion definition to that file. Then, link the file from all projects:

Right-click the project

Select Add > Existing Item…

Select the created CommonAssemblyInfo.cs

Click the small arrow next to the Add button

Select Add as Link

Drag the linked file to the Properties folder.

After linking, the file appears under each project's Properties folder with a shortcut overlay on its icon.

Obviously you can move more common properties to the CommonAssemblyInfo.cs file, e.g. AssemblyCompany, AssemblyCopyright, etc.
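A minimal CommonAssemblyInfo.cs might then look like this (the version values are placeholders that the automated build described below will overwrite):

using System.Reflection;

// Shared by all projects via "Add as Link"
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
[assembly: AssemblyCompany("My Company")]
[assembly: AssemblyCopyright("Copyright © My Company")]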

Automate versioning

Now that we store the assembly version in a single file, we can easily automate the versioning process. The idea is that before executing the actual build we run another MSBuild script that updates the version in the common file. The script uses the AssemblyInfo task from the MSBuild Community Tasks.
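A sketch of such a script, assuming the MSBuild Community Tasks files are checked in next to it and that BuildId and SourceVersion are passed in as MSBuild properties by the build step (all names and paths are illustrative):

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="UpdateAssemblyVersion">

  <!-- MSBuild Community Tasks checked in next to this script -->
  <Import Project="$(MSBuildThisFileDirectory)MSBuild.Community.Tasks.Targets" />

  <PropertyGroup>
    <!-- TFS changesets arrive as "C1234"; strip the "C" prefix -->
    <Revision>$(SourceVersion.TrimStart('C'))</Revision>
  </PropertyGroup>

  <Target Name="UpdateAssemblyVersion">
    <!-- Regenerates CommonAssemblyInfo.cs; the task rewrites the whole file,
         so any shared attributes (AssemblyCompany etc.) must be listed here too -->
    <AssemblyInfo CodeLanguage="CS"
                  OutputFile="$(MSBuildThisFileDirectory)CommonAssemblyInfo.cs"
                  AssemblyVersion="1.0.$(BuildId).$(Revision)"
                  AssemblyFileVersion="1.0.$(BuildId).$(Revision)" />
  </Target>
</Project>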

The script must be added to your source control together with the community tasks files, so you can reference it in your VSO build definition.

VSO Build definition

Now that we have all the components, it's time to define the VSO build. The first step is an MSBuild step that executes our custom versioning script:
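With the (illustrative) property names from the script above, the build variables can be passed to the script in the MSBuild step's Arguments field:

/p:BuildId=$(Build.BuildId) /p:SourceVersion=$(Build.SourceVersion)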

As you can see, we use two predefined build variables here:

$(Build.BuildId) – the ID of the build

$(Build.SourceVersion) – the latest version control changeset included in this build. If you are using TFS it will have the format "C1234", so you need to remove the "C" prefix (see the build script above)

Then, we can use the regular MSBuild step to build the solution. All assemblies that have the CommonAssemblyInfo.cs file linked should have the correct version number set.
Now you can add more steps to the build definition: running unit tests, publishing artifacts, etc.

Alternative approach

You can also achieve the same functionality using PowerShell instead of MSBuild. There is a good example here. Which one you choose is a matter of personal preference – I prefer my solution, as it requires less code.

Tuesday, 4 November 2014

Adding a footer to a SharePoint master page may be a bit tricky, since SharePoint automatically recalculates the height and scrolling properties of some default div containers on page load. Today I will show how to add a so-called "sticky footer" to a SharePoint master page using JavaScript. The sticky footer is always displayed at the bottom of the page, even if there is little content. We will base the solution on the SharePoint 2013 "Seattle" master page.

Master page structure changes

First we need to add a footer container (div) to our master page that will hold the footer content. We add it at the end of the default "s4-workspace" div, right after the "s4-bodyContainer" div:
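For example (the id and class names are illustrative):

<div id="custom-footer">
    <div class="footer-content">My footer content</div>
</div>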

_spBodyOnLoadFunctionNames.push() vs. $(document).ready()

Notice that instead of using the regular jQuery ready() event, we are using SharePoint's custom mechanism for calling a function after the page loads. This ensures that all of SharePoint's resizing code has already executed by the time our function runs.
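A sketch of the wiring, assuming the footer div added above; the sizing logic is illustrative and may need adjusting to your layout:

<script type="text/javascript">
function setFooter() {
    // At this point SharePoint has already resized the s4-workspace container
    var workspace = document.getElementById('s4-workspace');
    var body = document.getElementById('s4-bodyContainer');
    var footer = document.getElementById('custom-footer');
    // If the content is shorter than the workspace, stretch the body container
    // so the footer lands at the bottom of the page
    var minHeight = workspace.clientHeight - footer.offsetHeight;
    if (body.offsetHeight < minHeight) {
        body.style.height = minHeight + 'px';
    }
}
_spBodyOnLoadFunctionNames.push('setFooter');
</script>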

Tuesday, 22 April 2014

There are several tutorials on the Web explaining how to integrate SSIS with Dynamics CRM using the script component. All of them, however, show only the basic setup, where records from a data source are processed one by one when executing CRM commands (e.g. creating CRM records). In this post I would like to show you how to leverage the ExecuteMultipleRequest class from the CRM SDK to perform bulk operations on records from the SSIS data source.

Tutorial scenario

First we will create a simple database with one table that stores user names

Then we will create an SSIS project

Next, we will add our DB table as a data source, so SSIS can read information about users

Then, we will add a script component that creates a contact in CRM for each user from the table

Once the project is created, add a Data Flow task to your main package.

Data Source

Double-click your Data Flow task to open it

Double click "Source Assitance" from the toolbox

On the first screen of the wizard select "SQL Server" as the source type and select "New..."

On the second screen provide your SQL Server name and authentication details and select your database

A new block representing your DB table will be added to your Data Flow. It has an error icon on it, because we haven't selected the table yet. You will also see a new connection manager representing your DB connection.

Double-click the new block, select the Contacts table we created from the dropdown, and hit OK. The error icon should disappear

Script component

Drag and drop the Script Component from the toolbox onto your Data Flow area

Create a connection (arrow) from your data source to your script component

Double-click your script component to open it

Go to "Input Columns" tab and select all columns

Go to "Inputs and Outputs" tab and rename "Input 0" to "ContactInput"

1-by-1 import

Now that we have the basic components set up, let's write some code! In this step we will create basic code for importing contacts into CRM. I'm assuming you have basic knowledge of the CRM SDK, so the CRM-specific code will not be explained in detail.

Open the script component created in the previous steps and click "Edit Script...". A new instance of Visual Studio will open with a new, auto-generated script project. By default the main.cs file will be opened - this is the only file you need to modify. However, before modifying the code you need to add references to the following libraries:

Microsoft.Crm.Sdk.Proxy

Microsoft.Xrm.Client

Microsoft.Xrm.Sdk

System.Runtime.Serialization

Now we are ready to write the code. Let's start by creating a connection to your CRM organization in the existing PreExecute() method. The actual import happens in the ContactInput_ProcessInputRow() method, which is called once per source row.
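A sketch of both methods, assuming the Microsoft.Xrm.Client helpers and illustrative source column names (FirstName, LastName); the connection string values are placeholders:

using Microsoft.Xrm.Client;
using Microsoft.Xrm.Client.Services;
using Microsoft.Xrm.Sdk;

private OrganizationService _service;

public override void PreExecute()
{
    base.PreExecute();
    // Placeholder URL and credentials - replace with your organization's values
    var connection = CrmConnection.Parse(
        "Url=https://yourorg.crm.dynamics.com; Username=user@yourorg.com; Password=pass;");
    _service = new OrganizationService(connection);
}

public override void ContactInput_ProcessInputRow(ContactInputBuffer Row)
{
    // One web service call per source row
    var contact = new Entity("contact");
    contact["firstname"] = Row.FirstName;
    contact["lastname"] = Row.LastName;
    _service.Create(contact);
}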

Obviously the code above requires some null-checks, error handling, etc., but in general that's all you need to do in order to import your contacts into CRM. When you close the VS instance with the script project, it is automatically saved and built.

You can now hit F5 in the original VS window to perform the actual migration.

Bulk import

In the basic setup described above there is one CRM call for each record passed to the script component. Calling web services over the network can be a very time-consuming operation. The CRM team is aware of that, which is why they introduced the ExecuteMultipleRequest class. It allows you to build a set of CRM requests on the client side and send them all at once in a single web service call. In response you receive an instance of the ExecuteMultipleResponse class, which lets you process the response to each individual request.

Let's modify the script code to leverage the power of the ExecuteMultipleRequest class. To do that, override the ContactInput_ProcessInput method. The default implementation can be found in the ComponentWrapper.cs file and is as simple as this:
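It boils down to a loop that delegates to the per-row method (reconstructed here from the auto-generated wrapper pattern):

public override void ContactInput_ProcessInput(ContactInputBuffer Buffer)
{
    while (Buffer.NextRow())
    {
        ContactInput_ProcessInputRow(Buffer);
    }
}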

As you can see, by default it calls the ContactInput_ProcessInputRow method that we implemented in the previous step for each record from the source. We need to modify it so that it creates batches of CRM requests and sends each batch to CRM at once:
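A sketch of the batched version, reusing the _service field and illustrative column names from earlier. The batch size is arbitrary, but note that CRM caps ExecuteMultiple at 1000 requests per call:

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

public override void ContactInput_ProcessInput(ContactInputBuffer Buffer)
{
    const int batchSize = 100; // arbitrary, must stay <= 1000

    var batch = CreateBatch();
    while (Buffer.NextRow())
    {
        var contact = new Entity("contact");
        contact["firstname"] = Buffer.FirstName;
        contact["lastname"] = Buffer.LastName;
        batch.Requests.Add(new CreateRequest { Target = contact });

        if (batch.Requests.Count == batchSize)
        {
            _service.Execute(batch); // one web service call for the whole batch
            batch = CreateBatch();
        }
    }

    if (batch.Requests.Count > 0)
    {
        _service.Execute(batch); // send the remaining requests
    }
}

private static ExecuteMultipleRequest CreateBatch()
{
    return new ExecuteMultipleRequest
    {
        Requests = new OrganizationRequestCollection(),
        Settings = new ExecuteMultipleSettings
        {
            ContinueOnError = false,
            ReturnResponses = true
        }
    };
}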

Execution time comparison

As you can see, the code for sending requests in batches is a bit longer (but still quite simple, I believe), so you may be tempted to go with the simpler version. If you don't care too much about performance (little data, no time constraints) then that might be the way to go. However, it's always better to know your options and make a conscious decision. SSIS packages usually process large amounts of data, which often takes a lot of time. If you add an extra step performing CRM operations via the CRM SDK (i.e. via CRM web services), you can be sure it will significantly affect the execution time.

I've measured the execution time for both methods. Importing 1000 contacts into CRM took:

1-by-1 import – 2 min 22 s

Bulk import – 0 min 44 s

In my simple scenario the bulk import was roughly 3x faster than the 1-by-1 approach. The more data you send to CRM, the bigger the difference may get.