Importing External Data as Content in Umbraco

At Skybrud.dk where I work, most of our solutions have some sort of integration with external data. For a majority of these solutions, it makes most sense to either save the data in a local database or as a file on disk. But in other cases, it makes most sense to store the external data in Umbraco as content items - e.g. if an editor should later be able to edit the imported data, or add additional data to it.

For instance when making an online shop, we might want to import product data from an external system. So for the sake of this article, let's assume that our fictional online shop gets product data from an external source.

To keep things simple - and let you play around with changing the source data - the product data is saved in a JSON file.

Creating a plan of action

Like most things in Umbraco, there isn't a single way to accomplish something like this. For this article, I will go through the following steps:

Getting the products from the external source

Determining whether products should be added or updated

Populating the properties in Umbraco

Minimizing the data saved in the local database

Setting up a scheduled task for importing the products at a certain interval

While I may not be able to explain everything in this article, I've included an Umbraco test solution which you can download and get up and running using Visual Studio, IIS or similar. If you're using Visual Studio, you should be able to just press CTRL + F5 ;)

The test solution is based on The Starter Kit. However, when you boot up the solution, you might see that I have deleted all the products that normally come with the starter kit. Instead you'll have to run the import job (more about this later) to get any products.

For the test solution, the login for the backoffice is [email protected] for the username and SkriftRocks1234 for the password.

Getting the products from the external source

If you pull down the GitHub repository, you will find the JSON file I described earlier at ~/App_Data/Products.json - or you can grab the contents from here. Normally you would pull the products from a URL or similar, but with a local file, you can try changing some of the property values, and then run the importer again.

For parsing the JSON file, we'll be using the widely used JSON.net and create models that represent the products and other data involved. The models can be found in this folder.

With the models from the repository, loading and parsing the JSON file could look something like:

// Map the full path to the JSON file on disk
string path = IOHelper.MapPath("~/App_Data/Products.json");

// Read the raw contents of the file
string contents = System.IO.File.ReadAllText(path);

// Parse the raw JSON into a list of products
ProductList productList = JsonConvert.DeserializeObject<ProductList>(contents);

As you can see from the models, I've decorated the properties with JsonProperty attributes to match the JSON. While it's convention in .NET to use Pascal casing (e.g. Products), the properties in the JSON are using camel casing (e.g. products). We can control the parsing using these JsonProperty attributes (which are part of JSON.net).
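As a sketch of what such models could look like (the exact properties here are assumptions - see the repository for the actual implementation):

```csharp
using Newtonsoft.Json;

// Hypothetical sketch of the models (see the repository for the real ones)
public class ProductList {

    [JsonProperty("products")]
    public Product[] Products { get; set; }

}

public class Product {

    [JsonProperty("id")]
    public string Id { get; set; }

    [JsonProperty("name")]
    public string Name { get; set; }

    [JsonProperty("price")]
    public decimal Price { get; set; }

}
```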

However, out in the real world, you might end up with some JSON that requires a bit more than just JsonProperty attributes. JSON.net is still the way to go, but this part is beyond the scope of this article.

Determining whether products should be added or updated

In the demo solution, all products are added under a single container node, and each product has an external ID we can use to check whether a product has already been imported in Umbraco. As such, we can get all products from Umbraco's content cache:

// Get a reference to the container node
IPublishedContent container = Umbraco.TypedContent(1234);

// Get all children (products) from the container
IEnumerable<IPublishedContent> children = container.Children;

The Children property returns an instance of IEnumerable<IPublishedContent>. If we just stick with this type (which we shouldn't - I'll come back to that), the updated code could look something like:

// Map the full path to the JSON file on disk
string path = IOHelper.MapPath("~/App_Data/Products.json");

// Read the raw contents of the file
string contents = System.IO.File.ReadAllText(path);

// Parse the raw JSON into a list of products
ProductList list = JsonConvert.DeserializeObject<ProductList>(contents);

// Get a reference to the container node
IPublishedContent container = Umbraco.TypedContent(1234);

// Get all children (products) from the container
IEnumerable<IPublishedContent> children = container.Children;

// Iterate through each product from the JSON file
foreach (Product product in list.Products) {

    // Has the product already been imported?
    bool exists = children.Any(x => x.GetPropertyValue<string>("externalId") == product.Id);

    if (exists) {
        // update product
    } else {
        // add product
    }

}

Using the right tool for the job

Choosing the right collection

As mentioned earlier, we really shouldn't repeatedly search through an instance of IEnumerable<IPublishedContent> like I did in the example above. Let's say we have hundreds or thousands of products - both in the JSON file and in Umbraco. The amount of products in the JSON file could be denoted N, while the amount of products already imported in Umbraco could be denoted M.

With the foreach loop and the use of the Any method as shown above, we could potentially make up to N * M iterations. So if we have 100 products in the JSON file, and they all have been imported in Umbraco, that would be 10000 iterations. 1000 products would be 1000000 iterations. This is bad for performance, as the amount of iterations will grow quadratically as our shop gets more products.

Scenarios like these can be described using Big O notation. In programming, Big O notation can be used as a way of finding bottlenecks in code by looking at the worst case scenario - that is, the most iterations potentially required. The scenario above with N * M iterations would be described as O(N * M) - or given that N and M are almost equal, we could stretch it to O(N^2). This is a quadratic curve, which should be avoided as far as possible.

Luckily there are other types of collections we can use instead. For instance a dictionary, which is built up of key and value pairs. If we already know the key (e.g. the external ID of the product), a lookup will only take a single iteration (described as O(1) in Big O notation) regardless of the size of the dictionary. We still have to iterate through all the products in productList.Products (which was O(N)), and we'll also have to create the dictionary based on children (which is O(M)) - on top of the O(M) it took to get the children in the first place. In the end, these put together can be described as O(N + M + M), which is still just linear. With 1000 products, the worst case scenario is now around 3000 iterations - which is a lot less than the 1000000 iterations we had before.

The code still has other loops and iterations as well - e.g. parsing the products in the JSON file or getting the children of the container node. But these are linear as well, so even though they still affect performance slightly, we are nowhere near the 1000000 iterations for 1000 products.

Changing the code to use a dictionary instead could look something like:

// Map the full path to the JSON file on disk
string path = IOHelper.MapPath("~/App_Data/Products.json");

// Read the raw contents of the file
string contents = System.IO.File.ReadAllText(path);

// Deserialize the products
ProductList list = JsonConvert.DeserializeObject<ProductList>(contents);

// Get a reference to the container node
IPublishedContent container = Umbraco.TypedContent(1234);

// Get all children (products) from the container, indexed by their external ID
Dictionary<string, IPublishedContent> children = container.Children.ToDictionary(x => x.GetPropertyValue<string>("externalId"));

// Iterate through each product from the JSON file
foreach (Product product in list.Products) {

    // Has the product already been imported?
    IPublishedContent content;
    if (children.TryGetValue(product.Id, out content)) {
        // update product
    } else {
        // add product
    }

}

The code should now perform a lot better 😄

Should we use the content cache?

In my examples so far, I've been using the content cache for finding existing products. The content cache is really fast, but is it really reliable for the sort of thing we're doing? Perhaps not.

We could have a scenario where editors would unpublish a product, or there would be some sort of scheduling determining when the product should be published and unpublished. As the content cache only exposes published content, we can't use it for checking whether a product already exists in Umbraco.

Examine could be another choice, as it's similarly fast, and Umbraco has an internal index for content, which also includes unpublished content. But in our experience, Examine isn't reliable for this kind of task either. The index could be corrupt, or simply not available when we need it (e.g. if it's being rebuilt). Wrongly determining that a product hasn't yet been imported could lead to the same product being added twice.

In addition to what's described above, we also need to get a reference to the IContent items representing the products in Umbraco, as we need them when we use the content service to update the products. So we could just use the content service for creating the dictionary as well.

The content service hits the database, so it's slower than the content cache and Examine, but it should be the most reliable, and the extra time shouldn't really matter as we'd ideally be running the import in a background task.

By replacing the content cache with the content service, the code should then look something like:

// Map the full path to the JSON file on disk
string path = IOHelper.MapPath("~/App_Data/Products.json");

// Read the raw contents of the file
string contents = System.IO.File.ReadAllText(path);

// Parse the raw JSON into a list of products
ProductList list = JsonConvert.DeserializeObject<ProductList>(contents);

// Get all children (products) from the container, indexed by their external ID
Dictionary<string, IContent> children = ContentService.GetChildren(1234).ToDictionary(x => x.GetValue<string>("externalId"));

// Iterate through each product from the JSON file
foreach (Product product in list.Products) {

    // Has the product already been imported?
    IContent content;
    if (children.TryGetValue(product.Id, out content)) {
        // update product
    } else {
        // add product
    }

}

Minimizing the data saved in the local database

With the import logic we have right now, we'll just either add or update a product. Given that the goal is to run an automated import of products, it might be a good idea to have a look at how much data we're saving to the database - and whether we can do anything to minimize that.

For this, there are a few things we should consider:

Should we always either add or update a product?

The simple answer: No. There may not always be changes for a given product, so we should have some way of checking whether a product has been changed at the source.

When you save a content item in Umbraco, the new version is added to the database, but old versions are still kept in the database - e.g. so editors can use the rollback feature to roll back to an older version. Adding a mechanism to our import logic that makes sure we only update the content item when we have to will help keep the amount of content versions to a minimum.

As an example of why this matters, we had a solution that would update products every 15 minutes. After doing this for a month or so, the backoffice had become pretty slow from all the content versions that had been added to the database over this period. In our defense, it was a site still under development, but still pretty bad 😂

Adding logic to determine whether we actually need to update a product can be done by comparing the new and old values - for JSON, this would be something like:

// Has the product already been imported?
IContent content;
if (children.TryGetValue(product.Id, out content)) {

    // Get the old data value
    string oldValue = content.GetValue<string>("productData");

    // Serialize the product to a raw JSON string
    string newValue = JsonConvert.SerializeObject(product);

    // Do the two values match?
    if (oldValue == newValue) continue;

    // update product

} else {

    // add product

}

In theory this should be fairly simple, but there are a few things you should be aware of when saving JSON in Umbraco:

The default behavior of the JsonConvert.SerializeObject method is not to apply any formatting to the generated JSON - meaning that the JSON isn't indented. This is actually what we want, as indented JSON takes up more space. But if a user later saves the product via the backoffice in Umbraco, the JSON will now be saved as indented. I've raised an issue for this, as I think what Umbraco is doing is wrong: Umbraco shouldn't store JSON property data as indented

A way to get around this could be to make sure the JSON we're saving via the content service is also indented - e.g. JsonConvert.SerializeObject(product, Formatting.Indented). But then this would take up more space in the database than necessary.

Like most of the .NET world, Umbraco uses JSON.net to serialize and deserialize JSON. A problem with the default settings of JSON.net is however that it will automatically look for strings that look like dates, and internally keep them as instances of System.DateTime instead of System.String. The problem here is that the original date format and culture isn't stored as well, so when the JSON is later serialized, the date format may be totally different. And the date may be formatted using a culture so that the string value can't later be parsed back to an instance of System.DateTime using an invariant culture. I've also raised an issue for this: Default JSON.net configuration corrupts date strings when deserializing JSON data for the backoffice

I haven't really found a good way around this, so my solution so far has been to prefix the serialized JSON string with an underscore - this means that Umbraco will just see the stored value as a regular string, and not try to deserialize the value to a JSON object. Bit of a hack, but it does the job - and also solves the issue with indentation.

To take these issues into account, the code could then be updated as shown below (I've only updated the line where the new value is being generated):

// Has the product already been imported?
IContent content;
if (children.TryGetValue(product.Id, out content)) {

    // Get the old data value
    string oldValue = content.GetValue<string>("productData");

    // Serialize the product to a raw JSON string (prefixed with an underscore)
    string newValue = "_" + JsonConvert.SerializeObject(product);

    // Do the two values match?
    if (oldValue == newValue) continue;

    // update product

} else {

    // add product

}

Limiting the amount of content versions saved in the database

As mentioned earlier, each time you save a content item in Umbraco, a new version is created, but older versions are still kept in the database. Storing previous versions can take up a lot of extra space in the database, and eventually also start to affect the performance of the backoffice.

The step with the comparison explained above only prevents adding unnecessary content versions to the database, but if the data changes often, we can still end up with many content versions.

As we're importing the products from an external source, it may not be necessary to store more than a few versions back in time. Luckily Matt Brailsford has created a package called UnVersion, which automatically deletes older versions when a new version is added.

UnVersion can be configured to apply to all content types or only to specific content types. For instance, the configuration to keep a maximum of five content versions across all content types looks like:
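A sketch of such a configuration, assuming UnVersion's config file at ~/Config/unVersion.config and a hypothetical value for maxDays:

```xml
<?xml version="1.0" encoding="utf-8"?>
<unVersion>
  <!-- Applies to all content types: keep at most 5 versions, and only versions from the last 30 days -->
  <add maxDays="30" maxCount="5" />
</unVersion>
```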

In this example, I've specified both the maxDays and maxCount attributes. When both attributes are specified, content versions not matching both attributes will be deleted.

Populating the properties in Umbraco

So far I've omitted the part that actually adds or updates the products in Umbraco as well as populates the properties with the right data. For this scenario, we're also using the content service.

Adding a new product

In this demo, the idea is that when we're adding a product, we should save all the JSON data from the source in a single property, but also update other properties with relevant information - e.g. the product name and description, which can later be edited by an editor.

The first step here is to create a new content item (only in memory for now):
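With the content service, that could look something like this (1234 again being the ID of the container node, as in the earlier examples):

```csharp
// Create a new content item in memory (nothing is saved to the database yet)
IContent content = ContentService.CreateContent(product.Name, 1234, "product");
```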

For the CreateContent method, the first parameter is the name of the product. The second parameter is the ID of the container node under which we wish to add the product. The third parameter is the alias of the content type - in this case product.

Next, we'll need to update the individual properties. Most properties are either string or integer values, but the properties for the product features and product photos are a bit more complex, as they are using Nested Content and the media picker respectively.

For the product features, we can mimic the JSON data stored by Nested Content. The generation of this JSON value is handled by the GetFeaturesValue method. In a similar way, the photos are handled by the ImportImages method, which will both import the images if necessary and generate the correct media picker value with the UDIs of the photos. The implementation of both methods can be found at the bottom of the ProductsService.cs file.

As we now have the correct values to save in Umbraco, the logic for updating the properties looks something like:
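A sketch of this could be as shown below - note that most of the property aliases and Product members here are assumptions, while externalId and productData match the earlier examples:

```csharp
// Save the external ID so future imports can match the product
content.SetValue("externalId", product.Id);

// Populate the simple properties (aliases here are assumptions)
content.SetValue("productName", product.Name);
content.SetValue("price", product.Price);

// Complex properties are handled by the helper methods in ProductsService
content.SetValue("features", GetFeaturesValue(product.Features));
content.SetValue("photos", ImportImages(product.Photos));

// Save the raw JSON data (prefixed with an underscore as described earlier)
content.SetValue("productData", "_" + JsonConvert.SerializeObject(product));
```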

Updating an existing product

Updating is a bit different from adding a product, as we have already populated the various properties. And an editor may have changed the different values - e.g. if the name of the product that should be displayed on the website differs from the product name from the external source. This is of course an individual thing from solution to solution, but we should then probably only update certain properties, and thus the code could look like:
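A sketch, updating only a couple of properties (the price alias is an assumption; productData matches the earlier examples):

```csharp
// Update the price, as it's controlled by the external source
content.SetValue("price", product.Price);

// Update the stored JSON data as well (prefixed with an underscore as described earlier)
content.SetValue("productData", "_" + JsonConvert.SerializeObject(product));
```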

For our fictional shop, the price is controlled at the external source, so we should update it in Umbraco if it has changed. The JSON data stored in the productData property could also have information we haven't added to the other properties, so we should update that as well.

With the properties updated, we can now go ahead and save and publish the changes:

// Save and publish the product
ContentService.SaveAndPublishWithStatus(content);

Again, you could use the Save method instead if changes are not supposed to be published automatically. Or add a check so we use the SaveAndPublishWithStatus method if the content item is already published, but the Save method if it isn't.
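Such a check could look something like:

```csharp
if (content.Published) {
    // The product is already published, so publish the changes right away
    ContentService.SaveAndPublishWithStatus(content);
} else {
    // Just save the changes, and let an editor publish the product
    ContentService.Save(content);
}
```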

Deleting existing products?

As a product might be deleted at the source, this change should most likely also be reflected in the import logic. As we initially created a dictionary of all existing products in Umbraco, we can remove products from that dictionary as we either add or update them in Umbraco. E.g. something like:

// Remove the product from the dictionary
existing.Remove(product.Id);

When we're done iterating over the products from the source, we can then delete the remaining products from Umbraco:

// Any products left in "existing" no longer exist in the JSON file and should be deleted
foreach (IContent content in existing.Values) {

    // Delete the product in Umbraco
    ContentService.Delete(content);

}

Nice to have

While I won't go much into detail about it here, the ProductsService class has a bit of extra logic that might be useful.

For instance, when we change content using the content service, those changes will by default be attributed to the user with ID 0. This isn't always ideal, so you can create a backoffice user only to be used for the products import. The ID of this user can then be specified as an optional parameter for the CreateContent and SaveAndPublishWithStatus methods. With the actions triggered by the import job attributed to this user, the history of a product will then look like:

To keep track of how the import goes, the ImportProducts method is also building a list with the status of each product - whether it was Added, NotModified, Updated or Deleted.
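Such a status could be modelled with something like this (a sketch - the name of the type is an assumption):

```csharp
// Possible outcomes for a single product during an import
public enum ProductImportStatus {
    Added,
    Updated,
    NotModified,
    Deleted
}
```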

Setting up a scheduled task for importing the products at a certain interval

Umbraco natively comes with a way of setting up scheduled tasks, which can then execute our import logic at a certain interval. A scheduled task in Umbraco points to a URL, which is accessed when the scheduled task is triggered.

<scheduledTasks>
  <!-- add tasks that should be called with an interval (seconds) -->
  <!-- <task log="true" alias="test60" interval="60" url="http://localhost/umbraco/test.aspx"/> -->
</scheduledTasks>

If you for instance have an API controller for triggering an import, you could add a scheduled task like shown below:

<scheduledTasks>
  <!-- add tasks that should be called with an interval (seconds) -->
  <!-- <task log="true" alias="test60" interval="60" url="http://localhost/umbraco/test.aspx"/> -->
  <task alias="ProductsImport" interval="900" url="http://myshop.com/umbraco/api/Products/Import" />
</scheduledTasks>

The interval is specified in seconds, so 900 seconds is the same as 15 minutes.

Alternatives?

The implementation of scheduled tasks in Umbraco is however a bit old, and doesn't really offer more options than those shown above. So there may be better alternatives.

There are external tools that provide a somewhat similar service. We have sometimes used Azure's Scheduler Job Collections, which provides a few more options than Umbraco's scheduled tasks. If you're using another hosting provider, they might have something similar.

And the winner is...

For some time now, we've been using Hangfire - a tool that lets you run fire-and-forget, delayed and recurring background jobs from your own code.

A recurring job in Hangfire triggers a method in your own C# code, so you'll have access to the various services of Umbraco. But keep in mind that as the recurring jobs are running in a background thread, you won't have access to UmbracoContext.Current and HttpContext.Current.

Hangfire is also quite easy to set up. If you install their NuGet package, you'll only need to set up an OWIN (Open Web Interface for .NET) startup file. You can see an example of how to set up Hangfire in my OWIN startup file in the demo repository.
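As a rough sketch (the namespace, connection string name and cron expression here are assumptions - see the startup file in the repository for the actual implementation):

```csharp
using Hangfire;
using Owin;

[assembly: Microsoft.Owin.OwinStartup(typeof(MyShop.Startup))]

namespace MyShop {

    public class Startup {

        public void Configuration(IAppBuilder app) {

            // Let Hangfire store its own data in the Umbraco database
            GlobalConfiguration.Configuration.UseSqlServerStorage("umbracoDbDSN");

            // Enable the dashboard (at /hangfire) and start processing background jobs
            app.UseHangfireDashboard();
            app.UseHangfireServer();

            // Schedule the products import to run every 15 minutes
            RecurringJob.AddOrUpdate<ProductsService>("ProductsImport", x => x.ImportProducts(null), "*/15 * * * *");

        }

    }

}
```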

Hangfire also provides a dashboard with an overview of the various jobs you might have in your solution. It is however mostly limited to showing whether jobs have failed or succeeded, but not what's going on inside a particular job.

Luckily there is a great third party package called Hangfire.Console, which adds console-like support to Hangfire. This means that Hangfire can give us a result looking something like shown in the image below:

You should however be aware that my Hangfire configuration is commented out by default, as SQL CE (which the demo solution uses) is not supported by Hangfire. You'll need a full SQL Server or SQL Server Express database to use Hangfire.

Besides installing the two packages, you can see the steps to configure Hangfire in this commit.

Wrapping things up and getting to play around

The demo solution is based on The Starter Kit, and it therefore reuses the existing content type for a product. I have deleted all existing products (as well as their images). I've then created a dashboard in the content section that lets editors manually trigger a new import. Once the import finishes, a table with the status for each product will be shown.

Obviously if the shop has many products, the table might need some pagination. Or products not modified could just be hidden from the list.

As shown in the earlier examples, the JSON value for the product data is saved to its own property. Normally I then create a simple property editor with a "Show JSON" button, so the raw JSON is nicely tucked away, but still accessible if you need to have a look at it. You can also find the implementation for this in the repository.

The JSON data could have other information which we haven't populated to the individual properties, but still use on the actual website. So if the editors of your shop also need to look at this data, you could go a bit further and visualize the data. It could just be something as simple as this:

If you have a look at the ProductsService.cs file, you might notice that it differs a bit from the examples in this article. For instance, the ImportProducts method takes an instance of PerformContext as its only parameter (which is optional, and thereby null if you don't specify any parameters when calling the method).

The PerformContext class comes from Hangfire, and in combination with Hangfire.Console, it's used for logging what's going on inside the Hangfire job. The WriteLine, WriteProgressBar, and WithProgress methods you'll see in my implementation are all extension methods from Hangfire.Console. They are totally fine to use even if the reference to the PerformContext is null.

About the Author

Anders Bjerner is a System Developer at Skybrud.dk (an Umbraco Gold Partner located in Vejle, Denmark) with a background in Computer Science, and has been working with Umbraco since 2011. His work at Skybrud.dk typically consists of implementing Umbraco solutions for various government and private clients as well as developing custom packages (where a number of them can be found on Our Umbraco and NuGet). When not working and playing around with Umbraco, he can be found on his bike exploring the hills around his hometown (some of the steepest/highest in an otherwise flat Denmark).