The ability to merge records is a powerful one for keeping control of your data quality within PowerApps / D365. It can also be deceptively complex. As we’ve been living & breathing this area of the platform for several years during the development of data8 duplicare, we’ve seen a lot of unexpected things show up when merging particular records. To help understand why you might get an error when merging, I’ve tried to “deconstruct” the merge process below.

Standard Microsoft Dynamics 365 Merge screen

Basic Validation

There’s a whole bunch of conditions that must be met before two records can be merged. If these aren’t met, you should get an error message back pretty quickly. Most of these errors are quite self-explanatory, but there are some that took a little more digging to get to the bottom of.

You can’t merge a record with itself. Kind of goes without saying, but we had to check! If you try and merge a record with itself you’ll get the error Merge cannot be performed on master and sub-entities that are identical.

You can only merge accounts, contacts, leads and cases. Try to merge anything else and you’ll get the error This type: {name} is not supported with merge operation

Both the master and subordinate records must be active. If you try to merge an inactive record you’ll get one of the following messages:

master entity:{name}-{id} is deactive

sub-entity:{name}-{id} is deactive

Security checks. You need several different security privileges to be able to merge:

Privilege    Required On
---------    -----------
Read         Master Record
Write        Master & Subordinate Records
Share        Master Record
Append To    Master Record
Merge        Global

If you’re missing any of these you’ll get the rather ambiguous message “Merge is not allowed: caller does not have the privilege or access”. You can use a tool such as the XrmToolBox Access Checker plugin to find out exactly which privilege you are missing. If you’re on-prem then you can take a look at the server trace logs to get more details.

Access Checker plugin in XrmToolBox

Advanced Validation

Now we’ve got the basics sorted, there are some more detailed validation checks that the system will run depending on the type of records being merged:

You can’t merge cases with different parents. If you try you’ll get the error “Child cases having different parent case can not be merged.”

You can’t merge accounts or contacts with different parents unless you specify the PerformParentingChecks parameter as false. Otherwise you’ll get one of the following messages:

Merge warning: sub-entity might lose parenting (if the subordinate record has a parent but the master doesn’t)

Merge warning: sub-entity will be differently parented. (if the master and subordinate records both have different parents)

You can’t merge cases that have more than 100 child cases between them. There is a system-defined limit of 100 child cases per parent case, so if you try to merge two cases that each have 60 child cases, that would result in the final merged case having 120 children, which isn’t allowed. This triggers the error message A Parent Case cannot have more than 100 child cases. Contact your administrator for more details

You can’t merge accounts or contacts that have an active quote. Specifically, the subordinate record can’t have an active quote, but the master record can. If you try to merge two records where the subordinate has an active quote you’ll get the error Merge cannot be performed on sub-entity that has active quote.

You can’t merge records if that would create a loop in a hierarchy. For example, if you have a hierarchy of accounts A <- B <- C and try to merge A and C, that could result in the parent of A being B (copying the parent from C), while the parent of B is already A. If you end up in this situation you’ll get one of the following errors:

Merge could create cyclical parenting. (account)

Loop exists in the contacts hierarchy. (contact)

Creating this parental association would create a loop in Contacts hierarchy. (contact)

Move Related Records

Now the system has done all the up-front checks, it’s time to move on to the interesting bit – moving the records that were related to the subordinate record over to the master.

At this point the system goes through all 1:N and N:N relationships with a Merge cascade type of Cascade All (note that you can’t change this cascade type – it will typically be Cascade All except for a few system-managed relationships).

All records related to the subordinate record through one of these relationships will be updated to refer to the master record. This requires Append and Write privileges on the related record. If you don’t have those privileges you’ll get an error like SecLib::AccessCheckEx2 failed. Entity Name:activitypointer,OwnershipTypeMask:UserOwned, ObjectId: 98dbe456-43b3-e711-80e6-3863bb35af60, OwnerId:843ef80e-6a51-e511-9759-f0921c100524, OwnerIdType:8, OwnerData: roleCount=2, privilegeCount=2993, accessMode=0 and CallingUser:100781f5-c4d6-e311-aec5-d89d6764507c, CallerBusinessId:0c92636a-d920-e511-b427-d89d67632c70 PrincipalData: roleCount=5, privilegeCount=1141, accessMode=0. ObjectTypeCode:4200, ObjectBusinessUnitId:0c92636a-d920-e511-b427-d89d67632c70, AccessRights: WriteAccess. Computed rightsToCheck=WriteAccess, grantedRights=8, hsmGrantedRights=None, grantedRightsWithHsm=8,

The key parts of this message – the entity name, the ObjectId and the AccessRights – help you identify the type of record, the ID of the individual record that the user is missing permissions on, and the permission that is missing.

It will also update any activities that have the subordinate record as a party (sender, to, organizer etc.) to use the master record instead, even though the Activity Party relationships have a Merge cascade type of Cascade None.

Each of these updates can in turn trigger workflows or plugins that can have any number of other side effects, so it’s quite common to see other errors here. It can be difficult to debug these – the simplest way beyond trial and error is to get access to the trace logs which should tell you what entity type and record ID is triggering the error.

If you find that merging is taking a long time, it’s normally this step that’s the problem. Check your foreign key indexes, synchronous plugins and workflows.

Sharing

The next non-obvious part of the process is that any related records that have been moved from the subordinate to the master will now be shared with the owner of the master record. For example, if you merge two accounts, the contacts that were associated with the subordinate account will be shared with the owner of the master account. This tends to make things “just work” as most users would probably expect, but may break your security model. If you want to disable this behaviour you can do so by changing the GrantFullAccessForMergeToMasterOwner organisation setting to False.

Similarly, the master record will be shared with the owner of the subordinate record. To turn this off, use the GrantSharedAccessForMergeToSubordinateOwner organisation setting.

Finally, reparenting rules are applied. Any 1:N relationships with a Reparent cascade rule will be processed to share the related records with the owner of the master record.

Updating the Master

Any updates to the master record are applied, e.g. if you want to take data from the subordinate record that is missing or incorrect on the master.

In the merging UI in the web application you can only select values to update that are in the same field on each record, i.e. use Email Address 1 from either the master or subordinate. Using the SDK however, there’s nothing to stop you taking values from elsewhere, or even just making up entirely new values that don’t exist in either record, by populating the UpdateContent parameter of the MergeRequest.
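As a sketch, a merge through the SDK that supplies custom update content might look like the example below. The `masterId`, `subordinateId` and `service` variables, and the choice of field, are placeholders for illustration:

```csharp
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

// The UpdateContent entity carries the values to apply to the master record
// as part of the merge - they don't have to come from either record.
var updates = new Entity("account");
updates["telephone1"] = "555-0123";

var merge = new MergeRequest
{
    Target = new EntityReference("account", masterId), // the record to keep
    SubordinateId = subordinateId,                     // the record to deactivate
    UpdateContent = updates,
    PerformParentingChecks = false                     // skip the parenting validation described earlier
};

service.Execute(merge);
```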

Deactivating the Subordinate

The subordinate record is linked to the master record and deactivated. The state code and status code of the subordinate will be set as follows:

Entity Type    State Code    Status Code
-----------    ----------    -----------
Account        1             Default
Contact        1             Default
Lead           2             Default
Case           2             2000

This can trigger an error when restricted state transitions have been set up for cases: “The merge couldn’t be performed. One or more of the selected cases couldn’t be cancelled because of the status transition rules that are defined for cases.”

The default status code for a state code can be selected within the attribute customization screen; currently this can only be done through the classic UI.

Version 9 Changes

Merging has stayed essentially unchanged since at least version 4, but version 9 introduced a few new standard plugins that attach to the Merge message and can trigger some different error messages.

In the Move Related Records section above, we saw the rather unhelpful error message that was returned when the user doesn’t have permissions to move one of the related records from the subordinate to the master record. Version 9 introduces a new plugin Microsoft.Dynamics.Sales.Plugins.PreOperationAccountMerge, and while it’s not immediately clear what this plugin does, it does generate a more helpful version of the same error.

Rather less helpfully, it also introduces plugins for the Microsoft Dynamics for Marketing product. One of these, Microsoft.Dynamics.Marketing.Plugins.PreDisassociateEntitiesPlugin, will commonly trigger errors when the subordinate record being merged is a member of a marketing list. If you receive an error such as:

Disassociation cannot be performed between an entity of type list and an entity of type account.; [Microsoft.Dynamics.Marketing.Plugins: Microsoft.Dynamics.Marketing.Plugins.PreDisassociateEntitiesPlugin]

then you have hit the bug in this plugin. There isn’t anything wrong with your process, code or data, but to work around the bug you’ll need to first remove the subordinate record from any marketing lists then merge the records again. If you want to preserve the marketing list membership, you’ll need to manually add the master record to the same marketing lists as the subordinate record was part of.

If you’ve been working with plugins in Microsoft Dynamics 365 for a while, you’ve probably come across the IExecutionContext.Depth property. This tells your plugin how deep in a call stack of plugins it is. A depth of 1 means it is being triggered by a direct user action, e.g. updating a record. If that plugin does something that triggers another plugin, that plugin will have a depth of 2, and so on.

Preventing Infinite Loops

Because plugins can do operations that trigger other plugins, there is a danger that poorly written code can end up in an infinite loop of one plugin calling another, which calls the first plugin again, which calls the second one, and so on until all your server resources are used and it dies, giving you some very unhappy users.

To prevent this, the Microsoft Dynamics 365 platform enforces a limit on the plugin depth. By default, this is set to 8. This means that when a plugin would run at a depth greater than 8, the platform will kill that entire stack with the error message:

This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow logic, see Help.

New features in v9

Version 9 of Microsoft Dynamics 365 for Sales brings many new features. At the same time, the underlying Power Platform has been separated from the 1st party apps such as Sales. Those new features are now being implemented in a similar way to 3rd party customizations, including using plugins.

For example, the new Action Card functionality uses plugins attached to the Create and Update messages on contacts.

Errors after upgrading

If you were close to the maximum plugin depth limit before, you might start seeing some of these “infinite loop” errors after upgrading. This isn’t because of any new problem in your code, but because Microsoft’s own plugins are using up some of the available plugin depth.

We had an example of this today – a plugin that synchronizes a hierarchy of entities coming from an ERP system to a deduplicated hierarchy of account and contact entities started failing. Turning on plugin trace logging gave us the answer:

System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault]: This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow logic, see Help. (Fault Detail is equal to Exception details:
ErrorCode: 0x80044182
Message: This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow logic, see Help.; [Microsoft.Dynamics.AppCommon.Plugins: Microsoft.Dynamics.AppCommon.Plugins.PostOperationHandleContactActionCard]
[7cb6ad1b-7e8d-4955-a2a9-694bef2c84dc: Create of acton card for contact]
Entered Microsoft.Dynamics.AppCommon.Plugins.PostOperationHandleContactActionCard.Execute(), Correlation Id: f44c7c54-f16d-408f-903e-e386f22aadc3, Initiating User: c1cab4c8-f553-432c-8430-10262d185543
Exception: System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault]: This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow logic, see Help. (Fault Detail is equal to Exception details:
ErrorCode: 0x80044182
Message: This workflow job was canceled bec...).

The error is coming from the new action card plugin that was introduced in v9. The other information in the plugin trace log lets us see a series of plugin executions that we would normally expect, getting up to depth 7, before this error is eventually triggered.

Fixing the problem

Luckily this solution is on-premise, so we have the option of increasing the maximum plugin depth as a quick fix.
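As a sketch, this can be done with the deployment PowerShell snap-in on the CRM server. The value of 16 here is just an example, and it’s worth verifying the property name against your version before relying on it:

```powershell
# Register the deployment snap-in, then raise the workflow/plugin depth limit
Add-PSSnapin Microsoft.Crm.PowerShell

$settings = Get-CrmSetting -SettingType WorkflowSettings
$settings.MaxDepth = 16   # the default is 8
Set-CrmSetting -Setting $settings
```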

This got us working again quickly, but isn’t an option when we look forward to moving to Online.

As a longer-term solution we’ll need to look at refactoring the plugin to do more of the work in each plugin step so it doesn’t need to recurse so much.

Looking to the future

This is a really good example of the “brave new world” for Microsoft Dynamics 365. A world in which the 1st party apps such as Sales are truly built as apps on the Power Platform with the same options available to other customizers and ISVs. One of the powerful features of CRM was always its extensibility model. With this shift in approach I think we can only see this extensibility story getting even more compelling. We just have to be mindful that, even in environments with no other outside customizations, we still have Microsoft “customizations” in the system as well.

As my son has Type 1 Diabetes (T1D), we use Nightscout to keep track of his blood glucose levels. With T1D it’s important to keep those levels in quite a tight range, so we use a few different apps to alert us when it’s going out of range. As I’ve started looking into Microsoft Flow, I thought it would be a good scenario to work through – take the readings that are stored in Nightscout and generate alerts on my phone when something’s going wrong.

Getting Started with Microsoft Flow

I did look to see if I could get Nightscout to trigger my Flow as it was needed. It has an IFTTT connector, but not Microsoft Flow, so to start with I built a flow using a Recurrence trigger to poll Nightscout every few minutes to get the latest data. This would cause us a problem in the real world as running it every 5 minutes would use 30 x 24 x 60 / 5 = 8,640 runs per month. Looking at the licensing page today, that would require a P2 license costing $15 per month, which isn’t ideal. Checking every 10 minutes brings us into the realm of a P1 license for $5 per month. Even so, it looks like a better long-term option is to get Nightscout to work out if the flow needs to run and use an HTTP trigger instead.

Extracting the Data from Nightscout

I used the “HTTP” action to get the latest 2 SGV readings from the Nightscout API.

Fetch data from Nightscout

Note that I’ve used the Queries setting to specify I want to get 2 readings back. The Nightscout API will automatically apply a reverse date ordering for me, so these will be the most recent 2 without me having to add my own sort order.

This isn’t very usable, so the first thing is to parse it using the “Parse JSON” action. Although Nightscout is supposed to expose a Swagger endpoint I couldn’t get it to work, so I used the option to generate a schema based on a sample payload. This tells the action what structure to expect to see in the JSON text, and therefore what data subsequent steps in the flow will have.
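For reference, the sample payload I used to generate the schema looked roughly like this, trimmed to the fields that matter here (the values are illustrative):

```json
[
  {
    "dateString": "2019-02-20T18:35:00.000Z",
    "sgv": 130,
    "direction": "Flat",
    "type": "sgv"
  },
  {
    "dateString": "2019-02-20T18:30:00.000Z",
    "sgv": 126,
    "direction": "Flat",
    "type": "sgv"
  }
]
```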

Parsing JSON

The one critical part of the data we have got is the sgv value – this tells us the “sensor glucose value” measured by the CGM device my son is wearing. However, this value is reported in units of mg/dL, while in the UK we tend to work in units of mmol/L. The next step I added then is to create a simple data set of two mmol/L values using the “Select” action. This process should be familiar to anyone used to working with databases – it transforms each row in the input to a new row in the output by applying a series of mappings. In my case I created a new data set with one column called mmol.

Converting mg/dL to mmol/L

The calculation used to do the mapping is simply to divide the mg/dL value by 18:

Converting mg/dL to mmol/L

Because Flow expressions are based entirely around functions, I had to use div(x, y) instead of x / y. I find this a difficult habit to get into, and it caused me lots of frustrating “The expression is invalid” errors! The other thing to note here is how I used 18.0 instead of simply 18. This should be familiar to anyone from a development background – dividing integer values will give an integer result, but dividing floating point values will give a floating point result. So div(5, 2) = 2, but div(5, 2.0) = 2.5
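Putting that together, the mapping expression I used for the mmol column in the “Select” action looks something like this, where item() refers to the current row of the input:

```
div(item()?['sgv'], 18.0)
```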

Using Variables

Now I’ve got all the data I need, in the format I need it. However, I’ve got it all in one table, but what I really need is two separate values. I need the latest value and the previous value separately in order to compare them later. To save me having to write some repetitive expressions later to keep extracting the right one, I used a couple of variables to extract them once and store them for later reuse using the “Initialize Variable” action.

The expression I used to get the latest value was first(body('Convert_SGV_to_mmol'))['mmol']. Because I gave my previous steps helpful names instead of the default “HTTP”, “Parse JSON”, “Select” etc., I now get a much easier to read and understand expression.

After repeating this with another “Initialize Variable” action and changing first to last to get the previous reading, I added one final variable to format the latest reading so a human can read it. Because we have been dealing with floating point numbers, they can appear as helpful things like 7.19999999999999 instead of 7.2, so I want to do a bit of tidying up of that value before anyone is going to see it. Unfortunately Flow doesn’t expose any functions to control this formatting itself, and firing it out to an Azure Function or similar seems like overkill, so I just used an expression to trim the string after the first decimal place: substring(string(variables('Latest SGV')), 0, add(indexOf(string(variables('Latest SGV')), '.'), 2))

Triggering Alerts

I’ve got all the data I need to hand and nicely formatted, so now I need to fire some alerts!

The basic criteria I wanted to try were:

Rising over 7.0 mmol/L

Falling below 5.0 mmol/L

We can work out whether the data matches these conditions by:

Comparing the latest and previous values to check if the values are rising and falling

Check if the latest value is above 7.0 and the previous value is below (or equal to) 7.0. This checks if the level has just gone over the boundary, rather than having passed it some time ago

Check for BG dropping below a threshold

If this condition is matched, I want to trigger a mobile notification. In the message of the notification I can use the formatted value stored in a variable earlier:

Send mobile notification

Once the notification has been sent, I used the “Terminate” action to stop the flow. Following the advice on Jonas Rapp’s blog, this helps stop the problem of exponentially-nesting conditional statements and makes the flow much easier to read.

First Thoughts on Flow

This was an interesting first use of Flow for me, and helped me start making sense of the platform. It’s probably not really a great use for Flow in the end though, for a few reasons:

As there’s no appropriate Microsoft Flow trigger in Nightscout at the moment, it needs to poll regularly to get real time alerts

The regular polling and use of HTTP action needs access to premium licensing

There are already various other apps that give alerts based on the sort of criteria I’ve implemented here

I was also looking at Flow on the basis of it being a “citizen developer” way of producing bespoke alerts. I’m not sure I saw that aspect of the system through this process however. Although I wrote substantially less code than I might otherwise have done, it still required a lot of developer-y knowledge:

How to access data from a REST API

An understanding of JSON and the concept of parsing

Data manipulation concepts

Integer vs. floating point calculations

String manipulation functions

That’s not to say that you need a degree in computer science to build this, but it does seem rather more complex than the hype might suggest. Maybe I just picked the wrong example to see the citizen developer experience at its fullest.

What’s Next?

I’ll hopefully have plenty more opportunities to play with Flow in a business context as we look at moving our D365 implementation to Online. As for looking at Nightscout data, I’m next going to have a look at using Azure Stream Analytics. Hopefully that’ll make it possible to perform the sort of queries I’d like against the data. In addition, I’ll be attending D365UG Manchester tomorrow to start learning about Power BI with Matt Collins, which I hope to put into practice with this data.

One thing I love digging into with Microsoft Dynamics 365 is all the “special” actions. Although just about any entity type can be used with Create, Update, Retrieve etc., there are a lot of other actions that do more specialised jobs, and QualifyLead is a great one of these.

QualifyLead entities

The standard case of qualifying a lead sets up the entities shown in the diagram above:

Account record is created

Contact record is created and linked to the new account

Opportunity record is created and linked to the new account or contact

The lead is updated with references to the new account and contact in the parentaccountid and parentcontactid fields

The lead status is changed to Qualified

Mappings

You can see details of how the system copies information from fields in the lead to those in the new account/contact/opportunity by using the Mappings configuration on the relationship between the lead and the other entity. For example, go to Settings > Customization, click Customize the System, expand the Lead entity and go to 1:N Relationships, double-click on the opportunity_originating_lead relationship, then click on Mappings in the left hand menu. Here you can select fields from the lead entity and the corresponding field in the opportunity entity. When the lead is qualified the system uses these mappings to copy information from the lead to the opportunity. You can repeat this process to change the mappings to the account and contact entities too.

The QualifyLeadRequest exposes three optional boolean parameters – CreateAccount, CreateContact and CreateOpportunity. As you might expect, you can set these to true or false to control whether or not the system will create each of those additional entities. If you don’t want to create an opportunity when qualifying your lead, simply set CreateOpportunity to false.

Four more optional parameters control some of the details of the opportunity that the system creates:

OpportunityCurrencyId

OpportunityCustomerId

SourceCampaignId

ProcessInstanceId

Once the lead is qualified, the statecode is changed to Qualified, and there is another parameter that controls what the statuscode is changed to:

Status
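Putting these parameters together, a call through the SDK might look like the sketch below. The leadId and currencyId variables are placeholders, and the status value of 3 is the default “Qualified” status code for a lead:

```csharp
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

var qualify = new QualifyLeadRequest
{
    LeadId = new EntityReference("lead", leadId),
    CreateAccount = true,
    CreateContact = true,
    CreateOpportunity = true,
    OpportunityCurrencyId = new EntityReference("transactioncurrency", currencyId),
    Status = new OptionSetValue(3)   // Qualified
};

var response = (QualifyLeadResponse)service.Execute(qualify);
// response.CreatedEntities lists the account/contact/opportunity records that were created
```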

More Parameters!

So far, so good. However, there are some (as far as I’ve found) undocumented details to the behaviour of this action. Because they’re undocumented I’d assume they’re liable to change without notice, though I’d be surprised if they did:

Duplicate Detection. If creating the new account/contact/opportunity would create a duplicate that is identified by a duplicate detection rule, qualifying the lead will fail. You can avoid this if necessary by setting an additional parameter SuppressDuplicateDetection to true

Existing account & contact. If the parentaccountid or parentcontactid fields are set on the lead before qualifying, regardless of whether the CreateAccount and CreateContact parameters are set to true, no new account or contact will be created. The system won’t make any changes to the existing account & contact by mapping across data from the lead into the existing records – it ignores those mappings entirely.

Blank company name. If the companyname field on the lead is blank, regardless of whether the CreateAccount parameter is set to true, no new account will be created. Note that this doesn’t seem to be affected by what the mappings are between the lead and account entities – even if you map telephone1 on the lead to name on the account, it’s still the companyname field on the lead that controls this behaviour. If the lead contains data that only maps to the account record, you will effectively lose that data

Blank contact name. Similarly, if both the firstname and lastname fields on the lead are blank, no contact will ever be created, regardless of the mappings from lead to contact or what the CreateContact parameter is set to.

Permissions

Because QualifyLead can touch 4 different entities, there is plenty of scope for it to fail with security errors. You might need to be able to create the account, contact and opportunity. Or if you’re using an existing account & contact you’ll need the Append To privilege on those entity types. If you’re missing a required permission you won’t get a nice helpful error, just the standard “Access Is Denied”. Download the log file to get detailed information on what privilege is missing, but it’s not easy to parse.

Not too helpful at first glance, but all the information you need is in there:

ObjectID: the GUID of the record that you don’t have a required privilege on

ObjectTypeCode: the type of the record. Unfortunately the error message presents this as an object type code rather than logical name, which I find much easier to work with. You can use tools such as Metadata Browser in XrmToolBox to find this, or there are websites that list the type codes for standard entities

OwnerID: the GUID of the user or team that owns that object

OwnerIdType: the type of record that owns the object. This will be either 8 for a user or 9 for a team

CallingUser: the GUID of the user that is making the request and missing the privilege

AccessRights: the exact type of privilege that the user is missing on the record

Put this information together and you can work out either the additional security role the user might need, or the change to the existing security role to implement. Of course, you might then get what looks like the same error again but with subtly different values which will point you to the next privilege you need to add.

When you use the RetrieveMultiple method to query data from D365 CE, it’s easy to take the results and assume that’s everything, especially when you’re working with small test data sets.

However, to stop a rogue query killing the server while it retrieves millions of records, data is retrieved in pages, by default pulling back 5,000 records at a time. Because that’s more records than a simple dev system typically holds, it’s easy to overlook the requirement to move on to subsequent pages.

Part of each request identifies which page number you want to retrieve, and how many records to retrieve at a time, as in the following example:
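A sketch of what that request looks like with the SDK, using the PageInfo property of a QueryExpression (the entity and columns are illustrative):

```csharp
using Microsoft.Xrm.Sdk.Query;

var qry = new QueryExpression("email")
{
    ColumnSet = new ColumnSet("subject"),
    PageInfo = new PagingInfo
    {
        PageNumber = 1,   // pages are numbered from 1
        Count = 5000      // records per page - also the maximum
    }
};

var page = org.RetrieveMultiple(qry);
```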

The first page is number 1, not zero. 🤓 It’s also worth noting that the maximum page size is 5,000 records. You can ask for bigger pages and you won’t get an error, but you will only get 5,000 records back regardless.

If you need to go straight to a specific page you can just specify a higher page number, but if you’re moving from one page to the next it’s more efficient if you also use the PagingCookie option:
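Moving between pages with the cookie might look like this sketch: each response carries a PagingCookie and a MoreRecords flag, and you feed the cookie back into the next request:

```csharp
var results = org.RetrieveMultiple(qry);

while (results.MoreRecords)
{
    // Carry the cookie from this page into the request for the next one
    qry.PageInfo.PageNumber++;
    qry.PageInfo.PagingCookie = results.PagingCookie;

    results = org.RetrieveMultiple(qry);
}
```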

I’ve always just taken it as gospel from the documentation that using the PagingCookie is The Right Way™, but what difference does it actually make?

To check, I ran a quick test application to retrieve every email record in a test system (275,000 in total). With the paging cookie it took 1:31, and without it took 4:29, so it’s definitely worth one extra line of code for a 3x speed improvement!

If you’re displaying data to a user then you probably want to retrieve a single page and give the user the option to move through them one page at a time. Most of the code I write however interacts with data as a background operation and will always need to process all the records matched by a query, so instead of using RetrieveMultiple directly and having an outer loop to move between pages and an inner loop to process the records in a single page, I use a simple extension method to make things easier:
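A minimal sketch of what such a RetrieveAll extension method might look like, using an iterator to hide the page loop (the class and method names are my own choice):

```csharp
using System.Collections.Generic;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class OrganizationServiceExtensions
{
    // Yields every record matched by the query, moving through the pages
    // with the paging cookie behind the scenes.
    public static IEnumerable<Entity> RetrieveAll(this IOrganizationService org, QueryExpression qry)
    {
        qry.PageInfo = new PagingInfo { PageNumber = 1, Count = 5000 };

        EntityCollection results;

        do
        {
            results = org.RetrieveMultiple(qry);

            foreach (var entity in results.Entities)
                yield return entity;

            qry.PageInfo.PageNumber++;
            qry.PageInfo.PagingCookie = results.PagingCookie;
        } while (results.MoreRecords);
    }
}
```

With that in place, the calling code simply becomes: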

var qry = new QueryExpression("account");
// TODO: Set up the criteria etc. of the query
foreach (var account in org.RetrieveAll(qry))
{
// TODO: Do something with the results
}

This makes it much clearer what the code is actually doing and hides the details of how the paging is happening in the background.

Since I’d just been doing some work with Web API queries I decided to take a look. Thankfully a lot of the code was already in place from the existing OData 2 query builder (Web API uses OData 4), but there were a few interesting quirks that needed working through. Of course, now the work’s done you hopefully won’t need to dig into this too much, you can just use FetchXML Builder and let it do it for you! And because Web API is used by the Common Data Service Flow connector List Records action this also forms the basis for generating the required parameter values to help connect your CRM data with Flow.

Although you now shouldn’t need to dive into the details of OData syntax in order to use Web API or Flow with this tool, for the interested among you:

Property Names

The most difficult part for me in hand-crafting Web API queries has been working out the property names to use. As far as I can tell, it’s not described anywhere in the Web API documentation, so there’s a chance I’ve got some of this wrong, but here goes:

Most property names are the same as the attribute logical names (not schema names as in OData 2). The logical name is shown in the customization screens and is all lower-case

Lookup properties (if you want to select the guid of the related record) use the format _logicalname_value. I believe this is to differentiate them from…

Navigation properties (if you want to join to the related record to get additional details) use the navigation property name defined in the relationship metadata. This is largely the same as the schema name for the relationship, but I don’t believe this is required to be the same. In particular, lookup attributes that can target multiple different entity types (such as Customer, Owner and Regarding attributes) will have multiple different relationships, so remember it’s the relationship name, not the lookup attribute name, that’s required here

Query functions (more on these below) use the attribute logical names rather than the property names, even for lookup attributes

Filtering

OData 4 supports only a few simple filter operators:

eq (=)

ne (<>)

lt (<)

le (<=)

gt (>)

ge (>=)

On top of these operators, you can also use some standard functions to compare string values:

contains

startswith

endswith
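For example, combining a comparison operator with one of these string functions (the entity and attribute names here are just for illustration):

```python
# Hypothetical example: active accounts whose name contains "data8".
# statecode eq 0 uses a plain comparison operator; contains() is one of
# the standard OData 4 string functions.
filter_expr = "statecode eq 0 and contains(name,'data8')"
query = "accounts?$select=name&$filter=" + filter_expr
print(query)
# accounts?$select=name&$filter=statecode eq 0 and contains(name,'data8')
```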

However, FetchXML defines a huge number of additional comparisons, such as “Equals Current User”, “Last X Financial Periods” etc. These are translated into custom query functions and can be used as $filter=Microsoft.Dynamics.CRM.LastXFiscalPeriods(PropertyName='createdon',PropertyValue=5)

For these functions, any lookup attributes should be identified by their logical names, not with the _logicalname_value syntax. This is another example of where we need to use trial and error to work things out – the documentation talks about property names but trying to use that format gives an error.
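Based on the pattern above, a small helper for building these custom query function clauses might look like the following Python sketch (the function and attribute names are just the example from above, and the quoting rules are inferred from trial and error rather than documentation):

```python
def query_function(name, **params):
    """Build an OData $filter clause for a Dynamics custom query function.

    Note: lookup attributes passed as PropertyName should use their plain
    logical name here, NOT the _logicalname_value syntax.
    """
    args = ",".join(f"{k}='{v}'" if isinstance(v, str) else f"{k}={v}"
                    for k, v in params.items())
    return f"Microsoft.Dynamics.CRM.{name}({args})"

clause = query_function("LastXFiscalPeriods", PropertyName="createdon", PropertyValue=5)
print("$filter=" + clause)
# $filter=Microsoft.Dynamics.CRM.LastXFiscalPeriods(PropertyName='createdon',PropertyValue=5)
```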

Using Parent Related Entities

For “parent” related entities, i.e. records related to the main entity via an N:1 relationship, you can extract any of the values from the related record using the $expand query option in the format $expand=navigationproperty($select=property1,property2).

In pre-v9 versions there seems to be a bug where you cannot include a lookup property from the related entity: $expand=primarycontactid($select=_parentcustomerid_value) will give an error in v8.2, but works as expected in v9+.

Only the primary key field of the related entity can be used in filters. You can find accounts with a particular primary contact using $filter=primarycontactid/contactid eq <guid>, but you can’t find accounts with a primary contact that has a particular email address using $filter=primarycontactid/emailaddress1 eq '<email>' – this will give an error.

There’s no way to change the type of join being performed – it will always be an outer join. However, we can simulate an inner join by including a non-null filter on the primary key of the related record, e.g. $expand=primarycontactid($select=fullname)&$filter=primarycontactid/contactid ne null. This conversion is handled for you automatically as part of FetchXML Builder.
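The inner-join simulation described above can be sketched as a simple query-string builder (the navigation property and column names are just the examples from above):

```python
def expand_inner_join(nav_prop, primary_key, columns):
    """Build $expand + $filter options that simulate an inner join on an N:1
    relationship by requiring the related record's primary key to be non-null.
    Only the primary key of the related entity can be used in the filter."""
    expand = f"$expand={nav_prop}($select={','.join(columns)})"
    inner_filter = f"$filter={nav_prop}/{primary_key} ne null"
    return expand + "&" + inner_filter

print("accounts?" + expand_inner_join("primarycontactid", "contactid", ["fullname"]))
# accounts?$expand=primarycontactid($select=fullname)&$filter=primarycontactid/contactid ne null
```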

Using Child Related Entities

“Child” related entities, i.e. records related to the main entity via a 1:N relationship can be included using the same $expand syntax, e.g. $expand=contact_customer_accounts($select=fullname) to get the list of contacts from an account.

If you look at the results of the query, however, you probably won’t actually see the contacts and their names.

You might expect the list of contacts to be in the contact_customer_accounts property, but this will normally be an empty array. Instead, you need to use the URL given in the contact_customer_accounts@odata.nextLink property to actually retrieve the related records. Presumably this has been done for performance reasons.

The documentation states that you will need to use the @odata.nextLink property to retrieve the related records. In my testing however, there do seem to be some instances where the details of the records are actually returned in the main body of the first response, so make sure to check both!

Again it’s not possible to change the join type – it will always be an outer join. You can apply sorts and filters to the list of child entities, but not use the child entities to filter or sort the main entity list. Those sorts and filters are included in the same $expand option as the $select that lists the fields to be included from the child entities, but separated by semicolons instead of ampersands, e.g. $expand=contact_customer_accounts($select=fullname;$filter=firstname eq 'Mark';$orderby=lastname)
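Those semicolon-separated sub-options can be built up programmatically; here’s a quick Python sketch (entity and attribute names are just the examples from above):

```python
def expand_child(nav_prop, select=None, filter=None, orderby=None):
    """Build a $expand option for a 1:N relationship. Sub-options that apply
    to the child records are separated by semicolons, not ampersands."""
    parts = []
    if select:
        parts.append("$select=" + ",".join(select))
    if filter:
        parts.append("$filter=" + filter)
    if orderby:
        parts.append("$orderby=" + orderby)
    return f"$expand={nav_prop}({';'.join(parts)})"

print(expand_child("contact_customer_accounts",
                   select=["fullname"],
                   filter="firstname eq 'Mark'",
                   orderby="lastname"))
# $expand=contact_customer_accounts($select=fullname;$filter=firstname eq 'Mark';$orderby=lastname)
```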

Date Value Oddities

Date values can be either in yyyy-MM-dd format, or yyyy-MM-ddTHH:mm:ss.ttttt+ZZ:ZZ format. However, there seem to be some problems with how time zones are handled – a time zone behind UTC (i.e. starting with a -) seems to be handled correctly, but a time zone ahead of UTC (i.e. starting with a +) seems to trigger an error message. Playing safe and just using Z as the time zone (i.e. UTC) works consistently.
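Given those time zone problems, the safest approach is to always convert to UTC and use the Z suffix. A small sketch of that idea:

```python
from datetime import datetime, timezone, timedelta

def odata_datetime(dt):
    """Format a datetime for a Web API filter, converting to UTC and using the
    unambiguous Z suffix to sidestep the problems with +ZZ:ZZ offsets."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# A local time two hours ahead of UTC becomes a plain UTC value with Z
local = datetime(2019, 6, 1, 12, 0, 0, tzinfo=timezone(timedelta(hours=2)))
print(odata_datetime(local))  # 2019-06-01T10:00:00Z
```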

String Value Escaping

String values should be enclosed in single quotes, which immediately raises the question: what about values that contain single quotes? What if I want to search for a value of “Mark’s Opportunity”?

The simple answer is that any single quotes should be doubled up, so the query would become $filter=topic eq 'Mark''s Opportunity'
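That escaping rule is easy to wrap in a tiny helper:

```python
def odata_string(value):
    """Quote a string literal for an OData filter, doubling any embedded
    single quotes."""
    return "'" + value.replace("'", "''") + "'"

print("$filter=topic eq " + odata_string("Mark's Opportunity"))
# $filter=topic eq 'Mark''s Opportunity'
```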

Aggregates & Grouping

Aggregate queries are handled through the $apply query option. This doesn’t seem to be supported prior to v9 though (again, I can’t find any documentation detailing this for sure).

The count aggregate option isn’t supported by OData 4, only a distinct count. However, since primary key values are unique, using the primary key column for your distinct count still gives you the total number of records.

Grouping can be combined with filtering, but does not appear to work with sorting. The OData 4 specification indicates that it should be possible to combine aggregation with sorting, but I haven’t been able to get this to work. It should also be possible to use $select to specify the columns you want to return after the aggregation, but again, I haven’t been able to get this to work with Web API.
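As an illustration of the distinct-count trick with the $apply syntax (the entity set and primary key names here are just examples):

```python
def apply_count(entity_set, primary_key, alias="count"):
    """Build a $apply option that emulates a plain record count by taking a
    distinct count of the primary key column - since primary keys are unique,
    the distinct count equals the total number of records."""
    return f"{entity_set}?$apply=aggregate({primary_key} with countdistinct as {alias})"

print(apply_count("accounts", "accountid"))
# accounts?$apply=aggregate(accountid with countdistinct as count)
```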

Breaking FetchXML Builder!

Finally, a quick reminder to myself – XrmToolBox and FetchXML Builder are used by a lot of people, so don’t break them by not thinking about how changes will work on different versions (sorry Jonas)!

If you’re a Microsoft Dynamics 365 (hereafter called CRM, because I’m too old to change) admin or developer, you’ve probably used XrmToolBox. If not, go and check it out now. I’ll wait.

Now, the first thing you’ll do when you start using XrmToolBox is set up a connection to your CRM instance. If you work at a partner organisation, you might end up adding connections to lots of different instances for all your customers. How do you manage this efficiently and securely across your team?

Although I’ve been using XrmToolBox for several years, one feature I’ve never appreciated before is the option to have multiple lists of connections.

XrmToolBox Connections Manager

By using this feature you can have a shared file containing the connection details that everyone else in your team can reference. This gives everyone a consistent experience, makes it quicker to get new team members up to speed, and makes sure that everyone immediately has access to any new environment that gets added.

Connection Groups

Once a new file of connections has been added it will show up as a separate group of connections within the status bar at the bottom of XrmToolBox ready for you to connect to any of the environments in the new list.

I recently contributed a change to the MscrmTools.Xrm.Connection package used by XrmToolBox that allows these connection lists to be loaded from an HTTP URL as well as from a file, and I was very pleased to see it included in the recent 1.2019.2.32 release of XrmToolBox so everyone can benefit.

The important difference this makes is that you can generate a connection list dynamically based on whatever criteria you want to implement. As an example, at Data8 we have the option for people to register the details of their CRM environments on our website and we can now use a simple internal site to generate a connection file appropriate for the user that’s requesting it, making sure each of our analysts can immediately access the connections that are relevant to them.

To generate a connection list dynamically you need to dive a little into the underlying XML format of the connection list. Assuming that CRM developers are probably also .NET developers, I’ve put together a few simple C# classes that correspond to the format that XrmToolBox expects, and an example ASP.NET MVC action that generates an XML document using these classes with the XmlSerializer class.

Now that we’ve got those helper classes in place we can add a controller to an ASP.NET MVC app to generate a connection list XML document that XrmToolBox can use:
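Since the C# helper classes aren’t reproduced here, here’s a language-agnostic sketch of the same idea using Python’s standard library. Be aware that the element names below are illustrative placeholders rather than the exact schema XrmToolBox expects – the simplest way to get the real format is to save a connection file from XrmToolBox itself and inspect it:

```python
import xml.etree.ElementTree as ET

# Placeholder element names - inspect a real XrmToolBox connection file
# for the exact schema before using this for real.
def build_connection_list(name, connections):
    """Serialize a list of connection dicts into an XML document that a
    dynamic endpoint could return to XrmToolBox."""
    root = ET.Element("CrmConnections")
    ET.SubElement(root, "Name").text = name
    conns = ET.SubElement(root, "Connections")
    for conn in connections:
        detail = ET.SubElement(conns, "ConnectionDetail")
        for key, value in conn.items():
            ET.SubElement(detail, key).text = value
    return ET.tostring(root, encoding="unicode")

xml_doc = build_connection_list("Customer Environments", [
    {"ConnectionName": "Contoso Dev", "ServerName": "contoso-dev.crm.dynamics.com"},
])
print(xml_doc)
```

The real endpoint would filter the list per requesting user before serializing it, as described above.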

Of course, the details of how to actually populate the list are up to you – read them from a database, filter them based on the current user, at this point the world is your oyster!

Once you’ve got your website written, deploy it to a local web server. XrmToolBox will use Windows authentication where necessary, so you can use that to easily lock down access to your site.

Finally, add the URL to XrmToolBox. Go to the Connection Manager screen, select <Add an existing connection file>, and enter the URL of your internal site. You should see the list of connections available straight away in XrmToolBox.

A few limitations of using a URL for a connection list instead of a file:

If you change the list, it won’t be automatically reloaded until you restart XrmToolBox

As I wrote about recently, I’ve started using Azure Batch to run OpenAPS Autotune jobs for AutotuneWeb. The other day, however, I started a job of my own and got a notification that my job was 48th in the queue. Either the service had suddenly got really popular, or something had gone wrong.

A quick look at my Azure Portal showed me that the Azure Batch pool was stuck trying to automatically scale up to 20 VMs, but it was getting a timeout error:

Pool resize timeout error

I initially wondered if it was a problem due to trying to go straight from 0 to 20 nodes, so I tried a few alternative scale formulas to just use a single dedicated node instead, but I still got the same problem.

I haven’t found a way to get detailed debugging information out at this point, but I wondered if there was some problem with the VM image I was using. The previous night I’d tidied up various unused resources from my previous implementations, so I thought I might have deleted something important.

Next I tried creating a single VM manually from the same image that the pool was using. That worked fine, so then I created a new image from that VM and created a new pool using the new image. The new pool using that image could scale just fine, so I tried creating a third pool using the original image. That one had the same problem, so there was definitely something up with the original VM image I was using.

To get things working, I moved all the pending jobs to use the pool with the new image while I tried to figure out what was up with the original one.

To figure it out without breaking the live system again, I created another new VM called testvm with the same image, then captured the image of it as testvm-image. With the VM now deleted there are still a bunch of resources left lying around, which are what I’d tried to tidy up the previous night:

VM Resources

Although the virtual machine itself has been deleted, there are still 4 resources left apparently unused:

Public IP address

Network security group

Network interface

Disk

So in the best traditions of trial and error I created a new Azure Batch pool using the new image, then tried deleting each of these resources in turn and checking if the pool could still resize afterwards.

Deleting the public IP address resource first, because it was at the top of the list, gave an error because it was linked to the network interface resource. That made sense, so I deleted the network interface instead. Everything still worked fine.

I then deleted first the public IP address, then the network security group. Both times, the pool still resized correctly, so surely it was the disk resource that would trigger the problem…

Failed Pool

Yes! Now that I’d deleted the disk resource from my imaged VM, I could no longer resize the pool.

Now, I’m not sure quite what is causing the problem here. The image itself is still usable for creating VMs individually, apparently just not through Azure Batch. Presumably all the data required to create a VM is wrapped up in the Image resource itself, but Azure Batch must have some additional dependency somewhere. I’d love to understand more if anyone can explain what’s going on here!

Displays the converted profile and the command line required to run it to the user

At this point we’ve covered the initial requirements to simplify access to Autotune. The remaining step is to actually run Autotune automatically and email the results back. To handle this we need a few more services:

The SQL Database stores details of each Autotune job that is requested, including a unique identifier and the email address to send the results back to.

The Storage Account holds the Autotune-format profile information and the results produced by Autotune.

The Batch Account processes Autotune using the profile stored in the Storage Account, and places the results in the same Storage Account.

When Autotune has finished running, the App Service is notified by the Batch job. The MVC application then retrieves the Autotune results from the Storage Account and reformats them into a more easily readable format using the same Razor engine used by the MVC views. The email address to send them to is retrieved from the SQL Database, and the email is finally sent using SendGrid.

Because Autotune is run using Azure Batch, the service can scale quickly based on the number of jobs that users are requesting at any one time. An initial version that used the single VM I had originally set up to run only my own Autotune commands was quickly overwhelmed when more than 2 or 3 jobs were queued at once. By switching to Azure Batch I eliminated this problem and also reduced cost, as I could use low-priority VMs for 20% of the cost of a regular VM. The scaling is handled automatically by a formula on the Batch Account pool that increases the number of VMs when there are more jobs waiting to run, and removes VMs – all the way down to zero – when there is nothing waiting.

Want to run it yourself?

If you have a Nightscout site that you want to run Autotune against, you’re welcome to use AutotuneWeb to do it for you right now.

All the code for AutotuneWeb is available on GitHub if you want to run everything yourself or just see more detail about how any of it works. PRs welcome!

What’s Next?

I used a SQL database originally because that’s the technology I’m most familiar with. However, it’s probably overkill for what is required by this application and could be more efficiently replaced by a Table in the Storage Account. This is my first “cloud-native” application, so please let me know if there’s anything more I could do to make it more “cloudy”!

Three years ago our first son was diagnosed with Type 1 Diabetes (T1D). Over the next few months we quickly got used to the process of calculating the amount of insulin required for meals and high blood sugar, and adjusting those calculations on a trial-and-error basis. As a software developer I felt sure there should be a better way.

Thanks to following Scott Hanselman on Twitter I stumbled across some mentions of the OpenAPS project – an open-source artificial pancreas system. Simply amazing stuff! The basics of the system are:

A CGM system that monitors your blood sugar and sends a reading every 5 minutes to…

A processing system running on a Raspberry Pi, Intel Edison or similar that runs all the same calculations as we were doing manually, and sending the results to…

An insulin pump that delivers (or stops delivering, depending on the results of the calculations) the required amount of insulin

This seemed great, and as soon as we could get our hands on a compatible insulin pump we got up and running with the AndroidAPS system that uses the same core logic but simplifies the physical setup by using a standard Android phone as the “brain” and bluetooth communications with the CGM and pump.

However, there’s no clever machine-learning-y stuff going on with those calculations – it’s still essentially performing the same calculations as we always were, just automatically and reliably. While that’s definitely a step forward, if the ratios (the insulin:carb ratio that determines how much insulin should be required when eating, and the insulin sensitivity factor (ISF) that determines how much insulin should be required to bring down a high blood sugar level) and basal rates (the amount of insulin continuously delivered in the background) are incorrect, then the results aren’t going to be as good as we’d hope for.

The process of refining what those settings should be normally seems to be done simply as trial-and-error, which can get very frustrating! Luckily, one part of the OpenAPS project is Autotune, a system that can run on historical data covering blood sugar, carbs and insulin and recommend changes to those ratios and basal rates based on hard data. Definitely much more appealing to my analytical nature!

The problem is, as a Windows and AndroidAPS user, I found it pretty difficult to use. The details of the ratios and basal rates currently in use weren’t stored in the format that Autotune was expecting, and I had to go back to the documentation each time to work out the format it needed and the details of the command line I had to run. I also had to run it on a Linux VM, just making it that bit more difficult and meaning I only ran it infrequently rather than as a regular process.

To try and make the process simpler for me, and hopefully for others in a similar situation, I started the AutotuneWeb project to:

Automatically convert the profile information stored in Nightscout to the format required by Autotune

Give me the command line details to run, and even,

Run the command for me on my Linux VM and email me the results

I’ve been through a few iterations on how to get this to work, so in Part 2 I’ll run through the various parts that make up this system and how I’ve currently got it running in Microsoft Azure for minimum cost.