Yet another IT blog. Focusing mostly on MS CRM.

October 2, 2015

This is the fourth post in an 8 part series describing the most common errors made by Dynamics CRM developers. It will cover:

Issue #4 - No or invalid plugin filters

Whenever you create a plugin on an update event, an important decision is which attribute changes the plugin should fire on. It should be obvious that, given the cost of executing plugins, running them should be avoided if we know beforehand that there will be nothing to do.

If, for example, the logic of our plugin only depends on changes in Field1, there is no point in running it when only the value of Field2 has changed.

Trigger filtering is part of the plugin step registration and can be seen most directly in the Plugin Registration Tool. Of course, some CRM development frameworks allow this to be defined at a higher level (for example, as part of a Visual Studio project), but the principle is the same.

Example of plugin registration:

In this example you can see that the plugin will only start executing when the owner of a Lead changes. Not only does this save resources by avoiding unnecessary executions of the code when other attributes of the Lead change; even more importantly, it immediately gives us a good idea of what the plugin is doing. If we also notice that it is executed post-operation, we have good reason to believe it updates other records based on the changing owner of the Lead (a good guess would be re-assigning them to other users). All this we can see just from the registration, without looking into the code.

This is another reason, besides performance, why correct trigger filters are a good habit. We need to remember that CRM systems have a long life span and in many cases will be taken over by other developers or even other companies. Let's make each other's lives easier.

Of course there are exceptions - like a plugin that does duplicate detection or auditing. But in the remaining 99.99% of cases, only attributes that actually create a need for calculation should trigger the plugin step.

How to spot this?

A plugin update step registered on all attribute changes, with no apparent reason.
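The filter itself belongs in the step registration, but a cheap guard inside the plugin protects against a misconfigured step. A minimal sketch, assuming a plugin that should only react to changes of the Lead's owner (the class name is a placeholder):

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Sketch only: a defensive guard inside a plugin whose step *should*
// have been registered with "ownerid" as its filtering attribute.
// The real filter belongs in the step registration; this check merely
// makes the plugin cheap if someone registers it on all attributes.
public class LeadOwnerChangedPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (!(context.InputParameters["Target"] is Entity target))
            return;

        // Bail out early if the attribute we care about is not in the update.
        if (!target.Contains("ownerid"))
            return;

        // ... actual logic reacting to the owner change goes here ...
    }
}
```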

September 25, 2015

This is the third post in an 8 part series describing the most common errors made by Dynamics CRM developers. It will cover:

Issue #3 - Querying for all columns

This issue is very obvious, but unfortunately still quite common. Whoever has any SQL experience knows that SELECT * is (almost) never a good idea. Why query for data you don't need?
Exactly the same rule applies to CRM. Do not retrieve data you don't need. If you need 3 fields from an entity to perform some calculation, retrieve those 3 fields. Not one more. The greater the number of columns queried, the slower the query will perform. How much slower? Significantly.
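As a minimal sketch (the entity and attribute names are just placeholders), this is the CRM equivalent of avoiding SELECT *: list the columns explicitly in the ColumnSet instead of asking for all of them.

```csharp
using Microsoft.Xrm.Sdk.Query;

// Bad: the CRM equivalent of SELECT * - retrieves every column.
var wasteful = new QueryExpression("contact")
{
    ColumnSet = new ColumnSet(true) // AllColumns
};

// Good: retrieve only the fields the calculation actually needs.
var lean = new QueryExpression("contact")
{
    ColumnSet = new ColumnSet("firstname", "lastname", "birthdate")
};
lean.Criteria.AddCondition("statecode", ConditionOperator.Equal, 0);

// var results = service.RetrieveMultiple(lean);
```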

Each successive option is more readable and less error prone. Why use bloated XML when you can build a QueryExpression? Why use a QueryExpression when you can use LINQ, which is more readable, type safe, and has IntelliSense support?

Of course LINQ is not always the answer. Although possible, it gets quite messy with dynamic queries. If you don't know the attribute names at build time, QueryExpression seems to be the best choice. Another reason to use QueryExpression is wanting to get past the 5000 (or otherwise configured) query limit by using paging.
When to use FetchXML? I know only one reason you would want to do that - when building aggregate queries, described here - https://msdn.microsoft.com/en-us/library/gg309565.aspx. If you want to retrieve a count of records or a total of some field, use FetchXML aggregates. It will perform much better, because the aggregation is done directly in SQL instead of retrieving all the data and calculating it later. Any other reason? I cannot think of any.
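A minimal sketch of such an aggregate query, assuming `service` is an IOrganizationService (the alias name is a placeholder):

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Count all contacts directly in SQL via a FetchXML aggregate,
// instead of retrieving every record and counting client-side.
var fetchXml = @"
    <fetch aggregate='true'>
      <entity name='contact'>
        <attribute name='contactid' alias='contactcount' aggregate='count' />
      </entity>
    </fetch>";

EntityCollection result = service.RetrieveMultiple(new FetchExpression(fetchXml));

// The aggregate comes back wrapped in an AliasedValue.
var count = (int)((AliasedValue)result.Entities[0]["contactcount"]).Value;
```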

To summarize, the following rules should be used:

FetchXML should only be used when utilizing aggregate queries, else:

QueryExpression should be used when:

The query has dynamic attribute names

Paging is required, in most cases when the expected number of results is greater than the limit (5000 by default).
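The paging case can be sketched like this, again assuming `service` is an IOrganizationService:

```csharp
using System.Collections.Generic;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Page through more than 5000 records using QueryExpression paging.
var query = new QueryExpression("contact")
{
    ColumnSet = new ColumnSet("fullname"),
    PageInfo = new PagingInfo { Count = 5000, PageNumber = 1 }
};

var allRecords = new List<Entity>();
EntityCollection page;
do
{
    page = service.RetrieveMultiple(query);
    allRecords.AddRange(page.Entities);

    // Move to the next page; the cookie tells CRM where the last page ended.
    query.PageInfo.PageNumber++;
    query.PageInfo.PagingCookie = page.PagingCookie;
} while (page.MoreRecords);
```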

September 22, 2015

Having spent some time working as a CRM-focused developer, I often see the same development issues reappearing again and again. The root of the problem is that CRM programming is somewhat special, and previous experience from working with SQL databases or web development, although very valuable and desirable, can cause certain aspects of the code to be developed not exactly as they should be.
In this series of posts I’ll try to summarize the most common mistakes or misconceptions I often see. As a matter of fact, because of my previous SQL and Web experience, I made some of them myself when starting my work with CRM several years ago.
To keep this readable as a blog, I have decided to split it up into a series of posts. The subjects I intend to cover are:

Issue #1 – Not utilizing pre-event plugins

This is probably the most common issue I see, and also the one causing the most problems when it comes to performance.

The simple rule is: if you want to update the record that triggered the plugin, update the values directly in the target entity, using a pre-operation plugin. Do not use the .Update() method of the CRM web service.
A good example would be filling in a “full name” based on the first and last names. There are of course several ways of doing this, but let’s assume we use a plugin (which is probably also the best choice – see Issue #8).

Let’s start with a short reminder of what the Dynamics CRM plugin execution pipeline looks like:

Fig 1 – Dynamics CRM plugin execution pipeline

As we can see, it starts with an event, like a certain field being updated or a record being created.
Next we have the place to register pre-validation plugins. Pre-validation means that even if we, for example, decide to limit an integer field to the 0-100 range and a user types in 1000, this plugin will still get executed. We can correct the value in code, without the user seeing an error message.
Right after the pre-validation plugins, the platform performs the validation, throwing errors when values are not in the expected range.

Now the pre-operation plugins get executed, and this is where it gets interesting. We are still before the core database operation, so nothing has been stored yet. On the other hand we have access to the data that will be passed over to SQL and can freely modify it. This is the place where all changes to the current record should be done. Whatever we change here will modify the state of the object passed to SQL.

In the above example we fill in the full name. See how the value we filled in is then later passed to SQL and stored in the DB. This is the correct way of doing it. No further plugins will be triggered.

The anti-pattern looks like this:

Fig 2 – Update anti-pattern

What happens here is that we execute a completely unnecessary update, causing the whole pipeline to execute twice instead of once. In the worst-case scenario we have set up our plugins to trigger on changes to all attributes and then check the execution depth to avoid infinite loops… That is a definite no-no.

Some code examples

Let’s say we want to have a plugin that changes the last name of a contact to upper case. This should be performed on each update of the last name.
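A minimal sketch of such a plugin, registered as a pre-operation update step on the contact entity with lastname as the filtering attribute (the class name is a placeholder):

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Pre-operation update plugin: upper-cases the contact's last name
// directly in the target entity. No service.Update() call is needed -
// whatever we change here is what gets written to the database.
public class UpperCaseLastNamePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (!(context.InputParameters["Target"] is Entity target))
            return;

        var lastName = target.GetAttributeValue<string>("lastname");
        if (!string.IsNullOrEmpty(lastName))
        {
            // Modify the target in place; the platform stores this value.
            target["lastname"] = lastName.ToUpperInvariant();
        }
    }
}
```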

November 7, 2014

When trying to register a plugin assembly containing custom workflow activities to CRM Online using the Plugin Registration Tool from the 2013 SDK, the custom workflow activities were not visible - only the plugins.

I tried changing the assembly's major version and the key used to sign the assembly - no luck.

August 28, 2014

In this post I will try to describe how to properly change, inside a plugin, the attribute values of the record that triggered the create or update operation. From time to time I stumble upon code that treats the context record (the one on which a create or update operation triggered a plugin) like any other record, which is wrong.

How it should not be done:

A plugin step is registered on the create event of entity E.

Record A of entity E is created.

The registered create plugin step fires.

(…) Some business logic (…)

Record A is updated by calling the organization service .Update() method.

Or even worse:

A plugin step is registered on the update event of entity E.

Record A of entity E is updated.

The registered update plugin step fires.

(…) Some business logic (…)

Record A is updated by calling the organization service .Update() method.

Inside the plugin the .Depth property of the execution context is checked to avoid infinite loops.

What is wrong in both these scenarios is that an Update request is used to update the context record. This causes:

Unnecessary load on the CRM application and database.

A risk of infinite loops in plugins.

Possible transactional errors and performance issues, which could lead to plugins failing.

The above approach might be used by analogy with SQL databases. But the CRM service is not a SQL database, and requires a different approach.

The proper way of doing this is to register the plugin step (create or update) in the pre-event, inside-transaction phase (20) and modify the attributes directly in the target entity. All changes made to entity attributes in the “20” phase will be stored, without any unnecessary round trips or service calls.

This is a simple example showing a plugin that sets new_attribute3 to a string concatenation of new_attribute1 and new_attribute2. It is of course simplified - it does not validate the values, check for nulls, etc. - but it proves the point.
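A sketch of that plugin, using the attribute names from the description (for an update step, note that unchanged attributes may be absent from the target, so a production version would fall back to a pre-image):

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Pre-operation (stage 20) plugin: concatenates new_attribute1 and
// new_attribute2 into new_attribute3 directly on the target entity.
public class ConcatenateAttributesPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (!(context.InputParameters["Target"] is Entity target))
            return;

        // Simplified: no null checks or pre-image fallback.
        target["new_attribute3"] =
            target.GetAttributeValue<string>("new_attribute1") +
            target.GetAttributeValue<string>("new_attribute2");
    }
}
```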

If this code is registered to execute in the pre-operation (20) phase, the value of new_attribute3 will be updated without any additional operations. This is because in phase 20 it modifies the record before CRM performs the core database operation (30). So anything we change here will be what later gets pushed to SQL.