David Jennaway - Microsoft Dynamics CRM

Friday, 27 June 2014

The CRM SDK describes the main differences between plugin stages here. However, there are some additional differences between the pre-validation and pre-operation stages that are not documented.

Compound Operations
The CRM SDK includes some compound operations that affect more than one entity. One example is the QualifyLead message, which can update (or create) the lead, contact, account and opportunity entities. With compound operations, the pre-validation event fires only once, on the original message (QualifyLead in this case) whereas the pre-operation event fires for each operation.
You do not get the pre-validation event for the individual operations. A key consequence of this is that if, for example, you register a plugin on pre-validation of Create for the account entity, it will not fire if an account is created via QualifyLead. However, a plugin on the pre-operation of Create for the account entity will fire if an account is created via QualifyLead.
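As a hedged illustration of this behaviour (the plugin shell and the parent-context check below are my own sketch, not code from the SDK documentation), a pre-operation Create plugin on the account entity can detect that it is running inside a QualifyLead compound operation by walking the parent context chain:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Sketch: a pre-operation Create plugin on account that detects whether the
// create originated from a QualifyLead compound operation.
public class AccountCreatePreOperation : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // Walk up the parent contexts; for a compound operation the original
        // message (QualifyLead here) appears as an ancestor context.
        bool fromQualifyLead = false;
        for (var parent = context.ParentContext; parent != null; parent = parent.ParentContext)
        {
            if (parent.MessageName == "QualifyLead")
            {
                fromQualifyLead = true;
                break;
            }
        }

        // Branch here if the account needs different handling when created as
        // part of lead qualification. A pre-validation registration would
        // never reach this point for accounts created via QualifyLead.
        if (fromQualifyLead)
        {
            // ...
        }
    }
}
```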

Activities and Activity Parties
I've posted about this before; however, it's worth including it in this context. When you create an activity, there will be an operation for the main activity entity, and separate operations to create activityparty records for any attribute of type partylist (e.g. the sender or recipient). The data for the activityparty appears to be evaluated within the overall validation - i.e. before the pre-operation stage. The key consequence is that any changes made to the Target InputParameter that would affect an activityparty will only be picked up if made in the pre-validation stage for the activity entity.
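To sketch the consequence in code (the contact id is purely illustrative; "to" and "activityparty" are the standard email attribute and entity names), a plugin on pre-validation of Create for the email entity can still alter the "to" party list and have the change reflected in the generated activityparty records; the same code registered pre-operation would run too late:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Sketch: registered on pre-validation of Create for the email entity, this
// adds an extra recipient to the "to" party list before the activityparty
// data is evaluated. The contact id below is illustrative only.
public class EmailCreatePreValidation : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        if (!context.InputParameters.Contains("Target"))
            return;

        var email = (Entity)context.InputParameters["Target"];
        var to = email.GetAttributeValue<EntityCollection>("to") ?? new EntityCollection();

        var extraParty = new Entity("activityparty");
        extraParty["partyid"] = new EntityReference("contact",
            new Guid("11111111-1111-1111-1111-111111111111")); // illustrative id

        to.Entities.Add(extraParty);
        email["to"] = to;
    }
}
```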

Monday, 7 April 2014

The CRM SDK messages CreateRequest and UpdateRequest support a configuration parameter "SuppressDuplicateDetection" that provides control over whether duplicate detection rules will be applied - see http://msdn.microsoft.com/en-us/library/hh210213(v=crm.6).aspx. However, this parameter is not available through other programmatic means (such as the REST endpoint) to create or update records.

To work around this, I created a plugin that sets the "SuppressDuplicateDetection" parameter based on the value of a boolean attribute that can be included in the Entity instance that is created or updated.
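A minimal sketch of the approach (the attribute name new_suppressduplicatedetection is illustrative, not the original): registered on pre-validation of Create and Update, the plugin copies the boolean attribute into the SuppressDuplicateDetection parameter, then removes the attribute so it is not written to the record:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Sketch: copy a boolean attribute from the Target entity into the
// SuppressDuplicateDetection parameter, then strip the attribute.
public class SuppressDuplicateDetectionPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        if (!context.InputParameters.Contains("Target"))
            return;

        var target = (Entity)context.InputParameters["Target"];
        if (target.Contains("new_suppressduplicatedetection"))
        {
            context.InputParameters["SuppressDuplicateDetection"] =
                target.GetAttributeValue<bool>("new_suppressduplicatedetection");

            // Remove the flag so it is not treated as a real attribute
            target.Attributes.Remove("new_suppressduplicatedetection");
        }
    }
}
```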

I created this because I had a need to apply duplicate detection rules to entities created via the REST endpoint in CRM 2011.

It may be that this plugin could also be used as a way to revert the CRM 2013 behaviour back to that of CRM 2011, to allow duplicate detection rules to fire on CRM forms. However, I've yet to test this fully; if anybody wants to test it, feel free to do so and make comments on this post. Otherwise, I'll probably update this post if I find anything useful with the CRM 2013 interface.

Friday, 13 December 2013

After a recent upgrade to Crm 2013 of an organisation that had been a Crm 4.0 organisation, there were client script errors when navigating to the Case or Queue entities. The underlying cause was some SiteMap entries that referenced Crm 4.0 urls; these were being redirected to new urls, but seemed to be missing some elements on the QueryString.

These are the only entries I’ve found so far with problems. I think the entry for Queues is a one-off, but the entry for cases is notable in that the original (Crm 4.0) SiteMap entry included a Url attribute, whereas entries for most other entities did not include the Url attribute. So, it’s possible that other entries that include both the Entity and Url attributes could have the same issue.

Although annoying at the time, I don’t see this as a major issue, as reviewing the SiteMap will be one of the standard tasks we do for any upgrade to Crm 2013. This is due to the change in navigation layout, which means the overall navigation structure deserves a rethink to make best use of the new layout. When doing this, we find it is best to start with a new, clean SiteMap and edit this into a customer-specific structure for Crm 2013, rather than trying to edit an existing structure. It’s also worth noting that a few of the default permissions have changed (spot the difference above for the privilege to see the Queues SubArea), and it’s worth paying attention to these at upgrade time for future consistency.

Monday, 9 December 2013

This post should only affect a small fraction of Crm 2013 users, but if you do have a CRM organisation that was first created in Crm 1.2, and upgraded through the versions to Crm 2013, you may get an “unexpected error” message when opening account, contact or lead records that had been created in Crm 1.2 (I told you this wouldn’t affect many people, but we do still have, and interact with, customers from Crm 1.2 days).

The cause of this is the ‘merged’ attribute. Record merging (for accounts, contacts and leads) was introduced in Crm 3.0, and a ‘merged’ attribute was created to track if a record had been merged. For all records created in Crm 3.0 and higher, this attribute was set to false, but for records created in Crm 1.2, the attribute was null.

This causes a problem in the RTM build of Crm 2013. If you enable tracing, you will see an error like the following:

Crm Exception: Message: An unexpected error occurred., ErrorCode: -2147220970, InnerException: System.NullReferenceException: Object reference not set to an instance of an object.

If you’ve already upgraded, then the quick, but unsupported, fix is via direct SQL statements that set the merged attribute to false (see below).
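For illustration, the unsupported SQL would look something like the following (table and column names assume the standard base tables; this modifies the organisation database directly, so take a backup first and treat it as entirely unsupported):

```sql
-- Set the null 'merged' values left over from Crm 1.2 to false
UPDATE AccountBase SET Merged = 0 WHERE Merged IS NULL
UPDATE ContactBase SET Merged = 0 WHERE Merged IS NULL
UPDATE LeadBase    SET Merged = 0 WHERE Merged IS NULL
```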

If you have not yet upgraded, you can merge each affected record in turn with a dummy record, which will set the merged attribute.

You can automate the merge process programmatically by submitting a merge request for each record, and passing appropriate parameters. I’m not sure if this will work after the upgrade, or only before, as I’ve not tried it.
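A sketch of that automation (untested, as noted above; shown for account, with contact and lead being analogous) using the SDK's MergeRequest message:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Crm.Sdk.Messages;

// Sketch: merge each affected record with a throwaway dummy record, which
// causes the platform to set the merged attribute on the target.
public static class MergedAttributeFix
{
    public static void MergeWithDummy(IOrganizationService service, Guid affectedAccountId)
    {
        // Create a dummy subordinate record to merge into the affected one
        var dummy = new Entity("account");
        dummy["name"] = "Merge dummy";
        Guid dummyId = service.Create(dummy);

        service.Execute(new MergeRequest
        {
            Target = new EntityReference("account", affectedAccountId),
            SubordinateId = dummyId,
            UpdateContent = new Entity("account"), // nothing to copy across
            PerformParentingChecks = false
        });

        // The merge deactivates the subordinate; tidy it up afterwards
        service.Delete("account", dummyId);
    }
}
```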

Unfortunately (but unsurprisingly), the merged attribute is not ValidForUpdate, so you can’t use a simple, supported update request to set the attribute.

Friday, 6 December 2013

So, Dynamics Crm 2013 is here, and there’s lots to say about the new UI, and the new features. But, many others are talking about these, so I thought I’d start with what may seem to be an obscure technical change, but it’s one that I welcome, and which is a significant contribution to the stability and performance of Crm 2013.

With Crm 3.0, Microsoft changed the underlying table structure so that any customisable entity was split across two tables: a base table that contained all system attributes, and an extensionbase table for custom attributes. For example, there was an accountbase and an accountextensionbase table. Each table used the entity’s key as the primary key, and the extensionbase table also had a foreign key constraint from its primary key field to the primary key in the base table. Each entity had a SQL view that joined the data from these tables to make them appear as one table to the platform. As I understand it, the main reason for this design was to allow for more custom attributes, as SQL Server had a row-size limit of 8060 bytes, and some of the system attributes were already using ~6000 bytes.

The same table design was retained in Crm 4.0 and Crm 2011. However, Crm 2011 introduced a significant change to the plugin execution pipeline, which allowed custom plugins to execute within the original SQL transaction. This was a very welcome change that provided greater extensibility. However, it did mean that the duration of SQL transactions could be extended, which means that SQL locks may be held for longer, and hence potentially more locking contention between transactions. In very occasional circumstances, a combination of certain plugin patterns, the design of the base and extensionbase tables, and heavy concurrent use could give rise to deadlocks (see below for an example).

Given this, I’m very glad that the product team retained the facility to have plugins execute within the original transaction (then again, it would be hard to take this facility away from us). It wouldn’t be realistic to ask customers to reduce concurrent usage of CRM, so the only way to reduce the potential deadlock issue was to address the design of the base and extensionbase tables. From my investigations (sorry, but I actually quite like investigating SQL locking behaviour), a substantial improvement could have been made by retaining the table design but modifying the SQL view, but a greater improvement comes from combining the tables into one. An added advantage of this change is that the performance of most data update operations is also improved.

Deadlock example

Here are two SQL statements generated by CRM:

select
    'new_entity0'.new_entityId as 'new_entityid',
    'new_entity0'.OwningBusinessUnit as 'owningbusinessunit',
    'new_entity0'.OwnerId as 'ownerid',
    'new_entity0'.OwnerIdType as 'owneridtype'
from new_entity as 'new_entity0'
where ('new_entity0'.new_entityId = @new_entityId0)

And

update [new_entityExtensionBase]
set [new_attribute] = @attribute0
where ([new_entityId] = @new_entityId1)

These were deadlocked, with the SELECT statement being the deadlock victim. The locks that caused the deadlock were:

The SELECT statement had a shared lock on the new_entityExtensionBase table, and was requesting a shared lock on new_entityBase table

The UPDATE statement had an update lock on the new_entityBase table, and was requesting an update lock on new_entityExtensionBase table

The likely reason for this locking behaviour was that:

Although the SELECT statement was requesting fields from the new_entityBase table, it had obtained a lock on the new_entityExtensionBase table to perform the join in the new_entity view

The UPDATE statement that updates a custom attribute (new_attribute) on the new_entity entity would have been the second of two statements in the transaction. The first statement would modify system fields (e.g. modifiedon) in the new_entityBase table, and hence place an exclusive lock on a row in the new_entityBase table; the second statement is the one above, which is attempting to update the new_entityExtensionBase table

Both operations needed to access both tables, and if you’re very unlucky, then the two operations, working on the same record, may overlap in time and cause a deadlock.

The new design in Crm 2013 solves this in three ways:

With just the one entity table, the SELECT statement only needs one lock, and does not need to obtain one lock, then request another

Only one UPDATE statement is required in the transaction, so locks are only required on the one table and they can be requested together, as they would be part of just one statement

Both operations will complete more quickly, reducing the time for which the locks are held

Of these three improvements, either no. 1 or no. 2 would have been sufficient to prevent deadlocks in this example, but it is gratifying that both improvements have been made. The third improvement would not necessarily prevent deadlocks, but will reduce their probability by reducing overall lock contention, and will also provide a performance improvement.

Wednesday, 12 June 2013

When using new versions of software (in this case SQL Server 2012 service pack 1), there's always the chance of a new, random error. In this case it was "Registry properties are not valid under this context" when attempting to add a component (the Full-text service) to an existing installation.

It seems like the issue comes down to the sequence of installing updates, both to the existing installation, and to the setup program. The specific scenario was:

The initial install of SQL Server had been done directly from the slipstreamed SQL Server 2012 service pack 1 setup. At this time, the server was not connected to the internet, so no additional updates were applied either to the installed components, or the setup program

When attempting to add the Full-text service, the server was connected to the internet, and had the option set to install updates to other products via Microsoft Update. When I started the setup (which used exactly the same initial source), the setup downloaded an updated setup, and also found a 145 MB update rollup that would also be installed

Part way through the setup steps, setup failed with the message "Registry properties are not valid under this context"

The problem seemed to be that the setup program was using a more recent update than the currently installed components. Even though the setup program had identified updates to apply to the current components, it had not yet applied them before crashing out with the error.

The solution was to go to Microsoft Update and install the SQL Update Rollup, then go back and run SQL Setup to add the extra component. Interestingly, SQL Setup still reported that it had found this 145 MB rollup to apply, even though it was already installed.

Monday, 29 April 2013

Error messages frequently include error codes; sometimes they also include useful text that describes the code, but sometimes they don’t, leaving you to discover what the code means. Here's how I decipher CRM and Windows error codes.

CRM Error Codes

The CRM error codes are (reasonably) well documented here in the CRM SDK. If you’re searching for the code, one thing to watch for is that the code may be referenced with a prefix of 0x (which indicates the code is represented in hex) – e.g. 0x80040201. If you search for the code, it’s best to remove the 0x prefix.

It is also possible that you may receive the code as an integer (e.g. -2147220991). If you do, convert it to hex (I use the Calculator application), then search for it.

Windows Error Codes

There is more variation in how you may identify a Windows error code, but they are ultimately numerical values starting from 1, and (as far as I’m aware) are consistent across versions of Windows. Newer versions of Windows may include error codes that don’t exist in previous versions, but the same error code should have the same meaning across versions.

There is a quick and easy way to get the message associated with a given code – go to a command prompt and enter NET HELPMSG followed by the code – e.g.

NET HELPMSG 5

And you’ll get the result

“Access is denied”

This is the message for error code 5 (which is probably the most common code I encounter, though I don’t keep stats on this…)

So, that’s fine if you’ve been given the error code as an integer value (I don’t know what the highest valued error code is – it’s probably either in the high thousands, or maybe 5 digits), but it’s not always that easy.

The code may be in hex (aka hexadecimal). If it contains one of the characters a-f, then it’s in hex and you’ll need to convert it to decimal; I use the Calculator application to do this. Note that a value may be in hex yet consist only of digits. So, if I have an error code and the message seems entirely irrelevant, I normally convert the code from hex to decimal, then pass it to NET HELPMSG.

The code may be in hex, but supplied as a 32-bit (or maybe 64-bit) integer with some higher bit flags set, for example:

80070005
0x80000002
&H80040035

The prefixes 0x and &H are some of the ways to indicate the value is in hex, and these prefixes can be discarded. You can also discard all but the last 4 characters (in these examples 0005, 0002 and 0035) and convert them from hex to decimal (in these examples giving 5, 2 and 53 respectively).

Finally, you may get a 32-bit integer with some higher bit flags set, but receive the value in decimal rather than hex. These almost always have the highest bit of a 32-bit value set, which means that in decimal they come out around 2 147 000 000 (or, more commonly, as a negative number, as they are typically signed integers). So, if I got an error code of -2147024891, I would:

Convert it to hex, giving 80070005

Discard all but the last 4 characters, giving 0005

Convert it back to decimal, giving 5

Run NET HELPMSG 5, and find that I’ve got another ‘Access is denied’ message
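The steps above can also be expressed in a few lines of code (a sketch, relying on the fact that the low 16 bits of an HRESULT hold the Win32 error code, so masking with 0xFFFF is equivalent to keeping the last 4 hex characters):

```csharp
using System;

class DecodeErrorCode
{
    static void Main()
    {
        int reported = -2147024891;             // the code as reported, in decimal
        uint asHex = unchecked((uint)reported); // same bits, viewed unsigned: 0x80070005
        int win32Code = (int)(asHex & 0xFFFF);  // keep the last 4 hex digits
        Console.WriteLine("{0:X8} -> {1}", asHex, win32Code); // 80070005 -> 5
        // Then: NET HELPMSG 5 gives "Access is denied"
    }
}
```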

Friday, 26 April 2013

A common error posted on the CRM Development forum is ‘the given key was not present in the dictionary’. This is a relatively easy error to diagnose and fix, provided you know what it means. It will also help to identify the line in the code at which the error occurs, which is most easily determined by debugging.

The error refers to a ‘dictionary’, and a ‘key’. The ‘dictionary’ is a type of collection object (i.e. it can contain many values), and the ‘key’ is the means by which you specify which value you want. The following two lines of code both show an example:

Entity e = (Entity)context.InputParameters["Target"];
string name = (string)e.Attributes["name"]; // equivalent to: string name = (string)e["name"];

In the first line, InputParameters is the dictionary, and "Target" is the key. In the second line, Attributes is the dictionary, and "name" is the key. The error ‘The given key was not present in the dictionary’ simply means that the dictionary does not have a value that corresponds to the key. So, this error would occur in the first line if InputParameters did not contain a key called "Target", or in the second line if there were no "name" in Attributes.

The way to avoid these errors is simple: test whether the key exists before trying to use it. Different collection classes can provide different ways to perform this test, but the collection classes in the CRM SDK assemblies all inherit from an abstract DataCollection class that exposes a Contains method, so you can use a consistent approach across these collection classes.

if (context.InputParameters.Contains("Target"))
{
    Entity e = (Entity)context.InputParameters["Target"];
    if (e.Attributes.Contains("name"))
    {
        string name = (string)e.Attributes["name"];
    }
}

There are a few common situations with the CRM collection classes where a key might not be present when you expect it:

Within a plugin, the values in context.InputParameters and context.OutputParameters depend on the message and the stage that you register the plugin on. For example, "Target" is present in InputParameters for the Create and Update messages, but not the SetState message. Also, OutputParameters only exist in a Post stage, and not in a Pre stage. There is no single source of documentation that provides the complete set of InputParameters and OutputParameters by message and stage, though this post provides a list of the most common ones for CRM 4, and most of these still apply in CRM 2011

The Attributes collection of an Entity will only contain values for attributes that have a value. You may get the Entity from a Retrieve or RetrieveMultiple having specified a ColumnSet with the attribute you want, but this attribute will not be present in the Attributes collection if there were no data in that attribute for that record

Within a plugin, the Attributes collection of an Entity that you obtain from the "Target" InputParameter will only contain attributes that were modified in the corresponding Create or Update method. Using the example above, if this were in a plugin registered on the Update message, the "name" attribute would only be present if the "name" attribute was changed as part of the Update; the "Target" InputParameter will not contain all the attributes for the entity
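As an aside (GetAttributeValue is a standard SDK method on Entity, though it is not used in the snippets above): Entity.GetAttributeValue&lt;T&gt; returns the default value of T when the attribute is absent, which sidesteps the dictionary error without an explicit Contains check:

```csharp
using Microsoft.Xrm.Sdk;

class SafeAttributeAccess
{
    static void Demo()
    {
        Entity e = new Entity("account");                  // no attributes set
        string name = e.GetAttributeValue<string>("name"); // null, not an exception
        bool merged = e.GetAttributeValue<bool>("merged"); // false (default of bool)
    }
}
```

Whether null/false is an acceptable substitute for "not present" depends on the logic, so the Contains pattern is still the right choice when you need to distinguish the two.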

Wednesday, 14 November 2012

This should be a very brief post. I find myself replying to many posts in the CRM forums about reporting problems, and most commonly direct people to the Reporting Services log file(s), which provides a lot of potentially useful information. The main information you can get is:

More detail than CRM will provide on errors running a given report

Whether or not a report is being run

In general, I find the information reported in the log files to be self-explanatory, but if I find examples of messages that could benefit from further explanation, then I'll post them here.

The log files are in the Reporting Services\LogFiles folder within the Reporting Services directory. On a default instance on SQL 2008 R2, this would be:

Wednesday, 11 July 2012

A recent change applied to Crm Online in EMEA affects the information needed to connect to Crm Online if you don't / can't use the .Net 4 CRM SDK assemblies. This article gave information that was valid at the time, but the AppliesTo parameter now needs to be urn:crmemea.dynamics.com, instead of urn:crm4.dynamics.com

I would expect similar changes would occur in other data centres, but unfortunately I don't know of a way to know when such changes will occur.

Who I am

Professionally: I'm a founder member of Excitation Ltd, a Microsoft Gold Partner in the UK that specializes in Microsoft CRM, and I've been the technical lead in over 50 CRM implementations since the release of CRM 1.2. This is a personal blog, and any views expressed here do not necessarily reflect those of Excitation; sometimes they will, but that should be treated as a happy coincidence rather than a normal state of affairs.

Personally: We'll see if I get onto this in the blog; if so, I expect it will include some permutation of mountains, snow and gravity