The CRM Guys
The blog of the Dynamics CRM Guys from Microsoft Consulting Services, UK.

Hands On with Dynamics CRM 2011’s New ExecuteMultiple Request
Wed, 09 Jan 2013

Microsoft Dynamics CRM 2011 Update Rollup 12 is now available, and includes a brand new OrganizationRequest which allows developers to batch process other requests. In this article, we’ll take a look at the new request and explore why it can introduce huge performance gains in certain scenarios.

The Problem

Over the last couple of months, I’ve been working with a customer making use of this new functionality to increase data load performance. Data load scenarios can be a huge challenge where there are performance requirements, especially when dealing with large data sets, and particularly when working with Dynamics CRM 2011 Online.

To understand why this can be a challenge, we need to understand how records are typically processed without ExecuteMultiple. Imagine we’re writing an application that loads contact records into Dynamics CRM. At some point in the code, for each row in our data source that represents a contact, we’ll send a single CreateRequest to the server, wait for it to be processed, and receive a response back.
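In code, that per-row pattern is typically just a simple loop. As a minimal sketch (dataSource, the field names and the service variable are illustrative rather than taken from a real listing):

//One round trip per row: the CreateRequest travels to the server,
//is authenticated and processed, and the response travels back
foreach (var row in dataSource)
{
    var contact = new Entity("contact");
    contact["firstname"] = row.FirstName;
    contact["lastname"] = row.LastName;

    service.Execute(new CreateRequest { Target = contact });
}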

Each stage of that journey takes a certain amount of time. The time taken for the request to travel from the client to the server (or back the other way) is known as the latency. You can view the latency you’re getting between your client and server at http://<server>:<port>/<org>/tools/diagnostics/diag.aspx. For example, I typically get around 15ms latency between my client and server. To put that into context, that means 30ms of every request sent to Dynamics CRM is spent travelling. On a very simple one million record import, that is 8 hours 20 minutes of travelling time (1,000,000 × 0.03 seconds = 500 minutes) without doing any processing on either the client or server, and 15ms is a very low latency. Additionally, each request is received and authenticated by the server individually; this fixed overhead on each request adds further time to our data import.

It’s also common to need to perform additional OrganizationRequests for each row. Imagine we’d written our routine to perform a RetrieveMultiple query prior to each create, perhaps to look up a related value or to check whether the record already exists. This doubles the number of web requests, and so doubles the effect of the latency and of the processing overhead on the server.

ExecuteMultiple

ExecuteMultiple allows us to batch a number of other OrganizationRequests into a single request. These can be requests of different types, for example a mixture of CreateRequest, UpdateRequest and RetrieveMultipleRequest instances. Each request is independent of the other requests in the same batch, so you could not, for example, execute a CreateRequest followed by an UpdateRequest that uses the newly created ID within the same batch. Requests are executed in the order in which they are added to the request collection, and the results are returned in the same order.
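To make that concrete, here is a rough sketch of how a batch might be built and dispatched (contactsToCreate is an illustrative collection of Entity objects; ExecuteMultipleRequest, ExecuteMultipleSettings and ExecuteMultipleResponse are the SDK types introduced with Update Rollup 12):

var batch = new ExecuteMultipleRequest
{
    //Return a response for every request and keep going past individual failures
    Settings = new ExecuteMultipleSettings { ContinueOnError = true, ReturnResponses = true },
    Requests = new OrganizationRequestCollection()
};

foreach (var contact in contactsToCreate)
{
    batch.Requests.Add(new CreateRequest { Target = contact });
}

var response = (ExecuteMultipleResponse)service.Execute(batch);

//Responses (and any faults) are returned in the same order the requests were added
foreach (var item in response.Responses)
{
    if (item.Fault != null)
    {
        Console.WriteLine("Request {0} failed: {1}", item.RequestIndex, item.Fault.Message);
    }
}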

The default batch size (which is configurable for On-Premise installations – more on that later) is 1,000. So in the 1 million record import example above, instead of sending 1,000,000 CreateRequests, we can send 1,000 batches of 1,000. With a latency of 15ms, the travelling time is cut from 8 hours 20 minutes to only 30 seconds (1,000 × 0.03 seconds). Additionally, the server only receives and authenticates 1,000 requests rather than 1,000,000, saving further processing time.

Obviously the impact of such a reduction in time needs to be looked at in context. If your 1 million record data import takes hours, a time saving of a few minutes may be negligible. Having said that, preliminary testing of the functionality I’ve been working on recently has shown the use of ExecuteMultiple to increase the speed of simple imports to un-customized On-Premise environments by around 35-40%, which is a significant improvement. I would expect this difference to be less in more typical environments, for example, where server side customizations are present.

The maximum batch size allowed for an online deployment of Microsoft Dynamics CRM 2011 is 1,000, which is also the default value for an On-Premise installation. This value can be configured On-Premise if necessary, but it’s important to understand the implications of doing so.

So why not insert, say, 100,000 records in a single web request? There are a number of reasons why this is not a good idea. I decided to test it with another simple data import routine, creating 100,000 contact records with first and last names and an email address. I configured ExecuteMultipleMaxBatchSize to 100,000 and ran my console application.

Request Size

Almost immediately, my application failed with a 404 error, which was odd as I had previously tested the application with a more sensible batch size (so I knew all URLs were correct) and the server was still working in the browser. On further investigation I noticed my single web request was 147MB, far exceeding the maxAllowedContentLength parameter set in the web.config on the server. I altered this accordingly and retried.

Again the application failed, although this time it took a little more time to do so. This time I found the answer in the server trace files – the maximum request length had been exceeded. Again I altered the server’s web.config accordingly.

Request Execution Time

With the necessary web.config values in place, I retried my application. This time it timed out after two minutes. My Dynamics CRM 2011 environment was a single, self-contained machine running both CRM and SQL Server with only 4GB of RAM. I find this is typically fine for development, but the machine was clearly struggling to create 100,000 records within the default timeout window. To accommodate this, I increased the timeout values on my OrganizationServiceProxy object so that requests time out after one hour rather than the default two minutes.
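For reference, a sketch of lengthening that timeout on the proxy (the constructor arguments are abbreviated; the one-hour value mirrors the test described above):

var proxy = new OrganizationServiceProxy(organizationUri, null, clientCredentials, null);

//Allow long-running ExecuteMultiple batches to complete; the default timeout is two minutes
proxy.Timeout = TimeSpan.FromHours(1);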

I ran the application again, which ran for an hour and then timed out.

Risk of Failure

A common misconception I’ve heard a few people repeat is that ExecuteMultiple is transaction aware. This is absolutely not the case. Each child request is processed in isolation, and if your ExecuteMultiple request fails midway through, any requests executed prior to the failure are not rolled back. For example, after the one hour timeout mentioned above, I checked my database and noted that 93,000 records had been successfully created. Unfortunately, my client code was unable to capture this information because the request timed out before a response was sent back, which, if this were a real environment, would have left me with a potentially difficult data clean-up exercise.

Conclusion

So when considering a non-default batch size, it’s important to understand how this affects your ExecuteMultiple requests, to ensure the server will accept large requests and can process them in a time that does not risk timeouts.

The ExecuteMultiple request is a great enhancement to the SDK that offers significant performance gains when dealing with bulk operations. Version 5.0.13 of the SDK is available now.

Unit Testing Microsoft Dynamics CRM 2011 Plug-ins with Microsoft Fakes
Wed, 10 Oct 2012

Visual Studio 2012 Ultimate ships with a new isolation framework called Microsoft Fakes. This article describes how we can use Microsoft Fakes to unit test plug-in code without a dependency on a Microsoft Dynamics CRM 2011 server, allowing us to easily test logic in isolation, from a known state. This post assumes a certain level of familiarity with coding Microsoft Dynamics CRM 2011 plug-ins.

Why unit test?

Bugs are an inevitable side effect of any software development. As software evolves, changes in code can lead to unintentional side effects. Bugs identified during development are much easier and cheaper to fix than those discovered during later stages of the application lifecycle. Although there is a certain level of investment required to build a suite of unit tests, once built they are a valuable asset for the on-going early detection of bugs.

What’s the challenge?

There are a number of challenges in writing unit tests for Microsoft Dynamics CRM plug-ins, generally introduced by external dependencies. For example, plug-in logic is typically heavily dependent on the contextual information passed into the plug-in by Microsoft Dynamics CRM when the plug-in is triggered. Any unit test that invokes plug-in logic must somehow pass in the required information.

Similarly, plug-in code commonly interacts with the Microsoft Dynamics CRM server using the IOrganizationService object to query and modify Microsoft Dynamics CRM data, so it’s common for plug-in logic to become heavily dependent on the state of that data. To write effective unit tests, this data dependency needs to be removed.

Also, plug-in logic is commonly dependent on logic stored in external assemblies. For example, a plug-in may reference the System.DateTime.Now property, and change its behaviour dependent on the returned DateTime value. This value is obviously dependent on the time at which the code is executed, so any unit test needs to have some way to overcome this.

This is not an exhaustive list. Plug-ins can depend on all sorts of external resources (web services, databases, local files and registry settings, to name a few), but the concept is always the same: to unit test effectively, we need to be able to run the code in complete isolation.

Enter Microsoft Fakes

Microsoft Fakes allows us to address these types of challenges by providing a way of simulating external behaviour without modifying our original logic. Let’s consider a trivial plug-in to demonstrate how this can be achieved.

Example 1 – A plug-in that sets a field value

Imagine you have a requirement to write a plug-in that sets the value of a field during record creation. We may start with an implementation such as the one below, which could be triggered by the Post Create event of the entity in question.
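A minimal sketch of such an implementation might look like the following (the class name is illustrative, and the custom boolean field new_pluginexecuted is the one the tests below assert against):

using System;
using Microsoft.Xrm.Sdk;

public class SetFieldPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        //Obtain the execution context from the service provider
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        //The record being created is passed in as the "Target" input parameter
        var target = (Entity)context.InputParameters["Target"];

        //Set the custom field
        target["new_pluginexecuted"] = true;
    }
}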

The implementation obtains the IPluginExecutionContext object from the IServiceProvider object passed in by Dynamics CRM. From there, we are able to access the record being created, and set a value accordingly.

We can now write a test method in a separate unit test project to check that this execution produces the expected outcome. Rather than writing a custom class that implements IServiceProvider, we can use the new Fakes functionality to create stubs of the required objects, and test that the plug-in code performs correctly.

Create a unit test project through the new project wizard; you’ll find the project template in the Visual C# -> Test area (or Visual Basic – this walkthrough assumes C#). In the new project, add a reference to the plug-in project, and to Microsoft.Xrm.Sdk.dll and System.Runtime.Serialization.dll. If you right-click the newly added Microsoft.Xrm.Sdk.dll assembly (or any other referenced assembly), you’ll notice the option to “Add Fakes Assembly”. Click this option, and you should see a new reference to the Microsoft.Xrm.Sdk.5.0.0.0.Fakes assembly. Also select this option for System, which should create a System.4.0.0.0.Fakes reference.

Locate the TestMethod1() method within the UnitTest1.cs class file. You can rename the class and method if you wish. In this method, we’ll create a number of stub objects, and then pass them into the plug-in method to test the functionality.

Our plug-in’s Execute method needs a System.IServiceProvider parameter. Because we’ve faked the System assembly, we already have a class that we can use – System.Fakes.StubIServiceProvider. This class is interchangeable with the System.IServiceProvider (so we can pass an instantiated StubIServiceProvider into the Execute method), and allows us to specify a delegate that is executed when properties or methods are invoked. We can set properties on this object to return other stub objects, ultimately building up an object that will respond to method calls executed by our plug-in in a predictable way. To demonstrate this, let’s have a look at what our plug-in will do.

1. Call the GetService method on the IServiceProvider object, passing in the IPluginExecutionContext type as a parameter, and receive an IPluginExecutionContext object.

2. Access the InputParameters indexed property, passing the string parameter “Target”, and cast the returned value to an Entity object.

3. Set the “new_pluginexecuted” attribute on the entity to true.

The following test code sets up the StubIServiceProvider object with just enough state to react to the above logic in a manner that will test the effect of the plug-in code.

After we’ve arranged the objects, we can perform the action we wish to test, which in this case is the Execute method. Following this execution, we can check that our test objects have been affected in the right way using assertion methods.
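A sketch of such a test class is shown below. The class and method names are illustrative, and the stub member names (GetServiceType, InputParametersGet) follow the standard Fakes naming convention for the generated assemblies, so check them against what your own Fakes assemblies generate:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.Xrm.Sdk;

[TestClass]
public class SetFieldPluginTests
{
    [TestMethod]
    public void TestMethod1()
    {
        //Arrange: the entity that the stubbed context will hand to the plug-in as "Target"
        var target = new Entity("contact");

        var context = new Microsoft.Xrm.Sdk.Fakes.StubIPluginExecutionContext
        {
            InputParametersGet = () => new ParameterCollection { { "Target", target } }
        };

        var serviceProvider = new System.Fakes.StubIServiceProvider
        {
            //Return the stubbed context whenever the plug-in asks for a service
            GetServiceType = type => context
        };

        //Act: run the plug-in against the stubbed service provider
        new SetFieldPlugin().Execute(serviceProvider);

        //Assert: the plug-in should have set the custom field to true
        Assert.IsTrue((bool)target["new_pluginexecuted"]);
    }
}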

My assertion in this test is quite straightforward; I am looking at the Entity object I set up, and checking that the field has been set to true. Running this unit test indicates success, and this plug-in hasn’t even been deployed to a CRM server yet.

Unit test results

The fact that my unit test has run successfully does not mean my plug-in is correct. I could have made some logical error in my plug-in code that has an unintended side effect, or even have logical errors in my test code. The test merely indicates that it affects my prepared stub objects in the way I expected. The value in the unit test is in the fact that I now have some code that consumes my plug-in code, which will always execute in a predictable way with no external dependencies. Other testing that invokes my plug-in code should be performed, and ideally if a bug is found, more unit tests should be written to expose the bug for continued retesting at a later date.

Example 2 – A more typical plug-in

Let’s consider a slightly more complex plug-in which may be used in a real system. Suppose your customer has a requirement to prevent cases being re-activated more than 30 days after they have been closed. This could be implemented with logic along the following lines (for the purposes of this example, imagine a different plug-in sets the value of new_dateclosed when the case is closed):

if (((DateTime)target["new_dateclosed"]).AddDays(30) < DateTime.Now)
{
    throw new InvalidPluginExecutionException("This case has been closed for too long to reopen.");
}

We can write a unit test similar to the last example to ascertain whether the exception is correctly thrown under the right conditions, and we can control the value of new_dateclosed very easily.

Here’s an example of two unit tests that test these conditions. TestMethod1 expects an exception (note the [ExpectedException] method attribute), and TestMethod2 does not. The code is structured slightly differently here, as I’ve used TestInitialize and TestCleanup methods to set up and tear down my tests, saving me from repeating the context setup code.
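A sketch of how those tests might be laid out is below (the dates are illustrative; TestEntity and ServiceProvider are class-level fields populated in TestInitialize, which is why TestMethod3 later in this post can use them directly):

[TestClass]
public class PreventCaseReopenPluginTests
{
    private Entity TestEntity;
    private System.Fakes.StubIServiceProvider ServiceProvider;

    [TestInitialize]
    public void Setup()
    {
        //Build a case entity and wire it into a stubbed execution context, as in the previous example
        TestEntity = new Entity("incident");

        var context = new Microsoft.Xrm.Sdk.Fakes.StubIPluginExecutionContext
        {
            InputParametersGet = () => new ParameterCollection { { "Target", TestEntity } }
        };

        ServiceProvider = new System.Fakes.StubIServiceProvider
        {
            GetServiceType = type => context
        };
    }

    [TestCleanup]
    public void Cleanup()
    {
        TestEntity = null;
        ServiceProvider = null;
    }

    [TestMethod]
    [ExpectedException(typeof(InvalidPluginExecutionException))]
    public void TestMethod1()
    {
        //Closed well over 30 days ago, so the plug-in should block the reopen
        TestEntity["new_dateclosed"] = new DateTime(2012, 1, 1);
        new PreventCaseReopenPlugin().Execute(ServiceProvider);
    }

    [TestMethod]
    public void TestMethod2()
    {
        //Closed recently (relative to when this was written), so no exception is expected
        TestEntity["new_dateclosed"] = new DateTime(2012, 10, 1);
        new PreventCaseReopenPlugin().Execute(ServiceProvider);
    }
}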

However, there is another subtle dependency in this code: DateTime.Now. The value returned by this property varies depending on the date and time at which the code is run, which means our unit tests may behave differently depending on when we run them. Both tests pass now, but TestMethod2 will fail if I run the unit test a couple of months from now. To complicate matters, DateTime.Now is a static property, so we cannot use a dependency injection approach to alter its behaviour. To solve this problem without altering the plug-in code, we need to use a shim.

Shims

Like stubs, shims can also be used to alter method behaviour, but work by intercepting method calls at runtime, rather than being injected into method calls as parameters. By using the ShimDateTime class (generated by faking the System assembly earlier on), we can redirect calls to DateTime.Now to our own lambda expression.

The following code demonstrates this approach, ensuring that any call to DateTime.Now within the context of the ShimsContext returns a fixed DateTime object.

Code Snippet

[TestMethod]
public void TestMethod3()
{
    TestEntity["new_dateclosed"] = new DateTime(2012, 11, 1);
    var plugin = new PreventCaseReopenPlugin();

    using (ShimsContext.Create())
    {
        System.Fakes.ShimDateTime.NowGet = () =>
        {
            return new DateTime(2012, 1, 15);
        };

        plugin.Execute(ServiceProvider);
    }
}

Conclusion

The techniques here can be extended with relative ease to simulate IOrganizationService operations, database calls, external web service calls and beyond. A suite of unit tests is a valuable asset to a developer writing code that is susceptible to change in the future, and the new Microsoft Fakes isolation framework included in Visual Studio 2012 Ultimate allows developers to simplify the process of writing test code that effectively tests plug-in logic.

Silverlight Async with Visual Studio 2012
Tue, 28 Aug 2012

A frequent complaint I hear from developers working on Silverlight applications for Microsoft Dynamics CRM is that they have to use asynchronous methods when communicating with the CRM web services. Asynchronous methods and callbacks are not particularly complex concepts or even challenging to program, but real complexity arises when you have a dependency chain of asynchronous methods, leading to high volumes of heavily indented code or scattered logic that is counter-intuitive to the intended logic of the application. With the upcoming release of Visual Studio 2012, this is no longer a problem!

UI Responsiveness

To ensure the UI remains responsive, Silverlight does not allow us to perform a call to the WCF endpoint whilst blocking the UI thread. This means when we perform a web service request, a thread is spawned to wait for the response. This is great for users, as it ensures their Silverlight applications do not lock up when web service queries are made. However, for developers, this can be a real headache. Multithreaded applications are far more complicated to write than single threaded ones. For one, developers need to worry about cross-thread object access; code run in a background thread needs to be dispatched back up to the UI thread before it can modify UI components. Secondly, developers are required to write callbacks to handle responses from the server. These quickly become unmanageable, especially when nested.

Asynchronous Methods & Callbacks

Typically when you retrieve data from CRM, you want to do something with the retrieved data. This could be as simple as counting the results of a query, or inspecting the value of a field. I can’t imagine a reason you would want to invoke the Retrieve or RetrieveMultiple operations and not want to inspect the returned value(s).

Let’s consider an example to demonstrate the differences between programming this synchronously and asynchronously. Let’s say you’re working with the WCF endpoint. Your requirement is to query CRM for all contacts in the system, and display their names to the user.

This is how it’s done in a synchronous way:

Code Snippet

private void SynchronousExample(IOrganizationService service)
{
    //Build the query
    var query = new QueryExpression
    {
        EntityName = "contact",
        ColumnSet = new ColumnSet { Columns = new[] { "fullname" } }
    };

    //Send it to the server and receive a response
    var response = service.RetrieveMultiple(query);

    //Iterate across returned entities and output fullname
    foreach (var contact in response.Entities)
    {
        Console.WriteLine(contact.Attributes[0]);
    }
}

However, we can’t do this synchronously in Silverlight, so we’d have to write a callback to specify what needs to happen when the response is received, along these lines:

private void AsynchronousExample(IOrganizationService service)
{
    //Build the query (identical to the synchronous example)
    var query = new QueryExpression
    {
        EntityName = "contact",
        ColumnSet = new ColumnSet { Columns = new[] { "fullname" } }
    };

    //Send the request, specifying a callback to handle the response
    service.BeginRetrieveMultiple(query, result =>
    {
        var response = ((IOrganizationService)result.AsyncState).EndRetrieveMultiple(result);

        //Dispatch back up to the UI thread to avoid thread permission exceptions
        Dispatcher.BeginInvoke(new Action(
            () => {
                //Iterate across results and output fullname
                foreach (var contact in response.Entities)
                {
                    MessageBox.Show(contact.GetAttributeValue<string>("fullname"));
                }
            }
        ));
    }, service);
}

So even with this simple example we can see there’s a lot of added complexity in the code. Not only is the code to send and receive the request more verbose, there’s also added complexity because the response is handled on a different thread to the UI. Rather than using the IOrganizationService interface, we can use the events available to us on the OrganizationServiceClient, but the structure of the code is almost identical, with the same problems.

Taking this one step further, suppose we want to run a separate query against each of these contacts to retrieve some related information. In the asynchronous example we have to introduce another nested callback of similar size, and the code quickly becomes unmanageable.

One approach to making this more manageable is to break the callback into a separate method. This approach goes some way to improving the maintainability of the code. However, it’s still quite verbose and we’ve still got the problem of having to dispatch back up to the UI thread in the response.

A better way

Writing code like this quickly hides the fundamental purpose of the code, making the logic very difficult to read. Wouldn’t it be great if we could write single threaded Silverlight applications without blocking the UI thread? Well now we can!

.NET 4.5 introduces new language keywords to overcome exactly this problem, and with Visual Studio 2012 we can use these keywords with Silverlight 5. These enhancements were first introduced as the Async CTP for .NET 4.0, and although the CTP is licensed for production use, it is not recommended. However, with the release of .NET 4.5, the keywords have been built into the language, and with the help of the Async Targeting Pack we can use these features in Silverlight 5 applications.

I won’t go into huge detail on the inner workings of the new language features here, just a brief overview, but if you’re interested in the detail I would highly recommend giving this video a watch.

The new async features are built on top of the Task Parallel Library (TPL) introduced in .NET 4.0, and introduce two new language constructs: async and await. Developers use async to indicate that a method runs asynchronously, and await to indicate that execution should pause until a task has completed. Conveniently for Silverlight developers, this can all happen on the UI thread without having to block.

Setting It Up

The best way to demonstrate the benefits is to see it in action, but before we can do that we need to install the Async Targeting Pack. Begin by creating a new Silverlight application in Visual Studio 2012, then in the Package Manager Console type the following and press Enter:

Install-Package Microsoft.CompilerServices.AsyncTargetingPack

This only takes a few seconds to install, and once complete allows you to use the new language features within your application.

For this example, I have also configured the project to use the CRM SOAP endpoint as detailed in this SDK article.

Finally, I have also created a class of extension methods that enable the IOrganizationService methods to be used asynchronously. Create a new class in your project called AsyncExtensions.cs and insert the following code. Ensure the namespace matches the one that your service reference has.
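A minimal sketch of such an extension class is shown below. It assumes the service contract generated by the Add Service Reference wizard exposes the usual BeginRetrieveMultiple/EndRetrieveMultiple pair on IOrganizationService; the namespace is a placeholder for your own:

using System;
using System.Threading.Tasks;

namespace MyApp.CrmSdk.OrganizationService
{
    public static class AsyncExtensions
    {
        public static Task<EntityCollection> RetrieveMultiple(this IOrganizationService service, QueryBase query)
        {
            var tcs = new TaskCompletionSource<EntityCollection>();

            service.BeginRetrieveMultiple(query, asyncResult =>
            {
                try
                {
                    //Complete the task with the result, resuming any awaiting code
                    tcs.SetResult(service.EndRetrieveMultiple(asyncResult));
                }
                catch (Exception ex)
                {
                    //Surface web service faults to the awaiting code
                    tcs.SetException(ex);
                }
            }, null);

            return tcs.Task;
        }
    }
}

With an extension like that in place, the asynchronous example can be rewritten as a single, sequential method. A sketch of such a consumer (again, the names are illustrative) is:

private async void AsynchronousExample(IOrganizationService service)
{
    //Build the query
    var query = new QueryExpression
    {
        EntityName = "contact",
        ColumnSet = new ColumnSet { Columns = new[] { "fullname" } }
    };

    //Await the response; execution resumes here, on the UI thread, when it arrives
    var response = await service.RetrieveMultiple(query);

    //Iterate across results and output fullname - no dispatching required
    foreach (var contact in response.Entities)
    {
        MessageBox.Show(contact.GetAttributeValue<string>("fullname"));
    }
}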

There are several points to notice here. Firstly, notice the use of the async keyword in the method signature; this indicates that the method runs asynchronously and allows us to use the await keyword within the method body. The await keyword is used to await the results of the RetrieveMultiple extension method.

More interestingly, though, there are several things to notice that are not here. Firstly, there are no callbacks, and all of the logic is defined within the same method body. This makes the code much more readable. Secondly, because execution resumes on the UI thread when service.RetrieveMultiple returns a result, we do not have to dispatch back up to the UI thread to display the results to the user.

Also note there is nothing special required to invoke this method. Although the method is marked as asynchronous, the consuming code does not need to be aware of this:

Code Snippet

public MainPage()
{
    InitializeComponent();
    var service = SilverlightUtility.GetSoapService();
    AsynchronousExample(service);
}

Under The Hood

So what’s going on in the extension method that we called? For a method to be awaitable, it needs to return one of three types: void, Task or Task<T>. To await CRM web service calls, we therefore need to wrap the available methods up in new methods that return an awaitable type, which is exactly what the RetrieveMultiple extension method shown earlier does.

The TaskCompletionSource<T> object is used to create a task of the appropriate type, which allows us to await the results of the standard BeginRetrieveMultiple method. Note that error handling is also catered for, so if the web service throws an exception the awaiting code will be notified.

Some Considerations

This feature does require Visual Studio 2012, which is due for launch on September 12th 2012. Unfortunately, the excellent CRM 2011 Developer Toolkit does not currently work with Visual Studio 2012, which is worth bearing in mind if you’re contemplating a switch.

This feature also requires Silverlight 5. Silverlight 5 support within CRM is not straightforward; it is not supported when embedding directly onto forms, but wrapping your SL5 XAP resources inside an HTML web resource is supported.

Conclusion

So there we have it: the new async features in .NET 4.5, combined with the Async Targeting Pack, provide Silverlight developers with the tools to transform their asynchronous code back into readable, maintainable methods, by removing the need to litter business logic with callbacks and UI dispatching code. We’ve seen how to install and use the basic features of the language enhancements, and have used a class of extension methods that enable us to work with the CRM WCF web service with the new async features.

Visual Studio 2012 RTM is now available to download on MSDN, and I would highly recommend taking a look!

A common issue that I have come across in CRM deployments, particularly for Enterprise customers, is deadlocking on the SQL Server, resulting in degraded performance of the CRM environment.

I’ve seen the impact of this deadlocking be severe enough that other CRM SQL processes are aborted, resulting in errors being presented to the user, or in systems integrating with CRM failing to retrieve data. In the early stages of a project, and especially post go-live, this can have a crippling effect on user adoption of the new deployment.

More often than not, in my experience, the deadlocks were a result of long running reports and integrations being executed against SQL that could ultimately be better managed.

Wouldn’t it be nice if we could just offload these long running reports and redirect our integration to a second copy of our CRM data, so that we didn’t impact user experience?

Previous Mitigation Options

To date, I have come across two ways to offload our read only workloads such as reporting or integration (where CRM is the data master). One of these options, Transactional Replication, was always chosen as a last resort over the other, SQL Mirroring, due to some constraints.

If you are reading this post, I will assume you have some knowledge of SQL high availability options such as SQL Mirroring and SQL Transactional Replication. Additionally, this article will not focus on how to create reports so a working knowledge of writing reports using tools such as SQL Data Tools is assumed.

SQL Mirroring

This is a common option that I’ve deployed a number of times. As you may know, SQL mirroring requires that the secondary instance database remains in a ‘recovery’ state, rendering the database unusable until you actually invoke DR or a manual failover. To overcome this need for the database to remain inaccessible, we would implement snapshot technology to produce essentially a third copy of your data, this time in an accessible state against which we can offload our workloads.

This solution works well but comes with some limitations. For example, your report data is only as up to date as your most recent snapshot, and snapshots can take some time to produce, depending on the size of your database. Additionally, because we need to implement snapshots, you need enough storage for a third copy of your data. That probably isn’t such an issue given the cost of disk nowadays, but if you have a very large database that takes a long time to snapshot and you need to run reports frequently, you may even need enough storage for a fourth copy of your database, so that reports can continue to run against the third copy whilst the fourth is generated.

SQL Transactional Replication

I pointed out that this option wouldn’t be something I recommended in the past. This is due to the fact that replication is not officially supported by Microsoft, and that is definitely not something you want to risk. The second problem is that replication depends on a stable schema, so if you did choose to deploy Transactional Replication to meet up-to-date reporting requirements, you would essentially need to break replication prior to the deployment of any CRM customizations and then re-establish it post deployment.

This does add complexity and additional time to your deployment process, but transactional replication does help you meet data requirements and provides more up-to-date data than SQL Mirroring with snapshots would. You just need to keep the trade-offs in mind during design and when trying to maintain supportability.

The Solution

An Availability Group essentially allows you to host a set of primary databases on a single primary replica; you can then host up to four secondary replicas that can be used as failover targets.

Availability Groups are, in a way, similar to SQL Mirroring, but one of the key differences that we leverage in this solution is that the secondary instances (replicas) can remain in an accessible state. With this capability and some configuration, we can offload our CRM workloads onto a secondary replica.

Configuration

SQL Server AlwaysOn Availability Groups

This blog won’t focus too much on the configuration of SQL Server, but there are some key points that you need to keep in mind. You would need to configure:

An Availability Group listener.

Secondary replicas that will host read-only copies of your CRM organization database.

Availability Group routing.

The listener is essentially a DNS hostname that you would use when configuring your SQL Server Reporting Services (SSRS) data source connection, and the routing tells SQL Server which secondary replica should handle an incoming connection, depending on whether the connection intent is read-only or read-write.

The trigger that tells SQL Server what kind of connection is being attempted is a new parameter within SQL Server Native Client 11.0 called Application Intent. In short, SQL Server will accept all incoming connections on that hostname, inspect each connection, and if its intent is read-only, route the connection to the secondary replica that you have provisioned for offloading workloads.

After you have created your Availability Group and configured routing, there is a very simple way to test that the routing is working: run SQLCMD with the –K switch set to ‘ReadOnly’ and select the server name. Prior to configuring my routing, the connected server returned was my primary SQL Server, ‘APC-SQL1’.

After the routing configuration is in place, running the same command returns as the connected server the secondary replica that was defined whilst creating the Availability Group.

CRM & SSRS

Right, so we have tested that our Availability Group and associated routing are working correctly. If I now change roles and become a CRM administrator, and the SQL guys have informed me that, moving forward, I need to use some new parameters for the data source connection utilized by CRM reports, how would I go about implementing these parameters?

Before we go ahead with this section, we need to keep in mind some limitations with offloading reports. These aren’t new limitations compared to the previous options, but they are worth thinking about.

FetchXML reports cannot currently be offloaded, because they connect via the web services. The hope is that in future releases of CRM this will be configurable.

Related to the first point, reports created through the CRM UI are created in FetchXML format by default. As such, they would need to be recreated by administrators if you identify them as long running or as causing locking.

Out of the box reports are created using a special data source that lets SQL Server know which organization the report is being run against. Again, these are reports that would need to be recreated by administrators if desired.

In my test environment I have a very simple report, created using SQL Server Data Tools, that pulls a list of all accounts currently in CRM. During creation, this report was configured to use a custom data source.

To load this into CRM, we simply navigate to Workspace > Reports and select New from the ribbon. I select to use an Existing File and navigate to my saved report, then click to Save and Close the form.

Doing so will save the report in CRM and in SQL Server Reporting Services under Home > ‘MyCRMorg’ > Custom Reports. As mentioned above, the report was created to use a custom data source, so the first thing I do is manage the report and change it to use a shared data source; in this case, I configure it to use the data source in the root of my organization tree in SSRS.

If I were to now run this report from within CRM, it would run as expected and return a list of my accounts. This is standard functionality, with the data being pulled out of our production CRM database, as a very simple SQL Profiler trace confirms.

What we want to do is leverage the parameters given to us by our SQL guys to offload our CRM reports onto a copy of our CRM database. To achieve this, I modify the shared MSCRM_DataSource above to use these new parameters.

At this stage it is important to note that the connection string contains two key parts used to connect to our Availability Group secondary replica:

Data Source = AO-TR. This is the listener that I created during setup of the Availability Group.

Application Intent = ReadOnly. There are two options for this switch: ReadOnly and ReadWrite. If we chose ReadWrite, SQL Server would simply route the connection to our primary instance. Because we choose ReadOnly, and have the necessary routing defined for the Availability Group, SQL Server will route the connection to our secondary replica. Both settings are shown in the sketch below.
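For illustration, the same two settings expressed with ADO.NET’s SqlConnectionStringBuilder (available from .NET 4.5; the database name is a placeholder) would be:

using System.Data.SqlClient;

var builder = new SqlConnectionStringBuilder
{
    DataSource = "AO-TR",                           //the Availability Group listener created earlier
    InitialCatalog = "MyCRMorg_MSCRM",              //placeholder organization database name
    IntegratedSecurity = true,
    ApplicationIntent = ApplicationIntent.ReadOnly  //routes the connection to the readable secondary replica
};

//builder.ConnectionString now holds a connection string that the routing will send to the secondary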

Saving this off and rerunning the report, we can see in SQL Server Profiler that the select statement no longer runs against the primary database; instead, it is offloaded to our secondary replica for processing.

In Summary

To summarize, we have seen how new functionality introduced in SQL Server 2012 can be leveraged to improve the system performance and user experience of CRM 2011. This functionality provides a way to balance the load on SQL Server for CRM.

I see this functionality as key to improving the scalability and performance of CRM infrastructure for on-premise customers. However, please be aware that support for this solution is currently non-existent; always engage the Microsoft Support team before customizing any of the Dynamics CRM reporting components to ensure that your solution is valid.

Adam Caulkett
Thu, 09 Aug 2012

In September 2011, I joined Microsoft as a Consultant within the Microsoft Consulting Services Dynamics CRM UK team. My primary focus is the design and implementation of Dynamics CRM infrastructure, supporting primarily Enterprise customers. My roles since 2000 have ranged from systems engineer to consultant and support analyst across the Microsoft stack. Since 2008, my focus has turned predominantly to CRM and its dependent infrastructure, such as Active Directory, SQL Server and SQL Server Reporting Services.

I hope to add value to your role by sharing my experiences whilst working on cutting edge deployments of CRM in the Enterprise.

User Impersonation with Silverlight
Wed, 14 Mar 2012

Sometimes it’s useful for Dynamics CRM to think you’re someone else. An administrator may wish to retrieve or alter data as if they were another user, or execute certain special commands on someone else’s behalf. Developers using the Microsoft.Xrm.Sdk will find that implementing this functionality is quite straightforward, as it’s built into the SDK assemblies. However, when the SDK assemblies are unavailable (for example, in a Silverlight application), implementing this impersonation is more difficult.

Using the SDK

Developers using the Microsoft.Xrm.Sdk assembly have access to the OrganizationServiceProxy and CallerImpersonationScope classes. The former is a proxy class that acts as a local reference to the CRM Organization web service, and provides more functionality than directly referencing the service. One such feature is the ability to specify a CallerId when performing web service requests. When a value is given, CRM reacts as if the specified user were calling them, rather than the authenticated user. CallerImpersonationScope works in a similar way, although you pass an existing service reference into it rather than creating a new connection, which is more useful in plug-ins, for example.

Code Snippet

public void CreateAccount(OrganizationServiceProxy service)
{
    var account = new Entity("account");
    account["name"] = "SDK Impersonated Account";

    var testUser1 = new Guid("C29551F3-123D-E111-8463-00155D001003");
    service.CallerId = testUser1;

    var response = service.Create(account);
}

Silverlight applications

Silverlight applications cannot reference the SDK assemblies. However, we can still achieve the same functionality using the organization service WCF endpoint. We can actually inject the CallerId parameter as a SOAP header before making a web service call, and CRM has some great built in functionality to accommodate this approach.

Another approach is to authenticate against the web service with another user’s credentials. There are a number of security risks associated with this approach, and it is generally a bad idea.

The following code demonstrates an example of this CallerId injection. Please note this example depends on classes introduced in this walkthrough.

    //Create third account within the same OperationContextScope to indicate
    //all operations within this using block are performed with impersonation
    service.CreateAsync(account3);
}

//Create fourth account
service.CreateAsync(account4);

The use of OperationContextScope allows us to modify the header of each request sent to the CRM web service within the scope of the using block. To impersonate, we need to include the CallerId parameter. CRM interprets this parameter and executes messages on the specified user’s behalf.
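A sketch of that injection is shown below. The header name and namespace are assumptions based on the CRM 2011 organization service contract (and the walkthrough referenced above), so verify them against the SDK; OperationContextScope and MessageHeader come from System.ServiceModel and System.ServiceModel.Channels:

var testUser1 = new Guid("C29551F3-123D-E111-8463-00155D001003");

using (new OperationContextScope(service.InnerChannel))
{
    //Add the CallerId SOAP header; CRM executes everything inside this scope as Test User 1
    OperationContext.Current.OutgoingMessageHeaders.Add(
        MessageHeader.CreateHeader("CallerId", "http://schemas.microsoft.com/xrm/2011/Contracts", testUser1));

    service.CreateAsync(account2);
    service.CreateAsync(account3);
}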

The result

So what does the impersonation look like in CRM? In the example, I’m creating four entities. I’ve omitted some of the initialization logic to keep the sample brief, but assume the entities are all identical Entity objects representing accounts, named account1, account2, account3, and account4. Accounts 2 and 3 are created within the scope of the OperationContextScope, and 1 and 4 are not. Also note I injected the systemuserid of Test User 1 as a SOAP header within the using block, and I’ve not had to specify any additional usernames or passwords anywhere.

When I run the code, I authenticate against the service using my own account (Dave Burman in CRM). The four accounts are created as pictured below.

Notice that the owner and created by fields are different for accounts 2 and 3. CRM has created these records almost as if I were authenticated as Test User 1. Almost, but not quite. Notice the Created By (Delegate) field is populated for those two records. CRM is aware I’m impersonating a user, and notes down my user name in this field.

Security considerations

So, if we can impersonate other users, what’s to stop a standard user impersonating a system administrator and retrieving data they wouldn’t normally be able to see? Fortunately we have a way of preventing this, via the security role functionality built into CRM. The “Act on Behalf of Another User” privilege allows administrators to control who can impersonate other users. This is the only privilege set in the built in “Delegate” role, so assigning this role to a user or team will allow them to impersonate. The web service will return an error back to an authenticated client attempting impersonation without this privilege.

However, the “Act on Behalf of Another User” privilege should be used with caution. Care must be taken to ensure impersonation does not inadvertently give users the ability to access or modify data that they would not normally be able to. Whilst CRM does not allow users to gain new privileges via impersonation (e.g. a user who does not have Account read access will not be able to gain it by impersonating a user that does), privilege scope can be elevated using this method. For example, a user with user-scope read access to Accounts could gain organization-wide access by impersonating a user with that privilege (e.g. the system administrator).

Tidying it all up

Finally, the Silverlight example above works well, but the code becomes cumbersome where there’s lots of impersonation happening. The CRM Guys have developed a class of extension methods that provide each of the web service proxy methods with an extra CallerId parameter, which can be downloaded here. To use the class, simply drop the file into your Silverlight application and alter its namespace to match the namespace of your web service reference.

Using those extensions, the Silverlight example above can then be rewritten as follows.
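A sketch of what that might look like is below; the exact overload shapes depend on the downloaded extension class, so treat the extra CallerId parameter as illustrative:

//Account 1 is created as the authenticated user
service.CreateAsync(account1);

//Accounts 2 and 3 are created as Test User 1 by passing the extra CallerId parameter
service.CreateAsync(account2, testUser1);
service.CreateAsync(account3, testUser1);

//Account 4 is created as the authenticated user again
service.CreateAsync(account4);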

This article has provided an overview of the impersonation functionality available to CRM developers interacting with the application’s web services. We’ve looked at a number of different implementation options, including scenarios where the CRM SDK assemblies are unavailable. We’ve also looked at some of the built in features that CRM has to accommodate this functionality. Hopefully you’ve found it useful.

Dave Burman
Wed, 14 Mar 2012

I’ve been developing software professionally since 2007. During this time, I’ve been fortunate enough to work on a huge variety of projects, whilst gaining exposure to most of the Microsoft development stack. I’ve been working with Dynamics CRM since 2010 and thoroughly enjoy pushing the boundaries of the product on a daily basis.

I joined Microsoft in late 2011 as a CRM Development Consultant. My role involves helping our customers design and implement Dynamics CRM customizations in line with best practices, allowing them to really tailor their xRM applications to suit their business needs in a supportable and maintainable way.

Dave Burman, Consultant, Microsoft Consulting Services UK

Dynamics CRM 2011 Demos – Faking Multiple Personas
Wed, 12 Oct 2011

Here’s a quick post on a little tip that I find useful when trying to show how the systems we are building will actually work for the customer…

One of the key business benefits of a Dynamics CRM implementation is turning ‘personal knowledge’ into ‘corporate knowledge’ so that when one user does something another user can benefit from it. Trying to get this concept across on a single screen projected at the front of the room where you run everything as a single proposed user can be challenging, but unless you achieve it your audience may not have that light bulb moment.

What I’ve found works pretty well is to run your demonstration from a single machine installation (Windows Server hosting SQL Server and Dynamics CRM), and then use Remote Desktop Connection to loop back into the same machine as a different user, so that you can easily switch from one persona to another. By setting the desktop background, and maybe the Windows colour scheme, to something different for each account, you can give a visual cue as to which demo persona you are using at any moment.

I can sense you rushing off to try it now, but you may quickly hit a few snags that you’ll need to iron out. All of the instructions below relate to Windows Server 2008 R2, but can be applied to previous OSes as well.

Enable remote access on server

The full article on this can be found in the ‘Enable Remote Desktop’ TechNet article, but the quick version is…

Open up Server Manager and use the circled bits of the image below to enable access:

Navigate the location shown and again add the relevant user accounts to the highlighted item:

Connect!

You should now be able to open an RDP connection to ‘localhost’ and log on as one of your additional personas.

Once logged in just set the background as normal. Then expand/collapse the Remote Desktop window and your audience should be able to keep track of who you are at any particular moment.

Bonus scenario

If you’re wanting to run your demo by connecting to your Hyper-V guest partition from your parent partition then you can use Remote Desktop if you have a network route available. One great result of this is that you can then use Remote Desktop features like cut and paste, drive share, etc between parent and guest partitions. To make that work you’ll need a suitable IP address on each partition, which you can do with a static address for an interface connected to the Hyper-V Internal Virtual Network. Configuring that is a post all of its own, so let us know if you need that info and we’ll be sure to do one.

Summary

In this post we described three things you may need to do to allow you to Remote Desktop to your Dynamics Server to help demos run smoothly:

Enable remote access to the server

Grant users remote desktop access

Grant users remote logon rights

As ever thanks for reading and I hope you feel more informed. Please be sure to give your feedback on whether you like this type of content, and to ask any questions to help you and others understand the material better.

Peter Simons
Mon, 19 Sep 2011

After graduating from the University of Surrey in 1995 I initially worked in both software development and support roles until my first CRM project as a contractor on a UK wide Siebel implementation for Prudential Assurance starting in 1998. Since then I have continued my focus on CRM, moving into a permanent EMEA consulting role with Siebel Expert Services for approximately four years between 2002 and 2006 and then Microsoft Consulting Services from 2006 to date.

In my role at Microsoft I have continued to deepen my CRM expertise whilst broadening into other technologies including roles on large scale Microsoft Exchange, BizTalk and SharePoint projects.

My passion, however, still remains in the CRM and xRM space, using Microsoft Dynamics CRM to implement enterprise class solutions across many sectors including finance, health, education and other government departments. Within this area my work is varied, including CRM architecture; CRM business analysis and functional design; CRM configuration; infrastructure design and performance optimization; and regular contributions to and reviews of published CRM articles and methodologies.

In some projects, both with Siebel and Dynamics CRM, I have worked on the end-to-end solution design and delivery, whilst in many cases I have had the pleasure of working with partner organizations, providing proactive advice and guidance and short-term reviews. This combination has allowed me to work on over 100 Siebel implementations and more than 50 Dynamics CRM projects to date, and I am looking forward to the next 50!

Peter Simons, Principal Consultant, Microsoft Consulting Services UK

Auditing and Table Partitions in Dynamics CRM 2011
Wed, 07 Sep 2011

In this article we’ll be looking behind the scenes at how Dynamics CRM 2011’s new audit logs are engineered to maintain performance, and explain why what looked like anomalous behaviour to one of our customers was actually working just fine.

If you’ve been looking closely at how the auditing feature works you may have noticed that it makes use of Microsoft SQL Server’s partitioning feature as a means of improving the scalability and performance of the audit logs while retaining the audit data in the organisation database. Specifically, the SDK article Auditing Feature Overview says:

“For Microsoft Dynamics CRM 2011 and Microsoft Dynamics CRM Online servers that use Microsoft SQL Server Enterprise editions, auditing data is recorded over time (quarterly) in partitions. A partition is called an audit log in the Microsoft Dynamics CRM Web application. Partitions are not supported, and therefore, not used, on a Microsoft Dynamics CRM 2011 server that is running Microsoft SQL Server Standard edition.”

General information about partitioning is described in the MSDN article Partitioned Table and Index Concepts, and in more detail in the white paper “Partitioned Table and Index Strategies Using SQL Server 2008”. It’s well worth understanding these concepts if you have to design the database storage for applications using large data sets. For now, we’ll briefly explain that the audit data table is partitioned on a date column, with a partition function that places each calendar quarter’s data into a different partition; each partition behaves like a ‘mini table’, ensuring that audit operations aren’t overly slowed by having to navigate a single ever-growing table.

If you decompose that quote above you might think it means the following two things:

If you are using SQL Server Enterprise edition partitioning will be in use.

If you are using SQL Server Standard edition partitioning will not be in use.

So a customer was a bit surprised when they looked at a Dynamics CRM 2011 instance that was using SQL Server 2008 Standard Edition and saw several quarterly audit logs listed under Settings –> System –> Auditing –> Audit Log Management.

Surely this suggests that quarterly partitioning is in use on Standard Edition? Adding to their confusion, when they looked at the same screen on their SQL Server Enterprise Edition environment they saw only a single row (so no hint of partitions). It looks as if it works exactly the opposite way to the statement above! And that would be strange, since Standard Edition just doesn’t support partitioning.

So let’s dig a bit deeper and see what is actually happening…

In order to get to the bottom of this we started by looking at what was happening on another SQL Server Enterprise Edition deployment.

First of all we ran a query on the tenant’s database to see what partitions existed for the table that stores the auditing data, AuditBase. ContactBase is in there just as a control: it should have just the one storage partition.

select t.name, p.partition_number
from sys.partitions p,sys.tables t
where t.object_id = p.object_id
and t.name in ('ContactBase','AuditBase')
and p.index_id in (0,1)

Here’s what we saw:

name          partition_number
ContactBase   1
AuditBase     1
AuditBase     2
AuditBase     4
AuditBase     3

So it looks as if on Enterprise we can have partitioning working, which starts to disprove the idea that it never works.

Next we turned on SQL Server Profiler while refreshing the Audit Log grid, and captured the query that the platform executes.

Now we start to see what is really happening. The Dynamics CRM platform is executing a query that will return results that look similar, but are critically different, based on the edition of SQL Server Database Engine that is in use.

The SERVERPROPERTY documentation tells us that an EngineEdition value of 3 indicates Enterprise Edition. If this is detected, the result set consists of information about each physical partition, including the storage size in kilobytes.

For other editions (such as Standard) the actual AuditBase table is queried to group the data it contains by calendar quarter, and the number of rows in each quarter is calculated.

So we can now explain why the Standard Edition gave the illusion of having partitioning in use. The grid isn’t showing partition data, but the results of a GROUP BY query that looks very similar.

This hints that, although Dynamics CRM gathers the information to show a set of quarterly results in the grid in both cases, there will be a subtle difference in the last column: one will be a size and the other a number of rows. Sure enough, if we compare what we see in Dynamics CRM for Standard Edition above with what we see for Enterprise Edition, we have ‘Rows’ in one and ‘KB’ in the other.

One mystery remains. If the Dynamics CRM gives a set of rows on both Standard and Enterprise why could the customer see only a single row on their Enterprise environment, suggesting that partitioning is not in place?

Well it turns out that partitioning is indeed missing from their environment. This had happened because a backup of an organisation database had been taken from a Standard Edition deployment and redeployed to Enterprise Edition. We already have KB articles describing that the partition information isn’t ‘magically’ fixed when redeploying from Enterprise to Standard and so has to be removed prior to the backup on the source deployment. See KB2567984 for example.

The opposite is also true: when moving from Standard to Enterprise the overall redeployment operation works but you don’t get partitions applied to AuditBase. However, the EngineEdition in the query above will now be Enterprise, so you get a single row describing the non-partitioned table and not the set of ‘fake’ partition rows that you would have seen (for the same database) on Standard Edition.

The bad news here is that in turn you lose the ability to delete just a portion of the audit log, and you obviously can’t delete the active storage unit. So it’s important that you clear down any aged log data that you don’t want in the source Standard Edition deployment, and then potentially add the partitioning behaviour to AuditBase in the Enterprise Edition. I haven’t found the correct way to do that last task yet, but will try to do so and add a follow-up post.

Summary

In this post we established that:

Table partitioning is only used on SQL Server Enterprise Edition, and separates the data into quarterly ‘chunks’ for management (i.e. deletion)

Dynamics CRM ‘fakes’ a similar set of data on Standard Edition using a GROUP BY clause on the unpartitioned data

Restoring organisation databases across SQL Server editions can lead to complications

Thanks for reading and I hope you feel more informed. Please be sure to give your feedback on whether you like this type of content, and to ask any questions to help you and others understand the material better.