In the above blog post I described how to use the SysExtension framework in combination with an instantiation strategy, which applies in the many cases where the class being instantiated requires input arguments in the constructor.

At the end of the post, however, I mentioned there is one flaw with that implementation. That problem is performance, and two factors contributed to it:

A heavy use of reflection to build the cacheKey used to look up the types

Interop impact when making native AOS calls instead of running in pure IL

The second problem is not really relevant in Dynamics 365 for Operations, as everything runs in IL now by default.

The first problem was resolved through the introduction of an interface, SysExtensionIAttribute, which ensures the cache key is built by the attribute itself and does not require reflection calls; this immediately improved performance by more than 10x.

Well, if you were paying attention to the example in my previous blog post, you noticed that my attribute did not implement the above-mentioned interface. That is because using an instantiation strategy in combination with the SysExtensionIAttribute attribute was not supported.

It becomes obvious if you look at the comments in the below code snippet of the SysExtension framework:

So if we were to use an Instantiation strategy we would fall back to the "old" way that goes through reflection. Moreover, it would actually not work even then, as it would confuse the two ways of getting the cache key.

That left you with one of two options:

Not implement the SysExtensionIAttribute on the attribute and reap the benefits of using an instantiation strategy, but suffer the significant performance hit that comes with it, or

Use the SysExtensionIAttribute, but as a result not be able to use the instantiation strategy, which limited the places where it was applicable

No more!

We have updated the SysExtension framework in Platform update 5, so now you can reap the benefits of both worlds, using an instantiation strategy while implementing the SysExtensionIAttribute interface on the attribute.

Let us walk through the changes required to our project for that:

1. First off, let's implement the interface on the attribute definition. We can now also get rid of the parm* method, which was only necessary with the "old" reflection-based approach, as that was how the framework retrieved the attribute value to build up the cache key.

As part of implementing the interface we need to provide an implementation of the parmCacheKey() method, which returns the cache key, taking the attribute value into account. We also need to implement the useSingleton() method, which determines whether the same instance should be returned by the extension framework for a given extension.

The framework will now rely on the parmCacheKey() method instead of needing to browse through the parm methods on the attribute class.
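A minimal sketch of what such an attribute could look like after this change (the attribute, enum, and member names here are illustrative, not taken from the framework):

```
// Illustrative sketch: an attribute carrying one enum value and
// implementing SysExtensionIAttribute, so no reflection is needed
// to build the cache key.
class MyFactoryAttribute extends SysAttribute implements SysExtensionIAttribute
{
    MyType myType; // the value the factory dispatches on (illustrative)

    public void new(MyType _myType)
    {
        myType = _myType;
    }

    // The cache key is built from the attribute value directly.
    public str parmCacheKey()
    {
        return classStr(MyFactoryAttribute) + ';' + int2Str(enum2int(myType));
    }

    // Whether the framework should hand out the same instance every time.
    public boolean useSingleton()
    {
        return true;
    }
}
```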

2. Let's now also change the instantiation strategy class we created, implementing the SysExtensionIInstantiationStrategy interface instead of extending SysExtAppClassDefaultInstantiation. Extending that class is no longer necessary, and the code is cleaner this way.
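The change itself is small; sketched here with an illustrative class name:

```
// Before: class MyInstantiationStrategy extends SysExtAppClassDefaultInstantiation
// After: implement the interface directly. The instantiation logic that
// passes the constructor arguments through stays the same.
class MyInstantiationStrategy implements SysExtensionIInstantiationStrategy
{
    // ... existing instantiation logic unchanged ...
}
```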

3. Finally, let's change the construct() method on the base class to use the new API, calling the getClassFromSysAttributeWithInstantiationStrategy() method instead of getClassFromSysAttribute() (which is still there for backward compatibility):
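A hedged sketch of what the updated construct() could look like; all names other than the factory API method are illustrative:

```
public static MyBaseClass construct(MyType _myType)
{
    MyFactoryAttribute attribute = new MyFactoryAttribute(_myType);

    // New API: resolve the sub-class by attribute and create it through
    // the instantiation strategy, with caching driven by parmCacheKey().
    MyBaseClass instance = SysExtensionAppClassFactory::getClassFromSysAttributeWithInstantiationStrategy(
        classStr(MyBaseClass),
        attribute,
        MyInstantiationStrategy::construct());

    return instance;
}
```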

Introduction + example

As you know, we have been focusing on expanding our extensibility story in the application, as well as documenting the various patterns common in the application and how to address them if you are an ISV needing to extend existing functionality.

mfp has recently written a blog post describing how you can extend the information shown on a RunBase-based dialog, and how to handle that information once the user enters the necessary data.

What that example did not describe is how to preserve the user-entered data, so that the next time the dialog is opened, it already contains the last entries. This is the typical pattern used across all AX forms and is internally based on the SysLastValue table.

In RunBase classes this is done through the pack and unpack methods (as well as initParmDefault).
To ensure seamless code upgrade of these classes, they also rely on a "version" of the stored SysLastValue data, which is typically kept in a macro definition. The internal class state that needs to be preserved between runs is typically listed in a local macro.
A typical example is shown below (taken from the Tutorial_RunBaseBatch class):
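Since the snippet itself is not reproduced here, a sketch of the typical shape of this pattern (the state variables are illustrative):

```
class Tutorial_RunBaseBatch extends RunBaseBatch
{
    TransDate   transDate;
    CustAccount custAccount;

    // Version of the packed state, bumped whenever CurrentList changes.
    #define.CurrentVersion(1)
    #localmacro.CurrentList
        transDate,
        custAccount
    #endmacro

    public container pack()
    {
        // First element is the version, followed by the state variables.
        return [#CurrentVersion, #CurrentList];
    }

    public boolean unpack(container _packedClass)
    {
        Version version = RunBase::getVersion(_packedClass);

        switch (version)
        {
            case #CurrentVersion:
                [version, #CurrentList] = _packedClass;
                break;
            default:
                // Returning false triggers initParmDefault() to load defaults.
                return false;
        }
        return true;
    }
}
```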

We save the packed state of the class with the corresponding version into the SysLastValue table record for this class, which means that all variables in the CurrentList macro need to be "serializable".

The container will look something like this: [1, 31/3/2017, "US-0001"]

When we need to retrieve/unpack these values, we first read the version, which we know is stored in the first position of the container.

If the version is the same as the current version, we read the packed container into the variables specified in the local macro

If the version is different from the current version, we return false, which subsequently runs the initParmDefault() method to load the default values for the class state variables

Problem statement

This works fine in overlayering scenarios, because you just add any additional state to the CurrentList macro and it will be packed/unpacked automatically when necessary.

But what do you do when overlayering is not an option? You use augmentation / extensions.

However, it is not possible to extend macros, whether global or locally defined. Macros are replaced with the corresponding text at compile time, which means all existing code using a macro would need to be recompiled if you extended it, and that is not an option.

OK, you might say, I can just add a post-method handler for the pack/unpack methods, and add my additional state there to the end of the container.

Well, that might work if your solution is the only one, but let's look at what could happen when two solutions are deployed side by side:

Pack is run and returns a container looking like this (Using the example from above): [1, 31/3/2017, "US-0001"]

Post-method handler is called on ISV extension 1, and returns the above container + the specific state for ISV 1 solution (let's assume it's just an extra string variable): [1, 31/3/2017, "US-0001", "ISV1"]

Post-method handler is called on ISV extension 2, and returns the above container + the specific state for ISV 2 solution: [1, 31/3/2017, "US-0001", "ISV1", "ISV2"]

Now, when the class is run the next time around, unpack needs to be called, together with the unpack method extensions from ISV1 and ISV2 solutions.

Unpack is run and assigns the variables from the packed state (assuming it's the right version) to the base class variables.

ISV2 unpack post-method handler is called and needs to retrieve only the part of the container which is relevant to ISV2 solution

ISV1 unpack post-method handler is called and needs to do the same

Steps 2 and 3 cannot be done in a reliable way. Say we copy the macro definitions over from the base class (assuming the members are public and can be accessed from our augmentation class), or we duplicate all those variables in unpack and hope nothing changes in the future :) - and in unpack we read the base class's sub-part of the container into them. Even then, how can we ensure that the next part of the container belongs to our extension? The ISV1 and ISV2 post-method handlers are not necessarily called in the same order for unpack as they were for pack.

All in all, this just does not work.

Note

The below line is perfectly fine in X++ and will not cause issues, which is why the base unpack() would not fail even if the packed container had state for some of the extensions as well.
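For illustration, an assignment of this shape (the variable names are made up):

```
int         version;
TransDate   transDate;
CustAccount custAccount;

// Legal in X++: the container on the right has more elements than the
// variables on the left; the trailing extension state is simply ignored.
[version, transDate, custAccount] = [1, str2Date('31/3/2017', 123), 'US-0001', 'ISV1', 'ISV2'];
```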

Solution

In order to solve this problem and make the behavior deterministic, we came up with a way to uniquely identify each extension's packed state by name and allow ISVs to set/get this state by name.

With Platform Update 5 we have now released this logic at the RunBase level. If you take a look at that class, you will notice a few new methods:

packExtension - appends this extension's part to the end of the packed state container (from base or with other ISV extensions), prefixing it with the name of the extension

unpackExtension - looks through the packed state container and finds the sub-part for this particular extension based on the extension name

isCandidateExtension - evaluates whether the passed-in container could be an extension's packed state; for that it needs to consist of the name of the extension plus the packed state in a container
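The exact RunBase method signatures are not reproduced here, but the mechanism can be sketched with hypothetical helpers: each extension's state travels as a [name, state] pair appended to the base container, so unpack can find its own part by name regardless of handler ordering.

```
// Hypothetical helpers illustrating the mechanism, not the actual RunBase API.
static container packExtensionState(container _packed, str _name, container _state)
{
    // Append a [name, state] pair so the sub-part is identifiable by name.
    return conIns(_packed, conLen(_packed) + 1, [_name, _state]);
}

static container unpackExtensionState(container _packed, str _name)
{
    int       i;
    container candidate;

    for (i = 1; i <= conLen(_packed); i++)
    {
        // isCandidateExtension-style check: is this element a [name, state] pair?
        if (typeOf(conPeek(_packed, i)) == Types::Container)
        {
            candidate = conPeek(_packed, i);
            if (conLen(candidate) == 2 && conPeek(candidate, 1) == _name)
            {
                return conPeek(candidate, 2);
            }
        }
    }
    return conNull(); // no state was packed for this extension
}
```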

You can read more about it and look at an example follow-up from mfp's post below:

At this point the catch-all block caught this exception without aborting the transaction, meaning the item was still updated. We did not abort the transaction because we had not thought about this case before

We exit the doSomething() and commit the transaction, even though we got an exception and did not want anything committed (because the second part of the code did not execute)

As a result, the NameAlias is still modified.

Now with Platform Update 5 the result will be:

That is, we

went into doSomething(),

executed the update of NameAlias for the item,

then an exception of type UpdateConflict was thrown

At this point the catch-all did not catch this exception type, as we are inside a transaction scope; the exception was unhandled in this scope and went to the one above

Since the outer scope is outside the transaction, the catch-all block there caught the exception,

and the NameAlias is unchanged

So, again, the two special exception types (UpdateConflict and DuplicateKey) will simply no longer be handled by a catch-all block inside a transaction scope. You will either need to handle them explicitly or leave it to the calling context to handle them.

This will ensure we do not get into this erroneous code execution path where the transaction is not aborted, but we never handle the special exception types internally.
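The shape of the code in question, sketched with the table and field from the example above and illustrative values:

```
void doSomething()
{
    InventTable inventTable;

    try
    {
        ttsbegin;

        try
        {
            update_recordset inventTable
                setting NameAlias = 'newAlias';

            // ... second part of the work; assume it throws Exception::UpdateConflict ...
        }
        catch // catch-all INSIDE the transaction scope
        {
            // Before Platform update 5 this block swallowed the exception,
            // ttscommit below still ran, and the NameAlias change was committed.
            // With Platform update 5, UpdateConflict and DuplicateKey are no
            // longer caught here.
            info("Handled inside the transaction");
        }

        ttscommit;
    }
    catch // catch-all OUTSIDE the transaction
    {
        // With Platform update 5 the exception lands here instead; the
        // transaction has been aborted, so NameAlias is unchanged.
        info("Handled outside the transaction");
    }
}
```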

Saturday, March 18, 2017

Background

At the Dynamics 365 for Operations Technical Conference earlier this week, Microsoft announced its plans around overlayering going forward. If you have not heard it yet, here's the tweet I posted on it:

#Dyn365Tech AppSuite will be "soft sealed" in Fall 2017 release and "hard sealed" in Spring 2018 - move to use #Extensions in your solutions

The direct impact of this change is that we should stop using certain patterns when writing new X++ code.

Pattern to avoid

One of these patterns is the implementation of factory methods through a switch block, where depending on an enumeration value (another typical example is table ID) the corresponding sub-class is returned.

First off, it couples the base class too tightly to its sub-classes, which it should not be aware of at all.
Secondly, because the application model where the base class is declared might be sealed (e.g., foundation models are already sealed), you would not be able to add additional cases to the switch block, basically locking the application up for any extension scenarios.
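Sketched with illustrative names, the anti-pattern looks like this:

```
// The pattern to avoid: a factory dispatching on an enum via switch,
// coupling the base class to every sub-class.
public static MyProcessor construct(MyProcessorType _type)
{
    switch (_type)
    {
        case MyProcessorType::TypeA:
            return new MyProcessorTypeA(); // base class knows its sub-classes
        case MyProcessorType::TypeB:
            return new MyProcessorTypeB();
        default:
            throw error(Error::wrongUseOfFunction(funcName()));
    }
}
```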

So, that's all good and fine, but what can and should we do instead?

Pattern to use

The SysExtension framework is one of the ways of replacing this problematic factory method implementation.

This has been described in a number of posts already, so I won't repeat it here. Please read the posts below instead if you are unfamiliar with how SysExtension can be used:

Note

If you modify the attributes/hierarchy after the initial implementation, you might need to clear the cache; restarting IIS is not enough, since the cache is also persisted to the DB. You can do that by invoking the below static method:

SysExtensionCache::clearAllScopes();

Parting note

There is a problem with the solution described above. The problem is performance.

Friday, March 03, 2017

A monthly cadence AX 2012 R3 update for February has just come out on LCS, and with it a few changes our team has made that deal with the overall behavior and performance of WMDP - the Warehouse Mobile Devices Portal used in the Advanced warehousing module.

I want to call them out here, and if you are reading my blog to keep up to date with the Warehousing changes, I strongly encourage you to install the below changes:

A functional enhancement that allows you to start execution of a work order that has some lines awaiting demand replenishment, processing the lines that can already be picked now

Can also be downloaded individually as KB3205828

Make sure to pair it up with KB3209909, if you use the Full button when executing work

A performance optimization, that avoids updating certain persisted counters on the wave when work goes through its stages.

This also ensures they do not get out of sync, as they are used to make certain decisions about what is allowed for a work order

Can also be downloaded individually as KB3217157

An integrity enhancement that ensures we are always in a valid state in the database, where each service call from WMDP is now executed within a single transaction scope

Can also be downloaded individually as KB3210293

Can be turned off in code for a selected WHSWorkExecuteDisplay* class, if needed

This is a great change, ensuring we do not commit any data unless everything went well, but it might hypothetically impact your new/modified WMDP flows if you handle exceptions incorrectly today. If you do find issues with them, please let me know; I am curious to hear your specific examples.

Various minor enhancements, that ensure WMDP performs well under load

Can also be downloaded individually as KB3210293

This includes improvements to enable better concurrency in various WMDP scenarios, better error handling on user entry, etc.

Again, install these, try them out, and provide feedback!

There are a lot more enhancements and bug fixes that went into this release, you can read the full list by following the link below:

I explained that when marking an enumeration as extensible, the representation of this enum under the hood (in CLR) changes. Here's the specific quote:

The extensible enums are represented in CLR as both an Enum and a Class, where each enum value is represented as a static readonly field. So accessing a specific value from the above enum, say, NumberSeqModule::Invent would under the hood look something like NumberSeqModule_Values.Invent, where Invent is of type NumberSeqModule which is an Enum. It would in turn call into AX to convert the specific named constant "Invent" to its integer enumeration value through a built-in function like the symbol2Value on DictEnum.

Something that was not super clear in the post is that this was actually a breaking change that might impact your .NET solutions relying on one of these base enumerations.

Problem statement

As part of enabling further extensibility in the application for the next release, we have made a number of additional base enums extensible.

Let's take enum TMSFeeType as an example. Assume we have made it extensible in X++. That means that in our C# project, where we use this enum, we will no longer be able to access it from Dynamics.AX.Application namespace by name, like so:

Note: Because the values are now determined at runtime by going to the AOS and asking for the correct integer value of this enum, they cannot be used in a switch/case block, which expects constant expressions known at compile-time.

What's next

Obviously, this situation is not great.

Let's hope that Microsoft will think of a good way to address this going forward.

Question to you

That leads to a question - how many of you actually have .NET libraries relying on application code in Dynamics 365 for Operations and might be impacted by us making some of the enums extensible in the next major release?