One of my most popular posts is Cleaning up the AIF document log, but this is not the only table that could benefit from a regular cleanup. For some of these standard tables there is little or no cleanup support; for example, the cleanup for the AIF document log only runs online in the client, without any option to schedule it in batch.

I know that a lot of partners and customers use SQL scripts (as I did for the AIF post) to delete this data but there are some things to keep in mind:

Pros:

Very fast.

No new release of models needed.

Cons:

Deleting a big volume of data might cause more locking, and the database log file might expand to the point where the disk runs out of space.

All business logic is skipped and new customizations might be ignored; for example, a newly added delete action is not executed, leaving orphaned data behind.

For these reasons I started thinking about building a simple framework that is easy to extend, that can limit the amount of data per run (so database transactions and growth of the log file stay under control), and that can of course be scheduled in batch.

So here’s the result:

Type: This enum is what makes the framework easy to extend; the classes that do the processing use the extension framework to execute the correct logic.

Number of days: This parameter defines the retention in number of days.

Number of records in transaction: the maximum number of records that will be deleted in one database transaction. If your SQL Server is configured to use lock escalation, deleting a large number of records in a single transaction could escalate to a table lock, which blocks all other processes on the same table.

Number of bulks: One transaction is one bulk. This is very useful to avoid flooding the transaction log. For example: if a transaction log backup runs every hour, you could schedule the cleanup to run hourly for a maximum amount of data. Once the backup has finished, the log space is freed up again and the cleanup can run once more.
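The bulk and transaction parameters translate into a chunked delete loop. As a rough sketch (not the actual framework code; the table, the cutoff variable and the parameter names are placeholders):

```xpp
// Sketch: delete in bulks, one database transaction per bulk.
int bulk;

for (bulk = 1; bulk <= numberOfBulks; bulk++)
{
    BatchJobHistory history;
    int deleted = 0;

    ttsbegin;

    while select forupdate history
        where history.CreatedDateTime < cutoffDateTime
    {
        history.delete();   // runs delete actions, unlike a raw SQL delete
        deleted++;

        if (deleted >= recordsPerTransaction)
        {
            break;
        }
    }

    ttscommit;

    if (deleted < recordsPerTransaction)
    {
        break;   // nothing left to clean up
    }
}
```

This keeps every transaction small, so lock escalation and log file growth stay bounded.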

So with this example I already provide three of the most common scenarios in standard Ax:

Batch history: This job cleans the BatchJobHistory and related (delete actions) tables with the following ranges:

CreatedDatetime: Older than the number of days.

Status: Ended.

AIF logging: This job cleans the AifMessageLog and related (delete actions) tables with the following ranges:

CreatedDatetime: Older than the number of days.

Status: Processed.

Database logging: This job cleans the SysDataBaseLog table with the following ranges:

CreatedDatetime: Older than the number of days.

If you want to extend this with other scripts for new tables all you have to do is this:

Add your new type to the BLOGHistoryCleanupType enum.

Make a new class that uses the BLOGHistoryCleanupAttribute with this enum value, inherits from BLOGHistoryCleanupProcessorBase and implements the run method.
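As a sketch, such a new processor class could look something like this (the attribute and base-class signatures are assumptions based on the description above; MyCustomLog and the cutoff helper are placeholders):

```xpp
// Hypothetical cleanup processor for a new type.
[BLOGHistoryCleanupAttribute(BLOGHistoryCleanupType::MyCustomLog)]
public class BLOGHistoryCleanupProcessorMyCustomLog extends BLOGHistoryCleanupProcessorBase
{
    public void run()
    {
        MyCustomLogTable logTable;   // placeholder table

        // Delete records older than the configured number of days,
        // respecting the transaction and bulk limits of the framework.
        ttsbegin;
        delete_from logTable
            where logTable.CreatedDateTime < this.cutoffDateTime();   // assumed helper
        ttscommit;
    }
}
```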

Introduction

This is just a quick note on how I set up an Ax 2012 environment on an Azure machine to get the most bang for my buck. The example I'm using is a DEV machine where I keep the sample code for this blog, but you could apply the principles to any environment.

Sizing & disks

Since I'm on a limited budget with my Visual Studio Enterprise subscription, I don't use premium storage. My favorites are the D2/D3 and D11 v2 machines, especially the D11: it has more memory and fewer cores (compared to the D2/D3), it can have 4 data disks, and the temporary local storage is larger than on the DS machines. This storage will become very useful for SQL Server, and the extra memory is also a must.

Since standard disks are limited to 500 IOPS and 60 MB/s throughput, I like to attach multiple disks and spread my installation over them. So I did my installation like this:

Disk 1: SQL service + SSMS + LDF files + backup folder.

Disk 2: SQL MDF files.

Disk 3: Dynamics Ax client and service + Visual Studio.

(You can even add a 4th disk on this machine size but I didn’t really need it.)

SQL Server

Another neat performance trick is to use the temporary local SSD for the SQL Server tempdb and the buffer pool extension file. Check the link below on how to do it; I also modified their PowerShell startup script to start my AOS.
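For reference, moving tempdb and enabling the buffer pool extension boils down to T-SQL like this (the D: paths are placeholders for the temporary drive; the tempdb move only takes effect after a restart, and the buffer pool extension requires SQL Server 2014 or later):

```sql
-- Move the tempdb files to the temporary local SSD (effective after a restart).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDB\templog.ldf');

-- Enable the buffer pool extension on the same SSD.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON (FILENAME = 'D:\BPE\ExtensionFile.BPE', SIZE = 16 GB);
```

Because the temporary drive is wiped when the VM is deallocated, the startup script has to recreate these folders before SQL Server starts, which is exactly what the linked PowerShell script takes care of.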

Automation

I prefer starting and stopping my machine manually but another good way is to use Azure Automation to automatically start and stop your machine when you are not using it. This will save a lot of money on your bill.

Database synchronize

We all know that Dynamics Ax constructs its tables and indexes during a DB synchronize, and if we made changes directly in SQL these could be overridden by Ax when installing model updates and syncing them to the database. But there is more to creating an index than the defaults Ax applies, and in my experience the fill factor is never specified, so the SQL Server default fill factor is used.

Wait, what fill factor?!

If this is your first question you might want to read this 🙂 After reading it you should come to the conclusion that on certain tables we might need a proper fill factor to lower the I/O on the storage subsystem.

What options do we have?

Maintenance: With the standard SQL maintenance jobs, or tools like the one from Ola Hallengren, we can specify one general fill factor. But setting one value for all tables might be more of a problem than not setting it at all.

Scripting: We can script something to change these after maintenance, but the execution might be forgotten, indexes might be renamed, and so on.

Ax: What if we could set up Ax so it wouldn't overwrite our settings?

The right way

We can set up a fill factor by going to the System administration module -> Periodic -> Database -> SQL administration. This screen uses a treeview to define some extra SQL instructions, like the fill factor per table or per index.

But since this screen uses a treeview with way too much data, it's a nightmare to work in: you can't load Excel sheets, and all the buttons execute long-running processes online on the client tier without any possibility to run them in batch. That's why I decided to write my own solution for this problem. (Source at the bottom of the post.)

This screen is a sort of automated frontend for what the treeview screen does, and it consists of three steps. (When no indexes are specified for a table, the logic runs for all indexes.)

Initialize: This service fills the setup table based on the table group; for instance, transaction tables could benefit from this. (In real-life situations I use another method, based on SQL DMVs, explained below.)

Process: This service processes our setup into the standard Ax (kernel) tables.

Reindex: This service reindexes and also sets the fill factor on the indexes we have specified in our setup.
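Under the hood the reindex step comes down to plain T-SQL like the following (the table, the index name and the 95% value are only examples):

```sql
-- Rebuild all indexes on a table with an explicit fill factor.
ALTER INDEX ALL ON dbo.INVENTTRANS REBUILD WITH (FILLFACTOR = 95);

-- Or rebuild a single index.
ALTER INDEX I_177OPENITEMIDX ON dbo.INVENTTRANS REBUILD WITH (FILLFACTOR = 95);
```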

Finding the right fill factor

Rather than basing the setup on the table group, as was my first idea, I would use a better approach. If you measure a high number of page splits, especially during business hours, you want to find out which tables cause them.

In the blog posts linked below you can find good tips on how to find indexes that need tuning in this respect. I usually start by setting the fill factor to 95% and work my way down from there. (Make sure these are tables with frequent updates or deletes, because on a table with only inserts and a clustered index that only appends at the end, setting a fill factor might not be useful. Hence setting a fill factor on a RecId index is probably not a good idea.)
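As a starting point for the DMV approach, the allocation counters in sys.dm_db_index_operational_stats show where pages are being split (note that these counters reset on a restart, so measure over a representative period):

```sql
-- Indexes with the most leaf-level page allocations (page splits show up here).
SELECT OBJECT_NAME(ios.object_id) AS table_name,
       i.name AS index_name,
       ios.leaf_allocation_count,
       ios.nonleaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
JOIN sys.indexes AS i
    ON i.object_id = ios.object_id
   AND i.index_id = ios.index_id
ORDER BY ios.leaf_allocation_count DESC;
```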

The following subject has existed for a while, but I found it so useful that I want to get it out there once more!

Let's consider the following scenario: there is a performance issue in production, and there is no other environment where the customer or the partner is able to reproduce it, or the database is just too big. (Yes, I know, don't comment on this statement 🙂 ) The last thing you want to do is put this environment under even more load by installing tools or writing extra code just to log something, so maybe this is the solution for you.

When you download Performance Analyzer for Dynamics Ax it comes with all kinds of SQL goodies, but also with two Performance Monitor templates (AX_Trace_Detail.xml and AX_Trace_ClientAccessOnly.xml), which you can find in the “DynamicsPerf\Windows Perfmon Scripts” folder. As the filenames suggest, these allow you to do tracing for Ax. But wait, don’t we already have that in the tracing cockpit of the client? Yes we do, but the cockpit doesn’t allow you to schedule a trace, which is really handy when investigating a batch process running at night in production. And let’s be honest: do you trust starting a trace from an Ax client in a production environment?

So here’s how to set it up:

Open the Windows Performance Monitor.

Create a new Data Collector Set.

Use a template to create the Data Collector Set.

Click Browse and pick one of the templates delivered by the Performance Analyzer solution. AX_Trace_Detail.xml or AX_Trace_ClientAccessOnly.xml. This last one is useful for putting on RDP or Citrix servers to reduce the amount of logging.

Change the default “%systemdrive%\PerfLogs\Admin\DAX 2012 Trace” path to another drive which has enough space, and avoid using the same drive as Dynamics Ax, so that when the drive fills up the application will still run stable.

After this you can use the standard functionality to set up schedules, disk usage, copies, and so on.

It’s just that simple, and this feature is available on every Windows installation. Just be aware that this generates a high volume of logs, so you want to configure the data manager really well. Processing the logs in the trace parser might also take a long time, so I suggest you always do this on another environment than production.

As you’ve probably already heard, the new major release of Dynamics Ax is out and it’s called Microsoft Dynamics Ax! I went to the Ax7 technical preview conference, so I had already prepared some content for you, and now that the release is official I can share it all. Dynamics Ax comes with a whole bunch of new goodies, the new compiler being the biggest one. This allowed Microsoft to implement a whole bunch of new keywords in our beloved X++ language.

This keyword for variables

We already had the this keyword for calling methods, but now we can also use it to assign values to variables or to pass them along.

C#

public class TestClass
{
    private str myVar = "This is a test";

    public void start(str _myVar)
    {
        this.myVar = _myVar;
        this.test(this.myVar);
    }

    public void test(str _myVar)
    {
        info(_myVar);
    }
}

Static constructors and fields

You all probably know this, but a static variable is a variable shared among all instances of a class. And what do all of us pattern-loving developers think? Exactly: the singleton pattern is finally possible without dirty caching tricks!

C#

public class StaticVariables
{
    private static StaticVariables myInstance = null;

    public static StaticVariables construct()
    {
        if (myInstance == null)
        {
            myInstance = new StaticVariables();
        }

        return myInstance;
    }
}

The static constructor is called TypeNew()

These are automatically called before the first use of a class.

These are called once for each type and each session.
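A minimal sketch of such a static constructor (the exact declaration follows the conference material, so treat the details as preliminary):

```xpp
public class StaticCtorExample
{
    private static int initializedAt;

    // Static constructor: runs once per type and session, before first use.
    static void TypeNew()
    {
        initializedAt = timeNow();
    }
}
```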

Var variables and declaration

Declaring a variable as a var type (implicit typing) lets the compiler decide which type it is, so you don’t have to. Once a var variable is assigned a type, the type cannot be changed anymore.

C#

var i = 1; // implicitly typed
int i = 1; // explicitly typed

And from now on you can also declare variables anywhere!

C#

for (int i = 0; i <= 10; i++)
{
    int current = i;
    info(strFmt("%1", i));
}

Const/readonly variables

Const: This is a constant and can only be assigned a value when declaring it.

readonly: The value of this variable can be changed at runtime but only through a constructor.

C#

public class ReadOnlyKeywordTest
{
    readonly str myVar = "This is a test";

    public void new(str _myVar)
    {
        this.myVar = _myVar;
    }

    public void test()
    {
        // This does not compile
        myVar = "this code fails";
    }
}

Public / protected / private access modifier for variables

Data encapsulation can now be implemented decently because, as in C#, we can declare our variables public, protected or private. This makes code more robust when applied well.

public: Access is not restricted.

protected: Access is limited to the containing class or types derived from the containing class.

private: Access is limited to the containing type.

The difference with C# is that the default modifier is still protected, because changing it to private would break too much existing application code.

Finally in try/catch statements

Finally! A way of executing code when either the entire try statement has finished or a handled exception has occurred.

C#

public void run()
{
    try
    {
        // Run business logic
    }
    catch (Exception::Error)
    {
        // Handle the exception
        // Example: propagate a decent error message to the user
    }
    finally
    {
        // Example: clean up allocated resources
    }
}

Typed exceptions

And exception handling gets even better, because now we can handle any exception that extends System.Exception. This is a big leap forward when using the .NET Framework.

C#

System.NullReferenceException nullReferenceException;

try
{
    // Run business logic
}
catch (nullReferenceException)
{
    // Handle the exception
}

Using keyword

No more worries about memory leaks when consuming unmanaged resources, because you can now use the using keyword for objects that implement IDisposable.

C#

using (var dataSet = new System.Data.DataSet())
{
    // Do stuff with it
}

Extension methods

What if there was a way to add methods to an object without modifying it in your customization layer? Well, now there is: it’s called extension methods, and the principle is the same as in C#.

C#

public static class BLOGCustTable_Extension
{
    static const str prefix = 'DEMO';

    public static str blogPrefixedCustAccount(CustTable custTable)
    {
        return strFmt("%1-%2", prefix, custTable.AccountNum);
    }

    public static void main(Args _args)
    {
        CustTable custTable;

        select firstonly custTable;
        info(custTable.blogPrefixedCustAccount());
    }
}

Some reminders when using this:

Use these in a static class.

The name of the class must have the suffix “_Extension”.

The methods must be public.

Declarative eventing with pre and post events

In 2012 we already had eventing, but one minor flaw most developers had issues with was that you had to change the object you were subscribing to. Since we now run fully in IL, it is possible to subscribe to an event without modifying the sender object.
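A post-event subscriber can then live entirely in its own class, for example (the class and method names here are placeholders for illustration):

```xpp
public class BLOGSalesTableEventHandler
{
    // Runs after SalesTable.update() without touching SalesTable itself.
    [PostHandlerFor(tableStr(SalesTable), tableMethodStr(SalesTable, update))]
    public static void SalesTable_Post_update(XppPrePostArgs _args)
    {
        SalesTable salesTable = _args.getThis();

        info(strFmt("Sales order %1 was updated", salesTable.SalesId));
    }
}
```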

Those of you who work with workflow in the procurement flow might have encountered the following problem: the client wants to approve purchase orders through the Enterprise Portal, but for some reason the work items are not visible there, while they are visible in the AX client.

Fair enough, you’d say: there actually is an option in the workflow to enable or disable this for the Enterprise Portal. Strangely enough, the checkbox is checked and there is no other reason why actions on work items would be blocked.

The problem is an actual standard Microsoft bug.

There is a difference in the workflow elements that can be used: ‘Approve purchase order’ and ‘Approve purchase order, editable’. The latter makes it possible for reviewers to edit the purchase order they need to approve; the former obviously doesn’t. Only when the latter is used do work item actions become unavailable in the Enterprise Portal.

Open AOT > Workflow > Approvals.

You will find two approval elements: PurchTableApproval and PurchTableApprovalEdit.

If we compare these two elements:

It is clear that the ‘ActionWebMenuItem’ property is missing on ‘PurchTableApprovalEdit’ for Approve/Reject/RequestChange. To solve our problem, we simply need to fill these in on the Edit element:

As we all know, index maintenance is important, especially on large Dynamics Ax databases, but I often see installations with little or no maintenance plans, or with all kinds of exotic scripts. Therefore I want to show you the SQL Server Maintenance Solution by Ola Hallengren; it contains stored procedures not only for index maintenance but also for database backups and integrity checks.

Installing it is easy: grab a copy of the installation script and run it. But I would suggest you install it in a new maintenance database and change the following parameters of the install script.

Transact-SQL

USE [master] -- Specify the database in which the objects will be created.
SET @CreateJobs = 'Y' -- Specify whether jobs should be created.
SET @BackupDirectory = N'C:\Backup' -- Specify the backup root directory.
SET @CleanupTime = NULL -- Time in hours, after which backup files are deleted. If no time is specified, then no backup files are deleted.
SET @OutputFileDirectory = NULL -- Specify the output file directory. If no directory is specified, then the SQL Server error log directory is used.
SET @LogToTable = 'Y' -- Log commands to a table.

USE [master]: Installing it in a separate maintenance database instead of master makes it easier to uninstall or update.

@CreateJobs: I like to set this option to ‘N’ because I don’t want to call the stored procedures directly from the agent, but from a T-SQL block inside a maintenance plan. This looks more consistent, so it doesn’t come across as a lack of maintenance plans.

The rest of the configuration is quite self-explanatory and personal 😉

At the moment I only use it for index and statistics maintenance, so here’s an example of how I like to run it on Dynamics Ax databases.
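As an illustration, the T-SQL block inside my maintenance plan looks roughly like this (the database name, the maintenance database and the thresholds are examples, so tune them for your environment; see Ola’s documentation for all parameters):

```sql
-- Index and statistics maintenance for the Ax database.
EXECUTE maintenance.dbo.IndexOptimize
    @Databases = 'DynamicsAx',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y',
    @LogToTable = 'Y';
```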

When Dynamics Ax 2012 came out it shipped with two tools: AxUtil.exe and the Management Shell. In my experience most Dynamics Ax developers are already familiar with AxUtil.exe but don’t have much experience with PowerShell yet. Therefore I decided to write some examples to get you going. If you have any questions or a request, please leave a comment; I might also add some scripts as I go.
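To give you an idea, here are a few of the AXModel cmdlets next to their AxUtil counterparts (the model name and file paths are examples):

```powershell
# List the installed models (axutil list).
Get-AXModel

# Export a model to a file (axutil export).
Export-AXModel -Model "MyModel" -File "C:\Temp\MyModel.axmodel"

# Import a model file into the model store (axutil import).
Install-AXModel -File "C:\Temp\MyModel.axmodel" -Details
```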

Lately I’ve been busy developing WCF services to communicate with .NET web applications. All of these web services are custom-made and use .NET data contracts, so that every application uses the same contracts. Due to the high volume of data, and for performance, we had to implement some kind of paging. I had no clue that Ax even has paging support, but it does, via properties on the QueryRun object.

For example purposes I’ve made a service which uses .NET request and response contracts. I prefer this way over X++ data contracts because it is more reusable and flexible on the client side. The code is self-explanatory to me, but you can always ask questions of course. 😉

Yesterday I was struggling a bit with an InventDim form control on a custom-made form. The problem was that the product dimension lookups were returning no results, or too many, as in example 1, while the correct result should look like example 2.

The problem was that I had named the item field ServiceItemId instead of ItemId. This matters because the class InventDimCtrl_Frm_Lookup is hard-coded to look for an ItemId field when you do a lookup on an inventory dimension. But there is an exception: the class also looks for a method called itemId. So if you have another field name, or maybe two item fields on the same table, you can implement a method like this to return the correct ItemId.
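On the table itself the workaround is then a one-liner (ServiceItemId being my custom field):

```xpp
// Picked up by InventDimCtrl_Frm_Lookup when no ItemId field is present.
public ItemId itemId()
{
    return this.ServiceItemId;
}
```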

Currently I’m working on a new Dynamics Ax 2012 project, and for that I’m developing a lot of list pages and forms. For these I had to make multiple menu items that open a list page with different filters; you can do this by setting the menu item’s query property or by calling a class which opens the form with the correct query. This involves creating a lot of query objects for the same table with only a few extra filters. If only there were a way to inherit queries from one another... and there is! 🙂

It’s called composite queries, and it’s only useful when you just want to extend a query with a range or override a method. Also note that you can only derive one level from a query.

An example:

First I created a base query that filters on Sales Type with value Sales Order, so I have a query that filters all regular sales orders.

Next I create a new query without any data sources and drag my base query to the Composite query node. Then I can add my specific range, such as Sales Status with value Back order.

That’s it! Now I can create multiple queries, such as for delivered or invoiced orders, and if I want to apply an extra range to all the queries I only have to change the base query.

Lately I’ve been working on some projects that experience bad performance in Dynamics Ax. This can have multiple causes, but one of the most important is the maintenance of SQL Server. In my humble opinion every Dynamics Ax developer should have a basic knowledge of how SQL Server works and is maintained, because it is the backbone of our beloved ERP software. :-)

That said, if terms like rebuilding and reorganizing indexes, updating statistics or recovery model don’t ring a bell, these links will be very interesting for you ;-).

Some time ago I found out that you can embed a form into a RunBase dialog. This has the advantage that you can easily use a grid control and so on, or use modified-field methods without the controlMethodOverload() method. You can do this by overriding the dialog method and adding the following code.

You can still add fields the normal way by using the addFieldValue() method; these fields will appear next to the embedded form.

This form must have a Tab control and some hard-coded groups; also, on the design, the property FrameType must be set to None. The groups I created are DialogStartGrp and RightButtonGrp; these are used for positioning fields and setting query values from code. More groups may be necessary when extending from RunBaseBatch, RunBaseReport, and so on. The Tab control is used for adding a batch tab when extending from RunBaseBatch.

In the dialogPostRun() method you can get the formRun of the dialog and make calls to the form, for example to set query ranges.

C#

public void dialogPostRun(DialogRunbase dialog)
{
    Object formRun;
    ;

    if (FormHasMethod(dialog.formRun(), "setQueryRanges"))
    {
        formRun = dialog.formRun();
        formRun.setQueryRanges();
    }

    super(dialog);
}

A way to pass data or do callbacks is to get the calling class in the form’s init method.
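In the form’s init method that could look like this (BLOGRunBaseWithForm is a placeholder name for the RunBase class):

```xpp
public void init()
{
    BLOGRunBaseWithForm caller;

    super();

    if (element.args()
        && element.args().caller()
        && classIdGet(element.args().caller()) == classNum(BLOGRunBaseWithForm))
    {
        caller = element.args().caller();
        // Pull data from the class, or hand the formRun back for callbacks.
    }
}
```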

*Edit: Microsoft has released a fix for this problem; contact support to obtain it.*

As all of you know, the Image class in Dynamics Ax 4.0 and 2009 can only run on the client. This poses a problem when you want to print, for example, invoices with your company logo on them. Having found this out, I went looking for an alternative!

I added this code to the top of the writeBitmap(OutputBitmapField _field, OutputSection _section) method in the PDFViewer class.

C#

if (isRunningOnServer() &&
    _field.name() == #FieldLogo)
{
    this.BLOGWriteBitmapOnServer(_field, _section);
    super(_field, _section);
    return;
}

For the method BLOGWriteBitmapOnServer(OutputBitmapField _field, OutputSection _section) I copied everything from writeBitmap and started by replacing the Image object with a System.Drawing.Image object; you can make a company parameter for this file path.

C#

img = System.Drawing.Image::FromFile(imgStr);

After compiling there were a few errors, which I corrected, and I ended up with this code.

A customer of mine asked me to change the font of some reports to Calibri. It all went well until we saved a report as a PDF: there was way too much spacing between the characters.

After some days of investigating and contact with Microsoft, I found out that it worked on Windows Server 2008 R2, where the Calibri font files were almost double in size. So I replaced the font files with the ones from Windows Server 2008 R2, and the reports are working like a charm. 🙂