By prematurely stopping the export at different points (pressing Stop), it is possible to find the threshold record where the export fails. Limiting the number of columns in the SmartList also narrows down the field involved. In this example, some extended characters had found their way from an ecommerce solution, through eConnect, into the customer name field. It is also important to bear in mind that wrong assumptions can be made about the tables and fields behind SmartLists; for example, the customer name may originate from one entered on an order, not from the debtor maintenance record. Try to establish the source of the data in the SmartList.

Once the offending character was identified, it was edited out of the name in the GP user interface, which resulted in the export to Excel working correctly again.

The following Stack Overflow question helped by providing a T-SQL function that allows such characters to be found:
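The original function is not reproduced here, but a minimal sketch of the technique (the function name and the PATINDEX-with-binary-collation approach are my own illustration, not taken from that answer) might look like:

```sql
-- Hedged sketch: returns the position of the first low control
-- character (codes 1-31) in a string, or 0 if none exist.
-- The binary collation makes the character range match raw codes.
CREATE FUNCTION dbo.FindFirstControlChar (@input VARCHAR(8000))
RETURNS INT
AS
BEGIN
    RETURN PATINDEX('%[' + CHAR(1) + '-' + CHAR(31) + ']%',
                    @input COLLATE Latin1_General_BIN);
END;
```

Rows containing such characters can then be listed with, for example, `SELECT CUSTNMBR FROM RM00101 WHERE dbo.FindFirstControlChar(CUSTNAME) > 0` (RM00101/CUSTNAME being the standard GP customer master table and name field).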

Note that many characters, such as “™ ®”, don’t seem to cause issues. The character that caused the problem in this case was a “T”-like character that had found its way in as a substitute for the “&” character. Investigating, we find this is character code 22 (the SYN control character). Knowing the code is useful, as attempting to copy and paste the character itself causes issues in editors.

Now that there is a character code number to work with, a SQL REPLACE is possible for occurrences of this character in these fields. As always, back up your data, and don’t experiment if you don’t know what you are doing; instead, contact your support partner.
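As a hedged illustration only (assuming the name lives in the standard GP customer master table RM00101, field CUSTNAME — verify against your own SmartList first):

```sql
-- Replace character code 22 with the '&' it was substituted for.
-- BACK UP THE DATABASE FIRST; test on a restored copy if possible.
UPDATE RM00101
SET    CUSTNAME = REPLACE(CUSTNAME, CHAR(22), '&')
WHERE  CHARINDEX(CHAR(22), CUSTNAME) > 0;
```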

It is also a good idea to check your integration and integrating applications, to find the source of the corruption.

I would like some further investigation time to see what other characters can cause this error; my guess is that it is all the low-value codes (below 32). It would be possible to make a SQL reporting job to notify users if data issues occur again in the future.

Since GP 2013, when checking the GP Process Monitor (Microsoft Dynamics GP >> Process Monitor), the message “Check for User Messages(1)” may sometimes be found in the process queue.

The origin of this message is a new feature of GP that allows administrators to send messages to GP users. These messages pop up on users’ screens after a short delay.

This is achieved through the existence of a polling process that runs regularly on each client instance. That polling process checks for any messages waiting to display to the user. It is this process waiting in the queue that shows up as “Check for User Messages(1)”.

It keeps adding itself to the process queue; after running Check Links or similar very long processes, I’ve seen dozens of these stacked up awaiting processing. Each one quickly pops off the queue and disappears once it reaches the top.

See the Process Monitor screenshot below for an example of stuck processes:

Send User Message Functionality

After around 30 seconds or so, this results in the following on the user’s screen.

Obviously this process is polling a table in the DYNAMICS database. The table is SY30000, and the message is put in the “Offline_Message” field; see below, where the message to me is “test”.
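The pending messages can be inspected directly. A minimal query (assuming the default DYNAMICS system database name; only the Offline_Message column is confirmed above, so `SELECT *` is used rather than guessing the other columns):

```sql
-- View any messages queued for delivery to GP users.
SELECT *
FROM   DYNAMICS..SY30000;
```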

Stuck Process Queue

If the process queue gets stuck (crashes), it will no longer process queued jobs; however, the “Check for User Messages” process continues to be added to the queue every minute or so. As the queue has crashed and is no longer removing items, it just builds up, filling with this process. What is seen as a result is this process swamping the queue.

As the “Check for User Messages” process is the most frequent process added to the queue, it is the most likely process to be added immediately after any crash occurs in the queue. This makes it appear, when looking at the Process Monitor, as though “Check for User Messages” caused the problem; however, it is merely a victim. The process actually responsible was whichever one immediately preceded it in the queue, as that was the item that caused the queue to halt.

The reason the previous process crashed can be investigated by checking SQL logs, switching on logging in GP (use the Support Debugging Tool), using SQL Profiler, swapping changed reports for unmodified vanilla ones, and so on. This is really a different subject, and I’d encourage you to involve a GP consultant, as investigations can be arduous.

After moving Dynamics GP to a newly built server using a backup-and-restore strategy, the Dynamics GP SQL replication kept causing problems. The following message drowned the error logs of the server. I ended up rebuilding the replication configuration entirely to solve the issue, but there may be some clues in the steps I went through that might help someone else out there. There was a lot more digging and prodding than the summary below shows!

The last step did not log any message! (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)

It was possible to add text logging by going to the “Run agent” step in the Log Reader SQL Agent job and then adding the following to the list of parameters:
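The original parameter snippet is not shown here, but the standard replication-agent logging parameters for this purpose are as follows (the output path is illustrative):

```
-Output C:\Temp\LogReaderAgent.log -OutputVerboseLevel 2
```

The agent then writes a detailed trace of each run to the named file, which is far easier to work with than the job history.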

We were also getting lots of errors relating to “Only one Log Reader Agent or log-related procedure”…“can connect to a database at a time” (see the screenshot below).

Google revealed some posts about SP1/SP2 hotfixes and cumulative updates for SQL 2012 fixing issues in the area of multiple log reader errors like this. Other posts talked about the database not being upgraded correctly, with new schema differences between versions. My conclusion on reading these posts was that the SQL replication database (distribution) may have been brought over from the previous server (we don’t know the full history of whether it was restored and/or replication was rebuilt by scripts). The restored database may not have been correctly upgraded by whatever is in SP1/SP2, both of which were applied prior to the restore of our data; thus, due to the timeline, any operations included in SP1/SP2 would not have been applied against it.

After a few evenings spent removing everything I could find of relevance in the replication distribution database tables, and clearing out and recreating the Log Reader Agent and Snapshot Agent jobs, the problems persisted. After countless recreations of the publications and subscriptions, it felt like remnants of a previous version’s replication were clinging on deep in the bowels of replication (it is amazing how gut feeling develops with experience).

Failing in patience, and for lack of a silver bullet, the solution was to remove the subscriptions and publications and disable replication on the server. I then ensured the replication distribution database and its data files were gone, ensured no SQL jobs relating to replication were left behind (there were some, so I removed them too), and checked all the nodes under SSMS that relate to replication to ensure nothing was left at all, including on the subscriber (there was a shadow of the subscription left on the subscriber).
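A hedged sketch of that teardown, using the standard replication stored procedures (the publication, database, and subscriber names below are placeholders, not from my actual setup):

```sql
-- On the publisher: drop the subscription and the publication.
EXEC sp_dropsubscription @publication = N'GP_Publication',
                         @article = N'all',
                         @subscriber = N'all';
EXEC sp_droppublication @publication = N'GP_Publication';

-- Disable the database for transactional publishing.
EXEC sp_replicationdboption @dbname = N'GPCompanyDB',
                            @optname = N'publish',
                            @value = N'false';

-- Finally, remove the distribution database and the distributor.
EXEC sp_dropdistributiondb @database = N'distribution';
EXEC sp_dropdistributor @no_checks = 1;
```

Even after this, it is worth checking SQL Agent jobs and the replication nodes in SSMS (on both publisher and subscriber) for anything left behind.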

Then I built replication anew: created a new distribution database (with a new name, for peace of mind), created a new publication, added articles, and created new subscribers.

After reinitializing all subscriptions and getting snapshots, everything started working how it should.

I feel lucky that I don’t have a totally mission critical replication setup. It is good that we have two copies of our replicated tables in different schemas, with views fronting them. This allows us to toggle between live and cached data modes, while “messing on with replication”.

Hey, it’s working!

The only thing left to figure out is if the “one log reader” error is something to worry about, or perhaps it will simply go away with the next service pack, whenever that is…

Don’t be caught out by this change in the automated restore of the test company database from live. This is something we do every Friday night on a scheduled SQL job; however, since updating to GP 2013, our test company does not work after the restore. Attempts to log in are greeted by:

The selected company is not configured as a company of the current Microsoft Dynamics System Database

If you refer to my post Schedule restore of live company to test company Dynamics GP, you will see that after the database is restored, a script is run to update the company ID in the restored company database to match that of the test company as registered in the DYNAMICS database.

A change has happened in GP 2013: the DYNAMICS system database can now be named anything you like, which has resulted in some changes to the way this script needs to work.

My first move was to refer to the original Microsoft KB article that I got the information from the first time around, and yes, there is a new script that forks depending on the presence of the SY00100 table. Check it out below, or better yet go to the KB itself, in case it has been updated since this post.

NOTE: If you have already run the old script, the new script will not execute; the error indicates that it can’t find the company lookup table SY01500 in the company database. That is because it should be looking for it in the DYNAMICS database.

A look at the script reveals the issue: the old script wrongly updates table SY00100 with the test database name (it just blindly looks for instances of DBNAME). The DBNAME field should contain DYNAMICS for the new script to work, but because of this issue it contains the test company DBNAME instead. Update it to whatever your DYNAMICS database name is; DYNAMICS is used as the example below (your CMPANYID may be different).
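A hedged example of the fix (DYNAMICS and the company ID value are examples only — substitute your own system database name and the CMPANYID of your test company):

```sql
-- Run in the restored test company database: point SY00100 back
-- at the system database instead of the test company database.
UPDATE SY00100
SET    DBNAME = 'DYNAMICS'
WHERE  CMPANYID = -1;   -- example ID; check your own CMPANYID
```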

Now run the new script again; it will run successfully, and you can log in to the test company.

I have updated my previous post to point to this post and changed the script there to protect the innocent.