IFL have shed the shackles of modifications. We have been mod-free for nearly two weeks with no issues.

Figure 1 – PRD with the modifications disabled

Thus ends a five-year journey to get rid of our modifications. And, interestingly enough, with almost no process changes.

No process changes you say?

Well, almost no process changes. IFL is a company where most of the employees have been with the business for a long time – most staff have been doing largely the same job for more than a decade.

This makes it a challenge to get people to:

- think outside the box with process improvement
- think of the regular oddities that occur in their job and test them

And then there are the attitudes to testing itself:

- Test – well, it worked before!
- Test – that’s a simple process, it would NEVER change!
- Test – reports? Oh, why would you test those?
- Test – interfaced programs? What do you mean that program that says “Export M3 to Prodoc” gets its data from M3?

With upgrades, we regularly encounter testing fatigue or poor-quality testing. Some of this comes from having such a small team, which leaves staff with little time to complete testing on top of their normal day-to-day tasks; some of it comes from the mentalities mentioned above. Though the failures in comprehensive testing have never been much more than annoying (no critical issues), they added needless pressure on the IT resources trying to resolve the issues promptly.

We had many, many Crystal Reports supporting the business – significant process changes would have meant extensively modifying some of the more complex reports, and we had little internal resource to deal with significant changes to reports.

Adding process changes quickly accelerates testing fatigue and introduces the risk that staff don’t understand the process change correctly, don’t test it appropriately, or switch back to the old process halfway through testing – invalidating their results, or requiring consultants or IT to fix the issues and retrain them.

Process changes also tend to extend the amount of time required to complete upgrades, especially when you have to work through multiple iterations of the process change – which again feeds testing fatigue.

Over the years, I have taken a firm stance that when applying upgrades we uplift everything as-is, and then work on process improvements. By doing this we have never had significant issues with upgrades or go-lives, and staff are happier and less stressed by the upgrades – which leads to a happier IT team. It has also meant that we often get away with very short testing cycles: our upgrade from 10.1 to 13.2 was compressed into less than two weeks, and no staff member spent more than a full day testing during those two weeks.

Demonstrating that we can manage upgrades without major disruption to the business helps sell the idea that we can apply changes incrementally without introducing significant cost and risk. We build up confidence that perhaps we can start tweaking a process here and there – the ultimate dream being that we get into the mindset of controlled, continuous process improvement, with incremental tweaks and testing becoming the norm.

The disadvantage of this philosophy, as we can attest from experience, is that process improvement may never actually end up happening – often for valid reasons.

Why did it take so long?

We know that IT is a journey, not a destination…

IFL is severely resource-constrained in IT: at its peak, two staff (including myself) ran all of IT and communications while supporting a highly dependent userbase.

We wanted to take small, easy bites at the modification cherry: small changes, test, roll out to the userbase, then wait until we were confident in the results. It took a great deal of pressure off IT at the expense of expedience.

Many of the modifications were poorly documented – it turned out that in almost all of the core modifications, subsequent amendments had been made to functionality which never got pushed back into the specs. So modification removal involved a lot of suck-it-and-see. Combined with the concerns around testing, the smaller the potential area of damage, the better.

As long-time readers will know, this blog was created due to a lack of information around what could be done with scripting and how to do it – so we also needed to figure out whether something was actually possible, and how to do it.

What made it possible?

In short, it wouldn’t have been possible to extract the modifications without Smart Office scripting. We had several critical processes that simply couldn’t be realistically worked around.

The Approach

Initially I worked on the premise of a weighting system. Modifications that were quick and easy to remove were weighted strongly; importance, the process owners’ ability to test, and the overall risk were also added to the weighting.

Easy wins make for good internal PR – right throughout the corporate ladder. Proving that we could make long-term savings with minimal disruption made it easier to justify spending more time on mod removal. Demonstrating to staff that we could make their lives easier and improve how they work through their processes encouraged them to think about what other areas could be improved.

Often the modifications had little pain points which could easily be eliminated with a little code in the ‘scripted’ versions. One example was a panel which stored the container vessels that would take our product: over the years the list got larger and larger, and the filter boxes never really worked very well. So the replacement script would, by default, only show container vessels with a sailing date > (current date – 30 days). Though functionally nothing really changes, we removed a significant pain point.
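The default filter can be sketched in plain JavaScript – a minimal sketch only, as the vessel objects, field names, and function name below are all illustrative assumptions; the real logic runs inside a Smart Office script against the panel’s list:

```javascript
// Sketch of the replacement script's default filter: only show
// container vessels whose sailing date is later than (today - 30 days).
// The data shape and names here are made up for illustration.
function filterRecentVessels(vessels, today) {
  const cutoff = new Date(today);
  cutoff.setDate(cutoff.getDate() - 30); // current date - 30 days
  return vessels.filter(v => new Date(v.sailingDate) > cutoff);
}

// Example: with "today" as 2016-12-01, only sailings after 2016-11-01 remain.
const vessels = [
  { name: "Aging Vessel",  sailingDate: "2016-06-15" },
  { name: "Recent Vessel", sailingDate: "2016-11-20" },
  { name: "Future Vessel", sailingDate: "2017-01-05" },
];
const visible = filterRecentVessels(vessels, new Date("2016-12-01"));
```

Nothing is lost by defaulting the view this way – the older vessels are still available if a user widens the filter – but the list the users see day to day stays small.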

The quick and easy modifications provided experience with the Smart Office framework – stupid mistakes or bad calls in code structure could be identified while the easier mods were being replaced, before we got into the complex modifications.

It also helped us identify that there were issues with the modification documentation.

The Revised Approach

A significant earthquake occurred which pulled all IT resource away from process improvement; damage control of infrastructure, then planning and applying long-term fixes, became the focus. To boot, the stress on staff dealing with a multitude of earthquake-related issues reduced the probability of even minimal testing being completed appropriately to about zero.

It was also abundantly clear by this point that even pulling out simple modifications individually was going to be troublesome, given the holes in the documentation and the way the modifications themselves were bundled. So code was written to replicate the modified functionality, tested, and shelved, with the aim that when we did an upgrade we would pull the modifications out and do complete end-to-end testing with just the scripts and no mods. In some instances where we desperately needed a fix for a modified file, or where we knew there were no other dependent modifications, we would apply a JT for a specific class via LCM – LCM provides the option to delete the modified class.

When we did the 13.2 upgrade, not all of the modifications had been scripted, and we were told that if we split out the modifications that were no longer required, the mod uplift was going to be significantly more expensive. Some of this was due to the bundling, but much of it was because a single ‘mod’ actually comprised several different, unrelated modifications – unwinding it all was going to be prohibitively expensive and risky even before we considered potential testing issues.

We were planning a 13.3 upgrade without uplifting the modifications, but we had a number of challenges external to the organisation in getting there. We did, however, embark on an unrelated internal project to eliminate an external program, which resulted in a few minor process changes. During this it was identified that one of our core mods would no longer be needed in the new process.

By the time this project was complete, we had only one core modification left to replace – it was around allocation, and probably the most important to the business. After extensive testing it was determined that the script to replace the allocation mod worked, and the call was made to disable all the modifications. Due to the concerns around the quality of the mod documentation there was a reasonable amount of anxiety – we were now removing all of the modified classes from production, along with any undocumented behaviour that may have been significant to our business and slipped through testing…but the calls never came in!

Some of the Challenges

Information Availability

Long-time readers and people who have been to my presentations will know that the original purpose of this blog was to explore the Smart Office framework because there just wasn’t information available on how to do things.

The lack of information would blow out script development time. Often I would spend time writing butcheries of scripts purely to explore or test concepts, brute-force iterating through variations until I got the results I wanted. Then I would usually take the learnings and create something cleaner and more robust.

The availability of the SDK helped significantly, but it still left a lot of unanswered questions – so you end up back in the cycle of looking for something that vaguely looks like what you want, then writing code to see what happens in various scenarios.

Shipping Details

Figure 2 – Shipping Details Panel replacement (yes, it’s ugly!)

As I have blogged about previously, we have a shipping panel which provides some consolidated shipping information. Much of this information goes into our export documentation system.

The shipping details get entered as a user goes through the panel sequence in OIS100 and get associated with a specific customer order.

The interesting thing about the behaviour in OIS100 is that the customer order number (ORNO) doesn’t get generated until you have finished going through the OIS100 sequence. So the OIS100 class was modified so that a) it had the shipping details panel in the sequence, and b) the customer order number was generated as soon as you went to any of the OIS100 panels.

Figure 3 – No CO Number in standard OIS100 as you go through the header information

This created a scenario where I couldn’t add a button to any of the panels to attach the shipping information, as I didn’t have a CO number to attach the details to. I considered using some of the caching objects, but decided the best bet was to add a button to OIS300 and OIS101 to get to the shipping panel. (OIS300 was done to ‘add value’ and make the compromise easier to accept: “now you can get directly into the shipping details without having to go through the customer order”.)

It also turned out that the shipping details modification had dates which, if changed, also changed some of the order header dates – something that wasn’t documented, nor picked up in testing. There are no APIs to make these changes, so I wrote a little MFormsAutomation script to fire if the date was changed on the shipping details panel.

The shipping details mod was a great one from a testing perspective – I could remove the mod from the panel sequence and just add my buttons; if there was an issue, it was relatively trivial to get staff to use the mod rather than my script.

Eligibility Allocation

This is the most important modification we have. Some stock that we hold isn’t eligible to go to some countries. In some instances the reasons can be as simple as “the chartered vessel is owned by a company in country x, and country y doesn’t like country x” – so we can’t export product caught under our quota by that vessel to country y.

The modification looked at a customer order’s destination country as defined in the Shipping Details modification. It would then retrieve a value from ATS030 which determined an eligibility number. This number (which ranged from 0 to 4) was checked against the item/lot eligibility attribute to determine whether we could hard allocate the stock.

The modified functionality allowed this to happen at an order level rather than applying the eligibility attribute to each CO line, and it also retrieved the destination from a modification table.

As staff found out several years ago, the mod only supported an eligibility range of 0 to 4. When they decided that they needed an extra eligibility level and implemented it directly into production without testing (!), it didn’t work.

The script currently still reads from the modification table to retrieve the destination, but it stamps each CO line with an eligibility attribute value – standard M3 then takes over when we go to hard allocate and only shows eligible items/lots. The script will happily handle eligibility levels beyond those the modification supported.
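As a rough sketch of the check involved – the destination-to-level mapping, the field names, and the “lot attribute must meet or exceed the destination’s level” comparison are all assumptions for illustration; in reality the script stamps the CO lines and standard M3 does the filtering at hard allocation:

```javascript
// Illustrative eligibility check: each destination maps to an eligibility
// number; a lot can be hard allocated only if its eligibility attribute
// satisfies that number. The >= rule and all names are assumptions.
const destinationEligibility = {
  AA: 0, // unrestricted destination
  BB: 2,
  CC: 5, // beyond the old mod's hard-coded 0-4 range - no problem here
};

function eligibleLots(destination, lots) {
  const level = destinationEligibility[destination];
  return lots.filter(lot => lot.eligibility >= level);
}

// Example: destination BB (level 2) excludes the level-1 lot.
const lots = [
  { lot: "L001", eligibility: 1 },
  { lot: "L002", eligibility: 3 },
];
const allocatable = eligibleLots("BB", lots);
```

Because nothing in the script hard-codes the 0–4 range the mod imposed, adding an extra eligibility level is just another mapping entry rather than a code change.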

As a side note, the model this follows for determining eligibility has been somewhat lacking over the last few years as the rules have become more complex. To address this, we had scoped out and developed a more flexible and comprehensive solution. The aim had been to cut over to it on our next upgrade, as it would require more detailed testing – and the expectation was that once the users actually got what they had asked for, they would reconsider and we’d have to tweak extensively :-). But as we got down to this being the only modification left, and with issues getting the 13.3 upgrade underway, we decided to take the code we had written, cut it down to mimic the existing eligibility process, test, and deploy.

We will be revisiting this – and the greater discussion on how eligibility should work – early in the new year, given some recent business changes. But importantly, the core code and concepts are there and work.

Still To Do

The shipping details are currently stored in a ‘modification’ database. This data will be pushed into the Customer Extension tables in the very near future, so all the M3-dependent data is within the standard M3 tables. Given that the eligibility logic is now a script under our control which needs the destination from the modification database, we can either point the webservice SQL call at the new location with minimal fuss, or adjust the code to use the customer extension table APIs directly.

To Wrap Up

Now that we have replaced the modifications, we will save tens of thousands of dollars on each upgrade. We no longer need to consider the risk and the extra work required to test modifications; we can now upgrade easily. We can now patch indiscriminately :-).

It took a lot longer than expected: we had to deal with poorly documented functionality, deeply embedded reporting, and change-resistant staff, all of which restricted how much process change the business could realistically handle.

A philosophy of ‘bare minimum of change’ has seen us upgrade multiple times without significant issue, and has seen us first deprecate and then disable modifications with no notable impact on the userbase.

From a cost perspective, a single upgrade will more than cover the costs directly associated with converting the mods to scripts. From an indirect cost perspective – being able to patch, not needing to uplift modifications to get fixes applied, reduced testing, reduced consideration and planning around upgrades, peace of mind – though hard to quantify, all have a bearing.

One of the most important things now is that we have more flexibility. We can incrementally improve these scripts as the business changes.

Nice work, and as always I do appreciate your blog. Just a minor note: the best way to update the customer extension table is to use the API CUSEXTMI, which can be called directly from the script. Please do not use SQL for any update, since that causes a number of issues:
* No events are triggered, meaning things using EventHub for synchronization are not updated, and you cannot use the event-based functionality in M3 BE (CMS04* functions)
* No timestamp will (most likely) be set on the DB record, which can cause future issues.

One of my statements was a little ambiguous – we don’t do direct updates against the M3 database. With the modified tables (in the customer modification schema IFLJDTA) we currently use insert/update/select statements via MWS webservices to populate the mod tables. Once we move the shipping details out of the mod tables, the webservices will be replaced with the CUSEXTMI.* calls.

I think the ambiguous statement I made was about the eligibility modification; we use MWS to wrap a select statement to retrieve the destination field from the shipping details. Once we migrate from the modified tables, I have the option of simply changing the MWS-wrapped select statement to look at the customer extension table, rather than having to work the CUSEXTMI calls into the eligibility script. The eligibility will get moved to use the API calls, but depending on other projects and time commitments I have the luxury of being able to choose how much change I introduce at once.

Scott – I know the journey you took to get to vanilla and some of the hurdles in bringing the rest of the organisation with you (and I remember the visit all those years ago that started the M3 experience!).
Great stuff,