As a follow-up to my previous commentary, which set the playing field for transportation routing algorithms, argued that they are heuristic algorithms rather than true “optimizations,” and noted that modeling the constraints is at least as important as achieving a lower cost, I now want to shift the focus to the tougher challenge for a transportation management system (TMS): executing the recommendations.

The tougher challenge we face in this industry is routing freight, via algorithms, in an environment where the data are constantly changing. This is particularly important for companies that build multi-stop loads and use pool distribution. These routings create “dependencies,” where the routing of one order depends on many others.

For example, consider a shipper with a warehouse in Memphis shipping to the northeast. He might have one large order that’s half a truck going to upstate NY along with a dozen small orders going to a pool point in PA. Just as the algorithm recommends this route, the big customer in NY calls and adds several items to the order. Suddenly most of the freight going to PA won’t fit. Those PA orders will have to be re-routed even though nothing about them has changed. This is dependent routing. If all of the orders were shipping as LTL or single-stop truckloads this would be a minor issue, but in our case, life just got a lot tougher.
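To make the dependency concrete, here is a minimal sketch in Python. Every name and number in it is invented for illustration (the trailer capacity, order IDs, and cube values are assumptions, not real figures): one order growing in size bumps other, unchanged orders off a planned multi-stop load.

```python
# Illustrative sketch of "dependent routing": one order growing can push
# other orders off a planned multi-stop load. All figures are invented.
from dataclasses import dataclass

TRAILER_CUBE = 3800.0  # assumed usable trailer capacity in cubic feet


@dataclass
class Order:
    order_id: str
    dest: str
    cube: float  # cubic feet


def overflow_orders(load: list[Order], capacity: float = TRAILER_CUBE) -> list[Order]:
    """Return the orders that no longer fit when the load is filled in sequence."""
    used = 0.0
    bumped = []
    for order in load:
        if used + order.cube <= capacity:
            used += order.cube
        else:
            bumped.append(order)  # this order must be re-routed
    return bumped


# Planned load: a half-truck NY order plus a dozen small PA pool orders.
ny = Order("NY-1", "Upstate NY", 1900.0)
pa = [Order(f"PA-{i}", "PA pool", 150.0) for i in range(12)]
load = [ny] + pa

assert overflow_orders(load) == []  # everything fits as planned

# The NY customer calls and adds 1,200 cubic feet to the order...
ny.cube += 1200.0
bumped = overflow_orders(load)
# ...and now several PA orders no longer fit, even though those orders
# themselves never changed. Each bumped order needs a new routing.
print([o.order_id for o in bumped])
```

Nothing about the bumped PA orders changed; only their neighbor did. That is the dependency that makes multi-stop planning so fragile.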

The situation described above is exacerbated by the fundamental challenge that algorithms are basically batch processes that essentially run on static data. In our industry, however, the data are anything but static; they change throughout the process. Additionally, much of the information can’t be known until after the algorithm makes its recommendation and the process of load tendering begins.
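One way around the batch limitation is to treat changes as events and re-plan only what each event touches. The sketch below is a toy illustration of that idea, not anyone’s actual design; the event types, order IDs, and load IDs are all hypothetical.

```python
# Toy sketch (all names hypothetical): instead of re-running one big batch
# on stale data, an execution system maps each change event to the small
# set of loads that actually need re-routing.

# Which planned load each order currently rides on.
order_to_load = {"NY-1": "L100", "PA-3": "L100", "TX-9": "L200"}


def handle_event(event: dict) -> set:
    """Return the set of loads that must be re-planned for this event."""
    if event["type"] in ("order_changed", "stockout", "appointment_moved"):
        load = order_to_load.get(event["order_id"])
        return {load} if load else set()
    if event["type"] == "tender_rejected":
        return {event["load_id"]}
    return set()  # unknown events touch nothing


# The NY order changes: only load L100 is dirty, L200 is untouched.
dirty = handle_event({"type": "order_changed", "order_id": "NY-1"})
print(dirty)
```

The point is the shape of the loop, not the details: a change arrives, the dependent loads are identified, and only those are handed back to the routing algorithm.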

There are too many dynamic components in this process to list, but below are several of the more important ones that are worth keeping in mind throughout this discussion.

Order Changes – Most companies strive to accept order changes as late in the process as possible. These changes typically come from an ERP and can occur after an order is picked or even staged.

Warehouse Changes – The WMS/TMS integration is particularly challenging. There can be stockouts, and in some environments orders can fail QA. In other situations (particularly with mixed-product pallets or cartons) the cube and/or pallet count from the ERP might be incorrect, and the weight can change as well. Therein lies the proverbial chicken-and-egg scenario: the warehouse can’t really pick an order until it knows when the order will ship, yet order quantities aren’t solid until after the order is picked, so the algorithm is using data that may not be accurate.

Carrier Availability – Until a load is tendered and a carrier accepts it, the rate the load will run under is uncertain. Therefore, one of the most important data elements driving the algorithm can’t be known until well after the algorithm has completed. There are related issues around the availability of specific equipment in some situations, and in others there may be a problem getting any equipment at all given today’s driver shortages. Again, all of these key inputs aren’t known until well after the algorithm is finished.

Appointment Scheduling – This is another chicken-and-egg situation. If we go back to our Memphis shipper, he might have two orders going to the northeast, both with delivery windows set for Thursday only. However, when the destinations are contacted for appointments, the first stop is booked up for Thursday and wants a Friday delivery, while the second stop still needs its Thursday delivery. Again, this load can’t run as created by the algorithm.
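The check itself is simple; the trouble is that it can only be run after the appointments come back. Here is a minimal sketch of that feasibility test, with hypothetical stop names and dates (May 16, 2024 is a Thursday):

```python
# Minimal sketch (illustrative names and dates): the load was built
# assuming both stops deliver Thursday, but the appointments actually
# granted may not honor the windows the orders were routed against.
from datetime import date

# Delivery windows the orders were routed against (earliest, latest).
windows = {
    "Stop-1": (date(2024, 5, 16), date(2024, 5, 16)),  # Thursday only
    "Stop-2": (date(2024, 5, 16), date(2024, 5, 16)),  # Thursday only
}


def load_feasible(appointments: dict) -> bool:
    """True only if every granted appointment falls inside its window."""
    return all(
        windows[stop][0] <= appt <= windows[stop][1]
        for stop, appt in appointments.items()
    )


# Stop-1 is booked up Thursday and offers Friday instead:
granted = {"Stop-1": date(2024, 5, 17), "Stop-2": date(2024, 5, 16)}
print(load_feasible(granted))  # the load can no longer run as built
```

A batch planner that has already moved on cannot react to this result; an execution system has to catch it and re-route the affected orders.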

Need to Tender Some Loads Before All Orders Are In – Most shippers face a driver shortage and a general truckload capacity shortage, so many want to start tendering loads as early as possible. To reference the Memphis shipper again, he might have an order that is almost a full truck placed late Monday, to ship on Tuesday. He could tender this load as soon as he sees it to lock in a good carrier and rate. However, if he waits until late Tuesday morning, he might find a great order to throw in the back of the truck and have it ride for free.
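The tender-now-versus-wait decision can be framed as a back-of-the-envelope expected-cost comparison. Every figure below is hypothetical; the point is the shape of the tradeoff, not the numbers.

```python
# Back-of-the-envelope sketch (all figures hypothetical) of the
# tender-now-versus-wait tradeoff the Memphis shipper faces.
def expected_cost_wait(base_rate: float,
                       p_consolidation: float,
                       consolidation_savings: float,
                       late_tender_premium: float) -> float:
    """Expected cost of holding the near-full truckload until Tuesday morning.

    With probability p_consolidation a small order shows up and rides free
    (we avoid its would-be LTL cost); either way, tendering later tends to
    cost a premium because the good carriers are already committed.
    """
    return (base_rate + late_tender_premium
            - p_consolidation * consolidation_savings)


base_rate = 2400.0             # assumed Monday-night rate for the truckload
late_tender_premium = 150.0    # assumed rate bump for waiting until Tuesday
p_consolidation = 0.5          # assumed chance a suitable add-on appears
consolidation_savings = 600.0  # assumed LTL cost avoided if it rides free

wait = expected_cost_wait(base_rate, p_consolidation,
                          consolidation_savings, late_tender_premium)
tender_now = base_rate
# With these numbers waiting wins (2400 + 150 - 0.5 * 600 = 2250),
# but the answer flips as the premium rises or the odds fall.
print(tender_now, wait)
```

In practice the probabilities and premiums are themselves moving targets, which is exactly why this decision belongs in an execution system rather than a once-a-day batch run.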

Real-Time Flow – I like to view this process as one where orders are flowing like objects on a river. The orders get created, they go through a process, and then they ship. It sometimes feels as though the orders are being shot at by changes such as those described above. Those shots may alter an order’s course, but it still needs to come out the other end in one piece and routed correctly.

It’s clear that a TMS should not simply send a lot of data to an external algorithm a few times a day and expect to create much value for the customer. Rather, a TMS needs to be built as an execution system that manages the process described above, invoking different routing algorithms at different points in the process to help make decisions that are too large and too complex to be made manually. The very idea of using a third-party algorithm, or of treating the algorithm as a separate module to which data is thrown en masse, is fundamentally flawed and fated for failure. Many companies have taken this approach, and not one has created the quality product that this market needs.

Given that these issues are complex, hard to solve, and expensive when done wrong, one might ask: is the view worth the climb? Is it worth dealing with all of these challenges when one can simply ship the freight direct on single-stop truckloads and LTL?

First, the potential savings are enormous. I’ve worked with clients that have cut 10-20 percent off of their freight spend, and most companies are willing to put up with a lot to save that much money. Second, these issues don’t occur very often, so they can be dealt with fairly easily if one has the right tools; handled incorrectly, however, all of the savings can disappear, or freight can be delivered late or damaged. Third, multi-stop routing should improve customer service, with faster and more reliable deliveries and less OS&D. So the answer is yes, it is most definitely worth the climb if one has the right tools.

This is an extraordinarily difficult problem to solve, and it will only get more interesting over time as we become more creative with the options we have to ship freight and as customers’ delivery requirements become more stringent. The industry needs a solution that focuses on and addresses issues such as:

Heuristic algorithms to solve problems that can’t be truly “optimized.”

Countless constraints that are very hard to model and can’t be violated without serious repercussions.

Algorithms that must run on data that are constantly changing.

“Chicken-and-egg” scenarios in which decisions need to be tweaked, or changed dramatically, because of information that could not be known when the algorithms were initially run.

Recommendations that must be acted on immediately, because orders must be picked (sometimes within minutes) and trucks loaded immediately thereafter.

We have better connectivity tools to deliver better-quality data faster, and we have more powerful hardware on which to run more sophisticated solutions. There are therefore significant opportunities to revisit how we solve today’s problems, creating brand-new, next-generation solutions that can change our industry entirely.

Mitch Weseley is the CEO of 3Gtms. With 30 years in the industry, Mitch is widely regarded as the “father of the TMS industry,” having created six successful companies in logistics technology, including Weseley Software and G-Log.