the following entry was prompted by a request for an article on the topic of “optimization” for publication in punetech.com, a website co-founded by amit paranjape, a friend and former colleague. for reasons that may have something to do with the fact that i’ve made a living for a couple of decades as a practitioner of that dark art known as optimization, he felt that i was best qualified to write about the subject for an audience that was technically savvy but not necessarily aware of the applications of optimization. it took me a while to overcome my initial reluctance: is there really an audience for this? after all, even my daughter feigns disgust every time i bring up the topic of what i do. after some thought, i accepted the challenge, as long as i could take a slightly unusual approach to a “technical” topic: i decided to personalize it by rooting it in a personal-professional experience. i could then branch off into a variety of different aspects of that experience, some technical, some not so much. read on …

background

the year was 1985. i was fresh out of school, entering the “real” world for the first time. with a bachelor’s in engineering from IIT-Bombay and a graduate degree in business from IIM-Ahmedabad, and little else, i was primed for success. or disaster. and i was too naive to tell the difference.

for those too young to remember those days, 1985 was early in rajiv gandhi’s term as prime minister of india. he had come in with an obama-esque message of change. and change meant modernization (he was the first indian politician with a computer terminal situated quite prominently in his office). for a brief while, we believed that india had turned the corner, that the public sector companies in india would reclaim the “commanding heights” of the economy and exercise their power to make india a better place.

CMC was a public sector company that had inherited much of the computer maintenance business in india after IBM was tossed out in 1977. quickly, they broadened well beyond computer maintenance into all things related to computers. that year, they recruited heavily in IIM-A. i was one of an unusually large number of graduates who saw CMC as a good bet.

not too long into my tenure at CMC, i was invited to meet with a mid-level manager in the electronics & telecommunications department of the oil and natural gas commission of india (ONGC). the challenge he posed to us was simple: save money by optimizing the utilization of helicopters in the bombay high oilfield.

the problem

the bombay high offshore oilfield, the setting of our story

the bombay high oilfield is about 100 miles off the coast of bombay (see map). back then, it was a collection of about 50 oil platforms, divided roughly into two groups, bombay high north and bombay high south.

(on a completely unrelated tangent: while writing this piece, i wandered off into searching for pictures of bombay high. i stumbled upon the work of captain nandu chitnis, ex-navy now ONGC, biker, amateur photographer … who i suspect is a pune native. click here for a few of his pictures that capture the outlandish beauty of an offshore oil field.)

movement of personnel between platforms in each of these groups was managed by a radio operator who was centrally located.

all but three of these platforms were unmanned. this meant that the people who worked on these platforms had to be flown out from the manned platforms every morning and brought back to their base platforms at the end of the day.

at dawn every morning, two helicopters flew out from the airbase in juhu, in northwestern bombay. meanwhile, the radio operator in each field would get a set of requirements of the form “move m men from platform x to platform y”. these requirements could be qualified by time windows (e.g., need to reach y by 9am, or not available for pick-up until 8:30am) or priority (e.g., as soon as possible). each chopper would arrive at one of the central platforms and get its instructions for the morning sortie from the radio operator. after doing its rounds for the morning, it would return to the main platform. at lunchtime, it would fly lunchboxes to the crews working at unmanned platforms. for the final sortie of the day, the radio operator would send instructions that would ensure that all the crews were returned safely to their home platforms before the chopper was released to return to bombay for the night.

the challenge for us was to build a computer system that would optimize the use of the helicopters. the requirements were ad hoc, i.e., there was no daily pattern to the movement of men within the field, so the problem was different every day. it was believed that the routes charted by the radio operator were inefficient. given the amount of fuel used in these operations, an improvement of 5% over what they did was sufficient to result in a payback period of 4-6 months for our project.

this was my first exposure to the real world of optimization. a colleague of mine — another IIM-A graduate — and i threw ourselves at this problem. later, we were joined by yet another guy, an immensely bright fellow who could make the lowly IBM PC-XT — remember, this was the state-of-the-art at that time — do unimaginable things. i couldn’t have asked to be a member of a team that was better suited to this job.

the solution

we collected all the static data that we thought we would need. we got the latitude and longitude of the on-shore base and of each platform (degrees, minutes, and seconds) and computed the distance between every pair of points on our map (i think we even briefly flirted with the idea of correcting for the curvature of the earth but decided against it, perhaps one of the few wise moves we made). we got the capacity (number of seats) and cruising speed of each of the helicopters.
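to give a flavor of that computation, here is a sketch in modern python (the original was in turbo pascal, and these function names and units are mine, not from the project): convert each degrees-minutes-seconds reading to decimal degrees, then apply a flat-earth approximation, which is what ignoring the curvature of the earth amounts to. at the scale of an oilfield, the error is tiny.

```python
import math

def dms_to_degrees(degrees, minutes, seconds):
    # convert a (degrees, minutes, seconds) reading to decimal degrees
    return degrees + minutes / 60.0 + seconds / 3600.0

def flat_distance_nm(lat1, lon1, lat2, lon2):
    # flat-earth approximation of the distance (in nautical miles) between
    # two points given in decimal degrees: one minute of latitude is one
    # nautical mile, and longitude minutes shrink by cos(latitude)
    mean_lat = math.radians((lat1 + lat2) / 2.0)
    dlat_nm = (lat2 - lat1) * 60.0
    dlon_nm = (lon2 - lon1) * 60.0 * math.cos(mean_lat)
    return math.hypot(dlat_nm, dlon_nm)
```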

we collected a lot of sample data of actual requirements and the routes that were flown.

we debated the mathematical formulation of the problem at length. we quickly realized that this was far harder than the classical “traveling salesman problem”. in that problem, you are given a set of points on a map and asked to find the shortest tour that starts at any city and touches every other city exactly once before returning to the starting point. in our problem, the “salesman” would pick up and/or drop off passengers at each stop. the number he could pick up was constrained, so this meant that he could be forced to visit a city more than once. the TSP is known to be a “hard” problem, i.e., the time it takes to solve it grows very rapidly as you increase the number of cities in the problem. nevertheless, we forged ahead. i’m not sure if we actually completed the formulation of an integer programming problem but, even before we did, we came to the conclusion that this was too hard a problem to be solved as an integer program on a first-generation desktop computer.
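to see concretely why even the plain TSP blows up, consider the brute-force approach: with n cities there are (n-1)! distinct tours to check, so every extra city multiplies the running time. (this sketch is the classical TSP, shown purely for illustration; it is not the pickup-and-drop variant we faced.)

```python
import itertools
import math

def shortest_tour_length(dist):
    # brute-force TSP: try every ordering of cities 1..n-1, starting and
    # ending at city 0; the (n-1)! orderings are why this explodes with n
    n = len(dist)
    best = math.inf
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best
```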

instead, we designed and implemented a search algorithm that would apply some rules to quickly generate good routes and then proceed to search for better routes. we no longer had a guarantee of optimality but we figured we were smart enough to direct our search well and make it quick. we tested our algorithm against the test cases we’d selected and discovered that we were beating the radio operators quite handily.
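the flavor of such a rule-based construction can be sketched in modern python (illustrative only; none of this is the original turbo pascal code, and the real rules were far richer): fly greedily to the nearest platform where the chopper can drop someone off or pick up a group that still fits in the seats. note that, as in the text, a platform may be visited more than once.

```python
def greedy_route(start, requests, dist, seats):
    # requests is a list of (origin, destination, men) tuples; dist is a
    # platform-by-platform distance matrix; seats is the chopper capacity
    onboard = []                # (destination, men) per group on board
    pending = list(requests)
    route = [start]
    while True:
        here = route[-1]
        # drop off every group whose destination is the current platform
        onboard = [(d, m) for d, m in onboard if d != here]
        # pick up any group waiting here that still fits in the seats
        still = []
        for o, d, m in pending:
            if o == here and m + sum(x for _, x in onboard) <= seats:
                onboard.append((d, m))
            else:
                still.append((o, d, m))
        pending = still
        # fly to the nearest platform where something useful can happen
        used = sum(m for _, m in onboard)
        targets = ({d for d, _ in onboard}
                   | {o for o, _, m in pending if used + m <= seats})
        if not targets:
            break
        route.append(min(targets, key=lambda p: dist[here][p]))
    return route
```

a real version would then search for improvements from this starting route, which is where the "better routes" phase of the algorithm came in.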

then came the moment we’d been waiting for: we finally met the radio operators.

they looked at the routes our program was generating. and then came the first complaint. “your routes are not accounting for refueling!”, they said. no one had told us that the sorties were long enough that you could run out of fuel halfway, so we had not been monitoring that at all!

so we went back to the drawing board. we now added a new dimension to the search algorithm: it had to keep track of fuel and, if it was running low on fuel during the sortie, direct the chopper to one of the few fuel bases. this meant that some of the routes that we had generated in the first attempt were no longer feasible. we weren’t beating the radio operators quite as easily as before.

we went back to the users. they took another look at our routes. and then came their next complaint: “you’ve got more than 7 people on board after refueling!”, they said. “but it’s a 12-seater!”, we argued. it turns out they had a point: these choppers had a large fuel tank, so once they topped up the tank — as they always do when they stop to refuel — they were too heavy to take a full complement of passengers. this meant that the capacity of the chopper was two-dimensional: seats and weight. on a full tank, weight was the binding constraint. as the fuel burned off, the weight constraint eased; beyond a certain point, the number of seats became the binding constraint.
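the resulting capacity rule is easy to state in code. a minimal sketch (every number below is invented for illustration; real load charts are aircraft-specific):

```python
def max_passengers(seat_count, fuel_kg, max_takeoff_kg, empty_kg,
                   passenger_kg=100):
    # capacity is the tighter of two limits: the number of seats, and the
    # weight margin left over after the airframe and the fuel on board
    weight_margin = max_takeoff_kg - empty_kg - fuel_kg
    by_weight = max(0, int(weight_margin // passenger_kg))
    return min(seat_count, by_weight)
```

with these made-up numbers, a full tank leaves room for only 7 passengers in a 12-seater, and the seat count takes over as the binding constraint once the fuel burns down.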

we trooped back to the drawing board. “we can do this!”, we said to ourselves. and we did. remember, we were young and smart. and too stupid to see where all this was going.

in our next iteration, the computer-generated routes were coming closer and closer to the user-generated ones. mind you, we were still beating them on average but our payback period was slowly growing.

we went back to the users with our latest and greatest solution. they looked at it. and they asked: “which way is the wind blowing?” by then, we knew not to ask “why do you care?” it turns out that helicopters always land and take off into the wind. for instance, if the chopper was flying from x to y and the wind was blowing from y to x, the setting was perfect. the chopper would take off from x in the direction of y and make a bee-line for y. on the other hand, if the wind was also blowing from x to y, it would take off in a direction away from y, do a 180-degree turn, fly toward and past y, do yet another 180-degree turn, and land. given that, it made sense to keep the chopper generally flying a long string of short hops into the wind. when it could go no further because the fuel was running low, or it needed to go no further in that direction because there were no passengers on board headed that way, then and only then did it make sense to turn around and make a long hop back.
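one crude way to capture this is to make the travel-time matrix asymmetric: add the tailwind component to ground speed, and charge a fixed penalty for the extra turns on downwind legs. (the cosine model and the penalty value here are my illustrative assumptions, not the project’s actual numbers.)

```python
import math

def leg_minutes(dist_nm, cruise_kt, wind_kt, wind_to_deg, heading_deg,
                turn_penalty_min=2.0):
    # tailwind component of the wind along this leg (negative for headwind)
    tail_kt = wind_kt * math.cos(math.radians(heading_deg - wind_to_deg))
    minutes = 60.0 * dist_nm / (cruise_kt + tail_kt)
    # a downwind leg costs two extra 180-degree turns at take-off and landing
    if tail_kt > 0:
        minutes += turn_penalty_min
    return minutes
```

with this model, the time from x to y no longer equals the time from y to x, which is exactly the “bloody asymmetric distance matrix” of the story.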

“bloody asymmetric distance matrix!”, we mumbled to ourselves. by then, we were beaten and bloodied but unbowed. we were determined to optimize these chopper routes, come hell or high water!

so back we went to our desks. we modified the search algorithm yet another time. by now, the code had grown so long that our program broke the limits of the editor in turbo pascal. but we soldiered on. finally, we had all of our users’ requirements coded into the algorithm.

or so we thought. we weren’t in the least bit surprised when, after looking at our latest output, they asked “was this in summer?”. we had now grown accustomed to this. they explained to us that the maximum payload of a chopper is a function of ambient temperature. on the hottest days of summer, choppers have to fly light. on a full tank, a 12-seater may now only accommodate 6 passengers. we were ready to give up. but not yet. back we went to our drawing board. and we went to the field one last time.
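the temperature effect can be layered on in the same spirit. a hypothetical linear derating (real helicopter performance charts are nonlinear and type-specific; the numbers are made up):

```python
def hot_day_payload_kg(base_payload_kg, temp_c,
                       ref_temp_c=15.0, derate_per_deg=0.01):
    # above the reference temperature, thinner air cuts the payload by a
    # fixed fraction of the baseline per degree (purely illustrative)
    excess = max(0.0, temp_c - ref_temp_c)
    return max(0.0, base_payload_kg * (1.0 - derate_per_deg * excess))
```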

in some cases, we found that the radio operators were doing better than the computer. in some cases, we beat them. i can’t say no creative accounting was involved but we did manage to eke out a few percentage points of improvement over the manually generated routes.

epilogue

you’d think we’d won this battle of attrition. we’d shown that we could accommodate all of their requirements. we’d proved that we could do better than the radio operators. we’d taken our machine to the radio operator’s cabin on the platform and installed it there.

we didn’t realize that the final chapter hadn’t been written. a few weeks after we’d declared success, i got a call from ONGC. apparently, the system wasn’t working. no details were provided.

i flew out to the platform. i sat with the radio operator as he grudgingly input the requirements into the computer. he read off the output from the screen and proceeded with his job. after the morning sortie was done, i retired to the lounge, glad that my work was done.

a little before lunchtime, i got a call from the radio operator. “the system isn’t working!”, he said. i went back to his cabin. and discovered that he was right. it was not that our code had crashed. the system wouldn’t boot. when you turned on the machine, all you got was a lone blinking cursor on the top left corner of the screen. apparently, there was some kind of catastrophic hardware failure. in a moment of uncommon inspiration, i decided to open the box. i fiddled around with the cards and connectors, closed the box, and fired it up again. and it worked!

it turned out that the radio operator’s cabin was sitting right atop the industrial-strength laundry room of the platform. every time they turned on the laundry, everything in the radio room would vibrate. there was a pretty good chance that our PC would regress to a comatose state every time they did the laundry. i then realized that this was a hopeless situation. can i really blame a user for rejecting a system that was prone to frequent and total failures?

other articles in this series

this blog entry is intended to set the stage for a series of short explorations related to the application of optimization. i’d like to share what i’ve learned over a career spent largely in the business of applying optimization to real-world problems. interestingly, there is a lot more to practical optimization than models and algorithms. each of the links below leads to a piece that dwells on one particular aspect.

Dr. Narayan Venkatasubramanyan has spent over two decades applying a rare combination of quantitative skills, business knowledge, and the ability to think from first principles to real world business problems. He currently consults in several areas including supply chain and health care management. As a Fellow at i2 Technologies, he tackled supply chain problems in areas as diverse as computer assembly, semiconductor manufacturing, consumer goods, steel, and automotive. Prior to that, he worked with several airlines on their aircraft and crew scheduling problems. He topped off his days at IIT-Bombay and IIM-Ahmedabad with a Ph.D. in Operations Research from the University of Wisconsin-Madison.

He is presently based in Dallas, USA and travels extensively all over the world during the course of his consulting assignments. You can also find Narayan on LinkedIn at: http://www.linkedin.com/in/narayan3rdeye

30 thoughts on “Optimization: A case study”

Nice to see your name in print again, and what a great example of the shortcomings of optimization.

I also think that this is a classical example of what at first seems to be an optimization problem, but is not. Stated differently, I think human judgement, with a little help from heuristics, will at least match optimization in situations that are essentially about scheduling. The problem is too dynamic and the constraints too variable.

I still think that optimization has a role to play in longer term decisions in which the degrees of freedom are still very large. BUT, and it is a big but, I have lost confidence in the accuracy of the results of optimization. I think the result of the optimization is in the region of the optimum, and should be the starting point of a consensus decision process. In other words, optimization should be used to eliminate infeasible regions rather than identifying the location of the optimum.

We fully recognize this phenomenon with forecasting. I know of very few companies that rely on the statistical forecast alone for planning their supply, and in the current economic climate they are probably out of business. We all accept that the statistical forecast is only the starting point of a demand planning process precisely because we know that there are many variables we cannot take into consideration when generating a statistical forecast and that demand, by its very nature, is variable. Yet we treat the results of an optimization engine used to plan supply as precise and accurate.

Around the time you started the work you discuss in this article I was at Penn State doing research into “decision under uncertainty” while working for Dennis Pegden, one of the pioneers in discrete event simulation, and pretending that I understood Queueing Theory. If there is one lesson I took away from this time it is that a model can never replicate reality precisely enough, and indeed life is uncertain. This was brought home to me in a lecture by a Dr Mehta, one of the pioneers of Fuzzy Logic, at an Artificial Intelligence conference. The central theme of the lecture was that AI was fundamentally flawed because one could never be sure that all the exception/rule pairs had been incorporated, because the decision space is unbounded, and therefore the result could not be trusted. He went on to add that it was the assumption of preciseness of the result that bothered him the most, and the confidence with which the result was followed. (He gave a very funny anecdote about the difference between the German rail system, binary by nature, and the Indian rail system, fuzzy by nature.)

This is at the heart of my “distrust” of optimization systems, and is what I thought you brought out in your article so beautifully. The underlying mathematical model is an approximation so why do we assume that the result is precise? In every use of optimization, or for that matter any use of a mathematical model to represent a physical system, we are faced with the dilemma you bring out in the article. To get the model to incorporate all situations is itself a “hard” problem because as the number of variables, parameters, and constraints increases, the effort required to ensure that all the interactions are represented correctly increases exponentially. Yet without incorporating this level of detail, the results of the optimization are nearly useless. Which all leads to rapidly diminishing returns.

Something else you ran into is that constraints are seldom hard. They can shift in time and in nature, making them extremely difficult to model precisely because their nature depends on the situation and all the rules governing their nature need to be incorporated in the model. Then there are all the “soft” issues, such as needing to balance the wait times experienced by the crews going to particular platforms.

The business problem you set out to solve is not in my mind an optimization problem. Optimization could have been used to determine how many helicopters were required, though in this case it was likely a trivial problem with too few variables to justify the expense. But the routing and scheduling of the helicopters is best left to human judgment and collaboration, perhaps supported by some heuristics.

This is “vintage” Narayan. Always loved his old yarns 🙂 Look forward to the rest of the saga.

Reminds me of another anecdote – one of Narayan’s factory planning customers had a plant where the machine running FP always crashed around 2 pm….turns out it was next to the railroad tracks and there was a freight train every day around that time whose vibrations caused something in the hardware to fail! It took weeks for a few of our smartest consultants to convince the customer that it was not a bug in the software 🙂

Dr. Narayan’s article underscores the importance of understanding a client’s objectives, assumptions, and business rules really, really well before attempting to build an optimization model. What is extraordinary about the described example is that the scheduling model was attempted at a time when the majority of India was just getting introduced to computers and Excel was perhaps just getting invented.

Trevor makes an interesting point about the accuracy of optimization models. I agree that mathematical models can never provide a truly optimal solution due to the inherent uncertainties of the crazy world we live in. However, I would argue that businesses may not be looking for ‘the’ optimal solution after all, but only a solution that is ‘good enough’. Perhaps that approximate solution is enough to give companies an edge over the competition.

Thanks Narayan (& Amit, for convincing Narayan to write). I always loved the examples you used to explain & represent optimization problems. The “Veggie Pizza Problem” is the first one that springs to mind 🙂

PS: the (ground) speed & mileage of the choppers would also have depended significantly on the wind direction. So, the larger the distance to be covered in a hop, the better it is to fly with the wind than against it.
It is also for the same reason that light aircraft prefer to take off & land into the wind; this allows for a high air-speed above the stall velocity while maintaining a low ground-speed, allowing for shorter take-off/landing distances, greater safety, etc. One of the first lessons learnt while paragliding.

This article brings back memories. After graduating from VJTI, I joined CMC and one of my first tasks as a computer engineer was to visit computer systems under maintenance contract with CMC. I remember travelling on one of the “Dolphin” choppers to an ONGC rig and remember it taking off right after dropping us – must have been on its “optimized” route. Another thing I clearly remember was the great “food spread” on the rig 🙂

Naru,
Very well written.
I think you missed the Objective of buying duty free cigarettes!!
Keep writing. Small community of OR guys would love to know that some people can still make money from OR
Mukund

Here is something similar to the vibrations causing PC comatose anecdote..

————————————————————

For the engineers among us who understand that the obvious is not
always the solution, and that the facts, no matter how implausible,
are still the facts …

This is a weird but true story (with a moral) …

A complaint was received by the Pontiac Division of General Motors:

“This is the second time I have written you, and I don’t blame you
for not answering me, because I kind of sounded crazy, but it is a
fact that we have a tradition in our family of ice cream for dessert
after dinner each night. But the kind of ice cream varies so, every
night, after we’ve eaten, the whole family votes on which kind of ice
cream we should have and I drive down to the store to get it. It’s
also a fact that I recently purchased a new Pontiac and since then my
trips to the store have created a problem. You see, every time I buy
vanilla ice cream, when I start back from the store my car won’t
start. If I get any other kind of ice cream, the car starts just
fine. I want you to know I’m serious about this question, no matter
how silly it sounds: ‘What is there about a Pontiac that makes it not
start when I get vanilla ice cream, and easy to start whenever I get
any other kind?'”

The Pontiac President was understandably skeptical about the letter,
but sent an engineer to check it out anyway. The latter was surprised
to be greeted by a successful, obviously well educated man in a fine
neighborhood. He had arranged to meet the man just after dinner time,
so the two hopped into the car and drove to the ice cream store. It
was vanilla ice cream that night and, sure enough, after they came
back to the car, it wouldn’t start.

The engineer returned for three more nights. The first night, the
man got chocolate. The car started. The second night, he got
strawberry. The car started. The third night he ordered vanilla.
The car failed to start.

Now the engineer, being a logical man, refused to believe that this
man’s car was allergic to vanilla ice cream. He arranged, therefore,
to continue his visits for as long as it took to solve the
problem. And toward this end he began to take notes: he jotted down
all sorts of data, time of day, type of gas used, time to drive back
and forth, etc.

In a short time, he had a clue: the man took less time to buy
vanilla than any other flavor. Why? The answer was in the layout of
the store.

Vanilla, being the most popular flavor, was in a separate case at
the front of the store for quick pickup. All the other flavors were
kept in the back of the store at a different counter where it took
considerably longer to find the flavor and get checked out.

Now the question for the engineer was why the car wouldn’t start
when it took less time. Once time became the problem — not the
vanilla ice cream — the engineer quickly came up with the answer:
vapor lock. It was happening every night, but the extra time taken to
get the other flavors allowed the engine to cool down sufficiently to
start. When the man got vanilla, the engine was still too hot for the
vapor lock to dissipate.

re: “Small community of OR guys would love to know that some people can still make money from OR.”

you should take a look at http://ackoffcenter.blogs.com/ackoff_center_weblog/2003/10/the_future_o_op.html. ackoff was a professor of OR at the university of pennsylvania who appears to have lost his faith in the field. that blog points to two very interesting papers by him, the first titled “The Future of Operational Research is Past”. he follows that up with “Resurrecting the Future of Operational Research”. these papers may seem a bit dated in parts because they were written 30 years ago but there are germs of truth in them that every practitioner should read.

Narayan,
As always, I loved reading your article: fantastic way to explain optimization.

What came to mind was the importance of living with imperfection – and therefore the need to analyse, understand, and tinker with the results of an optimization algorithm. Optimization is hard – but for a computer to explain the results of its reasoning to the users is even harder. Successful optimization work is where the optimization intuitively makes sense. Hence the importance of a smart story, good UI, layered optimization, lots of experience and a wide customer base (and that next to or on top of optimization). That being said, you will agree that optimization is a piece of beauty… (so much for economic justification 🙂)

With regards to the planes, trains, computers and washing machines, I once ventured into parallel computing on transputers for the control of a manufacturing cell (multitasking was not yet really invented on regular PCs). Searching for a bug that behaved stranger and stranger, whatever we did to fix it, was a waste of time. Perplexed and junior, we decided to undo our changes and go back to the original code. Alas, that did not give us back our original bug. Debating with colleagues for an hour on whether we should start using revision control, decent design patterns, proper testing, or none of the above, we finally got a clue about the root cause of our bug: we could smell it… Parallel computing on a bunch of transputers is challenging, but when one of the transputers is on fire – for sure – that part of the program does not work anymore. Or why nothing is too silly to consider during problem solving…

@luc: thanks for your thoughtful response. your phrase “the importance to live with imperfection” reminded me of something i read recently. it is very short piece but, like everything by jorge luis borges, is brilliant. at first, it may not seem germane to this topic but i’m sure you’ll see my point if you ponder on it for a bit. i was going to reproduce it for you here but, fortunately for me, this piece is available online at http://www.idb.arch.ethz.ch/files/borges_on_exactitude_in_science.pdf. enjoy!

Narayan,
Very well laid out story. Very logical explanation of the touchpoints between solution and implementation – much appreciated.

Still remember in the early days of i2, briefly discussing the work Dr. G.P. Sinha of Tata Steel had done in the area of utilizing optimization for integrated steel companies. Good to hear you have carried on throwing light on problems using the optimization torch. cheers

Narayan: I stumbled upon your article while browsing for work in the scm space in Pune. It was a nice refresher to read your anecdotes.

Amit: I had the opportunity to work closely with Narayan for a brief period of eight months at Dell. There is never a dull moment with him around!
I look forward to reading the rest of the series, and more on PuneTech.

your story describes very well what we go through in our work with customers/users. The main message for me is that involving the users, managing their requirements, setting acceptance criteria and freezing the scope from the very beginning is essential to any optimization work.

In your case adding user requirements progressively did not actually cause total redesign of the overall algorithm. Adding the weight constraint to the seat number constraint did not affect your previous algorithmic work. However, in many cases that probability is really high.

@themis, good to hear from you! yes, you’re right when you say that the “main message for me is that involving the users, managing their requirements, setting acceptance criteria and freezing the scope from the very beginning is essential to any optimization work”. often, we engineers tend to fall into the trap of thinking of these problems as technical problems exclusively. in reality, the solution to the technical problem needs to be embedded within a solution that addresses the larger problem of bringing about permanent change to an organization and its processes. when amit asked me to write on this topic, i decided that the only way to keep it interesting is to address that larger problem. i’m glad to hear that this resonated with your experience.

I really enjoyed your article. You bring many of my own personal experiences to life. My baptism into Planning & Scheduling came through a contract I got to develop a conference scheduling app for Intel. The problem was to schedule course sessions, instructors, and rooms, and to place attendees into their most preferred courses. The manager of the project did not have an OR background and pulled my resume because he had envisioned the conference scheduling problem as being like “solving a chess problem”. I had mentioned on my profile some work I had done on computer chess and he hired me on that basis.

This was back in 1996, and my background in Planning & Scheduling was little more than what a few internet searches provided me. However, armed with the words variable, domain, and constraint, and perhaps the words “constraint propagation”, I set out to develop a conference scheduling system for them. Just like you, I also went through the process of continuously discovering new rules and goals that lived in the minds of the schedulers but were never explicitly stated. In addition to this, I also had almost no theoretical knowledge about Scheduling / Optimization techniques. I ended up developing some overly complicated system of search with pruning that solved (given a set of session, room, and instructor assignments) the problem of scheduling attendees into their preferred courses quite well, but did a very poor job of scheduling the sessions, rooms, and instructors. This was partly due to the fact that it was difficult to capture all the requirements and partly due to my lack of theory and experience in Scheduling / Optimization. Ironically the project turned out to be a huge success. This was because I had addressed the “Predictive / Visibility” layer — I had thought of it as “computer aided scheduling”. I developed a calendar (with watermarks) where the scheduler could move course sessions, room and instructor assignments around and the attendees would be automatically re-optimized when they let go of the cursor. The sessions would fill up with water to indicate that more attendees got in. I also provided additional stats to aid the scheduler. And ultimately the schedulers enjoyed moving the session and room and instructor assignments around and watching the water fill up. They never bothered to use the automatic session, room, and instructor scheduling — which didn’t work well anyway.

it seems like you relived my experience except that you were smart enough to recognize that you can’t outsmart humans. 🙂 i guess i was still flush with that youthful “anything you can do, my algorithm can do better” enthusiasm, so i remained convinced that optimization alone would win the day.

thanks for sharing your experience. it only goes to reinforce my feeling that future practitioners in operations research need to be taught more than modeling and algorithms before being unleashed onto the real world.

I am teaching an undergraduate course on operations research in Venezuela, and I would like to ask for permission to:
1) translate your articles into Spanish
2) publish the translation on my academic webpage.