Empowering Engineers with Industrial Analytics

How To Go from 40% to 100% Yield

A customer called one day. Well, they weren’t a customer just yet. They manufactured high-tech weapons systems and “weather satellites” (wink wink), and they were suffering miserably. They had their best team on a product, and no matter what they did they could only get 40% yield. That means for every 100 units made, 60 were bloopers. No good. Failures. They had heard we help people in such situations, so I agreed to try and boarded a plane to LA.

The product had multiple subassemblies from various vendors and, once assembled, had to be “dialed in” using tunable resistors. The units were then tested in a lab against roughly 60 performance metrics. Most units failed the critical tests. Some failing units could be tweaked into specification with the resistors; the rest had to be disassembled or scrapped. Manufacturing slowed to a crawl.

The Mother of Invention

I noticed the faults looked somewhat random, but there seemed to be an interaction between subassemblies. Also, each subassembly came with vendor test results that characterized it. I guessed it might be possible to determine which subassemblies would work best together; the trick would be doing that BEFORE a unit is assembled. What if I could virtually assemble a unit, algorithmically tweak the resistors, test it, and confirm the combination worked? I could do that for a variety of subassembly combinations using a Genetic Algorithm, a very efficient combinatorial search technique.

The Solution

I got their data about each unit produced recently, good and bad, along with the resistor settings for each and the associated subassemblies’ vendor data. I asked them which performance characteristic to target first. They gave me one that was important and failed frequently. I then used our NeuroGenetic Optimizer tool to build models with the subassembly characteristics and resistor settings as inputs and the product performance characteristic from the lab database as the output. I soon discovered that such models were viable: they worked, and they estimated performance quite well on previously unseen assemblies. I then put a good model inside a genetic algorithm that searched across all possible combinations of subassemblies, looking not for a single best combination, or even a single good unit, but for an assignment in which every planned unit would pass. No cherry-picking, no wasted “bad” subassemblies left in the bottoms of bins.
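The matching idea can be sketched in a few lines. Everything below is a hypothetical stand-in: the toy predict() replaces the trained NeuroGenetic Optimizer models, the spec window and vendor numbers are invented, and a simple swap-mutation hill climb stands in for the full Genetic Algorithm.

```python
# Minimal sketch of "virtual assembly": pair subassemblies so that EVERY
# planned unit is predicted to pass, not just the best few. All numbers,
# names, and the toy predict() below are hypothetical stand-ins.
import random

random.seed(0)

n = 20  # units planned for the build
type_a = [random.uniform(0.8, 1.2) for _ in range(n)]  # vendor data, type A
type_b = [random.uniform(0.8, 1.2) for _ in range(n)]  # vendor data, type B

SPEC_MIN, SPEC_MAX = 0.95, 1.05  # invented spec window for one metric

def predict(a, b):
    """Stand-in for a trained performance model, including a virtual
    resistor trim: the trim can pull a unit toward nominal, but only
    within its adjustment range -- like the real tunable resistors."""
    raw = a * b
    trim = min(max(1.0 / raw, 0.9), 1.1)  # trim limited to +/-10%
    return raw * trim

def failures(pairing):
    """Units predicted to fail spec; 0 means the whole run should pass."""
    return sum(not (SPEC_MIN <= predict(type_a[i], type_b[j]) <= SPEC_MAX)
               for i, j in enumerate(pairing))

# Search over pairings (pairing[i] = which type-B part goes with type-A
# part i). Swap-mutation hill climbing keeps the sketch short.
best = list(range(n))
random.shuffle(best)
start_failures = failures(best)
for _ in range(5000):
    child = best[:]
    i, j = random.sample(range(n), 2)
    child[i], child[j] = child[j], child[i]  # swap two assignments
    if failures(child) <= failures(best):
        best = child

print("failing units before matching:", start_failures)
print("failing units after matching: ", failures(best))
```

Note the objective: the search never rewards an individually great unit. It only counts how many planned units would fail, which is what drives the no-cherry-picking behavior.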

We fired up the solution, which spit out a pick-list telling them which subassemblies to match up by serial number. They built a run of units. Guess what. They all passed that first important performance characteristic, and yields stepped up to 55% immediately. We then took the next most frequently failing performance characteristic, built models, and put THAT model into the GA too. All units now passed 2 performance characteristics. Repeat. 60% pass rate. We repeated and repeated until in the end we had 57 product characteristics in the system and they had 100% yield. ALL UNITS PASSED!
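That repeat-until-done loop can be sketched too: keep a list of (model, spec window) entries, fold one more in each pass, and re-run the combinatorial search every time. The three toy lambdas and their limits below are invented stand-ins for the 57 trained models.

```python
# Sketch of folding one performance metric's model at a time into the
# search. The toy models and spec windows are hypothetical stand-ins.
import random

random.seed(1)

n = 12
parts_a = [random.uniform(0.8, 1.2) for _ in range(n)]
parts_b = [random.uniform(0.8, 1.2) for _ in range(n)]

# (model, spec_min, spec_max) -- each stands in for one trained model.
models = [
    (lambda a, b: a * b, 0.80, 1.30),
    (lambda a, b: a + b, 1.70, 2.30),
    (lambda a, b: a / b, 0.70, 1.40),
]

def failures(pairing, active):
    """Units failing ANY of the currently active metrics."""
    return sum(not all(lo <= f(parts_a[i], parts_b[j]) <= hi
                       for f, lo, hi in active)
               for i, j in enumerate(pairing))

def search(active, iters=4000):
    """Swap-mutation hill climb over pairings of A and B parts."""
    best = list(range(n))
    random.shuffle(best)
    for _ in range(iters):
        child = best[:]
        i, j = random.sample(range(n), 2)
        child[i], child[j] = child[j], child[i]
        if failures(child, active) <= failures(best, active):
            best = child
    return best

active = []
for model in models:
    active.append(model)      # fold in the next metric's model
    pairing = search(active)  # re-run the search with all metrics so far
    print(len(active), "metric(s):", failures(pairing, active),
          "predicted failures")
```

Each added metric constrains the pick-list further, which is why the real yields climbed in steps rather than all at once.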

Customer Gets Improvement Award

A while later the customer called me again. “Ummm… is there a problem?” I asked. “No,” they said, “we just wanted to tell you we received an award from the US Army for the best manufacturing performance improvement, like, ever. Thank you so much!”

I cannot tell you how satisfying that is, to get such calls from our customers. We love it.

P.S. This project was the birth of our “Intellect” line of server software and desktop tools, which brings this solution, and many others, to the industrial world globally.