Saturday, February 10, 2018

If you’ve been paying attention to the latest trends in manufacturing, you’ve probably noticed a lot of talk about Industry 4.0. Whether it’s in the context of the Industrial Internet of Things (IIoT) or a demo factory built specifically to show off the capabilities of a fully connected production plant, this evocative phrase is everywhere. Even manufacturing giants like Ford are getting swept up in the Industry 4.0 craze.

But what does Industry 4.0 mean for quality professionals? For a start, the use of big data promises to enhance quality assurance in a host of ways. In addition, the increased complexity of factories, as well as of their products, will boost the calibration and repair services market. This is the main finding of a recent study by Frost & Sullivan’s Global Measurement and Instrumentation research team. According to the study, demand for advanced calibration and repair services is being driven by greater connectivity in end-user industries, such as automotive manufacturing, as well as the rising penetration of complex products in the aerospace and defence industries. The report predicts that the European market will grow from $1.51 billion in 2015 to $2.12 billion in 2022, at a compound annual growth rate (CAGR) of 5 percent.

"The complexity of smart products necessitates a superior level of testing and attention to detail right at the design stage, intensifying the demand for sophisticated and accurate instrumentation," said Frost & Sullivan Measurement & Instrumentation research analyst Apoorva Ravikrishnan. "Every year, there is a rise in the number of new and complex instrumentation that requires calibration, from a variety of end-user industry verticals," Ravikrishnan added.

The study also identifies the automation of calibration and repair services as another prominent trend in the test and measurement sector. This is because automation boosts the consistency of results and reduces both the cost and the turnaround time of calibration and repair, according to the report. The Calibration and Repair Services Market in Europe study is part of Frost & Sullivan’s Test & Measurement Growth Partnership Service program. Related studies from Frost & Sullivan include:

Global Portable and Handheld Analytical Instrumentation Market

Radio Access Networks (RAN) Capacity Planning and Management Market

Global Wireless Test Equipment Market

New Competition for the Global Test and Measurement Market in IoT Wireless Technologies

This week a teammate asked me a question: what does "ndc" actually mean in a Gage R&R study, according to the AIAG manual?
I found the answer in the blog post I reproduce below.
I hope you enjoy it.

-----------------------------
By Andy Cheshire

Recently I've been thinking about common questions that customers ask when running a Gage R&R analysis in Minitab.

For example, when you run a Gage R&R, the last result that shows up in the Session window is a value for the ‘Number of Distinct Categories’ (ndc). This one metric is something that customers seem to overlook when they call to discuss their gage studies.

This value represents the number of distinct groups of parts that your measurement system can reliably distinguish in the data. The higher this number, the better the chance the tool has of discerning one part from another.

So how do you know if your number is high enough? Fortunately, there are guidelines from the Automotive Industry Action Group (AIAG):

When the number of categories is less than 2, the measurement system is of no value for controlling the process, since one part cannot be distinguished from another.
When the number of categories is 2, the data can be divided into two groups, say high and low.
When the number of categories is 3, the data can be divided into 3 groups, say low, middle and high.
A value of 5 or more denotes an acceptable measurement system.
Let’s say you do get a value below 5. What next? There are two things you can do:

Analyze more distinct parts that truly represent the entire range of the process.
Increase the precision of your measurement tool.
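For reference, ndc is derived from the study’s variance components. A minimal sketch of the AIAG calculation (ndc = 1.41 × part-to-part standard deviation ÷ total gage R&R standard deviation, truncated to an integer; the function name and sample values below are my own):

```python
def ndc(part_sd, grr_sd):
    # AIAG "number of distinct categories":
    # 1.41 * (part-to-part SD / total gage R&R SD), truncated to an integer.
    return int(1.41 * part_sd / grr_sd)

# Hypothetical variance components from two Gage R&R studies:
print(ndc(10.0, 2.0))  # part variation dominates: an acceptable system
print(ndc(3.0, 2.0))   # below 5: improve the gage or sample a wider part range
```

Because the ratio is truncated rather than rounded, an ndc of 4.9 still reports as 4, i.e. below the acceptance threshold.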
------------------------------

Wednesday, January 10, 2018

Manufacturing, along with virtually all other industries, is going through a significant period of change. Driven by rapid technological development, manufacturers are having to work smarter, operate more efficiently and be prepared to innovate. As an enabler of growth, technology will play a key role in empowering businesses to innovate and seize the opportunities that will present themselves in 2018.

But where exactly are these opportunities likely to come from? Here, we identify the top trends that we believe will be central to success in the upcoming year.

Organizations that concentrate on making themselves smart and agile will be the ones best positioned to take advantage of growth opportunities in 2018. For manufacturers, this process starts with ensuring that internal software systems are fully supported with the latest updates, enabling the business to react to changes and treat them as opportunities rather than threats.

Published by HT Digital Content Services with permission from SME Channels. For any query with respect to this article or any other content requirement, please contact Editor at content.services@htlive.com.

Friday, December 29, 2017

Machine learning has been successfully applied to demand planning, and leading suppliers of supply chain planning (SCP) software are beginning to work on using machine learning to improve production planning as well. But architecturally and culturally, this is a much tougher problem than applying machine learning to demand planning.

In the $2 billion-plus supply chain planning market, ARC Advisory Group’s latest market study shows production planning to be a critical SCP application, representing over 25% of the total market. Production planning applications are used for everything from planning daily production at a single factory to creating weekly or monthly plans that divide the production tasks to be accomplished across multiple factories.

Machine learning is a form of continuous improvement. In demand planning, the machine learning engine looks at the forecast accuracy of the model and asks whether changing the model in some way would improve the forecast. Forecasts are thus improved in an iterative, ongoing manner.
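As a toy illustration of that feedback loop, here is a grid search over a single smoothing parameter, picking whichever value would have minimized historical forecast error. This is a stand-in for the idea, not any vendor’s actual engine; the function names, the grid, and the demand series are all invented:

```python
def ses_forecast(history, alpha):
    # One-step-ahead simple exponential smoothing forecasts.
    level = history[0]
    forecasts = []
    for y in history:
        forecasts.append(level)
        level = alpha * y + (1 - alpha) * level
    return forecasts

def mae(history, alpha):
    # Mean absolute error of the one-step-ahead forecasts.
    f = ses_forecast(history, alpha)
    return sum(abs(y - yhat) for y, yhat in zip(history, f)) / len(history)

def tune_alpha(history, grid=(0.1, 0.3, 0.5, 0.7, 0.9)):
    # "Learn" by retaining the parameter that minimizes past forecast error.
    return min(grid, key=lambda a: mae(history, a))

# On a steadily growing demand series, the search favors a high alpha
# (react quickly); on noisy, flat demand it would favor a low one.
best = tune_alpha([10, 12, 14, 16, 18, 20])
```

Rerunning the tuning step as new actuals arrive is what makes the improvement iterative and ongoing.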

Machine learning in supply planning

For supply-side planning, there are key parameters that greatly affect the scheduling. For example, lead times are critical: the longer the lead time, or the greater the variability around a supplier’s average lead time, the more inventory a company must keep. But humans are not very good at detecting when these parameters need to be changed, and without ongoing vigilance a planning engine’s outputs deteriorate. The loop between planning and execution needs to be closed to prevent this.
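The lead-time point can be made concrete with the textbook safety-stock formula, which combines demand variability and lead-time variability (a generic formula, not taken from the article; the parameter values below are hypothetical):

```python
import math

def safety_stock(z, avg_demand, demand_sd, avg_lead_time, lead_time_sd):
    # Classic formula: SS = z * sqrt(L * sigma_d^2 + d^2 * sigma_L^2)
    # z: service-level factor; d: average demand per period;
    # L: average lead time in periods.
    return z * math.sqrt(avg_lead_time * demand_sd ** 2
                         + avg_demand ** 2 * lead_time_sd ** 2)

# Same supplier, but with lead-time variability vs. without:
steady = safety_stock(1.65, 100, 20, 4, 0.0)
shaky = safety_stock(1.65, 100, 20, 4, 1.0)
```

Even modest lead-time variability dominates the result, which is why a stale lead-time parameter quietly distorts every plan the engine produces.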

Cyrus Hadavi, the CEO of Adexa, wrote a good paper on this. He wrote, “with every iteration of planning, there are millions of variables to be considered, billions of versions of plans that can be produced, and thousands of variables which are constantly and dynamically changing.” Much of the data needed to properly update the planning model exists in execution systems. What Adexa envisions is a self-correcting engine that continuously scrutinizes the data in these systems and automatically updates the parameters in the SCP engine when warranted.

But architecturally, this is more difficult than using machine learning to improve demand planning. In a demand management application, the system continuously monitors forecasting accuracy, and that accuracy data enables the learning feedback loop. Further, demand planners, the people who use the outputs of the system, play a core role in making sure the data inputs stay clean and accurate.

But in supply planning, the data comes from a different system or systems. Improving operations can be extraordinarily challenging when the data that holds the answers is scattered among incompatible systems, formats, and processes. And the people responsible for putting data into those systems don’t use the system outputs; in short, they have less incentive to keep the inputs clean. This is a master data management problem.

A form of middleware/business intelligence must access up-to-date and clean data, analyze it, and then either automatically change the parameters in the supply planning application or alert a human that the changes need to be made.
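A minimal sketch of what such a monitoring layer might check, comparing a planning parameter against recent execution data (the function, the 15% tolerance, and the sample lead times are all hypothetical):

```python
def check_parameter_drift(planned_value, observed_values, tolerance=0.15):
    # Compare a parameter stored in the planning engine (e.g. a supplier
    # lead time) against recent observations pulled from execution systems.
    # Returns (needs_update, observed_mean); the tolerance is a made-up
    # threshold for illustration.
    observed_mean = sum(observed_values) / len(observed_values)
    drift = abs(observed_mean - planned_value) / planned_value
    return drift > tolerance, observed_mean

# Planned lead time of 10 days vs. recent actuals of 12 to 14 days:
needs_update, observed = check_parameter_drift(10.0, [13.0, 14.0, 12.0])
```

In practice the "update the parameter automatically vs. alert a human" decision would hang on exactly this kind of threshold.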

These solutions do exist. I’m most familiar with the solution from OSIsoft, the PI System, which collects, analyzes, visualizes and shares large amounts of high-fidelity, time-series data from multiple sources to either people or systems.

But this means that to improve supply planning, you need not just the supply planning application but also middleware and master data management solutions. In this kind of situation, the integration and cultural issues, and consequently the ROI, become more difficult.