How the asset-intensive industry is driven by cognitive anomaly detection

ON DEMAND WEBINAR

Date: Tuesday, Oct 09, 2018

Time: 11:00 AM to 11:45 AM CT

Duration: 45 Minutes

Speakers

Gaurav Sarathe

Subject Matter Expert, IoT and AI,

Softweb Solutions

Vaibhav Pawar

IoT Consultant,

Softweb Solutions

About Webinar

There's a lot of buzz around artificial intelligence (AI) and IoT in the market. Both hold huge potential to overcome the industrial challenges that businesses face. Companies already collect a lot of data, and with the right set of technologies and tools, they can understand the underlying patterns in that data and automate processes to keep operations running smoothly.

Boosting overall equipment effectiveness, extending asset life, improving asset operation and efficiency, and optimizing inventory cost are some of the biggest challenges businesses face. In this webinar, we will show you how cognitive anomaly detection powered by machine learning techniques can help you reduce downtime, optimize yield, and improve quality. We will also discuss the roadmap to deploying smart solutions powered by AI so that you can start detecting and predicting anomalies across your industrial data. Don't miss out!

Agenda

Introduction

Remote monitoring to improve transparency

Common challenges to traditional approaches to anomaly detection

Cognitive anomaly detection with AI

Common myths vs reality

How to implement cognitive anomaly detection

Use cases

Benefits

Smart asset monitoring powered by AI

Q&A

Questions & Answers

The following are the answers to the questions that were asked during the live webinar.

Question 1: In the pharmaceutical example, how did the model know that the speed of the conveyor belt was the problem? Does someone need to have a hypothesis that the conveyor belt speed is too fast, or can the model figure it out itself?

Answer 1: Since the model was collecting data from two conveyor belts (test one and test two), it could clearly flag speed as the cause, because the test-two belt was producing the false rejections. However, to derive the optimum speed limit, we had to train the model for a few months.

Question 2: In order to properly train a model, how much data do I need? Data over one day, one week, one year?

Answer 2: This depends purely on the business use case and the desired outcome. However, in the webinar, we showed how we helped our client detect anomalies and solve their problems with an unsupervised machine learning approach, which was more of an ongoing training model. You can refer to this blog for more information.
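As a hedged illustration of what "ongoing training" can mean in practice (this is a minimal sketch, not the model from the client engagement), an unsupervised detector can maintain a rolling window of recent sensor readings and flag any new reading that deviates strongly from that window:

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Flag readings that deviate strongly from a rolling window of recent data."""

    def __init__(self, window_size=100, z_threshold=3.0):
        self.window = deque(maxlen=window_size)  # keeps only the latest readings
        self.z_threshold = z_threshold

    def update(self, value):
        """Add a reading; return True if it looks anomalous vs. recent history."""
        is_anomaly = False
        if len(self.window) >= 10:  # need some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.z_threshold:
                is_anomaly = True
        self.window.append(value)  # the model keeps learning from every reading
        return is_anomaly

# Stable temperature readings followed by a sudden spike.
detector = RollingAnomalyDetector(window_size=50, z_threshold=3.0)
readings = [20.0 + 0.1 * (i % 5) for i in range(60)] + [35.0]
flags = [detector.update(r) for r in readings]
```

Because the window slides forward, the detector adapts as normal operating conditions drift, which is why no fixed amount of historical data is strictly required up front.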

Question 3: What is the record volume (count of records)?

Answer 3: As mentioned in the answer above, it is very much subjective. Sometimes we deal with millions of records and sometimes fewer, because it depends on the use case and the modeling methods.

Question 4: Do you have experience in the semiconductor industry, where there is a lot of NoSQL data (time-phase traces and inspection images) and a minor input change might result in a major quality issue?

Answer 4: Yes, we have extensive experience working with semiconductor companies. During the ETL process, we have dealt with different types of data, including structured, unstructured, and semi-structured (e.g., sensor data, SQL dumps, or social feeds).

Question 5: What software do you use?

Answer 5: We have developed our smart solutions leveraging the Azure technology stack extensively. In addition, we have developed custom models using R and Python. Most of our smart solutions are built on our IoT platform, IoTConnect.

Question 6: What types of algorithms do you use for anomaly detection?

Answer 6: It completely depends on the use case and the type of data you have. Some of the most common machine learning-based approaches are:

Density-Based Anomaly Detection

Clustering-Based Anomaly Detection

Support Vector Machine-Based Anomaly Detection

If you are not sure what fits your application best, you can refer to our PoV package.

Question 8: What is the accuracy of an ML model?

Answer 8: The accuracy of an ML model completely depends on the machine data you have. If you have enough historical data to train the model, you can get more accurate results.

On the other hand, if no historical data is available, it is still not an issue. We can deploy the model, and it will start learning from the incoming data and gradually provide more and more accurate results as it matures.

Question 9: Where do you store data?

Answer 9: It's a hybrid solution: we store the data on both platforms, i.e., on-premises and in cloud storage.

Question 10: In your oil and gas demo, you showed the screen of an app that populates data at two-hour intervals. How efficient would it be to get alerts at two-hour intervals when the actual event might have happened in between?

Answer 10: The data is captured in real time; however, in the app we display only the average of the data over a specific time window. If the temperature goes outside the defined threshold, the app gives a notification right at that point in time.

In our business use case, the ML model not only raises the alert but also shows the specific anomaly pattern for the nearest prediction.
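The separation between windowed display and immediate alerting can be sketched as follows (a minimal illustration, with hypothetical names and window sizes; this is not the actual IoTConnect implementation):

```python
WINDOW = 4           # readings per display window (stands in for "two hours")
THRESHOLD = 90.0     # temperature limit that triggers an immediate alert

def process_stream(readings, window=WINDOW, threshold=THRESHOLD):
    """Return (display_averages, alerts) for a stream of temperature readings."""
    averages, alerts, buffer = [], [], []
    for i, temp in enumerate(readings):
        if temp > threshold:          # alert immediately, no waiting for the window
            alerts.append((i, temp))
        buffer.append(temp)
        if len(buffer) == window:     # window closed: publish the average
            averages.append(sum(buffer) / window)
            buffer = []
    return averages, alerts

# The spike at index 2 triggers an alert right away, even though the
# dashboard only shows two averaged values for the whole stream.
avgs, alerts = process_stream([70, 72, 95, 71, 69, 70, 68, 71])
```

The averaging only affects what the dashboard displays; the threshold check runs on every raw reading as it arrives.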