Predictive Maintenance with Deep Learning Algorithms in Industry 4.0

Learn how an international machine manufacturer uses data-driven analytics to detect impending machine failures early on. Immerse yourself in a success story that shows how you can avoid costly downtime and secure your production goals.


Advantage

Ability to predict machine failures


Data

Sensor data with historical error and fault messages


Methods

Deep Learning

Challenge

An internationally operating mechanical engineering company is further advancing the digitalization of its production. One starting point: data-driven optimization of maintenance. The available database contains, among other things, sensor measurements that indicate error and fault messages at different time intervals. This provides the mechanical engineering company with extensive information, as over 100 machines, each with more than 250 sensors, continuously provide data.

Goal

The goal is to predict machine failures at an early stage in order to initiate preventive measures for predictive maintenance. By analyzing the sensor readings, eoda's data science specialists identify processes that could lead to machine failure.


Solution

The key to reliably predicting machine errors lies in identifying recurring data patterns prior to historically documented machine malfunctions and failures. In this case, the data science specialists relied on a deep learning model. Deep learning, a method from the field of machine learning, uses artificial neural networks to analyze large amounts of data of varying complexity at multiple levels to identify complex relationships. The more data used to train the deep learning model, the more precise the results will be, as the learning algorithm optimizes itself during the analysis process.
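To make the idea of a multi-layer neural network concrete, here is a minimal sketch of a forward pass in NumPy. The layer sizes, activations, and random inputs are purely illustrative assumptions (250 inputs echoing the sensor count per machine, two hidden layers, a handful of error classes) — not the architecture actually used in the project.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: 250 sensor features -> two hidden layers -> 5 error classes.
sizes = [250, 64, 32, 5]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Pass a batch of sensor feature vectors through the network, layer by layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    # Sigmoid output: independent per-class probabilities (multi-label setting).
    return sigmoid(h @ weights[-1] + biases[-1])

probs = forward(rng.normal(size=(4, 250)))
print(probs.shape)  # (4, 5): one probability per error class, per sample
```

Each hidden layer transforms the previous representation, which is what allows the network to capture relationships at multiple levels of abstraction; training would adjust the weights to minimize prediction error on historical failures.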

Based on this method, eoda developed a multi-label classification model that registers the occurrence of specific error messages in the sensor data. The error messages were grouped into classes and analyzed over defined time intervals for descriptive analysis. The programming language R was chosen for this purpose, as it is particularly well suited to conducting a proof of concept involving a large number of individual variables.
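A multi-label target of this kind can be sketched as follows: each time window becomes one row, with one binary column per error class. The error codes, timestamps, and the hourly window size below are hypothetical stand-ins for the company's actual messages.

```python
import pandas as pd

# Hypothetical error log: timestamped error codes from one machine.
errors = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-01-01 00:05", "2023-01-01 00:40",
        "2023-01-01 01:10", "2023-01-01 02:20",
    ]),
    "code": ["E1", "E3", "E1", "E2"],
})

codes = ["E1", "E2", "E3"]

# One row per hourly window, one binary column per error class (multi-label target):
# a window can carry several error labels at once.
labels = (
    errors
    .assign(window=errors["timestamp"].dt.floor("1h"), flag=1)
    .pivot_table(index="window", columns="code", values="flag",
                 aggfunc="max", fill_value=0)
    .reindex(columns=codes, fill_value=0)
)
print(labels)
```

In this toy log, the first hour receives both the E1 and E3 labels, illustrating why the problem is multi-label rather than multi-class.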

Data preprocessing was performed beforehand: sensor measurements from different sources were aggregated, scaled, and transformed to create a uniform standard for data comparison. This step was essential because each sensor measures at different intervals and times; the measurements had to be converted into a uniform format so that data from different information sources could be used on a consistent basis across machines. Only then could the data be analyzed.
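One common way to carry out this alignment is to resample every sensor onto a shared time grid and then standardize each channel. The sketch below assumes two made-up sensors and a 10-minute grid; the project's actual intervals, sensors, and scaling choices are not specified in the source.

```python
import pandas as pd

# Two hypothetical sensors sampled at different, irregular intervals.
s1 = pd.Series([20.0, 21.0, 23.0],
               index=pd.to_datetime(["2023-01-01 00:00",
                                     "2023-01-01 00:07",
                                     "2023-01-01 00:19"]),
               name="temp")
s2 = pd.Series([1.10, 1.30],
               index=pd.to_datetime(["2023-01-01 00:03",
                                     "2023-01-01 00:16"]),
               name="pressure")

# Aggregate each sensor onto a shared 10-minute grid (mean per bin),
# forward-fill gaps, then z-score each column for cross-sensor comparability.
grid = pd.concat(
    [s.resample("10min").mean().ffill() for s in (s1, s2)], axis=1
).ffill()
scaled = (grid - grid.mean()) / grid.std(ddof=0)
print(scaled.round(2))
```

After this step every sensor lives on the same timestamps and the same scale, so measurements from different machines and sources can be compared directly.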

The next step was a meaningful division of the data into training, test, and validation sets. Because errors were unevenly distributed in the data, random sampling alone was not sufficient; down- and upsampling were also used to obtain a meaningful ratio of error to non-error cases. This cycle of training, testing, and validation is the most important step in model building – above all to avoid under- and overfitting.
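Downsampling the majority class can be sketched like this (upsampling would instead duplicate or resample minority rows). The 95/5 class ratio and three-feature data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical imbalanced labels: 1 = failure window (rare), 0 = normal operation.
y = np.array([0] * 95 + [1] * 5)
X = rng.normal(size=(100, 3))

def downsample_majority(X, y, rng):
    """Randomly drop majority-class rows until both classes are equally frequent."""
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    keep = rng.choice(majority, size=minority.size, replace=False)
    idx = rng.permutation(np.concatenate([minority, keep]))
    return X[idx], y[idx]

Xb, yb = downsample_majority(X, y, rng)
print(yb.mean())  # 0.5: classes are now balanced
```

Balancing is applied to the training set only; test and validation sets keep the natural error distribution so the evaluation reflects real operating conditions.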

Due to the big data scenario and the long computation times, cloud infrastructure from Amazon Web Services (AWS) was used to accelerate the process. A first prototype made it possible to identify and evaluate different layer architectures, and shortcomings traced back to the preprocessing phase could be corrected. The technical basis for deep learning in this case was the Keras API, written in Python, which provides an interface to the TensorFlow deep learning library.

In the final step, the classifier was evaluated on the basis of relative frequencies. For this purpose, a confusion matrix was created with a focus on cost minimization: the values derived from the matrix indicate how frequently each combination of actual and predicted class occurs.
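A minimal sketch of such a cost-weighted evaluation, with invented labels and an invented cost matrix (the source does not state the actual costs; here a missed failure is assumed to be ten times as expensive as a false alarm):

```python
import numpy as np

# Hypothetical evaluation data: true labels vs. model predictions for one error class.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([0, 1, 1, 0, 0, 1, 0, 0, 1, 0])

# 2x2 confusion matrix: rows = actual class, columns = predicted class.
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

# Relative frequency of each (actual, predicted) combination.
rel = cm / cm.sum()

# Illustrative cost matrix: a missed failure (false negative) is far more
# expensive than an unnecessary maintenance check (false positive).
costs = np.array([[0.0, 1.0],    # actual 0: correct, false alarm
                  [10.0, 0.0]])  # actual 1: missed failure, correct
expected_cost = (rel * costs).sum()
print(cm)
print(expected_cost)
```

Choosing the classification threshold that minimizes this expected cost, rather than raw accuracy, reflects the business reality that unplanned downtime is far more costly than a precautionary inspection.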

Result

Thanks to eoda's deep learning model, the mechanical engineering company is able to predict malfunctions in industrial plants and proactively initiate appropriate maintenance measures. This not only prevents costly and unexpected downtimes, but also makes it easier to achieve specified production targets. This case is a successful example of how data and algorithms can form the basis for reliable and more profitable production.

By carrying out the upstream proof of concept, important insights for answering individual questions can be obtained in a short time; more importantly, statements can be made about the feasibility and profitability of the specific use case. Only if this assessment is positive is the analysis model integrated into the business processes.

Get started now:
We look forward to exchanging ideas with you.


Your expert for data science projects:

Lutz Mastmayer
projects@eoda.de
Tel. +49 561 87948-370






