Anomaly Detection and Edge Computing
May 1, 2024
5 min read


Anomaly detection at the IoT edge extends human capacity to respond to outliers quickly, within a timeframe that allows for adequate action.

Machine downtime, defective equipment, inefficient processes: these can turn out to be quite expensive in the long run. Then there are all the missed opportunities to detect overarching patterns and trends that could transform your organization. Companies can no longer afford to wait for things to go wrong and take action retroactively. Real-time decision-making and predictive handling have become the norm in industrial enterprises. Read on to find out how anomaly detection at the IoT edge can help.

Industries are leveraging sensor data from IoT devices to detect patterns and outliers, hoping to attain a comprehensive picture of their operations and drive efficiency gains.

One aspect of this global move towards predictive analytics is the recent focus on anomaly detection. A special case of this trend towards prevention is anomaly detection in the context of the parallel shift towards edge computing. Part of the rationale behind edge computing is autonomy: it allows for more local optimization and immediacy.

Both anomaly detection and edge computing are defined and approached locally. Let us find out how they work together.

How Do We Approach Anomalies?

Understanding how to compartmentalize problems related to anomaly detection, break them down into concepts, and tackle them individually has increased the speed of innovation. As technological capabilities evolve and we become better at localizing problems to arrive at global solutions, we keep pushing the boundaries of innovation.

Considering the Ecosystem

One effect of this development is the shift from rules to contexts. Rules in anomaly detection assume a comprehensive, in-depth understanding of the underlying characteristics of a system. However, rule-based techniques often fail in anomaly detection scenarios because of their ingrained rigidity: to allow for adequate action, the rules would have to change constantly. This is why the field has slowly shifted towards pattern-recognition-based anomaly detection, which adds flexibility. An anomaly can only be defined through the ecosystem within which it manifests itself, considering the lifecycles and behaviors of its adjacent things over time.
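To make the contrast concrete, here is a minimal sketch (not from the original article) of a fixed rule versus a learned, pattern-based detector, using scikit-learn's IsolationForest on simulated sensor readings; all values are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulated vibration readings: mostly normal, plus a few outliers.
rng = np.random.default_rng(42)
normal = rng.normal(loc=0.5, scale=0.05, size=(500, 1))
outliers = rng.normal(loc=0.9, scale=0.02, size=(5, 1))
readings = np.vstack([normal, outliers])

# Rule-based: a fixed threshold that must be re-tuned whenever the process drifts.
rule_flags = readings[:, 0] > 0.7

# Pattern-based: the model learns what "normal" looks like from the data itself.
model = IsolationForest(contamination=0.01, random_state=42)
pattern_flags = model.fit_predict(readings) == -1  # -1 marks anomalies

print(f"rule-based flags: {rule_flags.sum()}, learned flags: {pattern_flags.sum()}")
```

The rule needs a hand-picked threshold; the learned detector infers the boundary of normal behavior from the readings themselves and can be retrained as the ecosystem changes.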

Breaking Down the Concept

But then again, we also need to confront the very concept of anomaly. To tackle the problem of anomaly detection, that is, to develop an anomaly detection technique and a viable anomaly detection model, this seemingly monolithic chunk of a concept has to be broken down into actionable questions. Extracting the critical questions behind the concept, we end up with a much more fine-grained idea of where to take it from here: what anomaly detection method to choose, what deep learning algorithm to develop, what detection tasks to prioritize.

Traditional definitions serve as generic placeholders for pieces of reality that are far more complex and nuanced than the initial theoretical setup. So the first step is to create a custom definition and check whether it is insight-driven, that is, whether it translates into actions that yield insight. Then you start to refine the definition and add dimensionality to it. The first question is thus:

  • Definition: What is an anomaly?
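As a hypothetical illustration of such refinement, the sketch below starts from a bare threshold and then adds dimensionality through contextual fields such as machine load and time of day; all names and numbers are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    value: float         # e.g. a temperature measurement
    machine_load: float  # added dimension: operating context (0..1)
    hour: int            # added dimension: time of day

# First-pass definition: a bare threshold on the raw value.
def is_anomaly_v1(r: Reading) -> bool:
    return r.value > 80.0

# Refined definition: the same value may be perfectly normal under
# heavy load, and the night shift is held to a stricter baseline.
def is_anomaly_v2(r: Reading) -> bool:
    baseline = 80.0 + 10.0 * r.machine_load  # scale with operating load
    if r.hour >= 22 or r.hour < 6:           # stricter at night
        baseline -= 5.0
    return r.value > baseline

r = Reading(value=85.0, machine_load=0.9, hour=10)
print(is_anomaly_v1(r))  # True: the naive definition flags it
print(is_anomaly_v2(r))  # False: heavy load explains the higher value
```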

Defining Anomaly in the Era of Intelligent Devices

The second step is to establish causality. This is somewhat related to the question of dimensionality as it adds breadth and depth to the concept. Apart from scope and context, however, at this stage, you also consider historical information and create a historical account of patterns.

  • Causality: Why is this an anomaly?

The third question is future-oriented and already points to an actionable plan:

  • Action: What do we do about it?

This last question encapsulates the previous ones. Again, the concept distilled in this process has to be actionable and has to be placed into context. Then we define the actions we should take and establish the most suitable methodology for outlier detection.
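One way to picture the distilled concept is as an actionable record whose fields answer the three questions; the structure below is purely illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AnomalyReport:
    """The distilled concept as an actionable record: each of the
    three questions maps onto a field (hypothetical structure)."""
    definition: str           # what made this reading an anomaly
    causality: str            # why it is one, given context and history
    actions: List[str] = field(default_factory=list)  # what we do about it

report = AnomalyReport(
    definition="mold temperature 12 % above shift baseline",
    causality="loading-dock door opened during a molding cycle",
    actions=["notify line operator", "flag batch for inspection"],
)
print(report)
```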

Shifting to Edge Computing

Anomaly detection at the IoT edge extends human capacity to respond to outliers. Responding quickly, within a timeframe that allows for adequate action, is essential. Edge computing has become part of the solution because it safeguards reliable detection performance while being faster than analysis in the cloud.

How? Edge computing means moving parts of the computing process to different physical locations, bringing the processing of data close to where it is needed. Compute power at the edge of the network is used for two main reasons: speed and resilience against communication failures. If an application needs fast reaction times, computing has to take place close to the edge.

In an IIoT setting, an optimal IoT design is crucial. This means distributing storage and computational resources according to the priority and prominence of a measured state, which in turn requires a closer look at the extraction points and properties of the data we deal with in the IoT context.

The recent evolution in edge computing shows that manufacturers have a clear preference for greater autonomy and low bandwidth. Greater autonomy translates into fast reaction times, whereas low bandwidth means restricting communication with the central controller and doing the aggregation locally. Only the result of the analysis is sent over, saving resources while retaining the relevant information.
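A minimal sketch of this pattern, assuming a simple z-score criterion and JSON payloads (both assumptions, not details from the article): the edge device aggregates a window of readings locally and transmits only when the summary looks anomalous.

```python
import json
import statistics
from typing import List, Optional

def summarize_window(samples: List[float]) -> dict:
    """Aggregate a window of raw readings into a compact summary."""
    return {
        "mean": statistics.fmean(samples),
        "stdev": statistics.stdev(samples),
        "max": max(samples),
        "n": len(samples),
    }

def process_window(samples: List[float], threshold: float = 2.5) -> Optional[str]:
    """Return a JSON alert only when the window looks anomalous;
    otherwise nothing leaves the device, saving bandwidth."""
    summary = summarize_window(samples)
    z = (summary["max"] - summary["mean"]) / summary["stdev"]
    if z > threshold:
        return json.dumps({"alert": "outlier", **summary})
    return None  # normal window: raw data stays local

window = [0.50, 0.51, 0.49, 0.50, 0.50, 0.51, 0.49, 0.50, 0.50, 0.95]
print(process_window(window) or "nothing sent")
```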

Safeguarding Data Quality

The popularity of this approach shows that data availability, quality, and size continue to be major challenges in smart production. What does this mean? First, we cannot train a machine learning model without high-quality data, meaning reliable, correct, and appropriately modeled input data. Second, the data needs labels in order to train classification algorithms. Third, the training data may be collected from various sources, which requires a remote mechanism to deploy edge applications when needed.

Also, collecting massive amounts of raw data often leads to an unusable data lake. Unfiltered, continuously streamed data consumes storage resources and creates noise exactly where one needs clean data to train new ML models. Randomly collected Big Data does not necessarily lead to knowledge. To pre-process and label data appropriately, we need to run intelligence at the edge: smart, local IoT analytics.
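What might such edge-side pre-processing and labeling look like? The sketch below, with invented labels and thresholds, keeps only out-of-range readings and sudden jumps, discarding the steady-state bulk before it ever reaches the data lake:

```python
from typing import Iterable, Iterator, Tuple

def label_and_filter(stream: Iterable[float],
                     low: float = 0.2,
                     high: float = 0.8) -> Iterator[Tuple[float, str]]:
    """Label each reading at the edge and drop the uninformative bulk,
    so the central data lake receives curated training data, not noise."""
    previous = None
    for value in stream:
        if value < low or value > high:
            yield value, "out_of_range"   # always worth keeping
        elif previous is not None and abs(value - previous) > 0.1:
            yield value, "sudden_jump"    # keep transitions
        # in-range, steady readings are discarded locally
        previous = value

raw = [0.5, 0.51, 0.5, 0.9, 0.5, 0.65, 0.5]
print(list(label_and_filter(raw)))
```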

Some Examples

At the edge device level, the measurement and data streaming processes require a minimal number of parameters to operate, e.g. sampling and streaming frequencies. Again, continuous streaming can lead to huge volumes of IoT data, so it is a good idea to prevent data accumulation by looking at data streaming priority. Such a process demands a (preferably automated and adaptive) decision on the priority of data: whether real-time data will be streamed, stored temporarily, or stored permanently.
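A sketch of such an adaptive priority decision, with hypothetical thresholds: readings close to the learned baseline are dropped, moderate deviations are stored locally, and strong deviations are streamed in real time.

```python
from enum import Enum

class Priority(Enum):
    STREAM = "stream immediately"  # real-time path to the controller
    STORE = "store locally"        # keep for batch upload / retraining
    DROP = "discard"               # steady-state, no information gain

def decide_priority(value: float, baseline: float, tolerance: float) -> Priority:
    """Adaptive data-streaming decision at the edge: the further a reading
    deviates from the learned baseline, the higher its priority."""
    deviation = abs(value - baseline)
    if deviation > 3 * tolerance:
        return Priority.STREAM
    if deviation > tolerance:
        return Priority.STORE
    return Priority.DROP

for v in (0.50, 0.58, 0.90):
    print(v, decide_priority(v, baseline=0.50, tolerance=0.05))
```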

Let’s take the temperature sensitivity of fold molding in an automotive factory as an example. An open window or door can alter the precision of this sensitive procedure and hence the product quality. However, one shouldn’t stream the state of all windows and doors in the factory at every given moment; the local maintenance device should report such a change only when an algorithm detects that it matters. Another example is squeal detection from sound snippets in a running car. Here we can run neural network models on edge devices to classify the data and send only relevant, labeled data to the database, saving time and resources.
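As a rough stand-in for the neural network classifier described above, the sketch below scores a sound snippet by its high-frequency energy and transmits only snippets labeled as squeal; the sampling rate, cutoff, and threshold are all assumed values:

```python
import numpy as np
from typing import Optional

SAMPLE_RATE = 16_000  # Hz, assumed microphone sampling rate

def squeal_score(snippet: np.ndarray, cutoff_hz: float = 4_000.0) -> float:
    """Fraction of spectral energy above the cutoff. Squeal is dominated
    by high-frequency tones, so a high score is suspicious. (A production
    system would use a trained classifier instead of this heuristic.)"""
    spectrum = np.abs(np.fft.rfft(snippet)) ** 2
    freqs = np.fft.rfftfreq(len(snippet), d=1.0 / SAMPLE_RATE)
    total = spectrum.sum()
    return float(spectrum[freqs >= cutoff_hz].sum() / total) if total else 0.0

def maybe_transmit(snippet: np.ndarray, threshold: float = 0.5) -> Optional[dict]:
    """Send only relevant, labeled snippets to the database."""
    score = squeal_score(snippet)
    if score > threshold:
        return {"label": "squeal", "score": round(score, 3)}
    return None  # normal sound: nothing leaves the edge device

t = np.linspace(0, 0.1, int(SAMPLE_RATE * 0.1), endpoint=False)
hum = np.sin(2 * np.pi * 120 * t)       # low-frequency engine hum
squeal = np.sin(2 * np.pi * 6_000 * t)  # high-pitched squeal
print(maybe_transmit(hum), maybe_transmit(squeal))
```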
