In my first post I discussed holistic prognostics, outlining the full set of information required by asset stakeholders. I also discussed the two fundamental levers predictive maintenance must deliver to enable value to be realized. The first of these was Prognostics and Remaining Useful Life (RUL). The second lever, Diagnostics, and the granularity with which faults can be isolated, are covered here.
Many predictive maintenance systems today fall far short of offering full diagnosis of fault conditions. Some of the least mature systems only enable a threshold alarm to be set on individual parameters to warn stakeholders that those parameters are outside their normal operating zones. This has limited value: it provides little information about possible underlying causes and requires expert diagnosis from experienced people to determine the seriousness of the situation. Many parameters are also influenced by other factors, and depending on operational circumstances, the band of normal operation for a parameter may shift. Thresholds set on single parameters, uncompensated for these operating environment influences, may therefore frequently trigger false alarms, resulting in nugatory investigative work by the expert diagnostician. This can cause maintenance costs to increase through wasted investigations, and the credibility of the predictive maintenance system itself may be put at risk if the rate of false alarms is unacceptably high. This most basic form of diagnosis should be termed anomaly detection, and you should beware of organisations that claim anomaly detection is diagnostics.
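To illustrate why fixed single-parameter thresholds are fragile, here is a minimal Python sketch (the parameter, alarm limit and data are invented for illustration, not taken from any real system): a fixed limit on a single temperature reading fires repeatedly once hot ambient conditions shift the healthy operating band upwards.

```python
import numpy as np

# Hypothetical single-parameter threshold alarm: bearing temperature in degrees C.
ALARM_THRESHOLD = 85.0  # fixed limit, chosen without regard to operating context

def threshold_alarms(temperature_readings):
    """Flag every reading that exceeds the fixed threshold."""
    return [t > ALARM_THRESHOLD for t in temperature_readings]

# On a hot day the whole operating band shifts upward, so readings breach the
# limit even though the machine is perfectly healthy.
rng = np.random.default_rng(0)
cool_day = 70 + 5 * rng.standard_normal(100)   # healthy machine, mild ambient
hot_day = 82 + 5 * rng.standard_normal(100)    # healthy machine, hot ambient

print(sum(threshold_alarms(cool_day)), "alarms on the cool day")
print(sum(threshold_alarms(hot_day)), "alarms (mostly false) on the hot day")
```

Every one of those hot-day alarms would land on an expert diagnostician's desk, which is exactly the nugatory investigative work described above.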
The next stage in diagnostic maturity is where anomaly detection is augmented by functions that compensate for normal or expected influencing factors in the operational environment. Examples of influencing factors include ambient temperature (daily and seasonal variation) or, in the case of aircraft engines, altitude and air pressure. Models built to correct observed parameters for influencing factors are sometimes called models of normality. A model of normality predicts what the observed parameter should be given the influencing factors; subtracting this prediction from the raw sensor parameter produces a residual signal. You can then detect anomalies in the residual signal by looking for threshold crossings, sustained rates of change, increases in erratic behavior and so on. It is often better to detect an early but steady rate of change than to wait for a threshold to be crossed.
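To make the idea concrete, here is a minimal sketch assuming an invented parameter driven by ambient temperature and load (all names, data and limits are illustrative): a simple regression serves as the model of normality, the residual is derived by subtraction, and a sustained trend in the residual is flagged rather than waiting for a threshold crossing.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
ambient = 15 + 10 * np.sin(np.linspace(0, 6 * np.pi, n))  # daily/seasonal swing
load = rng.uniform(0.5, 1.0, n)                           # operating load factor

# Raw sensor parameter driven mostly by the environment, plus a slow drift
# (the incipient fault we actually want to see) and measurement noise.
drift = np.linspace(0, 4, n)
raw = 40 + 0.8 * ambient + 20 * load + drift + rng.normal(0, 0.5, n)

# Fit the model of normality on a healthy reference period (first 40% of data).
healthy = slice(0, int(0.4 * n))
X = np.column_stack([ambient, load])
model = LinearRegression().fit(X[healthy], raw[healthy])

# Residual = observed minus expected; environmental effects largely cancel out.
residual = raw - model.predict(X)

# Look for a sustained rate of change in the residual rather than waiting for
# a fixed threshold: fit a slope over a trailing window of samples.
window = 100
slope = np.polyfit(np.arange(window), residual[-window:], 1)[0]
print(f"residual trend: {slope:.4f} units per sample")
if slope > 0.005:  # illustrative limit only
    print("early warning: residual is steadily increasing")
```

The residual stays flat for environmental swings that would have tripped the naive threshold, while the slow drift of the developing fault shows up as a steady trend.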
Diagnosis is built on top of anomaly detection. It combines several anomalies, each varying to a different degree, into a pattern that can be classified to a particular failure mode. This is analogous to medical differential diagnosis, where multiple symptoms presented by a patient are observed, measured, contextualized and classified by the doctor to diagnose the illness.
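Continuing the sketch above (the failure-mode names and signatures below are purely illustrative; in practice they would come from engineering knowledge or trained classifiers), one simple way to classify a pattern of anomaly scores to a failure mode is to match it against a small library of known signatures.

```python
import numpy as np

# Toy signature library: (vibration, temperature, oil_pressure) residual
# anomaly scores on a 0..1 scale for each hypothetical failure mode.
SIGNATURES = {
    "bearing wear":        np.array([0.9, 0.6, 0.1]),
    "lubrication failure": np.array([0.4, 0.8, 0.9]),
    "sensor fault":        np.array([0.1, 0.9, 0.0]),
}

def classify(anomaly_scores):
    """Return the failure mode whose signature is closest to the observed pattern."""
    observed = np.asarray(anomaly_scores, dtype=float)
    return min(SIGNATURES, key=lambda mode: np.linalg.norm(observed - SIGNATURES[mode]))

# Strong vibration anomaly, moderate temperature anomaly, pressure normal.
print(classify([0.85, 0.55, 0.05]))  # -> "bearing wear"
```

This is the machine equivalent of the doctor weighing several symptoms together rather than reacting to any one of them in isolation.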
In conclusion, the attributes of mature diagnostics in predictive maintenance systems go far beyond simple anomaly detection:
- Must compensate for influencing factors in the operational environment, possibly employing a model of normality and deriving residual signals
- Must use a multi-parameter approach, classifying the pattern of anomalies detected from residual signals, which is necessary to isolate the failure mode and fault condition
Furthermore, modelling normality and classifying sets of anomalies (or symptoms) should be highly automated, rather than relying entirely on human diagnostic experts (who are expensive and rare). Predictive analytics are employed to deliver this diagnostic automation.
If you are specifying or scoring a predictive maintenance system, you need to ask about the attributes above, but you should also understand what level of fault isolation you need in your own organisation. If your organisation carries out its own maintenance, these levels may be granular and detailed. Similarly, if you want predictive maintenance to support root cause analysis, you also need detail. On the other hand, if you outsource maintenance, you may only need failure mode or fault isolation at a higher machine level.
In the next post we will discuss the other vital attributes of diagnostics and prognostics you need in order to be an intelligent user or customer of predictive maintenance.
Charlie Dibsdale is co-founder and technical director at Ox Mountain, a start-up working to deliver predictive analytics that automate in-service support and maintenance for organizations that rely on complex and critical machinery assets. A seasoned electrical engineer with over 35 years of experience in operating and maintaining submarines, nuclear propulsion plant, power plants and other assets, he was Chief Engineer and Global Head of Equipment Health Management at Rolls-Royce, where he played a key role in developing that company's renowned predictive maintenance capability underpinning its TotalCare™ service offerings. Charlie holds a BSc in Computer Science and an MSc in Information Systems from the Open University (UK).
If you are a service professional (manager, practitioner, consultant or academic) in an industrial setting, join our group Service in Industry on LinkedIn.