Quality control plays a major role in industrial systems, guaranteeing stable and safe process operation as well as fault-free products. A number of standard applications for optical fault diagnosis already exist – typically image processing systems with some machine learning capabilities. Our department, however, aims to extend these systems along two major directions:
- Towards a more “human-like” notion of “quality”:
The decision between “good” and “bad” is sometimes based not only on exact criteria but also on “subjective” ones. Modelling such criteria requires dealing with different types of input from one or several operators, such as inputs of varying detail or uncertain or even contradictory labellings; a sketch of how such labellings might be fused is given after this list. Our work in the EU project DynaVis is just one example of extracting knowledge from data provided in different forms by different operators.
- Taking classification accuracies beyond 99%:
Classification accuracy rates of about 95%, as often achieved by current state-of-the-art machine learning algorithms, are not acceptable in many manufacturing areas. Reaching higher precision requires specific feature pre-processing steps and adaptive models such as on-line (incremental) machine learning techniques, which take concept drift and other time-dependent variations into account (see the second sketch below).
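The following is a minimal sketch of one way differently detailed and possibly contradictory operator labellings could be fused into training data. The simple vote-fusion and weighting scheme, the simulated data, and all names are illustrative assumptions, not the approach used in DynaVis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Feature vectors of the inspected items (e.g. extracted from surface images).
X = rng.random((200, 16))

# Labels from three operators: 1.0 = "bad", 0.0 = "good", NaN = item not rated.
# In practice these inputs differ in detail and may contradict each other.
operator_labels = rng.choice([0.0, 1.0, np.nan], size=(200, 3), p=[0.55, 0.35, 0.10])

# Fuse into a soft label: the fraction of raters that called the item "bad".
n_votes = np.sum(~np.isnan(operator_labels), axis=1)
soft_label = np.nansum(operator_labels, axis=1) / np.maximum(n_votes, 1)

# Hard consensus label plus a weight expressing operator agreement:
# items the operators disagree on (soft label near 0.5) get less influence.
y = (soft_label >= 0.5).astype(int)
agreement = np.abs(soft_label - 0.5) * 2.0      # 0 = full conflict, 1 = unanimous
weights = 0.1 + 0.9 * agreement                 # never discard an item entirely

rated = n_votes > 0                             # skip items nobody labelled
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[rated], y[rated], sample_weight=weights[rated])
```

Down-weighting rather than discarding conflicting items keeps the training set large while letting unanimous judgements dominate the learned notion of “quality”.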
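The second sketch below illustrates on-line (incremental) learning on a stream of inspection batches, with a simple per-batch error check standing in for a proper concept-drift handling strategy. The threshold, the simulated drift, and the reset-and-relearn reaction are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])                  # "good" / "bad"
drift_threshold = 0.20                      # assumed tolerable error per batch

def next_batch(t, size=64):
    """Stand-in for one batch of feature vectors and later-confirmed labels."""
    X = rng.random((size, 8))
    shift = 0.0 if t < 50 else 0.4          # simulated abrupt drift after batch 50
    y = (X[:, 0] + shift * X[:, 1] > 0.6).astype(int)
    return X, y

for t in range(100):
    X_batch, y_batch = next_batch(t)
    if t > 0:
        # "Test then train": evaluate on the new batch before learning from it.
        error = np.mean(model.predict(X_batch) != y_batch)
        if error > drift_threshold:
            # Suspected concept drift: discard the old model and relearn.
            model = SGDClassifier(random_state=0)
    # Incremental update; 'classes' is passed so the first call knows all labels.
    model.partial_fit(X_batch, y_batch, classes=classes)
```

In a production setting, the crude error-rate check would be replaced by a dedicated drift detector and the model update would run within the real-time constraints of the inspection line.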
The strength of our department lies in combining the latest results of our research with programming experience down to the hardware level. On this basis we have built – and continue to maintain – several real-time, production-critical quality control applications for major industrial companies.