
The Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, based in Sankt Augustin near Bonn, develops testing tools that examine and evaluate AI applications in terms of their reliability, fairness, robustness, transparency, or data protection. The tools now being presented at HANNOVER MESSE 2023 can be combined in a modular fashion and are embedded in a software framework. The "ScrutinAI" tool enables testing institutes to systematically search for weak points in neural networks and thus test the quality of AI applications. A concrete example is an AI application that detects anomalies and diseases in CT images. The question here is whether all types of anomalies are detected equally well, or whether some are detected better than others. This analysis helps to assess whether an AI application is suitable for its intended context of use.
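The article does not describe how ScrutinAI works internally, but the kind of question it asks — are all anomaly types detected equally well? — can be illustrated with a simple per-class recall check. The labels and predictions below are hypothetical, purely for illustration.

```python
from collections import defaultdict

def per_class_recall(y_true, y_pred):
    """For each class: of all true instances, what fraction did the model find?
    Large gaps between classes point to weak spots in the network."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    return {c: hits[c] / totals[c] for c in totals}

# Hypothetical CT-image findings (not from the actual tool)
y_true = ["lesion", "lesion", "nodule", "nodule", "nodule", "normal"]
y_pred = ["lesion", "normal", "nodule", "nodule", "normal", "normal"]

print(per_class_recall(y_true, y_pred))
# Here "lesion" recall (0.5) lags behind "nodule" (0.67) and "normal" (1.0),
# the kind of imbalance a systematic weak-point search would surface.
```

A real analysis would slice the data further (by scanner, patient group, image quality) to locate where and why performance drops.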

The AI questioning its own decisions

The "uncertAInty" method integrated into the framework equips neural networks with a situation-dependent quality assessment that lets them evaluate how certain they are about each prediction. Particularly in the case of highly automated AI decisions, it is important to be able to assess how certain an AI is about its result. For example, an autonomous vehicle must be able to reliably recognise objects and people in its environment so that it can react appropriately to them. Uncertainty assessment helps to gauge whether the system's decision is trustworthy or whether certain fallback mechanisms need to be activated.
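The article does not specify which uncertainty measure the method uses; a minimal sketch of the general idea is the entropy of a classifier's predicted distribution, with a threshold deciding between acting on the prediction and triggering a fallback. The threshold value and logits below are illustrative assumptions.

```python
import math

def softmax(logits):
    """Convert raw network scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predictive_entropy(probs):
    """Entropy of the prediction: near 0 = confident, higher = uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = softmax([8.0, 0.5, 0.1])  # one class clearly dominates
uncertain = softmax([1.1, 1.0, 0.9])  # scores nearly tied

# Hypothetical fallback rule: defer to safe behaviour above a threshold
THRESHOLD = 0.5
for probs in (confident, uncertain):
    h = predictive_entropy(probs)
    action = "act on prediction" if h < THRESHOLD else "trigger fallback"
    print(f"entropy={h:.2f} -> {action}")
```

Production systems typically use stronger estimators (e.g. ensembles or Monte Carlo dropout), since raw softmax scores are known to be overconfident, but the decision logic — measure uncertainty, then gate the action — is the same.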

Avoiding "biases"

The "benchmarking" tool helps companies select the right AI application for a given task. It includes functionality for measuring the fairness of training datasets. This is crucial in the HR sector, for example, with AI applications that assist in the selection of new employees. Here, the AI application must be trained with balanced and statistically representative datasets to avoid disadvantaging groups of people.
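The article does not say which fairness metrics the tool computes; one common check for a hiring dataset is to compare positive-label rates across demographic groups (the "four-fifths rule" flags ratios below 0.8). The group names and records below are hypothetical.

```python
from collections import Counter

def selection_rates(records, group_key="group", label_key="hired"):
    """Positive-label rate per demographic group in a training dataset."""
    pos = Counter()
    tot = Counter()
    for r in records:
        tot[r[group_key]] += 1
        pos[r[group_key]] += int(r[label_key])
    return {g: pos[g] / tot[g] for g in tot}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate; < 0.8 suggests imbalance."""
    return min(rates.values()) / max(rates.values())

# Hypothetical HR training data (illustrative only)
data = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]
rates = selection_rates(data)
print(rates, f"disparate impact = {disparate_impact(rates):.2f}")
# Group A is hired at 0.75, group B at 0.25 -> ratio 0.33, well below 0.8:
# a model trained on this data risks reproducing the imbalance.
```

A dataset that fails such a check would need rebalancing or reweighting before being used to train a selection model.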