Testing and validating AI-based software systems

The use of artificial intelligence already influences our lives today:

Banks assess our creditworthiness using AI-based software, insurance companies use AI to calculate premiums based on individual risk profiles, and medical AI software supports diagnosis, for example through AI-based analysis of X-ray images.

The use of such AI-based software for decision support offers enormous advantages: even with a very high number of cases and large amounts of data, the AI software can analyze and classify each individual data record (e.g. a customer's insurance application or a patient's X-ray image) fully automatically. Even after thousands of records, the AI system never gets tired or inattentive and always applies the same criteria.

The results of such automated decisions can have serious consequences for the customer or patient concerned, for example if an insurance application is rejected because the AI system predicts a high claims risk, or if a medical diagnosis is wrong because the AI system misclassifies the X-ray image.

AI-based decision systems and autonomous systems

Manufacturers as well as users of AI-based decision systems must ensure that their AI software is trustworthy!

Faulty decisions of the AI can have various causes:

  • the AI processes the data incorrectly because, as with "conventional" software, there is a programming error in the program code,

  • the AI classifies incorrectly because the classification model to be applied is inadequate or because the AI has "learned" it incorrectly or insufficiently,

  • the AI classifies incorrectly because the underlying data or its preprocessing is incorrect, inaccurate or incomplete (see the sketch after this list).
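
By way of illustration, the following sketch shows one possible automated check for each of these three causes: a data-quality assertion, a minimum model quality on held-out validation data, and a conventional regression test of the decision pipeline. It is a minimal example, assuming scikit-learn and a public demo dataset rather than a real project; the 90 % accuracy threshold is an assumed, project-specific quality target.

    # Minimal sketch, not imbus tooling: one automated check per failure cause,
    # using scikit-learn and a public demo dataset; the 90 % accuracy threshold
    # is an assumed, project-specific quality target.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Labeled data set, with a portion held back for validation.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.3, random_state=42)

    # Cause 3: data basis / preprocessing -- reject incomplete or non-finite records.
    assert not np.isnan(X_train).any(), "training data contains missing values"
    assert np.isfinite(X_train).all(), "training data contains non-finite values"

    # Cause 2: inadequate or badly learned model -- the trained classifier must
    # reach a minimum accuracy on data it has never seen.
    model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
    val_accuracy = accuracy_score(y_val, model.predict(X_val))
    assert val_accuracy >= 0.90, f"validation accuracy too low: {val_accuracy:.2f}"

    # Cause 1: programming errors -- a conventional regression test: the same
    # input record must always yield the same, valid class label.
    reference_record = X_val[:1]
    first_result = model.predict(reference_record)[0]
    assert first_result in set(y)
    assert model.predict(reference_record)[0] == first_result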

All these aspects have to be checked not only once (in the course of testing and validation of the AI-based decision system), but over the entire life cycle of the system:

 

  • The AI algorithms used must be fundamentally suitable and sufficiently powerful for the intended application.
  • The training data used must be selected so that it is representative and free of discriminatory bias.
  • The system should be able to make its decisions comprehensible and explainable ("explainable AI"). The better the system can "explain" why a decision turned out one way or the other, the more confidence the user can have in that decision (see the sketch after this list).
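
The last two points in particular lend themselves to automated checks. The following sketch checks that no class is severely under-represented in the training data and then reports which input features drive the model's decisions. Again, this is a minimal example with scikit-learn and a public demo dataset; the 20 % minimum class share is an assumed, project-specific threshold, and permutation feature importance stands in here for the much broader field of explainable-AI techniques.

    # Minimal sketch, not imbus tooling: a representativeness check on the
    # training data and a simple global "explanation" of the trained model via
    # permutation feature importance (one of several explainability techniques);
    # the 20 % minimum class share is an assumed, project-specific threshold.
    from collections import Counter

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_val, y_train, y_val = train_test_split(
        data.data, data.target, test_size=0.3, random_state=42)

    # Representativeness: no class may be drastically under-represented.
    class_shares = {label: count / len(y_train)
                    for label, count in Counter(y_train).items()}
    assert min(class_shares.values()) >= 0.20, f"unbalanced data: {class_shares}"

    model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

    # Explainability: report which input features drive the model's decisions,
    # so that domain experts can judge whether the learned criteria are plausible.
    result = permutation_importance(model, X_val, y_val, n_repeats=10,
                                    random_state=42)
    for idx in np.argsort(result.importances_mean)[::-1][:5]:
        print(f"{data.feature_names[idx]:25s} {result.importances_mean[idx]:.3f}")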

For further information, please refer to the article "Why AI needs intelligent quality assurance" by Nils Röttger, Gerhard Runze and Verena Dietrich, published in OBJEKTspektrum, issue 02/2020, and in German Testing Magazin, issue 01/2020, pp. 20-24.

Trust in AI-based systems through quality assurance

For AI systems, or software containing AI components, to function correctly and reliably and to be trustworthy and ethically acceptable, it is essential that the development and operation of such systems are accompanied by a professional quality assurance process.

By the way: since mid-2019, our imbus AI specialists, together with other experts from business and society, have been working on a roadmap for norms and standards in the field of AI, under the leadership of the German Institute for Standardization (DIN) and the German Commission for Electrical, Electronic & Information Technologies (DKE), in a joint project with the Federal Ministry of Economics and Energy (BMWi).

 


Your contact at imbus

Mr. Tilo Linz

Mail: tilo.linz@imbus.de
Phone: +49 9131 7518-210
Fax: +49 9131 7518-50
