What is Explainable AI (XAI)?

The setup of XAI techniques rests on three main methods. Prediction accuracy and traceability address technology requirements, while decision understanding addresses human needs. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.⁵

Prediction accuracy
Accuracy is a key component of how successfully AI can be used in everyday operation. Prediction accuracy can be determined by running simulations and comparing the XAI output to the results in the training data set. The most popular technique used for this is Local Interpretable Model-Agnostic Explanations (LIME), which explains the predictions of a classifier by approximating the ML algorithm locally with an interpretable model.
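As a rough illustration of how LIME is typically applied in practice, the sketch below trains a scikit-learn classifier and asks LIME for a local explanation of a single prediction. The dataset, model choice, and parameter values are assumptions made for the example, not part of any particular XAI deployment.

```python
# A minimal LIME sketch (assumed setup: scikit-learn's iris data and a random forest).
import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

# Train the "black box" model whose predictions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# LIME perturbs samples around one instance and fits an interpretable local surrogate.
explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single test prediction in terms of its most influential features.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # (feature condition, weight) pairs of the local model
```

Repeating such local explanations across many held-out predictions and checking them against known outcomes is one way the accuracy comparison described above can be carried out.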

Traceability
Traceability is another key technique for accomplishing XAI. It is achieved, for example, by limiting the ways decisions can be made and setting up a narrower scope for ML rules and features. An example of a traceability XAI technique is DeepLIFT (Deep Learning Important FeaTures), which compares the activation of each neuron to a reference activation, showing a traceable link between activated neurons as well as the dependencies between them.
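The contribution scores DeepLIFT produces come from comparing each neuron's activation with its activation on a reference input. The sketch below is a hand-rolled, single-neuron illustration of that difference-from-reference idea (DeepLIFT's "Rescale" rule) in plain NumPy; the weights, inputs, and reference values are invented for the example, and a real application would use the deeplift package or a DeepLIFT-based explainer rather than this toy code.

```python
# Toy difference-from-reference attribution in the spirit of DeepLIFT's Rescale rule.
# All numbers here (weights, inputs, reference) are made up for illustration.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

w = np.array([0.8, -0.5, 1.2])      # weights of a single neuron
x = np.array([1.0, 2.0, 0.5])       # actual input
x_ref = np.array([0.0, 0.0, 0.0])   # reference ("baseline") input

z, z_ref = w @ x, w @ x_ref          # pre-activations for input and reference
y, y_ref = relu(z), relu(z_ref)      # activations for input and reference

# Rescale rule: spread the change in activation (y - y_ref) over the inputs
# in proportion to each input's share of the change in pre-activation.
multiplier = (y - y_ref) / (z - z_ref) if z != z_ref else 0.0
contributions = multiplier * w * (x - x_ref)

print("activation change:", y - y_ref)
print("per-input contributions:", contributions)
print("contributions sum to activation change:",
      np.isclose(contributions.sum(), y - y_ref))
```

Carrying this neuron-by-neuron bookkeeping through every layer is what lets DeepLIFT trace a prediction back to individual input features.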

Decision understanding
This is the human factor. Many people distrust AI, yet to work with it efficiently, they need to learn to trust it. This is accomplished by educating the team working with the AI so they understand how and why the AI makes its decisions.
