A game theoretic approach to explain the output of any machine learning model.
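The "game theoretic approach" here refers to Shapley values: each feature's attribution is its average marginal contribution over all orderings in which features are revealed to the model. The library implements efficient approximations; as a minimal sketch (exact, exponential-cost, pure Python — function and variable names are illustrative, not the library's API):

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values by averaging each feature's marginal
    contribution over all feature orderings. Exponential in the number
    of features, so only usable for toy inputs; practical tools
    approximate this or exploit model structure."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)            # start from the baseline input
        prev = f(z)
        for i in order:               # reveal features one at a time
            z[i] = x[i]
            cur = f(z)
            phi[i] += cur - prev      # marginal contribution of feature i
            prev = cur
    return [p / len(perms) for p in phi]

# Toy linear model: attributions should recover each coefficient
# times the feature's deviation from the baseline.
f = lambda z: 2.0 * z[0] + 3.0 * z[1] - 1.0 * z[2]
phi = shapley_values(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# phi ≈ [2.0, 3.0, -1.0]; the values sum to f(x) - f(baseline)
```

The sum of the attributions equals the difference between the model's output and its baseline output (the "efficiency" property), which is what makes Shapley values attractive for explaining individual predictions.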
Updated Jun 1, 2022 - Jupyter Notebook
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
Many Class Activation Map methods implemented in Pytorch for CNNs and Vision Transformers. Examples for classification, object detection, segmentation, embedding networks and more. Including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM
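Libraries like this handle the hook plumbing and the many CAM variants, but the core Grad-CAM combination step is small: global-average-pool the gradients of the target score with respect to a conv layer's activations to get per-channel weights, take the weighted sum of the activation maps, and apply ReLU. A minimal sketch in plain Python (the activation/gradient arrays would come from forward/backward hooks in a real pipeline; here they are passed in directly):

```python
def grad_cam(activations, gradients):
    """Core Grad-CAM combination step.
    activations, gradients: C feature maps of shape H x W (nested lists),
    the conv layer's outputs and the gradients of the target class score
    with respect to them."""
    C = len(activations)
    H, W = len(activations[0]), len(activations[0][0])
    # Channel weights: global-average-pool the gradients per channel.
    weights = [sum(sum(row) for row in gradients[c]) / (H * W) for c in range(C)]
    # Weighted sum of activation maps, then ReLU to keep only
    # regions with a positive influence on the target class.
    cam = [[max(0.0, sum(weights[c] * activations[c][i][j] for c in range(C)))
            for j in range(W)] for i in range(H)]
    return cam

# Two 2x2 feature maps with constant gradients of 1.0 and 0.5:
acts = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 2.0], [2.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[0.5, 0.5], [0.5, 0.5]]]
cam = grad_cam(acts, grads)
# cam == [[1.0, 1.0], [1.0, 1.0]]
```

The variants listed above (Grad-CAM++, Score-CAM, XGrad-CAM, …) mostly differ in how those channel weights are computed; the weighted-sum-plus-ReLU step stays the same.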
A collection of infrastructure and tools for research in neural network interpretability.
Model interpretability and understanding for PyTorch
A curated list of awesome machine learning interpretability resources.
Currently our unit tests are disorganized and each test creates example StellarGraph graphs in different or similar ways with no sharing of this code.
This issue is to improve the unit tests by making functions that create example graphs available to all unit tests, for example by making them pytest fixtures at the top level of the tests (see https://docs.pytest.org/en/latest/).
Notice that the [source] links at the top of method descriptions are sometimes broken, don't exist, or, when functional, lead to the implementation module, which should be considered private. Proposing to either remove these links entirely or link to the relevant API documentation in the public API.
Federated Learning Library: https://fedml.ai
[ICCV 2017] Torch code for Grad-CAM
moDel Agnostic Language for Exploration and eXplanation
Interpretability Methods for tf.keras models with Tensorflow 2.x
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM, Layer-CAM)
XAI - An eXplainability toolbox for machine learning
Interpretable ML package
A collection of anomaly detection methods (iid/point-based, graph and time series) including active learning for anomaly detection/discovery, bayesian rule-mining, description for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Network.
Interesting resources related to XAI (Explainable Artificial Intelligence)
Visualization toolkit for neural networks in PyTorch!
A collection of research materials on explainable AI/ML
Public facing deeplift repo
Model explainability that works seamlessly with
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Highly cited and top-conference papers from recent years on the interpretability of deep neural network models (with code)
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Code for the TCAV ML interpretability project
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models in Keras.
H2O.ai Machine Learning Interpretability Resources
Human-explainable AI.