A game theoretic approach to explain the output of any machine learning model.
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers. Examples for classification, object detection, segmentation, embedding networks and more, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM.
A collection of infrastructure and tools for research in neural network interpretability.
Model interpretability and understanding for PyTorch
A curated list of awesome machine learning interpretability resources.
Currently our unit tests are disorganized: each test creates example StellarGraph graphs in its own, often similar, way, with none of this code shared.
This issue is to improve the unit tests by making functions that create example graphs available to all tests, for example by defining them as pytest fixtures at the top level of the test suite (see https://docs.pytest.org/en/latest/).
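The shared-fixture approach described above can be sketched as follows. This is a hypothetical illustration, not the actual StellarGraph test suite: the names `make_example_graph` and `example_graph` and the plain-dict graph representation are assumptions (the real code would construct StellarGraph instances).

```python
# Hypothetical sketch of sharing example-graph construction across tests
# via a top-level pytest fixture. Placed in a top-level conftest.py, the
# fixture becomes available to every test module without imports.
import pytest


def make_example_graph():
    """Build a small example graph as plain adjacency data.

    In the real test suite this would construct a StellarGraph instance;
    a plain dict keeps the sketch self-contained.
    """
    return {
        "nodes": ["a", "b", "c"],
        "edges": [("a", "b"), ("b", "c")],
    }


@pytest.fixture
def example_graph():
    # Each test receives a fresh graph, so tests cannot interfere
    # with each other through shared mutable state.
    return make_example_graph()


def test_example_graph_has_edges(example_graph):
    assert len(example_graph["edges"]) == 2
```

Centralizing construction in one helper means a change to the example graph's shape is made in one place rather than in every test file.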
The Boston dataset which we use in some examples has an ethical problem and should be replaced. Read more here: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston
Impacted examples:
cfproto_housing.ipynb, ale_regression_boston.ipynb

The above link suggests some similar housing-related alternatives.
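As a minimal sketch, the impacted notebooks could switch to one of the suggested alternatives, for example scikit-learn's California housing dataset; which replacement each notebook should actually use is left open by the issue.

```python
# Sketch: replacing the deprecated Boston dataset with the California
# housing dataset, one of the housing-related alternatives suggested at
# the scikit-learn link above.
from sklearn.datasets import fetch_california_housing

# Downloads the dataset on first use and caches it locally.
housing = fetch_california_housing()
X, y = housing.data, housing.target

print(X.shape)  # (20640, 8): 20640 samples, 8 features
```

The returned object is a scikit-learn `Bunch`, so the notebooks' existing `data`/`target` access pattern carries over with only the feature names changing.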
Federated Learning Library: https://fedml.ai
[ICCV 2017] Torch code for Grad-CAM
moDel Agnostic Language for Exploration and eXplanation
Interpretability methods for tf.keras models with TensorFlow 2.x
XAI - An eXplainability toolbox for machine learning
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM, Layer-CAM)
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and description for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors; includes adversarial attacks with a Graph Convolutional Network.
Interpretable ML package
Interesting resources related to XAI (Explainable Artificial Intelligence)
Visualization toolkit for neural networks in PyTorch
Public-facing DeepLIFT repo
Model explainability that works seamlessly with
A collection of research materials on explainable AI/ML
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Highly cited and top-conference papers from recent years on neural network interpretability in deep learning (with code)
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImageNet200, and ImageNet
Code for the TCAV ML interpretability project
H2O.ai Machine Learning Interpretability Resources
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models in Keras.
Human-explainable AI.