A game theoretic approach to explain the output of any machine learning model.
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
A collection of infrastructure and tools for research in neural network interpretability.
A curated list of awesome machine learning interpretability resources.
Model interpretability and understanding for PyTorch
Currently our unit tests are disorganized: each test creates example StellarGraph graphs in its own way, with no sharing of this code.
This issue is to improve the unit tests by making functions that create example graphs available to all unit tests, for example by making them pytest fixtures at the top level of the tests (see https://docs.pytest.org/en/latest/).
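The fixture approach suggested above can be sketched as follows. This is a minimal, hypothetical example: the `example_graph_data` helper and its node/edge contents are illustrative, not taken from the StellarGraph code base, and the StellarGraph-specific graph construction is omitted so the sketch stays self-contained.

```python
# Sketch of sharing example-graph construction across unit tests via a
# top-level pytest fixture (e.g. in tests/conftest.py). The helper name
# and the node/edge data below are hypothetical placeholders.
import pytest

def example_graph_data():
    """Build one canonical example graph's raw data for reuse in all tests."""
    # Node features keyed by node ID, plus an edge list (source, target).
    nodes = {"a": [1.0], "b": [2.0], "c": [3.0]}
    edges = [("a", "b"), ("b", "c")]
    return nodes, edges

@pytest.fixture
def example_graph():
    # Defined at the top level of the test suite so every test module can
    # request the same example graph instead of rebuilding its own copy.
    return example_graph_data()

def test_example_graph_shape(example_graph):
    nodes, edges = example_graph
    assert len(nodes) == 3
    assert len(edges) == 2
```

With the fixture in `conftest.py`, any test function can simply take `example_graph` as an argument, and pytest injects the shared data.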
[ICCV 2017] Torch code for Grad-CAM
Interpretability Methods for tf.keras models with Tensorflow 2.x
Algorithms for monitoring and explaining machine learning models
moDel Agnostic Language for Exploration and eXplanation
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and description for diversity/explanation/interpretability. Includes analysis of incorporating label feedback with ensemble and tree-based detectors, and adversarial attacks with a Graph Convolutional Network.
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
XAI - An eXplainability toolbox for machine learning
Visualization toolkit for neural networks in PyTorch!
Public facing deeplift repo
Interesting resources related to XAI (Explainable Artificial Intelligence)
H2O.ai Machine Learning Interpretability Resources
Code for the TCAV ML interpretability project
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Highly cited and top-conference papers from recent years on the interpretability of deep neural network models (with code)
A Python package implementing a new model for text classification with visualization tools for Explainable AI
Layer-wise Relevance Propagation (LRP) for LSTMs
Pytorch Implementation of recent visual attribution methods for model interpretability
Official implementation of "Visualization of Convolutional Neural Networks for Monocular Depth Estimation"
Model Agnostic breakDown plots
A collection of research papers categorized into broad topics in federated learning.
Using / reproducing ACD (ICLR 2019) from the paper "Hierarchical interpretations for neural network predictions"