A game theoretic approach to explain the output of any machine learning model.
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning models.
XAI - An eXplainability toolbox for machine learning
Visualization toolkit for neural networks in PyTorch.
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
H1st AI solves the critical “cold-start” problem of Industrial AI: encoding human expertise to augment the lack of data, while building a smooth transition toward a machine-learning future. This problem has caused most industrial-AI projects to fail.
Using / reproducing ACD (ICLR 2019) from the paper "Hierarchical interpretations for neural network predictions"
Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
[CVPR 2020 Workshop] Official implementation of Score-CAM in PyTorch
Explainability techniques for Graph Networks, applied to a synthetic dataset and an organic chemistry task. Code for the workshop paper "Explainability Techniques for Graph Convolutional Networks" (ICML19)
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Contextual AI adds explainability to different stages of machine learning pipelines - data, training, and inference - thereby addressing the trust gap between such ML systems and their users. It does not refer to a specific algorithm or ML method — instead, it takes a human-centric view and approach to AI.
TK & TKL - Efficient Transformer-based neural re-ranking models
The implementation of “A Capsule Network for Recommendation and Explaining What You Like and Dislike”, Chenliang Li, Cong Quan, Li Peng, Yunwei Qi, Yuming Deng, Libing Wu, https://dl.acm.org/citation.cfm?doid=3331184.3331216
Modular Python Toolbox for Fairness, Accountability and Transparency Forensics
Amazon SageMaker Solution for explaining credit decisions.
For calculating global feature importance using Shapley values.
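The game-theoretic idea underlying such Shapley-based tools can be sketched in a few lines of pure Python: a feature that is "present" takes the instance's value, an "absent" feature falls back to a baseline, and each feature's attribution is its average marginal contribution over all coalitions. The toy model, baseline, and function names below are illustrative assumptions, not any particular library's API; real libraries use sampling or model-specific approximations rather than this exponential enumeration.

```python
# Minimal exact Shapley-value feature attribution (illustrative sketch).
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical model: two linear terms plus an interaction.
    return 3.0 * x[0] + 2.0 * x[1] + x[1] * x[2]

def shapley_values(model, instance, baseline):
    n = len(instance)

    def value(coalition):
        # Features in the coalition take the instance's values;
        # the rest are replaced by the baseline.
        x = [instance[i] if i in coalition else baseline[i] for i in range(n)]
        return model(x)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

instance = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, instance, baseline)

# Efficiency property: attributions sum to f(instance) - f(baseline).
assert abs(sum(phi) - (model(instance) - model(baseline))) < 1e-9
```

A *global* importance score, as described above, is then typically obtained by averaging the absolute per-instance attributions `|phi[i]|` over a dataset.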
Data generator for Arena - interactive XAI dashboard
GEBI: Global Explanations for Bias Identification. Open source code for discovering bias in data with skin lesion dataset
Getting the Anchors Explainer to work in Different Settings
General-purpose library for extracting interpretable models from Multi-Agent Reinforcement Learning systems
Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Explainability of Deep Learning Models
Concept activation vectors for Keras
Analysis and investigation of the confounding effect of accents in end-to-end Automatic Speech Recognition models.
Using / reproducing TRIM from the paper "Transformation Importance with Applications to Cosmology"
ibreakdown is a model-agnostic prediction explainer with interaction support; the library can show the contribution of each feature to your prediction value.