Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference
Updated Dec 4, 2020 - Python
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Data augmentation for NLP
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Advbox also provides a command-line tool to generate adversarial examples with zero coding.
A Toolbox for Adversarial Robustness Research
Must-read Papers on Textual Adversarial Attack and Defense
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and description for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with a Graph Convolutional Network.
A PyTorch adversarial library for attack and defense methods on images and graphs
A curated list of adversarial attacks and defenses papers on graph-structured data.
A Harder ImageNet Test Set
Implementation of Papers on Adversarial Examples
A Model for Natural Language Attack on Text Classification and Inference
PyTorch implementations of adversarial attacks and utilities
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published in ICLR2018)
An Open-Source Package for Textual Adversarial Attack.
Implementation of the paper "Adversarial Attacks on Neural Networks for Graph Data".
Code for the NeurIPS 2019 paper "You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle"
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
Adversarial attacks and defenses on Graph Neural Networks.
Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training.
A simple PyTorch implementation of FGSM and I-FGSM
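The Fast Gradient Sign Method (FGSM) mentioned above perturbs each input feature by a fixed epsilon in the direction of the sign of the loss gradient; the iterated variant (I-FGSM) repeats smaller steps while projecting back into the epsilon-ball around the original input. A minimal dependency-free sketch on a toy logistic model (the model, weights, and function names here are illustrative assumptions, not taken from any of the listed repos):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step on a logistic model (toy stand-in for a network):
    move each feature by eps in the direction that increases the loss."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # For binary cross-entropy, d(loss)/d(x_i) = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

def i_fgsm(x, y, w, b, eps, alpha, steps):
    """Iterated FGSM: repeat small steps of size alpha, clipping the
    result back into the eps-ball around the original input x."""
    x_adv = list(x)
    for _ in range(steps):
        x_adv = fgsm(x_adv, y, w, b, alpha)
        x_adv = [min(max(xa, xo - eps), xo + eps)
                 for xa, xo in zip(x_adv, x)]
    return x_adv
```

In a real PyTorch implementation the hand-written gradient would come from `loss.backward()` and the sign of `x.grad`, but the update rule is the same.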
Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
Code for the CVPR 2019 paper "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses"
Implementation of the CVPR 2018 paper "Decoupled Networks".
Physical adversarial attack for fooling the Faster R-CNN object detector
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
Implementation of the paper "Adversarial Attacks on Graph Neural Networks via Meta Learning".
A list of awesome resources for adversarial attack and defense method in deep learning