Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Updated Jul 29, 2021 - Python
Data augmentation for NLP
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Adversary Emulation Framework
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Advbox provides a command-line tool to generate adversarial examples with zero coding.
A Toolbox for Adversarial Robustness Research
Must-read Papers on Textual Adversarial Attack and Defense
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with a Graph Convolutional Network.
PyTorch implementation of adversarial attacks.
A PyTorch adversarial library for attack and defense methods on images and graphs
A curated list of adversarial attacks and defenses papers on graph-structured data.
A Harder ImageNet Test Set (CVPR 2021)
A Model for Natural Language Attack on Text Classification and Inference
Implementation of Papers on Adversarial Examples
An Open-Source Package for Textual Adversarial Attack.
Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
Adversarial attacks and defenses on Graph Neural Networks.
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published in ICLR2018)
Implementation of the paper "Adversarial Attacks on Neural Networks for Graph Data".
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
Code for our NeurIPS 2019 paper: You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
Simple PyTorch implementation of FGSM and I-FGSM
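The FGSM step that several of the repositories above implement can be sketched on a toy logistic-regression model. This is an illustrative example, not code from any listed repository; the function name and parameters are hypothetical. FGSM perturbs each input coordinate by a fixed step `eps` in the direction of the sign of the loss gradient; I-FGSM simply repeats this step with clipping.

```python
import math

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step against a logistic-regression classifier.

    Loss is binary cross-entropy of sigmoid(w . x + b) against
    label y in {0, 1}; its gradient w.r.t. the input is (p - y) * w.
    """
    # Forward pass: predicted probability p = sigmoid(w . x + b)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    # Gradient of the loss with respect to each input coordinate
    grad = [(p - y) * wi for wi in w]
    # FGSM step: move each coordinate eps in the sign of its gradient
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Example: with w = [1, -2], b = 0, true label y = 1, the gradient signs
# are [-1, +1], so x = [0.5, 0.5] is pushed to [0.4, 0.6], increasing
# the loss on the true label.
x_adv = fgsm_perturb([0.5, 0.5], [1.0, -2.0], 0.0, 1, 0.1)
```

In a deep-learning framework the analytic gradient above is replaced by one backward pass through the network with respect to the input tensor; the sign-and-step logic is unchanged.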
Official TensorFlow Implementation of Adversarial Training for Free! which trains robust models at no extra cost compared to natural training.
Code for the CVPR 2019 article "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses"
Physical adversarial attack for fooling the Faster R-CNN object detector
Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Implementation of the CVPR 2018 paper "Decoupled Networks".