Here we release source code and/or data for research projects from David Wagner's research group at UC Berkeley.
- REAP: A Large-Scale Realistic Adversarial Patch Benchmark: https://github.com/wagner-group/reap-benchmark
- Part-Based Models Improve Adversarial Robustness: https://github.com/chawins/adv-part-model
- SLIP: Self-supervision meets Language-Image Pre-training: https://github.com/facebookresearch/SLIP
- Learning Security Classifiers with Verified Global Robustness Properties: https://github.com/surrealyz/verified-global-properties
- Demystifying the Adversarial Robustness of Random Transformation Defenses: https://github.com/wagner-group/demystify-random-transform
- SEAT: Similarity Encoder by Adversarial Training for Detecting Model Extraction Attack Queries: https://github.com/zhanyuanucb/model-extraction-defense (unsupported, research-quality code) (see also https://github.com/grasses/SEAT)
- Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams: https://github.com/wagner-group/geoadex
- Improving the Accuracy-Robustness Trade-Off for Dual-Domain Adversarial Training: https://github.com/wagner-group/dual-domain-at
- Defending Against Patch Adversarial Attacks with Robust Self-Attention: https://github.com/wagner-group/robust-self-attention
- Model-Agnostic Defense for Lane Detection against Adversarial Attack: https://github.com/henryzxu/lane-verification
- Minimum-Norm Adversarial Examples on KNN and KNN-Based Models: https://github.com/chawins/knn-defense