The purpose of this project is to design new ensemble learning models with better metric values, such as accuracy and robustness. This repository presents interesting papers on current work in ensemble learning and related topics, along with selected results on ensemble learning. It will be updated regularly.
- Revisiting Ensemble in Adversarial Context: Improving natural accuracy
(ICLR workshop 2020)
(Aditya Saligrama et al)
content: This paper designs an ensemble learning system that improves natural accuracy by building ensembles from composite models.
- Delving into Transferable Adversarial Examples and Black-box Attacks
(ICLR 2017)
(Yanpei Liu et al)
(code)
contents: The ensemble method proposed in this paper combines all pre-trained models and trains them together. The authors claim to be the first to study this topic on large models and a large-scale dataset, and the first to study both non-targeted and targeted attacks in this area.
- Enhance certified radius via a Deep Model Ensemble
(arXiv 31 Oct 2019)
(Huan Zhang et al)
content: This paper proposes an algorithm to enhance the certified robustness of a deep model ensemble by optimally weighting each base model.
- Ensembling adversarial and certified training
(ICLR 2020)
(Mislav Balunovic et al)
content: This paper presents COLT, a new method to train neural networks based on a novel combination of adversarial training and provable defenses.
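COLT's layerwise convex relaxation is too involved for a short snippet, but its adversarial-training ingredient can be sketched on a toy logistic model. Everything below (function names, the logistic model, hyperparameters) is illustrative and not taken from the paper:

```python
import numpy as np

def pgd_attack(w, b, x, y, eps=0.3, alpha=0.05, steps=10):
    """Projected gradient-ascent on the logistic loss, staying inside an
    l-inf ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        z = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-z))              # sigmoid prediction
        grad = (p - y) * w                        # d(loss)/dx for logistic loss
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the ball
    return x_adv

def adv_train_step(w, b, x, y, lr=0.1, eps=0.3):
    """One adversarial-training update: descend on the worst-case point."""
    x_adv = pgd_attack(w, b, x, y, eps=eps)
    p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
    return w - lr * (p - y) * x_adv, b - lr * (p - y)
```

A provable defense would additionally bound the loss over the whole eps-ball rather than only at the PGD point; COLT combines both ideas.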
- Meta-learning Enabled Adversarial Training
(Yiyi Tao, Bo Li et al)
content: The first meta-learning-based adversarial training approach. It performs ensemble learning by modifying the models' parameters.
- Provable robustness against all adversarial l_p-perturbations for p >= 1
(arXiv 27 May 2019)
(Francesco Croce et al)
(code)
content: This paper proposes a new regularization scheme for ReLU networks which enforces robustness and shows how it leads to provably robust models.
- self-supervised and using adversarial fine-tuning: Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
(arXiv 28 March 2020)
(Tianlong Chen et al)
(code and model training method)
Contents: This paper reveals that adversarial fine-tuning contributes substantially to robustness improvement. The authors also use ensemble learning and run experiments on different unforeseen attacks. For their loss function, they propose a regularization parameter to promote diversity.
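As a rough illustration of a diversity-promoting regularizer in an ensemble loss (the function and the cosine-similarity penalty below are hypothetical, not the paper's exact formulation):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def diversity_regularized_loss(logits_list, y, lam=0.1):
    """Toy ensemble loss: average cross-entropy of the members plus
    lam times the mean pairwise cosine similarity of their predicted
    distributions. Similar members raise the loss, so minimizing it
    pushes the ensemble toward diverse predictions."""
    probs = [softmax(z) for z in logits_list]
    ce = -np.mean([np.log(p[y] + 1e-12) for p in probs])
    sims = []
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            a, b = probs[i], probs[j]
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    penalty = float(np.mean(sims)) if sims else 0.0
    return ce + lam * penalty
```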
- Ensemble Adversarial Training: Attacks and Defenses
(ICLR 2018)
(Florian Tramer et al)
(ftramer_code)
contents: This paper proposes ensemble adversarial training, which works mainly through data augmentation with adversarial examples transferred from other pre-trained models. It appears to be the first paper to study this form of ensemble training.
- Adversarial Training and Robustness for Multiple Perturbations
(arXiv 18 Oct 2019)
(Florian Tramer et al)
(code)
content: This paper aims to understand the reasons underlying the robustness trade-off and to train models that are simultaneously robust to multiple perturbation types.
- n-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers
(arXiv 19 Dec 2019)
(Mahmood Sharif et al)
contents: This paper proposes n-ML, a new defense against adversarial examples, i.e., inputs crafted by perturbing benign inputs by small amounts to induce misclassification.
- Testing Robustness Against Unforeseen Adversaries
(arXiv 19 Aug 2019)
(Daniel Kang et al)
(code and perturbation data)
contents: This paper proposes a methodology for evaluating a defense against a diverse range of distortion types, together with a summary metric, UAR, that measures robustness against unforeseen distortion attacks.
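The paper's full UAR computation involves calibrating distortion sizes; the normalization idea can be sketched roughly as follows (the function name and interface are illustrative, not the paper's code):

```python
import numpy as np

def uar(model_acc, reference_acc):
    """Rough UAR-style summary: the evaluated model's mean adversarial
    accuracy across calibrated distortion sizes, normalized by that of a
    defense trained specifically against the same distortion. Values
    near 1.0 mean the unforeseen distortion is handled about as well as
    by a specifically trained defense."""
    model_acc = np.asarray(model_acc, dtype=float)
    reference_acc = np.asarray(reference_acc, dtype=float)
    return float(model_acc.mean() / reference_acc.mean())
```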
- Improving Adversarial Robustness via Promoting Ensemble Diversity
(arXiv 29 May 2019)
(Tianyu Pang et al)
(code)
content: This paper presents a method that explores the interaction among individual networks to improve the robustness of ensemble models, defining a new notion of ensemble diversity.
- Improving Adversarial Robustness of Ensembles with Diversity Training
(arXiv 28 Jan 2019)
(Sanjay Kariyappa)
content: This paper proposes Diversity Training, a novel method to train an ensemble of models with uncorrelated loss functions. The method focuses on modifying the loss function and tries to improve robustness by reducing the shared adversarial subspace.
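One way to measure how correlated the members' vulnerable directions are is the pairwise cosine similarity of their input-space loss gradients; a Diversity-Training-style regularizer can penalize this quantity (the helper below is a hypothetical sketch, not the paper's implementation):

```python
import numpy as np

def gradient_alignment(grads):
    """Mean pairwise cosine similarity between the ensemble members'
    input-space loss gradients. Adding this term to the training loss
    would push the members' attackable directions apart, shrinking the
    shared adversarial subspace."""
    sims = []
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            a, b = grads[i], grads[j]
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims))
```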
- loss of invariance: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
(arXiv 11 Feb 2020)
(Florian Tramer et al)
(code)
Content: Instead of studying sensitivity-based adversarial examples, this paper studies invariance-based adversarial examples, which introduce minimal semantic changes that modify an input's true label yet preserve the model's prediction.
- Sensitivity: On the sensitivity of adversarial robustness to input data distributions
(ICLR 2019)
(Gavin Weiguang Ding et al)
(related code)
Content: This paper shows that a relationship exists between adversarial robustness and the input data distribution.
- Robustness May Be at Odds with Accuracy
(ICLR 2019)
(Dimitris Tsipras et al)
(Mnist code)
(Cifar10 code)
contents: This paper shows an inherent tension between the goal of adversarial robustness and that of standard generalization. The trade-off between a model's standard accuracy and its robustness to adversarial perturbations provably exists even in a fairly simple and natural setting.
- Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
(arXiv 11 Feb 2020)
(Florian Tramer et al)
(code)
contents: This paper shows a trade-off between invariance-based and sensitivity-based adversarial examples.
- Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes?
(ICCV workshop 2019)
(Alfred Laugros et al)
contents: This paper studies the relationship between adversarial robustness and robustness to common perturbations, asking to what extent adversarial robustness is related to global robustness.
- ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies
(NeurIPS 2019)
(Bao Wang et al)
(code)
contents: Based on a unified viewpoint, this paper proposes a simple yet effective ResNets ensemble algorithm to boost the accuracy of robustly trained models on both clean and adversarial images.
- Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness
(arXiv 29 Sep 2019)
(Jingkang Wang et al)
(code)
contents: Min-max problems beyond the adversarial training (AT) setting have not been rigorously explored in research on adversarial attacks and defenses. The paper shows that this weakness can be solved under a unified and theoretically principled min-max optimization framework.
- Improving the affordability of robustness training for DNNs
(arXiv 11 Feb 2020)
(Sidharth Gupta et al)
contents: PGD-based adversarial training has become one of the most prominent defense methods, but its computational complexity is a longstanding problem and may be prohibitive for larger, more complex models. This paper shows that the initial phase of adversarial training can be replaced with natural training, and that this efficiency gain can be achieved without any loss in accuracy on natural and adversarial test examples.
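To see where the savings come from, here is a back-of-the-envelope cost model for a delayed schedule (the cost accounting below is illustrative, not the paper's exact measurement):

```python
def relative_cost(total_epochs, switch_epoch, pgd_steps=10):
    """Rough relative compute for delayed adversarial training: a natural
    epoch costs one forward/backward pass, while a PGD adversarial epoch
    costs pgd_steps attack passes plus the weight update. Returns the
    delayed schedule's cost divided by full adversarial training's cost."""
    full = total_epochs * (pgd_steps + 1)
    delayed = switch_epoch * 1 + (total_epochs - switch_epoch) * (pgd_steps + 1)
    return delayed / full
```

With a 10-step attack, switching to adversarial training halfway through cuts total compute to roughly 55% of full adversarial training.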
- Ensemble adversarial black-box attacks against deep learning systems
(Pattern Recognition)
(Jie Hang et al)
(code)
contents: In this paper, the authors ensemble multiple pre-trained substitute models to produce adversarial examples with stronger transferability, in the form of selective cascade ensembles and stacked parallel ensembles.
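The parallel-ensemble idea can be sketched with a single FGSM step computed from the averaged gradient of several substitutes. The toy logistic substitutes, function name, and hyperparameters below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def ensemble_fgsm(substitutes, x, y, eps=0.1):
    """One FGSM step using the averaged input gradient of several toy
    logistic substitute models, given as (w, b) pairs. A perturbation
    that raises the loss of many substitutes at once tends to transfer
    better to an unseen black-box target."""
    grad = np.zeros_like(x)
    for w, b in substitutes:
        p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
        grad += (p - y) * w          # d(logistic loss)/dx for this substitute
    grad /= len(substitutes)
    return x + eps * np.sign(grad)   # single signed step of size eps
```

A cascade variant would instead apply such steps sequentially, feeding each substitute's output perturbation into the next.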
Last updated: May 5, 2020
|  | VM1 | VM2 | VM3 | VM4 | VM5 | VM6 | VM7 | VM8 | VM9 | VM10 | VM11 | VM12 | VM13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| base | 0.0299 | 0.0166 | 0.0100 | 0.0199 | 0.0133 | 0.0365 | 0.0033 | 0.0100 | 0.0664 | 0.0299 | 0.0399 | 0.0532 | 0.0432 |
| DIM | 0.1163 | 0.0565 | 0.0266 | 0.0432 | 0.0365 | 0.0897 | 0.0133 | 0.0199 | 0.2126 | 0.1728 | 0.1163 | 0.1462 | 0.1196 |
| SIM | 0.0299 | 0.0133 | 0.0066 | 0.0199 | 0.0133 | 0.0233 | 0.0066 | 0.0133 | 0.0997 | 0.0465 | 0.0299 | 0.0465 | 0.0266 |
| BC | 0.0631 | 0.0399 | 0.0133 | 0.0332 | 0.0332 | 0.0631 | 0.0133 | 0.0233 | 0.1096 | 0.1329 | 0.0532 | 0.1096 | 0.0963 |
| TIM | 0.1096 | 0.0532 | 0.0299 | 0.0664 | 0.0731 | 0.0831 | 0.0266 | 0.0066 | 0.2060 | 0.1628 | 0.0897 | 0.1362 | 0.1063 |
| SIA | 0.0897 | 0.0797 | 0.0332 | 0.0565 | 0.0432 | 0.0698 | 0.0266 | 0.0133 | 0.1860 | 0.2126 | 0.1528 | 0.2093 | 0.1794 |
| Admix | 0.0797 | 0.0432 | 0.0166 | 0.0399 | 0.0498 | 0.0565 | 0.0199 | 0.0133 | 0.3189 | 0.3189 | 0.1827 | 0.3156 | 0.2392 |
| AIP | 0.0864 | 0.0565 | 0.0133 | 0.0399 | 0.0465 | 0.0598 | 0.0166 | 0.0199 | 0.0997 | 0.0831 | 0.0498 | 0.1096 | 0.0698 |
| TATM | 0.1628 | 0.1429 | 0.0631 | 0.1130 | 0.0864 | 0.1296 | 0.0598 | 0.0299 | 0.1262 | 0.1329 | 0.0731 | 0.0831 | 0.0864 |