Does Iterative Adversarial Training Repel White-box Adversarial Attacks?

Eileen Pangu · Published in Level Up Coding · May 31, 2021


A quantitative and qualitative exploration of how well iterative adversarial training guards against white-box generation of adversarial examples.

Background

Machine learning models are prone to adversarial examples: inputs specifically crafted to deceive the model into producing erroneous output. Adversarial training is a technique to defend against such attacks by deliberately generating adversarial examples and adding them to the training dataset, in the hope of improving the robustness of the model. A natural…
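To make the idea concrete, here is a minimal sketch of one adversarial training step based on the Fast Gradient Sign Method (FGSM), a common white-box way to craft adversarial examples. The model, optimizer, and epsilon value are illustrative assumptions, not necessarily the setup used in this article's experiments.

```python
# A minimal sketch of FGSM-based adversarial training (assumed setup,
# not the article's exact configuration).
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Craft an adversarial example by perturbing x along the sign of the
    loss gradient (white-box: requires access to the model's gradients)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """Augment the clean batch with adversarial examples and train on both."""
    x_adv = fgsm_example(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating this step over many epochs is what "iterative" adversarial training refers to here: each round of training sees adversarial examples generated against the current, already partially hardened model.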

