F3arwin

f3arwin significantly outperforms prior genetic attacks due to its adaptive mutation and SBX crossover, which preserve high-fitness perturbation structures. Compared to Square Attack, f3arwin requires 11% fewer queries for a similar attack success rate (ASR). On VGG-16 (unseen during attack generation), f3arwin perturbations crafted on ResNet-50 achieved 68.3% ASR, versus 51.2% for Square Attack and 59.7% for the standard genetic attack. This suggests that evolutionary perturbations capture more model-agnostic features.
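
The paper describes these operators only at this level, so the following is a minimal NumPy sketch of how SBX (simulated binary crossover) and a diversity-driven adaptive mutation rate could be combined in a black-box perturbation search. The function names, the distribution index `eta`, the rate bounds, and the L-inf budget `eps` are illustrative assumptions, not f3arwin's actual implementation.

```python
import numpy as np

def sbx_crossover(p1, p2, eta=15.0, rng=None):
    """Simulated binary crossover (SBX) on two flat perturbation vectors.

    The distribution index eta controls how tightly children cluster
    around their parents; a large eta keeps children close to the parents,
    which is how SBX preserves high-fitness perturbation structure.
    """
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(p1.shape)
    beta = np.where(
        u <= 0.5,
        (2.0 * u) ** (1.0 / (eta + 1.0)),
        (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)),
    )
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def adaptive_rate(population, base=0.05, cap=0.4):
    """One plausible 'adaptive mutation rate' (the text does not specify
    the signal f3arwin uses): boost mutation as the population's spread
    collapses, counteracting premature convergence."""
    diversity = population.std(axis=0).mean()
    return min(cap, base * (1.0 + 1.0 / (1.0 + diversity)))

def mutate(x, rate, scale=0.02, eps=8 / 255, rng=None):
    """Per-gene Gaussian mutation, clipped back into the L-inf ball."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(x.shape) < rate
    return np.clip(x + mask * rng.normal(0.0, scale, x.shape), -eps, eps)
```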

5.3 Defensive Robustness

| Defense Method  | Clean Acc. | Robust Acc. (PGD) | Robust Acc. (f3arwin attack) |
|-----------------|------------|-------------------|------------------------------|
| Standard        | 92.1%      | 0.3%              | 0.1%                         |
| PGD-AT          | 88.4%      | 51.2%             | 43.5%                        |
| TRADES          | 87.9%      | 53.1%             | 46.2%                        |
| f3arwin defense | 89.2%      | 54.8%             | 58.9%                        |

The f3arwin defense yields markedly higher robust accuracy against its own evolutionary attack than PGD-AT does (58.9% vs. 43.5%), and also generalizes better to PGD (54.8% vs. 51.2%). This demonstrates that co-evolving attacks and defenses leads to more balanced robustness.
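
The mechanics of the f3arwin defense are not spelled out in this excerpt; one natural reading is adversarial training whose inner maximization is the evolutionary attack rather than PGD. The PyTorch sketch below is a minimal stand-in under that assumption: `evolve_attack`, its hyperparameters, and the elitist selection scheme are all illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def evolve_attack(model, x, y, pop_size=20, generations=10,
                  eps=8 / 255, sigma=0.01):
    """Evolve one additive L-inf perturbation for the batch (x, y) by
    maximizing cross-entropy; a deliberately small stand-in attack."""
    model.eval()
    pop = (torch.rand(pop_size, *x.shape[1:], device=x.device) * 2 - 1) * eps

    def batch_loss(delta):
        return F.cross_entropy(model((x + delta).clamp(0, 1)), y)

    for _ in range(generations):
        fitness = torch.stack([batch_loss(d) for d in pop])
        elite = pop[fitness.topk(pop_size // 2).indices]    # keep best half
        children = elite + sigma * torch.randn_like(elite)  # Gaussian mutation
        pop = torch.cat([elite, children]).clamp(-eps, eps)

    fitness = torch.stack([batch_loss(d) for d in pop])
    return (x + pop[fitness.argmax()]).clamp(0, 1)

def defense_step(model, optimizer, x, y):
    """One training step on evolutionarily generated adversarial examples,
    i.e. adversarial training with the attack above as the inner loop."""
    x_adv = evolve_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a real training run, `evolve_attack` would dominate the cost of each step; batching all `pop_size` candidate perturbations into a single forward pass would amortize it.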

5.4 Query Efficiency over Generations

f3arwin converges to successful adversarial examples in a median of 38 generations (≈ 2280 queries), compared to 68 generations for the standard genetic attack. The adaptive mutation rate prevents premature convergence and reduces queries wasted on low-fitness regions.
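
One accounting detail worth making explicit: if each individual costs one model query per generation, total queries follow directly from the population size. The reported median (38 generations ≈ 2280 queries) is consistent with a population of 60, though the actual population size is not stated in this excerpt; `pop_size` below is that inferred value, not a figure from the paper.

```python
def queries_used(generations, pop_size=60):
    """Total model queries, assuming one fitness evaluation per
    individual per generation (pop_size=60 is inferred, not stated)."""
    return generations * pop_size

assert queries_used(38) == 2280   # matches the reported f3arwin median
# Under the same assumption, the standard genetic attack's 68-generation
# median would cost queries_used(68) == 4080 queries, roughly 79% more.
```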

6. Discussion

Why does evolution help robustness? Standard adversarial training uses a fixed attack method, creating a "gradient-aligned" robust region. Evolutionary attacks explore non-gradient directions, revealing vulnerabilities that gradient-based methods miss. The f3arwin defense then closes these gaps, producing a model that is robust to a wider class of perturbations.
