Enhance DNN Adversarial Robustness and Efficiency via Injecting Noise to Non-Essential Neurons
2024-02-06 19:09:32
Zhenyu Liu, Garrett Gagnon, Swagath Venkataramani, Liu Liu
Abstract
Deep Neural Networks (DNNs) have revolutionized a wide range of industries, from healthcare and finance to automotive, by offering unparalleled capabilities in data analysis and decision-making. Despite their transformative impact, DNNs face two critical challenges: vulnerability to adversarial attacks and the growing computational cost of larger, more complex models. In this paper, we introduce an effective method designed to simultaneously enhance adversarial robustness and execution efficiency. Unlike prior studies that enhance robustness by injecting noise uniformly, we introduce a non-uniform noise injection algorithm, strategically applied at each DNN layer to disrupt the adversarial perturbations introduced by attacks. By employing approximation techniques, our approach identifies and protects essential neurons while strategically introducing noise into non-essential neurons. Our experimental results demonstrate that our method successfully enhances both robustness and efficiency across several attack scenarios, model architectures, and datasets.
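The core idea, selecting essential neurons via an approximate importance measure and perturbing only the rest, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes mean absolute activation as the importance proxy and Gaussian noise, and the function name and parameters are hypothetical.

```python
import numpy as np

def noisy_forward(activations, keep_ratio=0.5, noise_std=0.1, rng=None):
    """Inject Gaussian noise into non-essential neurons of one layer.

    Hypothetical sketch: neurons are ranked by mean |activation| over
    the batch (an assumed importance proxy); the top `keep_ratio`
    fraction is treated as essential and left untouched, while the
    rest receive additive Gaussian noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Approximate per-neuron importance: mean absolute activation.
    importance = np.abs(activations).mean(axis=0)
    k = max(1, int(keep_ratio * importance.size))
    # Indices of the top-k "essential" neurons, kept noise-free.
    essential = np.argpartition(importance, -k)[-k:]
    noise_mask = np.ones(importance.size, dtype=bool)
    noise_mask[essential] = False  # True marks non-essential neurons
    noisy = activations.copy()
    noisy[:, noise_mask] += rng.normal(
        0.0, noise_std, size=(activations.shape[0], noise_mask.sum())
    )
    return noisy
```

In a full model, such a step would be applied per layer during inference, so that the injected noise disrupts an attacker's carefully crafted perturbations while the essential neurons preserve clean-input accuracy.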
https://arxiv.org/abs/2402.04325
https://arxiv.org/pdf/2402.04325.pdf