Tech Talk Tuesday: Building Secure and Reliable Deep Learning Systems from a Systems Security Perspective
From Alan Fern
How can we build the secure and reliable deep learning systems of tomorrow, e.g., autonomous cars or AI-assisted robotic surgery?
We cannot answer this question without understanding the worst-case behaviors of deep neural networks (DNNs), the core component of those systems. Recent work has studied such worst-case behaviors, e.g., mispredictions caused by adversarial examples or models altered by data poisoning. However, most prior work narrowly treats DNNs as an isolated mathematical abstraction and overlooks the holistic picture, e.g., leaving out threats posed by hardware-level attacks.
In this talk, I will discuss my work studying the computational properties of DNNs from a systems security perspective, which has exposed critical security threats and steered industrial practice.
First, I will present my work exposing a false sense of security: DNNs are not resilient to parameter perturbations. An adversary can inflict an accuracy drop of up to 100% with a single bit-flip in a model's memory representation. Second, I will show how brittle the computational savings of efficient deep learning techniques are in adversarial settings. By adding human-imperceptible input perturbations, an attacker can completely offset a multi-exit network's computational savings on an input. Third, I will show how privacy-protection mechanisms offered without this holistic picture can put millions of users under serious privacy threat. These mechanisms leave no room for an arms race and eventually give the adversary the upper hand. Finally, I will conclude by discussing how these results have opened up new research directions and steered industrial practices.
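To get intuition for why a single bit-flip can be so destructive, consider the IEEE-754 encoding of a model parameter. The sketch below is a minimal illustration (my own, not code from the talk; the weight value 0.125 is made up) of flipping one exponent bit of a float32 weight with NumPy:

```python
import numpy as np

# A plausible trained weight value (illustrative; a real attack would target
# a victim model's weights in memory, e.g., via a Rowhammer-style fault).
w = np.array([0.125], dtype=np.float32)

# Reinterpret the float32's bytes as a 32-bit unsigned integer.
bits = w.view(np.uint32)

# Flip the most significant exponent bit (bit 30 in the IEEE-754 layout).
bits[0] ^= np.uint32(1 << 30)

print(w[0])  # ~4.25e+37: one flipped bit made the weight astronomically large
```

Because a DNN's activations are weighted sums over many such parameters, a single weight blown up to this scale can dominate the computation and collapse the model's accuracy.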
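The multi-exit result is easiest to see from the early-exit control flow itself. The PyTorch sketch below is a toy model of my own for illustration (the `MultiExitNet` class, layer sizes, and confidence threshold are assumptions, not the architectures studied in the talk): inference stops at the first internal classifier whose confidence clears a threshold, so a perturbation that suppresses every exit's confidence forces the full network to run.

```python
import torch
import torch.nn as nn

class MultiExitNet(nn.Module):
    """Toy multi-exit network: each block feeds an internal classifier,
    and inference stops at the first sufficiently confident exit."""

    def __init__(self, dim=32, n_classes=10, n_blocks=4, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_blocks))
        self.exits = nn.ModuleList(
            nn.Linear(dim, n_classes) for _ in range(n_blocks))
        self.threshold = threshold

    def forward(self, x):
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            probs = exit_head(x).softmax(dim=-1)
            if probs.max() >= self.threshold:   # confident: stop early
                return probs, i + 1             # blocks actually computed
        return probs, len(self.blocks)          # no exit fired: full cost

net = MultiExitNet()
x = torch.randn(1, 32)
with torch.no_grad():
    _, cost = net(x)
print("blocks computed:", cost)
# A slowdown adversary crafts a small delta (e.g., by maximizing the
# entropy of every exit's output) so that net(x + delta) never exits
# early, erasing the computational savings on that input.
```

A trained network of this shape exits early on easy inputs; the attack works precisely because the perturbation keeps every internal classifier uncertain.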
Sanghyun Hong is an Assistant Professor in Computer Science at Oregon State University. His research lies at the intersection of computer security, privacy, and machine learning. His current focus is studying the computational properties of DNNs from a systems security perspective. He also works on identifying distinctive internal behaviors of DNNs, such as network confusion or gradient-level disparity, whose quantification has led to defenses against backdooring and data poisoning.
You can find more about Sanghyun at https://sanghyun-hong.com.