TY - GEN
T1 - Achieving Adversarial Robustness in Deep Learning-Based Overhead Imaging
AU - Braun, Dagen
AU - Reisman, Matthew
AU - Dewell, Larry
AU - Banburski-Fahey, Andrzej
AU - Deza, Arturo
AU - Poggio, Tomaso
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - The Intelligence, Surveillance, and Reconnaissance (ISR) community relies heavily on the use of overhead imagery for object detection and classification. In these applications, machine learning frameworks have been increasingly used to assist analysts in distinguishing high value targets from mundane objects quickly and effectively. In recent years, the robustness of these frameworks has come under question due to the possibility of disruption by image-based adversarial attacks, and as such, it is necessary to harden existing models against these threats. In this work, we survey a collection of three techniques to address these concerns at various stages of the image processing pipeline: external validation using Activity Based Intelligence, internal validation using Latent Space Analysis, and adversarial prevention using biologically inspired techniques. We found that biologically inspired techniques were most effective and generalizable for mitigating adversarial attacks on overhead imagery in machine learning frameworks, with improvements of as much as 34.6% over traditional augmentations, and 80.4% over a model without any augmentation-based defense.
AB - The Intelligence, Surveillance, and Reconnaissance (ISR) community relies heavily on the use of overhead imagery for object detection and classification. In these applications, machine learning frameworks have been increasingly used to assist analysts in distinguishing high value targets from mundane objects quickly and effectively. In recent years, the robustness of these frameworks has come under question due to the possibility of disruption by image-based adversarial attacks, and as such, it is necessary to harden existing models against these threats. In this work, we survey a collection of three techniques to address these concerns at various stages of the image processing pipeline: external validation using Activity Based Intelligence, internal validation using Latent Space Analysis, and adversarial prevention using biologically inspired techniques. We found that biologically inspired techniques were most effective and generalizable for mitigating adversarial attacks on overhead imagery in machine learning frameworks, with improvements of as much as 34.6% over traditional augmentations, and 80.4% over a model without any augmentation-based defense.
KW - adversarial attacks
KW - automatic target recognition
KW - biological learning
KW - deep learning
KW - satellite imaging
UR - http://www.scopus.com/inward/record.url?scp=85153726166&partnerID=8YFLogxK
U2 - 10.1109/AIPR57179.2022.10092213
DO - 10.1109/AIPR57179.2022.10092213
M3 - Conference contribution
AN - SCOPUS:85153726166
T3 - Proceedings - Applied Imagery Pattern Recognition Workshop
BT - 2022 IEEE Applied Imagery Pattern Recognition Workshop, AIPR 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE Applied Imagery Pattern Recognition Workshop, AIPR 2022
Y2 - 11 October 2022 through 13 October 2022
ER -