NeurIPS '21: Unadversarial Examples: Designing Objects for Robust Vision

We study a class of computer vision settings wherein one can modify the design of the objects being recognized. We develop a framework that leverages this capability—and deep networks’ unusual sensitivity to input perturbations—to design “robust objects,” i.e., objects that are explicitly optimized to be confidently classified. Our framework yields improved performance on standard benchmarks, a simulated robotics environment, and physical-world experiments.
Figure 1: We demonstrate that optimizing objects (e.g., the pictured jet) for pre-trained neural networks can boost performance and robustness on computer vision tasks. Here, we show an example of classifying an unadversarial jet and a standard jet using a pretrained ImageNet model. The model correctly classifies the unadversarial jet even under bad weather conditions (e.g., foggy or dusty), whereas it fails to correctly classify the standard jet.

Pipeline: Unadversarial patches/stickers

Optimize a sticker (patch) that, when placed on the object, makes a pretrained model *more* confident in the object's true class. Concretely, this is gradient descent on the classification loss with respect to the patch pixels, with the patch pasted at randomized positions during optimization so it works wherever it lands on the object (sketch below).
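
A minimal PyTorch sketch of this loop, assuming a pretrained classifier `model`, a loader of images from the target class, and a simplified `apply_patch` helper; names and hyperparameters here are illustrative, not the authors' exact implementation (the paper additionally randomizes patch scale and rotation).

```python
import torch
import torch.nn.functional as F

def apply_patch(images, patch):
    """Paste the patch at a random location (same spot for the whole batch).
    Simplified: the paper also randomizes scale and rotation."""
    _, _, h, w = images.shape
    ph, pw = patch.shape[1:]
    y = torch.randint(0, h - ph + 1, (1,)).item()
    x = torch.randint(0, w - pw + 1, (1,)).item()
    patched = images.clone()
    patched[:, :, y:y + ph, x:x + pw] = patch  # broadcast patch over the batch
    return patched

def optimize_patch(model, loader, target_class, epochs=5, lr=0.1, size=64):
    """Gradient-descend a sticker so the pretrained model becomes *more*
    confident in the object's true class (the reverse of a patch attack)."""
    model.eval()
    patch = torch.rand(3, size, size, requires_grad=True)
    opt = torch.optim.SGD([patch], lr=lr)
    for _ in range(epochs):
        for images, _ in loader:
            labels = torch.full((images.size(0),), target_class, dtype=torch.long)
            loss = F.cross_entropy(model(apply_patch(images, patch)), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                patch.clamp_(0, 1)  # keep the sticker a valid RGB image
    return patch.detach()
```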

Pipeline: Unadversarial textures

Requires a set of 3D meshes (one per object class) and a set of background environments used to simulate scenes. The full-object texture is optimized over differentiable renderings of the mesh under randomized poses and backgrounds (see the sketch below).
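
A hedged sketch of that texture loop under these assumptions: `render(mesh, texture, background, pose)` stands in for a differentiable renderer (e.g., one built on PyTorch3D) returning a (3, H, W) image, and `sample_random_pose` is a hypothetical helper for randomizing the camera; the paper's actual rendering setup differs in its details.

```python
import torch
import torch.nn.functional as F

def sample_random_pose():
    """Random camera pose (hypothetical parametrization: azimuth/elevation/distance)."""
    return {"azimuth": 360.0 * torch.rand(1).item(),
            "elevation": 60.0 * torch.rand(1).item(),
            "distance": 2.0 + torch.rand(1).item()}

def optimize_texture(model, mesh, backgrounds, render, target_class,
                     steps=2000, lr=0.01, tex_res=256):
    """Optimize a full-object texture so renderings of the mesh are
    confidently classified as `target_class` from many viewpoints."""
    texture = torch.rand(3, tex_res, tex_res, requires_grad=True)
    opt = torch.optim.Adam([texture], lr=lr)
    for _ in range(steps):
        bg = backgrounds[torch.randint(len(backgrounds), (1,)).item()]
        image = render(mesh, texture.clamp(0, 1), bg, sample_random_pose())
        loss = F.cross_entropy(model(image.unsqueeze(0)),
                               torch.tensor([target_class]))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return texture.detach().clamp(0, 1)
```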

Eval:

Patches: Apply the unadversarial patches to corrupted datasets (ImageNet-C) and measure accuracy with the fixed pretrained model (a sketch of this evaluation loop follows the list).

Textures: Render the textured objects under adverse weather conditions (e.g., fog, dust) in the simulated environment.

Real-world: Print the optimized stickers, attach them to physical objects, and evaluate photographs of those objects with the pretrained model.
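
A minimal sketch of the patch evaluation mentioned above, assuming a loader of corrupted (ImageNet-C-style) images, a dictionary `patches` mapping each class index to its optimized sticker, and the same `apply_patch` helper used during optimization.

```python
import torch

@torch.no_grad()
def eval_with_patches(model, corrupted_loader, patches, apply_patch):
    """Accuracy of a fixed pretrained model on corrupted images after pasting
    each image's class-specific unadversarial sticker onto it."""
    model.eval()
    correct, total = 0, 0
    for images, labels in corrupted_loader:
        patched = torch.stack([
            apply_patch(img.unsqueeze(0), patches[int(y)]).squeeze(0)
            for img, y in zip(images, labels)
        ])
        preds = model(patched).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```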