
NEWS | May 9, 2022

GARD: Guaranteeing AI Robustness Against Deception

Project Lead: Dr. Bruce Draper

Sponsoring Organization: DARPA

Website: https://www.darpa.mil/program/guaranteeing-ai-robustness-against-deception

Project Synopsis: GARD seeks to establish theoretical foundations for machine learning (ML) systems in order to identify system vulnerabilities, characterize properties that enhance robustness, and encourage the creation of effective defenses. Current ML defenses tend to be highly specific, effective only against particular attacks; GARD seeks instead to develop defenses capable of defeating broad categories of attacks. Additionally, current paradigms for evaluating AI robustness often focus on simplistic measures that may not be relevant to security. To verify security relevance and wide applicability, defenses generated under GARD will be measured in a novel testbed employing scenario-based evaluations. GARD researchers from Two Six Technologies, IBM, MITRE, University of Chicago, and Google Research have generated the following virtual testbed, toolbox, benchmarking dataset, and training materials, which are available to the research community:

  • The Armory virtual platform, available on GitHub, serves as a “testbed” for researchers in need of repeatable, scalable, and robust evaluations of adversarial defenses.
  • Adversarial Robustness Toolbox (ART) provides tools for developers and researchers to defend and evaluate their ML models and applications against adversarial threats such as evasion, poisoning, extraction, and inference (a minimal usage sketch follows this list).
  • The Adversarial Patches Rearranged In COnText (APRICOT) dataset enables reproducible research on the real-world effectiveness of physical adversarial patch attacks on object detection systems.
  • The Google Research Self-Study repository contains “test dummies,” each representing a common idea or approach to building defenses.
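
For a sense of how ART fits into an evaluation workflow, the sketch below crafts a simple evasion attack against a toy classifier and compares clean versus adversarial accuracy. The model architecture, random data, and epsilon value are illustrative assumptions, not part of the GARD release; only the ART calls (PyTorchClassifier, FastGradientMethod) reflect the library's actual API.

    # Minimal sketch: evaluating a model against an evasion attack with ART.
    # The toy model and random data below are placeholders, not GARD artifacts.
    import numpy as np
    import torch
    import torch.nn as nn

    from art.attacks.evasion import FastGradientMethod
    from art.estimators.classification import PyTorchClassifier

    # Toy convolutional classifier standing in for a real model under test.
    model = nn.Sequential(
        nn.Conv2d(1, 4, kernel_size=5), nn.ReLU(), nn.Flatten(),
        nn.Linear(4 * 24 * 24, 10),
    )

    # Wrap the model in an ART estimator so attacks can query it uniformly.
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
        input_shape=(1, 28, 28),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

    # Placeholder data; in practice this would be a held-out test set.
    x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)
    y_test = np.random.randint(0, 10, size=16)

    # Generate adversarial examples with the Fast Gradient Method.
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_adv = attack.generate(x=x_test)

    # Accuracy on clean inputs versus adversarially perturbed inputs.
    clean_acc = (classifier.predict(x_test).argmax(axis=1) == y_test).mean()
    adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
    print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")

The same wrapped estimator can be handed to ART's defenses and other attack classes, which is what makes the toolbox useful for the kind of repeatable, comparative evaluations Armory automates at scale.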

GARD’s Holistic Evaluation of Adversarial Defenses repository is available at https://www.gardproject.org/. Interested researchers are encouraged to take advantage of these resources and to check back often for updates.