
TAOB: Targeted Attack via Adversarial Patch Outside Bounding Box

This repository contains the official implementation of the paper "Targeted Attack via Adversarial Patch Outside Bounding Box".

Abstract

Recently, non-targeted physical adversarial attacks against object detectors in the field of autonomous driving have been widely studied. Compared with non-targeted attacks, targeted attacks offer more precise control over the detector's output and pose a greater threat, yet they have rarely been studied. Meanwhile, adversarial patches located outside the bounding box offer greater stealthiness and are easier to deploy. However, achieving targeted physical attacks outside the bounding box is a non-trivial problem. To address it, this paper proposes a novel targeted attack framework that leverages a multi-task collaborative parallel optimization strategy. Specifically, the framework decomposes the attack into two subtasks optimized in parallel:

  1. Eliminating perception of the object: this subtask helps the object evade detection. We propose a loss function that combines global semantic representations with local intermediate-layer features.
  2. Reconstructing an erroneous perception of the object: this subtask forces the model to output a preset bounding box and class label. We propose a loss function that establishes an incorrect spatial association between the adversarial patch and the original object.

The proposed method is tested against three state-of-the-art CNN-based object detectors: YOLOv8, YOLOv9, and YOLOv11. Results show that our adversarial patch achieves up to a 78.1% targeted attack success rate, outperforming the compared methods by as much as 77.7%. More importantly, we verified the effectiveness of our attack in the physical world: the patch remains effective across different devices, viewing angles, and distances.

[Pipeline overview figure]

Environment Setup

Prerequisites

  • Python 3.10+
  • CUDA 11.0+ (for GPU support)
  • PyTorch 1.13+
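Before installing, it can help to confirm the interpreter meets the Python 3.10+ prerequisite. The helper below is a small stdlib-only sketch (the function name and tuple-based interface are our own, not part of TAOB):

```python
import sys

def meets_python_requirement(version_info=sys.version_info, minimum=(3, 10)):
    """Return True if the interpreter satisfies the Python 3.10+ prerequisite."""
    return tuple(version_info[:2]) >= minimum

if __name__ == "__main__":
    print("Python OK:", meets_python_requirement())
```

CUDA and PyTorch versions can be checked the same way once `torch` is installed, e.g. via `torch.__version__` and `torch.cuda.is_available()`.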

Installation

  1. Clone the repository:
git clone https://github.com/mayfly227/TAOB.git
cd TAOB
  2. Create a conda environment:
conda create -n taob python=3.10.16
conda activate taob
  3. Install required dependencies:
# Install PyTorch (adjust CUDA version as needed)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

# Install Ultralytics YOLO
cd thirdpart/ultralytics
pip install -e .

# Install other dependencies
pip install pillow tqdm matplotlib opencv-python

Required Dependencies

The main dependencies are:

  • torch >= 1.13.0
  • torchvision >= 0.14.0
  • ultralytics >= 8.0.0
  • numpy
  • pillow
  • tqdm
  • matplotlib
  • opencv-python
  • pathlib

Data Preparation

1. Download Pre-trained Models

All models will be automatically downloaded when needed.

2. Dataset Preparation

The dataset can be downloaded here.

The data.yaml file should contain:

path: /path/to/your/dataset
train: train.txt
val: val.txt

nc: 80  # number of classes
names: ['person', 'bicycle', 'car', ...]  # class names
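As a quick sanity check that a data.yaml has all the fields shown above, a minimal stdlib-only helper like the following can be used. This is a sketch of our own (not part of TAOB): it only understands the flat `key: value` layout above, not full YAML, and returns whichever required keys are missing:

```python
def validate_data_yaml(text):
    """Return the sorted list of required data.yaml keys missing from `text`.

    Handles only the flat `key: value` layout; inline `#` comments are ignored.
    """
    required = {"path", "train", "val", "nc", "names"}
    present = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if ":" in line:
            present.add(line.split(":", 1)[0].strip())
    return sorted(required - present)
```

An empty return value means all required keys were found.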

To generate train.txt and val.txt, run python split.py. The script can be found in the downloaded zip file.

Usage

Basic Usage

1. Targeted Attack

Run the main targeted attack script:

python TAOB.py \
    --data_path /path/to/your/dataset.yaml \
    --model_path yolov8s.pt \
    --target_label 0 \
    --attack_target 11 \
    --epoch 100 \
    --devices 0 \
    --save_path save \
    --save True

2. Key Parameters

  • --data_path: Path to the dataset YAML file
  • --model_path: List of model weights to attack (e.g., ['yolov8s.pt', 'yolov9s.pt'])
  • --target_label: Target class labels for the attack (e.g., [0] for person class)
  • --attack_target: Original class to attack (e.g., 11 for stop sign)
  • --epoch: Number of training epochs (e.g., 100)
  • --devices: GPU device ID
  • --place: Patch placement position (bottom, top, left, right)
  • --patchAspectRatio: Patch aspect ratio (2 for 2:1, 1 for 1:1, -2 for 1:2)
  • --beta: Weight for target loss (default: 0.05)
  • --tvloss: Weight for TV regularization loss (default: 3)
  • --save_path: Directory to save results
  • --save: Whether to save patch images and results

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

Contact

For questions and support, please open an issue.


Note: This implementation is for research purposes only. Please use responsibly and in accordance with applicable laws and regulations.
