Lightweight inferencing for accurate and fast vessel-in-water detection and classification, developed during Space Challenges 2025 in Sofia, Bulgaria. This repository contains the code for ship detection from Sentinel-1 SAR images.
The code is inspired by, and partly adapted from, LEAD-YOLO, which is licensed under the GNU General Public License v3.0.
Ensure your system meets the following requirements:
- Python >= 3.8
- PyTorch >= 1.10
- CUDA >= 11.3 (for GPU support)
```bash
git clone https://github.com/space-challenges-AI2/sentinel1_sar_ship_detection.git
cd sentinel1_sar_ship_detection
```

For venv:

```bash
python3 -m venv .venv
```

For a conda env:

```bash
conda create -n sc25 python=3.11
```

Install the dependencies:

```bash
pip install -r requirements.txt
```

Basic training:

```bash
python train.py --cfg models/yolov5n.yaml --data data/HRSID_land.yaml --hyp data/hyp/hyp.scratch-low.yaml --weights '' --epochs 350
```

Training with a custom experiment name:

```bash
python train.py --cfg models/yolov5n.yaml --data data/HRSID_land.yaml --hyp data/hyp/hyp.scratch-low.yaml --weights '' --epochs 350 --name my_experiment
```

Training with a custom project and name:

```bash
python train.py --cfg models/yolov5n.yaml --data data/HRSID_land.yaml --hyp data/hyp/hyp.scratch-low.yaml --weights '' --epochs 350 --project my_project --name my_experiment
```

Note: If you don't specify a `--name`, experiments are named `experiment`, `experiment2`, `experiment3`, etc. If you specify a custom name that already exists, a number is appended automatically (e.g., `my_experiment2`, `my_experiment3`).
The corresponding training files for the experiment, including the weights, are saved inside `runs/train/experiment{i}`.
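The auto-incrementing naming scheme described above can be sketched as follows (a simplified, hypothetical helper for illustration; `next_run_dir` is not part of the codebase):

```python
def next_run_dir(project, name, existing):
    """Pick the next free run directory under `project`, appending
    2, 3, ... to `name` until an unused one is found."""
    if name not in existing:
        return f"{project}/{name}"
    i = 2
    while f"{name}{i}" in existing:
        i += 1
    return f"{project}/{name}{i}"
```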
```bash
python val.py --weights runs/train/experiment/weights/best.pt --data data/HRSID_land.yaml --img 640 --batch-size 32
```

Note: Replace `experiment` with your actual experiment name if you used a custom name during training.
The corresponding validation files, including the graphs and metrics for the experiment, are saved inside `runs/train/experiment{i}`.
Run inference on a single image or a batch of images:

```bash
python detect.py --source ./source --weights runs/train/lead_yolo4/weights/best.pt
```

The inference outputs (e.g., annotated images or videos) are saved in `runs/detect/`.
WaveTrack.AI provides Docker support for containerized deployment, including specialized configurations for satellite and space applications. The project includes two main Dockerfile variants:
- **Standard Docker image** (`utils/docker/Dockerfile`)
  - Based on PyTorch with CUDA support
  - Optimized for GPU-accelerated inference
  - Suitable for ground-based processing and development
- **ARM64 Docker image** (`utils/docker/Dockerfile-arm64`)
  - Based on Ubuntu ARM64
  - Compatible with ARM architectures (Jetson Nano, Raspberry Pi, Apple M1)
  - Recommended for satellite deployment due to power efficiency and space constraints
Build the standard image:

```bash
# Build for x86_64 with GPU support
docker build -f utils/docker/Dockerfile -t wavetrack-ai:latest .
```

Build the ARM64 image (recommended for satellites):

```bash
# Build for ARM64 architecture
docker build --platform linux/arm64 -f utils/docker/Dockerfile-arm64 -t wavetrack-ai:arm64 .
```

Run the container:

```bash
# Run with GPU support (if available)
docker run -it --gpus all -v $(pwd)/data:/usr/src/app/data wavetrack-ai:latest

# Run ARM64 version (CPU only)
docker run -it -v $(pwd)/data:/usr/src/app/data wavetrack-ai:arm64
```

WaveTrack.AI implements a comprehensive SAR (Synthetic Aperture Radar) ship detection pipeline that processes satellite imagery through multiple stages to identify and classify vessels in water bodies.
**Ingest Service**
- Purpose: Monitors and manages incoming SAR image tiles
- Functionality:
  - Watches for new SAR data files
  - Queues images for processing
  - Manages the work item lifecycle
- Output: Queued work items for pipeline processing
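The watch-and-queue behavior above can be sketched as a simple directory scan that queues files not seen before (an illustrative simplification; the actual service lives in `utils/pipeline/ingest.py`):

```python
import time
from pathlib import Path

def scan_for_new_tiles(ingest_dir, seen, queue):
    """Append a work item to `queue` for each tile in `ingest_dir`
    whose filename has not been seen before."""
    for tile in sorted(Path(ingest_dir).glob("*.tif")):
        if tile.name not in seen:
            seen.add(tile.name)
            queue.append({"path": str(tile), "status": "queued", "ts": time.time()})
    return queue
```

A real service would run this in a loop (or use filesystem events) and hand the queue to the downstream stages.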
**Detection Service**
- Purpose: Performs AI-powered ship detection on SAR images
- Features:
  - YOLOv5-based object detection
  - Built-in denoising options (FABF, None)
  - GPU/CPU acceleration support
  - Configurable confidence thresholds
- Output: Detection results with bounding boxes and confidence scores
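Confidence thresholding amounts to a simple filter over the raw detections. A minimal sketch, assuming YOLOv5's `(x1, y1, x2, y2, confidence, class_id)` output layout:

```python
def filter_detections(detections, conf_threshold=0.25):
    """Keep only detections at or above the confidence threshold.

    Each detection is assumed to be a (x1, y1, x2, y2, conf, cls)
    tuple, as in YOLOv5's post-NMS output.
    """
    return [d for d in detections if d[4] >= conf_threshold]
```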
**Georeferencing Service**
- Purpose: Converts pixel coordinates to geographic coordinates
- Functionality:
  - Transforms detection coordinates to latitude/longitude
  - Handles SAR image geolocation metadata
  - Provides geographic context for detections
- Output: Georeferenced detection coordinates
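The pixel-to-geographic conversion can be illustrated with a GDAL-style affine geotransform (an assumption for illustration; the real service reads geolocation metadata from the SAR product, which may require a more involved model):

```python
def pixel_to_geo(col, row, geotransform):
    """Map pixel (col, row) to geographic coordinates using a
    GDAL-style affine geotransform:
    (origin_x, pixel_width, row_rotation, origin_y, col_rotation, pixel_height).
    """
    ox, pw, rrot, oy, crot, ph = geotransform
    lon = ox + col * pw + row * rrot
    lat = oy + col * crot + row * ph
    return lon, lat
```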
**Post-Processing Service**
- Purpose: Enhances and validates detection results
- Features:
  - Generates thumbnail images
  - Applies post-processing filters
  - Performs quality assessment and validation
- Output: Enhanced detection results with thumbnails
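One common validation step is clipping boxes to image bounds and rejecting degenerate ones; a hypothetical sketch (not the project's actual filter set):

```python
def clip_and_validate(box, width, height, min_area=4):
    """Clip a (x1, y1, x2, y2) box to the image bounds; return None
    for boxes whose clipped area falls below min_area pixels."""
    x1, y1, x2, y2 = box
    x1, y1 = max(0, x1), max(0, y1)
    x2, y2 = min(width, x2), min(height, y2)
    if (x2 - x1) * (y2 - y1) < min_area:
        return None
    return (x1, y1, x2, y2)
```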
**Packaging Service**
- Purpose: Creates downlink packets for satellite transmission
- Functionality:
  - Bundles detection results
  - Optimizes data for transmission
  - Creates standardized output formats
- Output: Transmission-ready data packets
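Bundling detections into a compact, transmission-ready packet might look like the following (field names, rounding choices, and the image id are illustrative, not the project's actual packet format):

```python
import json
import time

def build_downlink_packet(detections, image_id):
    """Bundle georeferenced detections into a compact JSON string,
    trimming coordinate precision to keep the payload small."""
    packet = {
        "image_id": image_id,
        "created": int(time.time()),
        "count": len(detections),
        "detections": [
            {"lat": round(d["lat"], 5), "lon": round(d["lon"], 5),
             "conf": round(d["conf"], 3)}
            for d in detections
        ],
    }
    return json.dumps(packet, separators=(",", ":"))
```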
**Health Monitoring Service**
- Purpose: Monitors pipeline health and performance
- Features:
  - Real-time status monitoring
  - Performance metrics collection
  - Error logging and alerting
- Output: Health status and performance reports
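A minimal monitor tracking the counters above might look like this (illustrative only; the real implementation lives in `utils/pipeline/health.py`):

```python
import time

class HealthMonitor:
    """Track simple pipeline health counters and derive an error rate."""

    def __init__(self):
        self.start = time.time()
        self.processed = 0
        self.errors = 0

    def record(self, ok=True):
        """Count one processed item; flag it as an error if ok is False."""
        self.processed += 1
        if not ok:
            self.errors += 1

    def report(self):
        """Return a snapshot suitable for status reporting."""
        return {
            "uptime_s": round(time.time() - self.start, 1),
            "processed": self.processed,
            "errors": self.errors,
            "error_rate": self.errors / self.processed if self.processed else 0.0,
        }
```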
```
SAR Image Input → Ingest → Detection → Georeferencing → Post-Processing → Packaging → Output
                    ↓          ↓              ↓                ↓              ↓
              test_ingest/ test_work/ test_detections/ test_georeferenced/ test_postprocessed/ test_outbox/
```
The pipeline creates and manages the following test directories:
- `test_ingest/` - Input SAR images
- `test_work/` - Intermediate processing files
- `test_metadata/` - Image and processing metadata
- `test_detections/` - YOLO detection results
- `test_thumbs/` - Generated thumbnail images
- `test_outbox/` - Final output packets
- `test_logs/` - Processing logs
- `test_georeferenced/` - Georeferenced coordinates
- `test_postprocessed/` - Enhanced detection results
- `test_denoising/` - Denoising artifacts
- `test_results/` - Additional processing results
Build the laptop image:

```bash
sudo docker build -f utils/docker/Dockerfile-laptop -t sar-ship-detection:latest .
```

Run the demo:

```bash
sudo docker run -it --rm --gpus all --ipc=host \
  -v $(pwd):/workspace \
  --entrypoint python \
  sar-ship-detection:latest demo.py
```

- The container starts with your project mounted to `/workspace`
- The pipeline processes images from the `source/` directory
- All outputs are saved to your local test directories
- When the container stops, all data remains in your local filesystem

Output locations:

- `test_detections/pipeline/` - YOLO detection results
- `test_thumbs/` - Generated thumbnails
- `test_metadata/` - Processing information
- `test_outbox/` - Final output packets
- `test_logs/` - Processing logs
The main pipeline orchestration is handled by `PipelineCoordinator` in `utils/pipeline/coordinator.py`. This class:
- Manages the entire pipeline workflow
- Coordinates between different services
- Handles error recovery and monitoring
- Provides real-time status updates
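The coordinator's stage-chaining idea can be sketched as follows (a simplified model of the workflow, not the actual `PipelineCoordinator` API):

```python
def run_pipeline(work_items, stages):
    """Push each work item through the ordered stages. A failing item
    is recorded and skipped; the remaining items keep flowing."""
    results, failed = [], []
    for item in work_items:
        try:
            for stage in stages:
                item = stage(item)
            results.append(item)
        except Exception as exc:
            failed.append((item, str(exc)))
    return results, failed
```

In the real coordinator the stages would be the ingest, detection, georeferencing, post-processing, and packaging services, with health reporting wrapped around the loop.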
You can modify the pipeline behavior by:
- Adjusting denoising parameters in the coordinator
- Modifying service configurations
- Adding new processing stages
- Customizing output formats
Pipeline behavior is controlled by:
- `configs/flight.env` - Environment-specific settings
- Service-specific configuration parameters
- Runtime command-line arguments
- **Container can't find entrypoint.sh**
  - Solution: Use `--entrypoint python` or `--entrypoint bash`
- **No output files in the local directory**
  - Solution: Ensure you're using the `-v $(pwd):/workspace` volume mount
- **GPU not detected**
  - Solution: Install the NVIDIA Docker runtime and use `--gpus all`
- **Permission denied errors**
  - Solution: Use `sudo` for Docker commands
```bash
# Check Docker image exists
sudo docker images | grep sar-ship-detection

# Check container logs
sudo docker logs <container_id>

# Interactive debugging
sudo docker run -it --rm --gpus all --ipc=host -v $(pwd):/workspace --entrypoint bash sar-ship-detection:latest
```

The pipeline provides real-time status updates, including:
- Images processed count
- Detection accuracy metrics
- Processing time statistics
- System health status
- Pipeline component status
- Resource utilization
- Error rate monitoring
- Performance metrics
```
sentinel1_sar_ship_detection/
├── utils/pipeline/          # Pipeline orchestration
│   ├── coordinator.py       # Main pipeline coordinator
│   ├── ingest.py            # Image ingestion service
│   ├── geo.py               # Georeferencing service
│   ├── postproc.py          # Post-processing service
│   ├── packager.py          # Output packaging service
│   └── health.py            # Health monitoring
├── utils/docker/            # Docker configurations
│   ├── Dockerfile-laptop    # Optimized for x86_64
│   ├── Dockerfile-arm64     # ARM64 compatible
│   └── build-and-run.sh     # Build automation
├── models/                  # YOLO model configurations
├── data/                    # Dataset configurations
├── weights/                 # Pre-trained model weights
├── source/                  # Test images for demo
├── demo.py                  # Main demo script
└── requirements.txt         # Python dependencies
```