This project develops an object detection system for the SeaDronesSee dataset, which poses unique challenges such as diverse maritime environments, varying object scales, and complex backgrounds. A state-of-the-art deep learning model is trained and optimized for high accuracy, then deployed on embedded hardware platforms, enabling real-time inference on resource-constrained edge devices. The result is robust, efficient object detection for marine surveillance and search-and-rescue operations, addressing practical deployment challenges in embedded systems.
- YOLOv8-based model optimized for embedded real-time inference
- Custom data labeling and annotation to improve dataset quality and model training
- Deployed via TensorFlow Lite Micro on resource-limited devices
- Weighted non-maximum suppression to reduce false positives
- Multi-class detection evaluated with mean average precision (mAP)
- Inference speed benchmarked in frames per second (FPS)
SeaDronesSee is a large-scale, publicly available dataset designed for object detection and tracking in maritime environments using drone footage. It features a diverse collection of annotated videos and images capturing various objects such as boats, swimmers, life jackets, and buoys in challenging conditions including varying weather, lighting, and sea states.


