AI system detects road cracks and damaged signs in real-time

A cutting-edge artificial intelligence system developed through Italy’s CTE Molise project is poised to transform municipal infrastructure maintenance by detecting damaged traffic signs and deteriorating road surfaces with over 90% accuracy. The tool, developed by researchers from the University of Molise and Tiscali Italia, leverages state-of-the-art deep learning models and cloud computing infrastructure to enable proactive safety interventions in urban mobility networks.
The findings were recently published in a paper titled "Improving Road Safety with AI: Automated Detection of Signs and Surface Damage" in Computers.
The system integrates two computer vision models based on the YOLOv8 architecture, one optimized for traffic sign detection and the other for identifying surface anomalies such as potholes and cracks. It also includes a convolutional neural network (CNN) enhanced with attention mechanisms to classify traffic signs as damaged or intact. Together, these models form a real-time detection pipeline embedded in a mobile application designed for municipal maintenance crews and public reporting.
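The two detectors and the classifier can be composed into a single pass over each camera frame. A minimal sketch of that routing logic, with stub callables standing in for the YOLOv8 models and the CNN (all function names here are hypothetical, not taken from the paper):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    label: str              # e.g. "traffic_sign", "pothole", "crack"
    box: tuple              # (x1, y1, x2, y2) in pixels
    condition: str = "n/a"  # filled in for signs: "damaged" or "intact"

def run_pipeline(frame,
                 detect_signs: Callable,    # stand-in for the YOLOv8x sign detector
                 detect_surface: Callable,  # stand-in for the YOLOv8s surface model
                 classify_sign: Callable) -> List[Detection]:
    """Route each detected sign through the damage classifier;
    pass surface anomalies through unchanged."""
    results = []
    for box in detect_signs(frame):
        det = Detection("traffic_sign", box)
        det.condition = classify_sign(frame, box)  # "damaged" or "intact"
        results.append(det)
    for label, box in detect_surface(frame):
        results.append(Detection(label, box))
    return results
```

With the real models plugged in, each frame yields a single list of geolocatable findings for the dashboard.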
The platform was tested using two large open datasets: the Mapillary Vistas dataset for traffic signs and the RDD 2022 dataset for road surface damage. The researchers employed a data augmentation strategy, including rotation, scaling, and pixel normalization, to improve model generalization across various lighting conditions, camera angles, and weather variations.
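The augmentations named above can be sketched in plain NumPy. This toy version uses 90-degree rotations and nearest-neighbor scaling for brevity; the paper's actual pipeline presumably applies finer-grained transforms through a standard augmentation library:

```python
import numpy as np

def normalize_pixels(img: np.ndarray) -> np.ndarray:
    """Scale uint8 pixel values into [0, 1] as float32."""
    return img.astype(np.float32) / 255.0

def rotate90(img: np.ndarray, k: int = 1) -> np.ndarray:
    """Rotate by k * 90 degrees (a coarse stand-in for arbitrary-angle rotation)."""
    return np.rot90(img, k=k, axes=(0, 1))

def scale_nearest(img: np.ndarray, factor: float) -> np.ndarray:
    """Nearest-neighbor rescaling of height and width."""
    h, w = img.shape[:2]
    nh, nw = max(1, int(h * factor)), max(1, int(w * factor))
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return img[rows][:, cols]

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply a random rotation and scale, then normalize."""
    img = rotate90(img, k=int(rng.integers(0, 4)))
    img = scale_nearest(img, factor=float(rng.uniform(0.8, 1.2)))
    return normalize_pixels(img)
```

Randomizing geometry and normalizing intensities in this way is what lets a single training set stand in for many lighting conditions and camera angles.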
For traffic signs, the researchers used YOLOv8x, a high-capacity model capable of identifying small and irregularly shaped signs with 92% average precision at 50% intersection over union (mAP50). To assess surface damage, they opted for YOLOv8s, a smaller, more efficient model that achieved 75% mAP50 while maintaining fast inference times suitable for mobile hardware.
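The mAP50 figures count a prediction as correct only when its bounding box overlaps the ground truth with an intersection over union (IoU) of at least 0.5. The matching criterion itself is small enough to write out:

```python
def iou(box_a, box_b) -> float:
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, truth, threshold: float = 0.5) -> bool:
    """The '50' in mAP50: a detection counts only at or above 0.5 IoU."""
    return iou(pred, truth) >= threshold
```

Average precision is then computed per class over the ranked detections that pass this test, and averaged across classes to give the mAP50 values quoted above.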
Both models were trained on high-performance GPU infrastructure, including Google Colab’s NVIDIA Tesla T4 environment for YOLO tasks and Reevo cloud servers for CNN classification. Validation results showed the CNN attained 90% accuracy in classifying damaged signs, demonstrating robustness under severe class imbalance: only 6,025 damaged versus 34,315 intact sign images were available initially.
To address the imbalance, the team used Stable Diffusion v2.1, a generative AI model, to produce 18,000 realistic images of defaced signs featuring graffiti, rust, stickers, and physical deformation. These synthetic images were generated through text-plus-image conditioning and trained using contrastive and perceptual loss functions to ensure both visual realism and classification utility.
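Generation of this kind is typically driven by a text prompt paired with a source image of an intact sign (text-plus-image conditioning). The sketch below is an assumption about how such a step might look, not code from the paper: the prompt templates are hypothetical, and the generation call uses the Hugging Face diffusers image-to-image pipeline with the public Stable Diffusion 2.1 weights:

```python
# Hypothetical prompt templates for the damage types named in the article.
DAMAGE_TYPES = ["graffiti", "rust", "stickers", "physical deformation"]

def build_prompt(sign_name: str, damage: str) -> str:
    """Compose a text prompt for image-conditioned generation of a defaced sign."""
    return (f"a photo of a {sign_name} traffic sign covered in {damage}, "
            f"realistic, outdoor lighting, street scene")

def generate_damaged_sign(sign_image, prompt: str):
    """Sketch of a Stable Diffusion v2.1 image-to-image call via diffusers.
    Imported lazily so the prompt helper above stays usable without a GPU."""
    from diffusers import StableDiffusionImg2ImgPipeline  # assumes diffusers installed
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1")
    # strength controls how far the output may drift from the intact source sign
    return pipe(prompt=prompt, image=sign_image, strength=0.6).images[0]
```

Keeping the source image in the conditioning is what preserves the sign's shape and legend while the prompt injects the damage.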
The CNN classifier includes both spatial and channel attention mechanisms, enabling it to focus on subtle details, like sticker corners or faded paint, across various image regions and feature channels. It uses the Adam optimizer with Focal Loss, a function designed to reduce the dominance of the majority class during training.
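Focal Loss down-weights examples the model already classifies confidently, so the abundant intact-sign class stops dominating the gradient. A NumPy version of the binary form (the γ and α values below are the common defaults, not necessarily those used in the paper):

```python
import numpy as np

def focal_loss(p: np.ndarray, y: np.ndarray,
               gamma: float = 2.0, alpha: float = 0.25) -> np.ndarray:
    """Binary focal loss: the (1 - p_t)^gamma factor shrinks easy examples."""
    p = np.clip(p, 1e-7, 1 - 1e-7)               # avoid log(0)
    p_t = np.where(y == 1, p, 1 - p)             # prob. assigned to the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha) # class-balancing weight
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
```

A sign the model already scores at 0.95 contributes almost nothing, while a misclassified damaged sign keeps a large loss, which is exactly the behavior a 6:34 class ratio calls for.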
The AI models were seamlessly integrated into a mobile application featuring a GIS-based dashboard. Maintenance workers can view heatmaps of detected issues, filter by anomaly type, and access geotagged photo evidence. The app also supports citizen participation: local residents can submit reports via smartphone, contributing to real-time data streams and reinforcing training data for future model updates.
The dashboard categorizes road damage and sign issues using intuitive icons and prioritizes interventions based on severity and location. Early field testing in Campobasso demonstrated the tool’s ability to detect potholes, cracks, and defaced signs with high confidence and minimal false positives. Real-time processing enabled same-day flagging of critical issues for repair teams.
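Prioritizing interventions by severity and location reduces to an ordering over geotagged reports. A toy scheduler under assumed severity weights (the article does not detail the dashboard's actual ranking logic, so both the weights and field names here are illustrative):

```python
from dataclasses import dataclass

# Hypothetical severity weights per anomaly type.
SEVERITY = {"pothole": 3, "crack": 2, "damaged_sign": 2, "faded_sign": 1}

@dataclass
class Report:
    kind: str
    lat: float
    lon: float
    confidence: float  # detector confidence in [0, 1]

def prioritize(reports):
    """Order reports by severity weight, breaking ties by detector confidence."""
    return sorted(reports,
                  key=lambda r: (SEVERITY.get(r.kind, 0), r.confidence),
                  reverse=True)
```

Feeding the top of this queue to repair crews is what turns raw detections into the same-day flagging described above.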
To ensure scalability across municipalities, the system was built using containerized cloud microservices and supports integration with existing urban maintenance software. The app’s modular design allows it to operate in rural or urban zones and scale from small townships to large metropolitan areas. GPU acceleration and real-time API delivery ensure the system remains lightweight, responsive, and deployable via mobile networks.
The team also identified several challenges. Chief among them was the difficulty of detecting subtle or partial damage on signs, especially under poor lighting or visual obstructions. Overfitting posed an additional risk, which was addressed through dropout layers, early stopping protocols, and batch normalization.
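Of the three regularizers mentioned, early stopping is pure training-loop logic rather than an architectural change. A minimal patience-based implementation (the patience value is illustrative):

```python
class EarlyStopping:
    """Stop training when validation loss fails to improve for `patience` epochs."""

    def __init__(self, patience: int = 5, min_delta: float = 0.0):
        self.patience = patience
        self.min_delta = min_delta      # minimum improvement that counts
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss: float) -> bool:
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

Halting once validation loss plateaus keeps the classifier from memorizing the training set, complementing dropout and batch normalization inside the network itself.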
The research team now plans to extend the platform by incorporating retroreflectivity detection, analyzing how well road signs reflect light at night, and automating data labeling through generative AI to accelerate retraining cycles. These features will further enhance the predictive maintenance capabilities of the system, allowing cities to transition from reactive fixes to scheduled upkeep based on risk forecasting.
FIRST PUBLISHED IN: Devdiscourse