Conventional methods for measuring rainfall, such as rain gauges, radar, and satellite imaging, often lack the resolution and responsiveness needed for dynamic urban conditions. These systems are typically expensive, limited in spatial granularity, and prone to errors under high-intensity rainfall. Moreover, the global decline in ground-based monitoring stations has further exacerbated data scarcity. Researchers have explored alternatives such as audio sensing and cellular network signals, but these often require complex calibration or dedicated infrastructure. Surveillance cameras, already widespread in urban settings, offer untapped potential for fine-scale rainfall detection; however, low video resolution, background noise, and changing light conditions have hindered broader adoption. These challenges make a deeper investigation into camera-based rainfall estimation both timely and necessary.
A research team from Tianjin University has developed an AI-powered method that turns everyday surveillance cameras into rainfall sensors. Their findings (DOI: 10.1016/j.ese.2025.100562) were published in April 2025 in Environmental Science and Ecotechnology. The study presents a hybrid framework combining image-quality analysis, enhanced random forest classifiers, and a deep learning regression model built on depthwise separable convolution and gated recurrent units. Tested in the cities of Tianjin and Fuzhou, the system estimated rainfall with high accuracy and robustness, even at night or under poor visibility conditions.
The proposed system operates through two key modules: a feature extraction module (FeM) and a rainfall estimation module (RiM). The FeM analyzes video frames using a novel image quality signature (IQS) method that extracts brightness, contrast, and texture features to detect rain streaks, even from noisy or low-light footage. It then uses an enhanced random forest classifier (eRFC) to classify video frames and apply optimal filters, accurately isolating rain features while discarding irrelevant visual information. The RiM employs a hybrid deep learning model combining depthwise separable convolution (DSC) and gated recurrent units (GRU), enabling it to capture both spatial and temporal patterns in rain events. This architecture proved highly effective in estimating rainfall intensity (RI) at minute-level intervals. The model was trained on over 60 hours of video data and validated against rain gauge measurements, achieving an R² value of up to 0.95 and a Kling–Gupta efficiency (KGE) of 0.97. Importantly, the system demonstrated robustness across varying conditions, including daytime and nighttime, and across multiple surveillance cameras. This adaptability marks a significant advancement in cost-effective, scalable rainfall monitoring technologies.
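To make the two-stage design more concrete, the sketch below illustrates the core ideas in minimal PyTorch code: simple per-frame brightness, contrast, and texture descriptors in the spirit of the IQS features, and a lightweight encoder built from depthwise separable convolutions whose per-frame outputs feed a GRU that maps a short frame sequence to a rainfall intensity value. All names, layer sizes, and the texture proxy here are illustrative assumptions and are not taken from the published model.

```python
# Minimal, illustrative sketch of the IQS-style features and the DSC + GRU
# regressor described above. Hypothetical names and layer sizes; not the
# authors' implementation.
import torch
import torch.nn as nn


def frame_quality_features(frame: torch.Tensor) -> torch.Tensor:
    """Crude per-frame descriptors analogous to brightness, contrast, and
    texture cues. `frame` is a (1, H, W) grayscale tensor in [0, 1]."""
    brightness = frame.mean()
    contrast = frame.std()
    # Texture proxy: mean gradient magnitude (rain streaks add fine texture).
    gy, gx = torch.gradient(frame.squeeze(0))
    texture = torch.sqrt(gx ** 2 + gy ** 2).mean()
    return torch.stack([brightness, contrast, texture])


class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""

    def __init__(self, in_ch: int, out_ch: int) -> None:
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.pointwise(self.depthwise(x)))


class RainfallRegressor(nn.Module):
    """Per-frame spatial features (DSC) -> temporal model (GRU) -> intensity."""

    def __init__(self, hidden: int = 32) -> None:
        super().__init__()
        self.spatial = nn.Sequential(
            DepthwiseSeparableConv(1, 8),
            nn.MaxPool2d(2),
            DepthwiseSeparableConv(8, 16),
            nn.AdaptiveAvgPool2d(1),            # -> (B*T, 16, 1, 1)
        )
        self.temporal = nn.GRU(input_size=16, hidden_size=hidden,
                               batch_first=True)
        self.head = nn.Linear(hidden, 1)        # rainfall intensity per clip

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 1, H, W) grayscale frame sequences
        b, t, c, h, w = clips.shape
        feats = self.spatial(clips.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.temporal(feats)
        return self.head(out[:, -1])            # use the last time step


# Example: a batch of two 8-frame clips at 64x64 resolution.
model = RainfallRegressor()
clips = torch.rand(2, 8, 1, 64, 64)
print(frame_quality_features(clips[0, 0]))      # 3 per-frame descriptors
print(model(clips).shape)                       # torch.Size([2, 1])
```

The design choice mirrors the paper's rationale: depthwise separable convolutions keep the per-frame encoder light enough for commodity surveillance hardware, while the recurrent unit captures how rain streaks evolve from frame to frame.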
"Our system leverages widely available surveillance infrastructure and advanced AI to fill gaps left by traditional rainfall monitoring techniques," said Dr. Mingna Wang, senior author of the study. "What's most exciting is that we can now provide highly accurate, real-time rainfall estimates using existing urban technology, even under challenging conditions like night-time or high-density rainfall. This opens the door to smarter flood management systems and more resilient cities in the face of climate change."
This research offers a scalable and low-cost solution for urban rainfall monitoring, particularly valuable for cities facing infrastructure and budget constraints. By repurposing existing surveillance camera networks, municipalities can implement real-time rainfall monitoring systems without significant additional investment. The model's ability to function across diverse lighting and environmental conditions makes it well suited for deployment in complex urban settings. Moreover, the framework can enhance predictive flood modeling, support emergency response strategies, and inform infrastructure planning. Future improvements, such as integrating additional data sources or optimizing performance during high-intensity rainfall, could further elevate its utility in climate adaptation and smart city initiatives.
###
References
DOI
10.1016/j.ese.2025.100562
Original Source URL
https://doi.org/10.1016/j.ese.2025.100562
Funding information
This work was supported by the National Key R&D Plan of China (Grant No. 2021YFC3001400).
About Environmental Science and Ecotechnology
Environmental Science and Ecotechnology (ISSN 2666-4984) is an international, peer-reviewed, and open-access journal published by Elsevier. The journal publishes significant views and research across the full spectrum of ecology and environmental sciences, such as climate change, sustainability, biodiversity conservation, environment & health, green catalysis/processing for pollution control, and AI-driven environmental engineering. The latest impact factor of ESE is 14, according to the Journal Citation Reports™ 2024.