Smarter from Space: AI-Powered Models That Rebuild Cloud-Covered Satellite Imagery

Researchers from Technological University Dublin review how advanced AI models, especially GANs and diffusion models, are being used to remove clouds from satellite images and reconstruct hidden ground details. While diffusion models often deliver higher accuracy and stability, GANs offer faster inference and visually sharper results, and both show strong potential for improving real-world Earth observation.


CoE-EDP, VisionRI | Updated: 24-02-2026 10:29 IST | Created: 24-02-2026 10:29 IST

Every day, satellites capture images that help us forecast weather, monitor crops, track wildfires, map cities and study climate change. But there is one stubborn problem: clouds. Large parts of the Earth are covered by clouds at any given time, and when they block satellite cameras, valuable information about the ground is lost.

For scientists, governments and businesses that depend on clear satellite images, cloud cover can delay decisions and reduce accuracy. In tropical regions, where clouds are frequent and thick, getting a clear image can be especially difficult. This is why removing clouds from satellite imagery has become a major research focus.

A recent review by researchers from Technological University Dublin looks at how advanced artificial intelligence is tackling this challenge. The study focuses on two powerful types of generative AI models: Generative Adversarial Networks, or GANs, and diffusion models.

From Old Methods to Smart AI

In the past, cloud removal relied on physical and mathematical techniques. Scientists used atmospheric models to estimate how light interacts with clouds or blended images from different dates to fill in cloudy areas. These methods worked in simple situations but struggled when clouds covered large regions or when the land below changed quickly.

Machine learning improved things slightly by using historical data to predict what might be hidden under clouds. However, these systems still depended heavily on good-quality reference images and often failed in new or complex environments.

Deep learning brought a bigger breakthrough. Instead of just predicting missing pixels, newer models learn patterns from huge collections of cloud-free images. They try to understand what landscapes usually look like and then recreate what might be hidden beneath the clouds.

How GANs Rebuild Hidden Landscapes

GANs work like a creative competition. One neural network, called the generator, tries to create a cloud-free image. Another network, the discriminator, checks whether the image looks real or fake. Through this back-and-forth process, the generator gradually improves until it produces realistic results.
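The adversarial "competition" described above can be sketched in a few lines of PyTorch. This is an illustrative toy, not the architecture from the reviewed paper: the networks are tiny fully connected layers, and the random tensors stand in for cloudy inputs and real cloud-free images.

```python
# Minimal sketch of adversarial training: a generator tries to produce
# convincing cloud-free images, a discriminator tries to tell them apart
# from real ones. Architectures and data here are illustrative stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps a (flattened) cloudy image to a cloud-free estimate.
generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
# Discriminator: scores an image as real (high) or generated (low).
discriminator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

cloudy = torch.randn(8, 64)   # stand-in for a batch of cloudy inputs
clear = torch.randn(8, 64)    # stand-in for real cloud-free images

# Discriminator step: learn to label real images 1 and generated images 0.
fake = generator(cloudy).detach()
d_loss = bce(discriminator(clear), torch.ones(8, 1)) + \
         bce(discriminator(fake), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: produce images the discriminator scores as real.
fake = generator(cloudy)
g_loss = bce(discriminator(fake), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In practice this back-and-forth runs for many thousands of iterations, which is also where the training instability mentioned below tends to appear.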

In cloud removal, GANs can reconstruct buildings, roads, forests and farmland that are hidden under clouds. Some versions use attention mechanisms, which help the model focus specifically on cloudy areas. Others combine data from radar sensors, which can see through clouds, to improve accuracy.

GAN-based models are known for producing sharp and visually convincing images. They are also relatively fast once trained, which makes them attractive for large-scale applications. However, they can be unstable during training and sometimes struggle when clouds are extremely thick and block all ground information.

Diffusion Models: A More Stable Approach

Diffusion models are newer and work differently. Instead of competing networks, they use a step-by-step process. First, they gradually add noise to an image. Then they learn how to reverse that noise and reconstruct the original image. In cloud removal, they apply this process to rebuild clear ground scenes from cloudy inputs.
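The add-noise-then-reverse idea can be shown numerically with NumPy. In a real diffusion model a trained network estimates the noise; the sketch below instead uses the known noise to show that the forward blending step is exactly invertible, which is the property the learned reverse process approximates. The signal-retention factor `alpha_bar` is an illustrative value, not one from the paper.

```python
# Illustrative forward (noising) and reverse (denoising) diffusion step.
# A trained network would predict eps from x_t; with a perfect prediction
# the clear image is recovered exactly.
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal(64)   # stand-in for a clear ground scene

# Forward process: blend the image with Gaussian noise.
# alpha_bar is the fraction of the original signal surviving at step t.
alpha_bar = 0.3
eps = rng.standard_normal(64)
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

# Reverse process: subtract the (here, perfectly known) noise estimate
# and rescale to undo the forward blend.
eps_hat = eps
x0_rec = (x_t - np.sqrt(1 - alpha_bar) * eps_hat) / np.sqrt(alpha_bar)

print(np.allclose(x0_rec, x0))  # → True
```

Real models repeat this reverse step hundreds of times with imperfect noise estimates, which is why the reconstructions are high quality but slow to produce.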

These models are often more stable than GANs and can produce very high-quality reconstructions. In many cases, they achieve better numerical performance in terms of image similarity and noise reduction. This means the restored images are closer to the true ground conditions.
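Claims about "numerical performance" in this literature are typically backed by pixel-level scores; PSNR (peak signal-to-noise ratio) is one standard example. A minimal sketch, using random arrays as stand-ins for satellite imagery:

```python
# PSNR: a standard score for how close a restored image is to the reference.
# Higher is better; identical images give infinite PSNR.
import numpy as np

def psnr(reference, restored, peak=1.0):
    mse = np.mean((reference - restored) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clear = rng.random((32, 32))                                   # reference scene
restored = np.clip(clear + 0.05 * rng.standard_normal((32, 32)), 0, 1)
unrelated = rng.random((32, 32))                               # poor "restoration"

print(psnr(clear, restored) > psnr(clear, unrelated))  # → True
```

Structural metrics such as SSIM are also common, since they better reflect whether reconstructed buildings and field boundaries look right, not just whether average pixel error is low.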

However, diffusion models are usually slower and require more computing power because of their repeated processing steps. This can make them harder to use in real-time systems or on satellites with limited onboard resources.

The Road to Real-World Use

Even with these advances, cloud removal remains challenging. Thin clouds are easier to handle because some light still passes through. Thick clouds, on the other hand, completely hide the ground. In those cases, AI models must guess what lies beneath based on patterns they have learned before. There is always some uncertainty.

Another issue is data. Many cloud removal models are trained and tested on specific datasets with particular types of landscapes and cloud conditions. This can make it hard to know how well they will work globally.

Despite these challenges, the potential benefits are enormous. Clearer satellite images could improve disaster response, strengthen food security monitoring, support urban planning and enhance climate research. The future may lie in combining the strengths of GANs and diffusion models, creating hybrid systems that are both accurate and efficient.

As satellite data becomes more important to everyday life, smarter AI systems may soon help us see the Earth more clearly, even when the skies are full of clouds.

  • FIRST PUBLISHED IN:
  • Devdiscourse