What is special about a SAR image compared to a conventional image?

Elise Colin
5 min read · Nov 2, 2022


In recent years, research effort dedicated to deep learning has spread across more and more domains, and Earth observation images are no exception to this trend. Many image processing tasks (classification, detection, recognition, registration, noise reduction, simulation, etc.) are now addressed for Earth observation images, even though this context has its own particularities.

But among the different existing modalities, some are more specific than others, and look less like conventional images.

This is especially the case for SAR (Synthetic Aperture Radar) images. The acronym refers to the processing required to compute a radar image, i.e. an image formed in the microwave domain.
How do these images differ from a conventional image?

An unusual interpretation

SAR images are simply difficult to interpret, because they contain information that is invisible to our eye.

I used to say that what we “see” is the arrangement of the bricks of the scene, which have sizes of the order of the wavelength, and that the radar signal is both sensitive to the geometry of the bricks, and to their material.
For example: with a wavelength of 3 cm, it is mainly the leaves that constitute the signal; with a wavelength of 20 cm, it will be mainly the branches, and with a metric wavelength, the trunks constitute an important part of the signal. The signal from these objects will depend greatly on the amount of moisture they contain.
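The frequency-to-wavelength conversion behind these examples is a one-liner. As a rough sketch (the band names and carrier frequencies below are standard radar conventions, not values from this article):

```python
# Wavelength-to-frequency conversion for typical SAR bands.
# Band names and carrier frequencies are standard radar conventions
# (illustrative, not taken from the article).
C = 299_792_458.0  # speed of light in m/s

def wavelength_m(frequency_hz: float) -> float:
    """Return the wavelength in metres for a given carrier frequency."""
    return C / frequency_hz

# X-band (~10 GHz)  -> ~3 cm:  mainly leaves contribute to the signal
# L-band (~1.5 GHz) -> ~20 cm: mainly branches
# P-band (~0.4 GHz) -> ~75 cm (sub-metric): trunks become significant
for name, f in [("X", 10e9), ("L", 1.5e9), ("P", 0.4e9)]:
    print(f"{name}-band: {wavelength_m(f) * 100:.1f} cm")
```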

Radar images are complex images

Radar images are complex images, in the mathematical sense. In each pixel, the signal contains a real part and an imaginary part.
If one considers a radar image “alone”, one will often say that the phase “is useless”, because it appears uniformly distributed between 0 and 2π and is not visible. However, this phase contains information related to the formation of the image, and it is therefore useful to preserve it in many cases: in particular, when we want to combine two different images, for interferometry or polarimetry, as we shall see later.
Some product levels do not retain the phase, but those products can no longer be used for these more “advanced” modalities. The first product level, which keeps the phase, is called “SLC”, for Single Look Complex.
In the complex domain, resampling operations must be designed to preserve this phase information, which explains why processing platforms struggle to integrate complex images. Google Earth Engine, for example, states:

Sentinel-1 SLC data cannot currently be ingested, as Earth Engine does not support images with complex values due to inability to average them during pyramiding without losing phase information.
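To see concretely why averaging complex values is destructive, here is a small NumPy sketch. The speckle is synthetic, generated with the standard circular complex Gaussian model for a homogeneous area (an assumption of this sketch, not data from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an SLC-like image: fully developed speckle over a homogeneous
# area is well modelled by circular complex Gaussian samples.
n = 512
slc = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)

amplitude = np.abs(slc)   # what detected (non-SLC) products keep
phase = np.angle(slc)     # looks uniform on [-pi, pi) for a single image

# The phase of one image alone looks like pure noise...
hist, _ = np.histogram(phase, bins=16, range=(-np.pi, np.pi))
print("phase histogram (nearly flat):", hist)

# ...but naively averaging complex values (e.g. during pyramiding)
# destroys the signal: |mean(z)| collapses toward 0, unlike mean(|z|).
print("mean amplitude:", amplitude.mean())
print("|mean of complex values|:", np.abs(slc.mean()))
```

The last two lines make the Earth Engine remark tangible: the mean amplitude stays finite, while the magnitude of the averaged complex values is close to zero.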

The dynamics are unusual.

This means that if you display an image with your favorite software, say Python, you won’t see much, because the dynamic range is crushed. In fact, you have to work in logarithmic scale (decibels) or, better, threshold the amplitude image before displaying it. I’ll devote another post to this subject.
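Both remedies are a few lines of NumPy. This is a sketch on a synthetic amplitude image (the bright point scatterer and the 99th-percentile threshold are illustrative choices, not prescriptions from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy amplitude image: speckled background plus one very bright
# scatterer, mimicking the heavy-tailed dynamics of SAR data.
amp = np.abs(rng.normal(size=(256, 256)) + 1j * rng.normal(size=(256, 256)))
amp[100, 100] = 500.0  # strong point scatterer

def to_db(amplitude, floor=1e-10):
    """Convert amplitude to decibels, with a floor to avoid log(0)."""
    return 20.0 * np.log10(np.maximum(amplitude, floor))

def clip_percentile(amplitude, p=99.0):
    """Threshold the amplitude at a high percentile before display."""
    return np.minimum(amplitude, np.percentile(amplitude, p))

db = to_db(amp)
clipped = clip_percentile(amp)

# The raw dynamic range is huge; after clipping it becomes displayable.
print("raw max/mean ratio:", amp.max() / amp.mean())
print("clipped max/mean ratio:", clipped.max() / clipped.mean())
```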

These are coherent images that contain speckle.

As a result, a homogeneous area appears covered with bright and dark spots, or speckle. It is sometimes considered as noise; in reality it differs from thermal noise, because speckle is a phenomenon inherent in the principle of image formation.
Below is an example of a SAR image (SETHI sensor of Onera) over homogeneous plots of pine forest in the Landes (France). The grain of the parcels corresponds to the speckle phenomenon.

Forest stands (Nezer, France, P-band, SETHI sensor from Onera)
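The textbook formation mechanism of speckle can be simulated directly: each resolution cell coherently sums many elementary scatterers with random phases, and the resulting amplitude fluctuates even though the underlying “reflectivity” is constant. A minimal sketch (the 50 scatterers per cell is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)

# Speckle is not additive noise: it emerges from the coherent summation
# of many elementary scatterers inside each resolution cell.
def speckle_pixel(n_scatterers=50):
    """Coherently sum unit scatterers with random phases; return amplitude."""
    phases = rng.uniform(0, 2 * np.pi, n_scatterers)
    field = np.sum(np.exp(1j * phases)) / np.sqrt(n_scatterers)
    return np.abs(field)

amplitudes = np.array([speckle_pixel() for _ in range(20_000)])

# Every cell has the same "reflectivity", yet amplitudes fluctuate:
# this is the grainy aspect seen over homogeneous areas.
print("mean amplitude:", amplitudes.mean())                 # ~0.886 (Rayleigh)
print("std / mean:", amplitudes.std() / amplitudes.mean())  # ~0.52 (Rayleigh)
```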

The resolution is not linked to the pixel size!

Since the image is not acquired by a conventional camera, the resolution depends on the acquisition parameters of the system: the central frequency, the duration of the integration time, the relative position of the antenna and the imaged scene, and the frequency band used.
This resolution is not necessarily equal to the pixel size: often, the image is sampled with a finer step than the theoretically achievable resolution.
Below, two examples of images with the same theoretical resolution but different pixel dimensions: the right thumbnail is strongly oversampled.

Two images from the same acquisition. Left: the original sampling. Right: the same image oversampled while preserving the statistical properties of the speckle.
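Oversampling that preserves the speckle statistics can be done by zero-padding the spectrum of the complex image: the pixel grid gets finer, but the spectral support, and hence the resolution, is unchanged. A sketch on synthetic speckle (the 64×64 size and factor 2 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def oversample(image: np.ndarray, factor: int) -> np.ndarray:
    """Oversample a complex image by zero-padding its centred spectrum.
    The pixel size changes, but the spectral support (hence the
    resolution and the speckle statistics) does not."""
    n, m = image.shape
    spec = np.fft.fftshift(np.fft.fft2(image))
    padded = np.zeros((n * factor, m * factor), dtype=complex)
    r0, c0 = (n * factor - n) // 2, (m * factor - m) // 2
    padded[r0:r0 + n, c0:c0 + m] = spec
    return np.fft.ifft2(np.fft.ifftshift(padded)) * factor**2

# Synthetic complex speckle image (illustrative only).
img = (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))) / np.sqrt(2)
img2 = oversample(img, 2)

print(img.shape, "->", img2.shape)
# Every original sample is preserved on the coarse grid (up to float error):
print(np.allclose(img2[::2, ::2], img))
```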

Unusual statistical distributions

Speckle images obey particular statistics, described at length in Goodman’s reference book:

Goodman, J. W. (2007). Speckle phenomena in optics: theory and applications. Roberts and Company Publishers.

There are books that deal more specifically with processing and estimation in the context of these particular statistics. Note that a valuable reference exists in French, written by Jean-Marie Nicolas:
https://perso.telecom-paristech.fr/tupin/JMN/documentsJMN.html.
Stian Normann Anfinsen has also done a lot of work on this subject and its extension to multivariate images:

Nicolas, Jean-Marie, and Stian Normann Anfinsen. “Introduction to Second Kind Statistics: Application of Log-Moments and Log-Cumulants to the Analysis of Radar Image Distributions.”

The Rayleigh–Nakagami distribution is commonly used in SAR imaging to model speckle amplitudes; below, speckle simulated for several different parameter settings.
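One convenient way to draw Nakagami-distributed amplitudes is to note that the intensity (squared amplitude) follows a Gamma distribution; this is a sketch, with illustrative parameter values, where L plays the role of the number of looks:

```python
import numpy as np

rng = np.random.default_rng(4)

def nakagami(mu: float, L: float, size: int) -> np.ndarray:
    """Draw Nakagami-distributed amplitudes: the intensity (amplitude
    squared) follows a Gamma distribution with shape L and mean mu.
    L = 1 gives the Rayleigh case (single-look speckle); larger L
    corresponds to multi-looked, less fluctuating speckle."""
    intensity = rng.gamma(shape=L, scale=mu / L, size=size)
    return np.sqrt(intensity)

contrasts = {}
for L in (1, 4, 16):
    a = nakagami(mu=1.0, L=L, size=100_000)
    contrasts[L] = a.std() / a.mean()
    # The relative fluctuation (contrast) drops as the number of looks grows.
    print(f"L={L:2d}  std/mean of amplitude = {contrasts[L]:.3f}")
```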

An unusual PSF

The point spread function (PSF) describes the response of a focused optical imaging system to a point source or point object.

In the case of a radar image, the PSF is a cardinal sine along both axes of the image:
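This separable sinc shape is easy to write down. A sketch with illustrative resolution values (the 1 m range and 2 m azimuth resolutions are assumptions, not from the article):

```python
import numpy as np

# The impulse response of an ideal SAR system is separable: a cardinal
# sine along range and along azimuth, each set by the corresponding
# (rectangular) bandwidth. Resolution values below are illustrative.
def sar_psf(x, y, res_range=1.0, res_azimuth=2.0):
    """2-D PSF as a product of two sincs (np.sinc(u) = sin(pi*u)/(pi*u))."""
    return np.sinc(x / res_range) * np.sinc(y / res_azimuth)

x = np.linspace(-5, 5, 201)
X, Y = np.meshgrid(x, x)
psf = sar_psf(X, Y)

print("peak value:", psf[100, 100])  # ~1.0 at the centre
# The first null along range sits at x = res_range (value ~0):
print("first range null:", sar_psf(np.array([1.0]), np.array([0.0]))[0])
```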

A spectrum with a special meaning

The shape of the spectrum depends on how the image was acquired: its center is not necessarily at the center of the image support, and its support is not strictly a rectangle.
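The off-centre spectrum can be mimicked with a toy example: band-limited complex noise modulated by a carrier, as happens with a non-zero Doppler centroid. All numerical values here (bandwidth 0.15, carrier 0.2 cycles/pixel) are arbitrary, chosen only for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy illustration: a band-limited complex image modulated by a carrier,
# mimicking a non-zero Doppler centroid. The spectral support is then
# off-centre (all values are arbitrary, for demonstration only).
n = 256
noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Band-limit the noise along the azimuth axis (rows): keep |f| <= 0.15.
f_rows = np.fft.fftfreq(n)[:, None]
spec = np.where(np.abs(f_rows) <= 0.15, np.fft.fft2(noise), 0)
img = np.fft.ifft2(spec)

# Shift the whole spectrum by a carrier of 0.2 cycles/pixel.
img = img * np.exp(2j * np.pi * 0.2 * np.arange(n)[:, None])

# The energy-weighted mean frequency along azimuth reveals the offset.
power = np.abs(np.fft.fftshift(np.fft.fft2(img)))**2
freqs = np.fft.fftshift(np.fft.fftfreq(n))
profile = power.sum(axis=1)
centroid = (freqs * profile).sum() / profile.sum()
print(f"estimated spectral centroid: {centroid:.3f}")  # close to the 0.2 carrier
```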

Extract from the report by Jean-Marie Nicolas, “Une approche de l’imagerie radar satellitaire (Radar à Ouverture Synthétique)” (https://perso.telecom-paristech.fr/tupin/JMN/DOCJMN/monradar.pdf), illustrating a CARABAS SAR image and its spectrum, for which the deformations reach high values.

Advanced modes

There are advanced modes, called polarimetry, interferometry, and tomography, which combine information from several different antennas. In the same way that a color image contains three channels (Red, Green, Blue), a radar image can contain several polarimetric and/or interferometric channels and thus be displayed in false color.
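A classic way to build such a false-color image from polarimetric channels is the Pauli composite. The sketch below uses simulated HH, HV, VV channels (not real data), and the percentile clipping for display is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(6)

# Sketch of a false-colour composite from polarimetric channels, using
# the classic Pauli basis. HH, HV, VV are simulated here, not real data.
n = 128
hh = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
hv = 0.3 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
vv = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Pauli channels: |HH - VV| (double bounce), |2 HV| (volume), |HH + VV| (surface).
r = np.abs(hh - vv)
g = np.abs(2 * hv)
b = np.abs(hh + vv)

def normalise(channel):
    """Clip at the 99th percentile and scale to [0, 1] for display."""
    c = np.minimum(channel, np.percentile(channel, 99))
    return c / c.max()

rgb = np.dstack([normalise(r), normalise(g), normalise(b)])
print("false-colour image shape:", rgb.shape)
```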

The geometry is different

Unlike an optical image, a radar image is synthesized in a slant plane, which contains the direction of illumination. This means that a pixel can contain information from several locations at once: the ground, a wall, and a building roof, for example. This makes the geometric interpretation of volume elements particularly complicated.

An example of how buildings in an urban setting are distorted — Images from https://www.capellaspace.com

Finally, the list of these specificities is not infinite. By keeping them in mind, it is quite possible to apply or adapt processing methods designed for conventional images. Later on, I will try to come back in more detail to the points discussed in this post.


Written by Elise Colin

Researcher with broad experience in signal and image processing, focusing on big data and AI aspects of Earth observation images and medical images.