Why 2D convolutions everywhere in SAR imaging?

Or: “I didn’t understand the line of code that calculates the interferometric coherence…”

Elise Colin
Nov 30, 2022 · 3 min read

Because of the presence of speckle, SAR images are perfect examples of random signals. Most estimates are based on second-order statistical moments, especially when working in polarimetry or interferometry. And to compute these estimates, two-dimensional convolutions are used everywhere.

For example, in polarimetry, I will have to express the coherence matrix as:

T = ⟨k k†⟩, with k = [S_HH, S_VV, S_HV]ᵀ,

i.e. a 3×3 matrix whose entries are the second-order moments ⟨S_i S_j*⟩, where the signals S_HH, S_VV, S_HV are stored in 3 different images, and where ⟨·⟩ denotes the mean in the statistical sense.

In interferometry, I will have to estimate the interferometric coherence:

γ = ⟨z1 z2*⟩ / √( ⟨|z1|²⟩ ⟨|z2|²⟩ )

where z1 and z2 are stored in 2 different images (the interferometric pair).

In practice, it is rare to have several statistical realizations of the same scene at the same time. The estimation is therefore done spatially, using a local neighborhood around each pixel. This type of estimation is known in the literature as boxcar filtering: a small window of N×N pixels is centered on the pixel considered.

In practice, the calculation scheme for ⟨|z|²⟩, for example, goes something like this (in this example, N = 5): all the values of |z|² in the window centered on the pixel (i, j) are averaged together to obtain the estimate,

⟨|z|²⟩(i, j) ≈ (1/N²) Σ_k Σ_l |z(i+k, j+l)|², with k and l running from −(N−1)/2 to (N−1)/2.

We can recognize, in this sum, the expression of a two-dimensional convolution between a signal x and a response h,

(x ∗ h)(i, j) = Σ_m Σ_n x(m, n) h(i−m, j−n),

if h plays the role of a signal with finite support in two dimensions, that is, a square window of size N×N filled with ones (up to the 1/N² normalization).

And the Python code to compute this kind of object fits in one line! To compute ⟨z1 z2*⟩, it is simply a matter of applying a 2D convolution between the image z1 z2* and an N×N square matrix containing only 1s.
In practice, to compute the local estimate of my_statistical_moment_to_estimate, this gives:

from scipy import signal
import numpy as np

signal.convolve2d(my_statistical_moment_to_estimate, np.ones((N,N)), mode='same')/N**2
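
As a side note, for real-valued quantities such as |z|², scipy.ndimage.uniform_filter computes the same local mean directly (the averaging is built in), and for complex quantities like z1 z2* the real and imaginary parts can be filtered separately. A minimal sketch, assuming z is a complex NumPy array and N is the window size:

from scipy import ndimage
import numpy as np

# local N x N mean of the intensity |z|**2; uniform_filter averages directly,
# and mode='constant' (zero padding) roughly matches convolve2d's 'same' behaviour
intensity_mean = ndimage.uniform_filter(np.abs(z)**2, size=N, mode='constant')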

It is common to forget the N**2 factor… which is important in some cases. For the calculation of the coherence, it disappears thanks to the normalization. In more detail:


from scipy import signal
import numpy as np

def interferometric_coherence_2D(Z1, Z2, N):
    # N x N boxcar window (all ones)
    win = np.ones((N, N))
    # local estimates of <Z1 Z2*>, <|Z2|^2> and <|Z1|^2>
    num = signal.convolve2d(Z1 * np.conj(Z2), win, mode='same')
    den1 = signal.convolve2d(Z2 * np.conj(Z2), win, mode='same')
    den2 = signal.convolve2d(Z1 * np.conj(Z1), win, mode='same')

    # the 1/N**2 factors cancel between numerator and denominator
    gamma = num / np.sqrt(np.abs(den1) * np.abs(den2))
    coherence = np.abs(gamma)
    phase = np.angle(gamma)

    return gamma, coherence, phase
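
As a quick sanity check, this is how the function might be used on synthetic data (a minimal sketch; the image size, noise level and N = 5 are arbitrary placeholders):

import numpy as np

# two correlated complex images: a common signal plus independent noise
rng = np.random.default_rng(0)
shape = (256, 256)
common = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
Z1 = common + 0.3 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
Z2 = common + 0.3 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

gamma, coherence, phase = interferometric_coherence_2D(Z1, Z2, N=5)
print(coherence.mean())  # high value: the two images are strongly correlated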

Note that N must be large enough for the estimate to be robust, but small enough not to blur the transitions between different backscattering zones.
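
The same recipe covers the polarimetric coherence matrix from the beginning of the post: each entry ⟨S_i S_j*⟩ is just another moment image passed through the same boxcar convolution. A minimal sketch, assuming s_hh, s_vv and s_hv are the three complex channel images (the function name and variable names are mine):

from scipy import signal
import numpy as np

def local_coherence_matrix(s_hh, s_vv, s_hv, N):
    # scattering vector per pixel, stacked along the first axis: shape (3, rows, cols)
    k = np.stack([s_hh, s_vv, s_hv])
    win = np.ones((N, N))
    C = np.zeros((3, 3) + s_hh.shape, dtype=complex)
    for i in range(3):
        for j in range(3):
            # local estimate of <k_i k_j*> with the N x N boxcar window
            C[i, j] = signal.convolve2d(k[i] * np.conj(k[j]), win, mode='same') / N**2
    return C  # C[:, :, r, c] is the 3x3 coherence matrix estimated at pixel (r, c)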

