How to Visualize Changes in a Radar Timeline

Elise Colin
Nov 12, 2023


REACTIV: What is it?

It’s been a while since I introduced a colorful visualization that highlights changes in a radar time-series stack. The acronym stands for Rapid and EAsy Change detection In radar TIme-series by Variation coefficient.

It can produce images like these:

Application of REACTIV to Visualize Maritime Traffic Calculated Over 3 Years in Europe — Result obtained thanks to the Google Earth Engine Platform
The drift of an iceberg in Antarctica, the construction of the highway bridge on Réunion Island, the construction sites in Paris, palm oil exploitation in Indonesia, the change of materials on a Parisian façade, and the presence of operational trains around a diamond mine

This algorithm has been around for a few years now and has received several awards:

  • Best Paper Award at the CFPT Conference in 2018
  • Early Bird Prize, followed by the Best Custom Scripts Award, in the Sentinel-Hub contest.

I developed it in MATLAB and Python, and later implemented it on the Google Earth Engine platform. Thomas di Martino implemented it on the EO Browser platform, incorporating some modifications such as intensity filtering.

But for some time now, I’ve been delving back into it because:

  1. I’m working on studying all possible polarimetric extensions.
  2. Many people ask me for help in using it, and I realized that what I had put online four years ago was not at all clear. 🤪 I recently cleaned it up and pushed some significant updates.
  3. As data becomes increasingly accessible, it would be great for more people to be able to use this algorithm and then apply it in different contexts and applications.

The algorithm, along with a test dataset, is available on GitHub under the GPL license.

What is the principle in a few words?

REACTIV combines two key ideas:

1. Using the HSV color space

2. Leveraging the statistical properties of the coefficient of variation of speckle.

A color is encoded in three dimensions, usually in the RGB color space. When we have three images taken at three different dates, each color channel can be used to encode one of the dates. But how do we encode more than three dates in color? We think in the HSV color space instead. In this space,

  • the Hue component: models the time dimension
  • the Saturation component: models the relative intensity of change, thanks to the coefficient of variation
  • the Value component: models the absolute radar intensity
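
To make this mapping concrete, here is a minimal sketch (purely illustrative, not part of the algorithm itself) using matplotlib's hsv_to_rgb: three pixels whose strongest change occurs at the beginning, the middle, and the end of the observation period.

import numpy as np
from matplotlib.colors import hsv_to_rgb

# Three illustrative pixels: (Hue = temporal fraction, Saturation = change, Value = amplitude)
hsv = np.array([[[0.0, 1.0, 1.0],    # strong change at the first date -> red
                 [0.5, 1.0, 1.0],    # strong change mid-period        -> cyan
                 [0.9, 0.1, 1.0]]])  # almost no change, late date     -> nearly white
print(hsv_to_rgb(hsv))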

The theoretical background on the statistics and their application to detection is available in [1]. This article matters especially because it explains the theoretical concepts that enable automatic parameterization of the coefficient of variation, the key parameter of the algorithm, based on theoretical speckle statistics. So if the algorithm is useful in your research, I would appreciate it if you cite it:

[1] Colin Koeniguer E, Nicolas J-M. Change Detection Based on the Coefficient of Variation in SAR Time-Series of Urban Areas. Remote Sensing. 2020; 12(13):2089.

How do I code this in Python?

From a time-series stack of images, we need to compute the coefficient of variation, which can be derived from the first and second-order moments calculated along the temporal axis at each pixel, together with the maximum amplitude along this axis and the temporal index at which that maximum is reached.
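
Writing $A_t$ for the amplitude of a pixel at date $t$, this is exactly the quantity computed in the code below:

$$M_1 = \frac{1}{N_t}\sum_{t=1}^{N_t} A_t, \qquad M_2 = \frac{1}{N_t}\sum_{t=1}^{N_t} A_t^2, \qquad \gamma = \frac{\sqrt{M_2 - M_1^2}}{M_1}$$

In practice, from a coding perspective, this can be conceived in two ways: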

  • The first method assumes that I have a stack of images (registered with each other), stored as a single array. I then directly calculate the means, the maxima, and the argument of the maximum along this stack.
from scipy.special import gamma
import numpy as np

def Stack2reactiv(Stack, timeaxis=2, L=None):
    """
    Compute the REACTIV representation from a stack of images.

    Parameters
    ----------
    Stack : float ndarray
        Input stack of images.
    timeaxis : int, optional
        Dimension of the temporal axis. Default is timeaxis=2.
    L : float, optional
        Equivalent Number of Looks. If data are SLC, then L=1. If not specified,
        it will be automatically estimated from the data.

    Returns
    -------
    CV : float ndarray, between 0 and 1
        Coefficient of variation along the temporal axis.
        Scaled between 0 (theoretical mean value) and 1 (theoretical mean value + theoretical standard deviation).
    K : float ndarray, between 0 and 1
        Temporal fraction at which the maximal amplitude is reached during the observation period.
    Amax : float ndarray, not scaled
        Image of the maximal amplitude values reached during the observation period.
    """
    # Temporal dimension
    Nt = np.shape(Stack)[timeaxis]

    # Compute mean and mean of squares
    M1 = np.mean(Stack, axis=timeaxis)
    M2 = np.mean(Stack**2, axis=timeaxis)

    # Find the maximum amplitude and the corresponding fraction of time
    Amax = np.amax(Stack, axis=timeaxis)
    Kmax = np.argmax(Stack, axis=timeaxis)
    K = Kmax / Nt

    # Compute the coefficient of variation (np.maximum guards against
    # tiny negative variances caused by floating-point round-off)
    with np.errstate(divide='ignore', invalid='ignore'):
        R = np.sqrt(np.maximum(M2 - M1**2, 0)) / M1
    R[M1 <= 0] = 0  # remove possible NaN output when the signal is zero

    # Theoretical estimation of the mean and std of the coefficient of variation
    # for the given Equivalent Number of Looks
    if L is None:
        gam = R.mean()
        L = ((0.991936 + 0.067646 * gam - 0.098888 * gam**2 - 0.048320 * gam**3) /
             (0.001224 - 0.034323 * gam + 4.305577 * gam**2 - 1.163498 * gam**3))
        print('Equivalent Number of Looks has been estimated to', L)

    Rmean = np.sqrt((L * gamma(L)**2 / (gamma(L + 0.5)**2)) - 1)  # theoretical mean value
    num = (L * gamma(L)**4 * (4 * (L**2) * gamma(L)**2 - 4 * L * gamma(L + 1/2)**2 - gamma(L + 1/2)**2))
    den = (gamma(L + 1/2)**4 * (L * gamma(L)**2 - gamma(L + 1/2)**2))
    Rstd = 1 / 4 * num / den / np.sqrt(Nt)  # theoretical standard deviation value
    Rmax = Rmean + Rstd

    # Rescale the coefficient of variation between 0 (mean value) and 1 (max value)
    CV = np.clip((R - Rmean) / (Rmax - Rmean), 0, 1)

    return CV, K, Amax
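
As a quick sanity check, here is how this function might be called on a small synthetic stack (random speckle-like amplitudes; purely illustrative):

import numpy as np

# A random 'stack' of 20 images of size 100x100, temporal axis last (timeaxis=2).
# Rayleigh-distributed amplitudes mimic single-look (L=1) speckle.
rng = np.random.default_rng(0)
Stack = rng.rayleigh(scale=1.0, size=(100, 100, 20))

CV, K, Amax = Stack2reactiv(Stack, timeaxis=2)  # L is estimated from the data
print(CV.shape, CV.min(), CV.max())             # CV is scaled to [0, 1]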

The drawback of this method is that loading this time-series stack can be very memory-intensive. For instance, if I have a stack of 100 images, each 10000x10000 pixels, encoded in uint16, it represents a variable of 20 GB, on which a simple calculation can quickly become time-consuming.

  • The alternative is, in the case of a very large stack, to not load the entire stack into memory but to calculate the required variables iteratively by opening each image file independently. This results in the List2reactiv version.
from skimage import io
from scipy.special import gamma
import numpy as np

def List2reactiv(List, L=None):
    """
    Compute the REACTIV representation from a list of image files.

    Parameters
    ----------
    List : list of image file paths
        Example: if all your images are stored as .tif files in Directory, call
        List = glob.glob('/home/Directory/*.tif')
        It is preferable that the .tif files contain amplitudes (not intensities).
    L : float, optional
        Equivalent Number of Looks. If data are SLC, then L=1. If not specified,
        it will be automatically estimated from the data.

    Returns
    -------
    CV : float ndarray, between 0 and 1
        Coefficient of variation along the temporal axis.
        Scaled between 0 (theoretical mean value) and 1 (theoretical mean value + theoretical standard deviation).
    K : float ndarray, between 0 and 1
        Temporal fraction at which the maximal amplitude is reached during the observation period.
    Amax : float ndarray, not scaled
        Image of the maximal amplitude values reached during the observation period.
    """
    Nt = len(List)  # temporal dimension

    im = io.imread(List[0])  # read the first image to get the spatial dimensions
    nx, ny = np.shape(im)

    # Accumulators for the moments, the maximum, and the date of the maximum
    M1 = np.zeros([nx, ny])
    M2 = np.zeros([nx, ny])
    Kmax = np.zeros([nx, ny])
    Amax = np.zeros([nx, ny])

    for count, file in enumerate(List):
        # Cast to float so that im**2 cannot overflow integer-encoded images
        im = io.imread(file).astype(np.float64)
        M1 = M1 + im
        M2 = M2 + im**2
        Kmax = np.where(im > Amax, count, Kmax)  # update the date of the maximum
        Amax = np.maximum(Amax, im)

    M1 = M1 / Nt
    M2 = M2 / Nt
    K = Kmax / Nt

    # Compute the coefficient of variation (np.maximum guards against
    # tiny negative variances caused by floating-point round-off)
    with np.errstate(divide='ignore', invalid='ignore'):
        R = np.sqrt(np.maximum(M2 - M1**2, 0)) / M1
    R[M1 <= 0] = 0  # remove possible NaN output when the signal is zero

    # Theoretical estimation of the mean and std of the coefficient of variation
    # for the given Equivalent Number of Looks
    if L is None:
        gam = R.mean()
        L = ((0.991936 + 0.067646 * gam - 0.098888 * gam**2 - 0.048320 * gam**3) /
             (0.001224 - 0.034323 * gam + 4.305577 * gam**2 - 1.163498 * gam**3))
        print('Equivalent Number of Looks has been estimated to', L)

    Rmean = np.sqrt((L * gamma(L)**2 / (gamma(L + 0.5)**2)) - 1)  # theoretical mean value
    num = (L * gamma(L)**4 * (4 * (L**2) * gamma(L)**2 - 4 * L * gamma(L + 1/2)**2 - gamma(L + 1/2)**2))
    den = (gamma(L + 1/2)**4 * (L * gamma(L)**2 - gamma(L + 1/2)**2))
    Rstd = 1 / 4 * num / den / np.sqrt(Nt)  # theoretical standard deviation value
    Rmax = Rmean + Rstd

    # Rescale the coefficient of variation between 0 (mean value) and 1 (max value)
    CV = np.clip((R - Rmean) / (Rmax - Rmean), 0, 1)

    return CV, K, Amax
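
For example, assuming a hypothetical directory of registered amplitude images saved as .tif files:

import glob

# Sort the file list so that K reflects the actual acquisition order
List = sorted(glob.glob('/home/Directory/*.tif'))
CV, K, Amax = List2reactiv(List)

Note that the list should be sorted chronologically: the hue channel encodes the position of each image in the list, not its acquisition date.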

Once we have calculated the three key parameters of the algorithm, we can then transform them into an RGB matrix ready to be displayed.

import numpy as np
from matplotlib.colors import hsv_to_rgb

def reactiv_image(CV, K, A, thresh=None):
    """
    Create a REACTIV representation as an RGB image.

    Parameters
    ----------
    CV : float ndarray, between 0 and 1
        Coefficient of variation along the temporal axis. Scaled between 0 (theoretical mean value)
        and 1 (theoretical mean value + theoretical standard deviation).
    K : float ndarray, between 0 and 1
        Temporal fraction at which the maximal amplitude is reached during the observation period.
    A : float ndarray, not scaled
        Image of the maximal amplitude values reached during the observation period.
    thresh : float, optional
        Threshold for amplitude values. If not specified, it is set to the mean
        plus the standard deviation of the amplitude.

    Returns
    -------
    rgb : ndarray
        RGB representation of the REACTIV image.
    """
    # Ensure amplitude values are positive
    A = np.abs(A)
    # Set the threshold to mean plus standard deviation if not specified
    if thresh is None:
        thresh = np.mean(A) + np.std(A)
    # Normalize amplitude values between 0 and 1
    A = np.clip(A / thresh, 0, 1)
    # Stack K (hue), CV (saturation), and normalized A (value) into an HSV image
    result = np.dstack((K, CV, A))
    # Convert HSV to RGB
    rgb = hsv_to_rgb(result)

    return rgb
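
Putting the pieces together, an end-to-end sketch (using the synthetic Stack from above, or List2reactiv for large datasets) might look like this:

import matplotlib.pyplot as plt

CV, K, Amax = Stack2reactiv(Stack)   # or: CV, K, Amax = List2reactiv(List)
rgb = reactiv_image(CV, K, Amax)     # HSV -> RGB composite
plt.imshow(rgb)
plt.axis('off')
plt.show()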

To test these codes on a small stack of prepared Sentinel-1 images, I have created a Python notebook here (it is read-only; you can copy it to your own environment to run it).

The result shown here was produced by selecting images in VH polarization. If you rerun the code with VV polarization, you will see that you get a different result.
That will be the subject of another post… 😁

A few lingering questions:

I’m not good at Python; does it exist in MATLAB? Yes, I initially developed it in MATLAB, and I can send you a version, but honestly, I now prefer to encourage you to use the Python version, because that is the one I will continue to develop.

How many images does it need? N=5 is an acceptable minimum; N=20 is a good compromise. Beyond that, some short-term changes can be drowned out. The paper gives more detailed answers.

Can it be used on Google Earth Engine? Yes, it works really well there; the code is given in the GitHub repository. It also works on EO Browser here, but it can sometimes take a bit more time, and in areas with relief, it is better to use ‘terrain rectified’ data.

Does it apply to everything? Mostly, you need to test. In natural areas, the effects can sometimes be less noticeable.

I don’t like the colorbar because it’s difficult to distinguish the colors at the beginning and the end of the period. Can I change it? Yes, it is possible to restrict the hue to a subset of the palette, for example by doing K=K/1.5, or even by coding a transfer function to a new palette, as sketched below.
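
As a minimal sketch of both options (the transfer function here is a hypothetical example, not part of the algorithm):

# Option 1: compress the hue range so the first and last dates stay distinguishable
rgb = reactiv_image(CV, K / 1.5, Amax)   # hues now span two thirds of the circle

# Option 2: a hypothetical transfer function mapping time to a hue sub-range
def hue_transfer(K, start=0.0, end=0.75):
    # Linearly map the temporal fraction K in [0, 1] to hues in [start, end]
    return start + K * (end - start)

rgb = reactiv_image(CV, hue_transfer(K), Amax)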

Why did you choose the maximum of intensities and not the mean? Because otherwise, targets that appear only once (such as boats) would be drowned in the mass and would not be visible. But if you want longer-term changes, you need to change the representation principle and adapt it to your needs. The same goes for detecting oil slicks, where the intensity of interest is minimal rather than maximal.
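
As a hedged sketch of such an adaptation (not the published algorithm): for dark targets, one can track the minimum amplitude and its date instead of the maximum, assuming the temporal axis is last.

# Illustrative variant for dark targets (e.g., oil slicks): hue then encodes
# the date of the darkest state rather than the brightest one.
Amin = np.amin(Stack, axis=2)
Kmin = np.argmin(Stack, axis=2) / Stack.shape[2]
rgb_dark = reactiv_image(CV, Kmin, Amin)  # the value channel may also need inverting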

What about polarizations? Patience, I’m working on it! 😋

In summary, if you’re interested, contact me!
