
How to Easily Compensate for Small Coregistration Errors in Sentinel-2 Time Series

5 min read · May 15, 2025

Sentinel-2 imagery has become a cornerstone of many remote sensing applications — from vegetation monitoring to urban change detection. The data is pre-processed and delivered as Level-2A orthorectified products, projected onto a standard UTM grid. In theory, this means that multi-temporal stacks should align pixel-to-pixel, right?

Well… almost.

Why Coregistration Still Matters (Even When You Think It’s Done)

While Sentinel-2 Level-2A products are typically well-aligned, small residual geometric errors sometimes remain — especially in older data processed by early versions of the ground segment. These misalignments can reach up to a pixel or more, particularly when observing sharp edges like roads, field boundaries, or buildings.

For applications requiring high temporal consistency — e.g., computing per-pixel NDVI trajectories, detecting subtle land cover changes, or training machine learning models — such small shifts can introduce significant noise.

So how can we fix it?

Keep It Simple: Phase Correlation to the Rescue

There’s no need to reach for complex deep learning models or custom optical flow pipelines just yet. In many cases, the residual misregistration is well-approximated by a global 2D translation. And there’s a simple, elegant method for estimating such shifts: phase correlation.

The idea is based on a fundamental property of the Fourier transform: a shift in the spatial domain corresponds to a phase ramp in the frequency domain. If two images differ only by a translation, we can estimate the offset by comparing their spectra. You can then apply the estimated shift with subpixel accuracy using a Fourier-based phase ramp.

In more detail: the goal of image registration in this context is to estimate a global translation between two images I₁(x) and I₂(x), where x = (x, y) denotes spatial pixel coordinates. When the images are related by a pure translation δ = (δx, δy), we have:

I₂(x) = I₁(x − δ)

and formally, by the Fourier shift theorem:

F{I₂}(ω) = e^(−iω·δ) F{I₁}(ω)

By computing the normalized cross-power spectrum:

R(ω) = F{I₁}(ω)* F{I₂}(ω) / |F{I₁}(ω)* F{I₂}(ω)|

and taking its inverse Fourier transform:

r(x) = F⁻¹{R}(x)

you obtain a Dirac-like peak at the location of the best matching shift δ. The subpixel shift is then estimated by locating this peak with subpixel accuracy (e.g., through interpolation or an upsampled FFT).
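To make the derivation concrete, here is a minimal NumPy sketch of the estimation step (my own illustration, not the notebook's code): it computes the normalized cross-power spectrum and reads off the peak at integer-pixel precision, without the subpixel refinement discussed above:

```python
import numpy as np

def phase_correlation_shift(I1, I2):
    """Estimate the integer translation delta such that I2(x) = I1(x - delta)."""
    F1 = np.fft.fft2(I1)
    F2 = np.fft.fft2(I2)
    # Normalized cross-power spectrum: R = conj(F1) * F2 / |conj(F1) * F2|
    cross = np.conj(F1) * F2
    R = cross / np.maximum(np.abs(cross), 1e-12)
    # Its inverse transform is a Dirac-like peak located at the shift
    r = np.abs(np.fft.ifft2(R))
    peak_y, peak_x = np.unravel_index(np.argmax(r), r.shape)
    # Peak positions beyond the half-size wrap around to negative shifts
    ny, nx = r.shape
    dy = peak_y - ny if peak_y > ny // 2 else peak_y
    dx = peak_x - nx if peak_x > nx // 2 else peak_x
    return int(dy), int(dx)

# Synthetic check: a circularly shifted copy of a random image
rng = np.random.default_rng(0)
I1 = rng.random((64, 64))
I2 = np.roll(I1, shift=(3, -5), axis=(0, 1))  # I2(x) = I1(x - delta)
print(phase_correlation_shift(I1, I2))  # (3, -5)
```

For real imagery you would use the subpixel-capable scikit-image implementation shown next; this sketch only illustrates the mechanics of the math.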

In Python, this is readily available in scikit-image:

from skimage.registration import phase_cross_correlation

# shift is returned as (delta_y, delta_x); upsample_factor controls subpixel precision
shift, error, _ = phase_cross_correlation(I1, I2, upsample_factor=20)

Once the translation vector δ=(δx,δy) is known, we can apply it directly in the frequency domain. Given the Fourier transform F{I}(ω), the shifted image is:

F{I_shifted}(ω) = e^(−iω·δ) F{I}(ω)

This phase ramp is applied using:

import numpy as np
from numpy.fft import fft2, ifft2

def apply_translation(I, shift):
    ny, nx = I.shape
    # Frequency grids in cycles per pixel, matching the image axes
    Y, X = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing='ij')
    delta_y, delta_x = shift
    # Phase ramp implementing the translation in the frequency domain
    phase_ramp = np.exp(-2j * np.pi * (delta_x * X + delta_y * Y))
    I_fft = fft2(I)
    shifted = np.abs(ifft2(I_fft * phase_ramp))

    # Clip to [0, 255] for visualization and return
    return np.clip(shifted, 0, 255)
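A quick sanity check (my own addition): for integer shifts, the Fourier phase ramp is exactly equivalent to a circular shift of the image, which makes the approach easy to unit-test. The clipping step is left out here so the comparison is exact:

```python
import numpy as np

def fourier_translate(I, shift):
    """Shift image I by (delta_y, delta_x) via a Fourier-domain phase ramp."""
    ny, nx = I.shape
    Y, X = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing='ij')
    delta_y, delta_x = shift
    phase_ramp = np.exp(-2j * np.pi * (delta_x * X + delta_y * Y))
    return np.abs(np.fft.ifft2(np.fft.fft2(I) * phase_ramp))

rng = np.random.default_rng(1)
I = rng.random((32, 32)) * 255
shifted = fourier_translate(I, (4, -2))
# For an integer shift, the result matches np.roll of the input
print(np.allclose(shifted, np.roll(I, shift=(4, -2), axis=(0, 1))))  # True
```

Subpixel shifts, of course, have no np.roll equivalent; there the phase ramp interpolates in a band-limited sense, which is precisely what we want.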

Why Not Use scipy.ndimage.fourier_shift?

While SciPy provides a ready-made function:

from scipy.ndimage import fourier_shift

I have noticed that this built-in function can sometimes introduce intensity artifacts, especially when the input image has clipped or quantized values (e.g., 8-bit Sentinel-2 reflectance). Thus, I chose to implement a custom apply_translation() function using direct FFT operations.
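For completeness, here is how the SciPy route would look (a sketch, assuming a float-valued input image). Note that fourier_shift operates on the already-transformed array; on well-behaved floating-point input it agrees with the custom phase ramp, and the artifacts mentioned above concern mainly clipped or quantized inputs:

```python
import numpy as np
from scipy.ndimage import fourier_shift

rng = np.random.default_rng(2)
I = rng.random((32, 32)) * 255
# fourier_shift multiplies the spectrum by the phase ramp;
# it takes the Fourier-transformed image and a (delta_y, delta_x) shift
shifted = np.fft.ifft2(fourier_shift(np.fft.fft2(I), shift=(4, -2))).real
# For an integer shift this again matches a circular shift
print(np.allclose(shifted, np.roll(I, shift=(4, -2), axis=(0, 1))))  # True
```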

A Hands-On Example: Sentinel-2 Cube from GEE

I’ve put together a Colab notebook Sentinel_2_Cube.ipynb (https://github.com/elisecolin/Tutorials_Notebooks.git) that walks through the whole process:

- Access and filter a Sentinel-2 image collection via Google Earth Engine
- Extract a data cube (same AOI, same band, multiple dates)
- Detect residual translations using phase correlation
- Apply subpixel corrections to align a pair of images

It’s quick to run, easy to understand, and can be plugged into any time series processing pipeline.
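Putting the pieces together, the detect-and-correct step of such a pipeline can be sketched as follows (a self-contained toy version in pure NumPy with integer-pixel estimation; the notebook itself uses skimage's subpixel phase_cross_correlation). Every date in a (time, y, x) cube is registered to the first date:

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer-pixel phase correlation: img(x) = ref(x - delta)."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    r = np.abs(np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)))
    peak = np.unravel_index(np.argmax(r), r.shape)
    return tuple(int(p - n) if p > n // 2 else int(p) for p, n in zip(peak, r.shape))

def fourier_translate(img, shift):
    """Apply a (possibly subpixel) translation with a Fourier phase ramp."""
    ny, nx = img.shape
    fy, fx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing='ij')
    ramp = np.exp(-2j * np.pi * (shift[0] * fy + shift[1] * fx))
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * ramp))

def align_cube(cube):
    """Align every date in a (t, y, x) cube to the first date."""
    ref = cube[0]
    aligned = [ref]
    for img in cube[1:]:
        dy, dx = estimate_shift(ref, img)
        # Applying the negative of the detected shift moves img back onto ref
        aligned.append(fourier_translate(img, (-dy, -dx)))
    return np.stack(aligned)

# Synthetic cube: the same scene, circularly shifted between "dates"
rng = np.random.default_rng(3)
base = rng.random((64, 64))
cube = np.stack([base,
                 np.roll(base, (2, -3), axis=(0, 1)),
                 np.roll(base, (-1, 4), axis=(0, 1))])
aligned = align_cube(cube)
print(np.allclose(aligned, base))  # True: all dates realigned to the reference
```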

Here’s a preview of the kind of residual misalignments you may encounter when working with Sentinel-2 time series data. The example below shows a small crop (100×100 pixels) over Toulouse, France, comparing two acquisitions from different dates. The left panel displays a red/cyan composite before correction, where red and cyan fringes along buildings and roads (highlighted by yellow ellipses) reveal clear geometric misalignments. These color fringes result from pixels not being perfectly aligned between dates. The mean displacement was estimated at [−0.77, 0.35] pixels. In contrast, the right panel shows the same area after applying this subpixel translation correction using phase correlation. The fringes are strongly reduced.

[Figure: red/cyan composite of two Sentinel-2 dates over Toulouse, before (left) and after (right) subpixel correction]

Bonus Experiment: What About Sentinel-1?

Curious to test the same method on Sentinel-1? I tried building a cube of ground-range detected (GRD) images in ascending orbit with VV polarization. No significant misregistration was detected.

That’s good news! It reflects the excellent geometric stability of Sentinel-1, and it also means we don’t have to worry about resampling complex SAR images (which aren’t available via GEE anyway).

Conclusion

Before diving into complex registration techniques, check if a simple global shift is all you need. For Sentinel-2 Level-2A time series, especially older data, phase correlation is a robust and efficient way to bring your pixels back in line. It’s fast, easy to implement, and often good enough for most practical use cases.

If you’re curious to try it yourself, feel free to check out Sentinel_2_Cube.ipynb (https://github.com/elisecolin/Tutorials_Notebooks.git). And let me know how it goes!


Written by Elise Colin

Researcher with broad experience in signal and image processing, focusing on big data and AI aspects of Earth observation and medical images.
