Layover, shadows, and echo superposition in radar images…from a 3D terrain description

Elise Colin
Jan 21, 2024


How to Bring a 3D Perspective to the Layover Phenomenon and Echo Superposition

How does terrain project in a radar image?

When teaching, I realize how complex the notions of radar geometry are to understand. Most of the time, I create my diagrams in two dimensions. Often, we reason in the incidence plane. However, the images are in three dimensions. And it’s not uncommon for a student to ask me: what is the axis of the radar trajectory in your drawing? Well, it is in the direction perpendicular to the board, to my sheet, to this screen… It all started with an introductory practical session on the interpretation of a radar image. We began with this illustration, which I am quite fond of:

Distortions induced by side-looking SAR. Ground points a, b, c are ‘seen’ by radar as points a’, b’, c’ in the slant range. Credit: Franz J. Meyer

Where are the phenomena of layover, foreshortening, and shadow? If we compare a radar image with an optical image, can we easily visualize these effects of terrain transformation?

We have already addressed these geometry issues in the case of buildings here. But I wanted to create a simple Python code to easily visualize what’s happening on a basic terrain with relief.

I will assume that I have an optical image, the altitude of the terrain, and the orientation of the radar axes. How do I view this optical image in the reference frame of my radar image? I realized that it was not so simple! So I am sharing, step by step, a method to start implementing the process in a simplistic way.

First step: I describe my color image and the underlying terrain

Here I assume that I have:

  • an image of the altitude Z, in a matrix
  • a texture image associated with it, either the optical image or a radiometric texture image.
Texture image on the left, and the considered elevation image on the right.

The first step is to transform these two images into a triangulated surface, using a classical Delaunay triangulation. First, it’s necessary to define the X and Y matrices of the associated longitudes and latitudes, or the abscissae and ordinates of the Z terrain matrix.

import numpy as np

nx, ny = np.shape(Z)
x = np.linspace(-nx/2, nx/2, nx)
y = np.linspace(-ny/2, ny/2, ny)
x = x[::-1]  # Reverse the x axis
Y, X = np.meshgrid(x, y, indexing='ij')

Once all these data are prepared, they are transformed into a ‘mesh’ object, using, for example, the Python library open3d. The mesh is described using vertices, faces connecting these vertices, and possibly a color or texture for these faces. For this texture, one can use the optical image of the area described by the terrain, or another radiometric image (this can be simulated based on the orientations of the facets relative to the radar line of sight, for example).

All data are first vectorized:

X = X.flatten()
Y = Y.flatten()
Z = Z.flatten()

# C is the texture image (grayscale or RGB); split it into R, G, B channels
if len(C.shape) == 2:
    CR = C
    CV = C
    CB = C
elif len(C.shape) == 3:
    CR = C[:, :, 0]
    CV = C[:, :, 1]  # 'V' for 'vert' (green)
    CB = C[:, :, 2]
else:
    print('Color Matrix invalid dimension')

CR = CR.flatten()
CV = CV.flatten()
CB = CB.flatten()

and then triangulated:

from scipy.spatial import Delaunay

# Creating points for triangulation
points = np.vstack((X, Y)).T
# Applying Delaunay triangulation
tri = Delaunay(points)
# Creating vertices and triangles for Open3D
vertices = np.vstack((X, Y, Z)).T
triangles = tri.simplices

And then transformed into a mesh object:

import open3d as o3d
mesh = o3d.geometry.TriangleMesh() # Creation TriangleMesh Object
mesh.vertices = o3d.utility.Vector3dVector(vertices)
mesh.triangles = o3d.utility.Vector3iVector(triangles)

# Add color - optional (Open3D expects RGB values as floats in [0, 1])
C_expanded = np.stack((CR, CV, CB), axis=-1)
C_expanded = C_expanded.astype(np.float64)
mesh.vertex_colors = o3d.utility.Vector3dVector(C_expanded)
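
At this stage, it can help to display the textured mesh as a quick sanity check. This step is not in the original article, but it uses the standard Open3D viewer:

# Optional: visualize the textured mesh in the Open3D viewer
o3d.visualization.draw_geometries([mesh])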

Help. I’m losing my bearings. How are my bounding boxes oriented?

When we generally want to understand or simulate a radar image, we often start from what we know best: its equivalent in the visible domain, with georeferenced data, oriented along North-South and East-West axes.

However, a radar image is synthesized in another frame of reference: the azimuth/distance frame. The axes of our image are therefore never oriented along the conventional North-South, East-West axes.

The satellite tracks form an angle with the north, either towards the West (Ascending passes) or towards the East (Descending passes). In our example below, the pass is Ascending. In this case, if we take the texture image initially considered in a geographical frame of reference and project it onto the radar plane, it will have the blue shape shown in our image.
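
To get an intuition for this rotated footprint, here is a toy sketch (not from the original article) that simply rotates the four corners of the image footprint by a hypothetical track angle phi, to express them in azimuth/ground-range axes:

phi = np.deg2rad(10)  # hypothetical angle between the satellite track and north
# Corners of the image footprint in the geographic frame (East, North)
corners = np.array([[0, 0], [100, 0], [100, 80], [0, 80]], dtype=float)
# 2D rotation expressing the same corners in (azimuth, ground range) axes
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
corners_radar = corners @ R.T
print(corners_radar)  # outline of the footprint in the radar frame (the blue shape)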

From Geographic Space to Virtual Radar Space

I perform the transformation that allows me to go from the 3D geographic space (X,Y,Z) to the virtual radar space (k,H,V). What I call the virtual space is the reference frame described by:

  • the wave vector
  • the azimuth vector, which corresponds both to the axis of the horizontal polarization (H) and to the trajectory
  • the vector (V) perpendicular to the previous two, which corresponds to the vertical polarization.

The planes formed by two of these vectors correspond to particular planes:

  • The plane formed by the vectors (k,V) is the incidence plane. This is the plane in which most of the drawings explaining the geometric phenomena of the wave (layover, shadows) are drawn!
  • The plane formed by the vectors (H,V) is the wave vector plane, in which the polarization is described.
  • The plane formed by the vectors (k,H) is the plane of SAR image formation.

If we apply a transformation that allows the vectors (k,H,V) to be transformed into (X,Y,Z), then after applying this rotation to our 3D mesh,

  • the plane (X,Z) is the incidence plane
  • the plane (X,Y) is the image formation plane.
"""
This code calculates the components of a wave vector (kw)
and the polarization vectors (polarh and polarv) based on the given incidence and azimuth angles in degrees,
with their conversion to radians for use in trigonometric functions.
"""
theta = 50 # Incidence angle in degrees
phi = 10 # Azimuth angle in degrees

# Conversion from degrees to radians for trigonometric functions
theta_rad = np.deg2rad(theta)
phi_rad = np.deg2rad(phi)

kw = np.array([np.sin(theta_rad) * np.cos(phi_rad),
np.sin(theta_rad) * np.sin(phi_rad),
-np.cos(theta_rad)]) # Wave vector
polarh = np.array([-np.sin(phi_rad), np.cos(phi_rad), 0]) # Vector y collinear to the path: polar h
polarv = np.cross(kw, polarh) # Vector normal to the radar plane containing y and kw: polar v
Q = np.column_stack((kw, polarh,polarv))
mesh2=mesh.rotate(Q.T, center=(0, 0, 0)) # Rotation applied to mesh

To understand how my 3D surface will be synthesized in my radar image, we need to consider a form of ray casting in two dimensions:

  • Shadows are generated along the X-axis (formerly the k-vector): only the first intersection of a ray along X with the surface will contribute to the signal.
  • Then, all illuminated points on the surface will sum along the Z-axis: this is the phenomenon of echo superposition.

For each azimuth position Y, it is therefore necessary to consider these two types of intersections along the X and Z axes, and to consider their potential overlap.

Describing my Surface in 3D Space

We are working with sampled data, not continuous data. To calculate these intersections, one might think of working on a 3D voxelization of the surface. This was my initial idea. However, in practice, this voxelization poses a lot of aliasing issues. Ultimately, I preferred to represent my surface through a distance function in space, computed between the points of a voxel grid and the surface itself. Once this distance function is calculated, we consider, one by one, all the cuts along a fixed azimuth (fixed Y).

# Create a scene and add the triangle mesh
scene = o3d.t.geometry.RaycastingScene()
mesh2 = o3d.t.geometry.TriangleMesh.from_legacy(mesh2)
_ = scene.add_triangles(mesh2)

# Create New grid to compute the distance to the surface
dx=1
dy=1
dz=1
min_bound = mesh2.vertex.positions.min(0).numpy()
max_bound = mesh2.vertex.positions.max(0).numpy()
x_range=np.arange(min_bound[0], max_bound[0], dx)
y_range=np.arange(min_bound[1], max_bound[1], dy)
z_range=np.arange(min_bound[2], max_bound[2], dz)
x_grid, y_grid, z_grid = np.meshgrid(x_range, y_range, z_range, indexing='ij')
query_points = np.stack((x_grid, y_grid, z_grid), axis=-1).astype(np.float32)
nxr,nyr,nzr,_=np.shape(query_points)
# Compute distance
signed_distance = scene.compute_signed_distance(query_points)
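
The following steps work on a distance volume D, indexed as (range, azimuth, height). The article does not show this intermediate step explicitly; here is a minimal bridge, assuming D is the absolute value of the signed distance converted to a NumPy array:

# Assumption: use the unsigned distance to the surface as the volume D,
# with shape (len(x_range), len(y_range), len(z_range)) inherited from the grid
D = np.abs(signed_distance.numpy())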

Reasoning on Shadows and Layover

We have seen that we must reason, in this new reference frame, for each column, and for each line.

  • for the calculation of shadows: for each column, we need to find the first zero minimum encountered (which corresponds to the first intersection between the ray and the surface);
  • for the calculation of layover: for each line, identify all the local minima of the distance around the zero value.

Once all these minima are identified, we keep the intersection between the first minimum along the X-axis (the following ones being in the shadow of the surface) and all the local minima along the Z-axis.

Calculation of the First Encountered Zero Minimum for Each Column

For each column, we will find the first local minimum using the following functions:

def first_group_positive_column(column):
    # Find the first index of the non-zero group
    begin_group = 0
    while begin_group < len(column) and column[begin_group] == 0:
        begin_group += 1
    # If no non-zero group is found, return a zero vector
    if begin_group == len(column):
        return np.zeros_like(column)
    # Search for the end of the non-zero group
    end_group = begin_group + 1
    while end_group < len(column) and column[end_group] > 0:
        end_group += 1
    # Create a result vector with the first positive group
    result = np.zeros_like(column)
    result[begin_group:end_group] = 1
    return result


def first_group_positive(A):
    # Apply the previous function iteratively to all columns of a matrix A
    nx, ny = np.shape(A)
    first = np.zeros((nx, ny))
    for j in range(ny):
        column = A[:, j]
        first[:, j] = first_group_positive_column(column)
    return first

Calculation of the Various Local Minima Encountered for Each Line

For each row, we will consider the number of local minima near zero, which indicates the number of echoes that will overlap in the final pixel of the image (without considering the shadows):

def minima_local_line(line, thresh):
    # Minimum index gap between two local minima
    step_minimal = 2  # Adjust the value according to your needs
    # Indicator to track the last local minimum
    last_minimum_local = -float('inf')  # Initialized to negative infinity
    indices_minima_local = []
    # Iterate over the elements of the line
    for i in range(1, len(line) - 1):
        pixel = line[i]
        left_neighbour = line[i - 1]
        right_neighbour = line[i + 1]
        # Check if the pixel is a local minimum below the threshold
        if pixel <= left_neighbour and pixel <= right_neighbour and pixel < thresh:
            # Check if there is a sufficiently large gap compared to the last local minimum
            if i - last_minimum_local >= step_minimal:
                last_minimum_local = i
                indices_minima_local.append(i)
    return indices_minima_local


def minima_local(A, thresh):
    nx, ny = np.shape(A)
    mini = np.zeros((nx, ny))
    for line_index in range(nx):
        # A is a 2D distance map; examine the line with index 'line_index'
        line = A[line_index, :]  # Select the line
        indices_minima_local = minima_local_line(line, thresh)
        # Mark each detected local minimum in the output map
        for column_index in indices_minima_local:
            mini[line_index, column_index] = 1
    return mini

Here, the threshold “thresh” determines below what value the discretized distance between our grid of point coordinates and the surface is small enough to be considered an intersection. Since we have discretized on grids spaced by dx, a sensible threshold value is dx/2.

Product and Generation of Echo Summations

Summing the product of these two maps gives the number of contributors projected into each pixel of the final image: this is the phenomenon of echo superposition.

To obtain this, we first multiply the two previous maps together, then sum the result along the ‘Z’ axis to get the ‘EchoOverlap’ product:

thresh = dx/2
nr, na, nh = np.shape(D)
voxelarray = np.zeros((nr, na, nh))
for azi in range(na):
    A = D[:, azi, :]
    first = first_group_positive(A < thresh)
    mini = minima_local(A, thresh)
    voxelarray[:, azi, :] = mini * first

EchoOverlap = np.sum(voxelarray, axis=2)
from scipy.ndimage import median_filter
EchoOverlap = median_filter(EchoOverlap, size=3)

If we therefore consider the sum of these crossed minima in each azimuth slice, then we find a map that we can call an Echo Overlap Map. The “EchoOverlap” product contains zeros on all unilluminated shadow areas, and otherwise an integer that corresponds to the number of contributors overlapping in the same pixel of the final image.
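
A quick way to look at this map, assuming matplotlib is available (this display is not part of the original article):

import matplotlib.pyplot as plt

# 0 = shadow, n >= 1 = number of echoes overlapping in the same pixel
plt.imshow(EchoOverlap, cmap='viridis')
plt.colorbar(label='Number of overlapping echoes')
plt.title('Echo Overlap Map')
plt.show()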

Projecting My Colors onto the Radar Image Plane, and Accounting for Layovers+Shadows

In our example, on the left, we see the echo overlap image; in the middle, the reinterpolated texture map after rotation on the (X,Y) plane; and on the right, a multiplication between the texture and the number of echoes. The multiplication enhances the areas of echo overlap and primarily serves to zero out the shadow pixels.

Thus, we have a better representation of the final geometry that will be obtained in the radar image reference frame!
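
The article does not show the projection of the texture itself. Here is a minimal sketch, assuming the legacy mesh still holds the rotated geometry (Open3D rotates in place), that x_range and y_range are the grid axes defined earlier, and that we regrid the per-vertex colors onto the (X,Y) image-formation plane with scipy’s griddata:

from scipy.interpolate import griddata

# Rotated vertex positions (after mesh.rotate) and a per-vertex gray level
V = np.asarray(mesh.vertices)
gray = np.asarray(mesh.vertex_colors).mean(axis=1)

# Regrid the texture onto the (X, Y) image-formation plane
xi, yi = np.meshgrid(x_range, y_range, indexing='ij')
texture_radar = griddata(V[:, :2], gray, (xi, yi), method='linear', fill_value=0)

# Enhance layover areas and zero out the shadows with the echo-overlap map
radar_like = texture_radar * EchoOverlap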

On a more complicated example: the case of Etna.

Let’s consider the application of what has been said above to the case of Etna in Sicily. The relief images and the corresponding optical image are shown below. Note that the data represented here are not at a 1:1 scale.

The idea is to understand what a radar image of the volcano will look like.

We sample the distance function to the generated mesh on a grid spaced by 2.5 pixels, with a margin of 10:

dx=2.5
dy=2.5
dz=2.5
margin=10
min_bound = mesh2.vertex.positions.min(0).numpy()
max_bound = mesh2.vertex.positions.max(0).numpy()
x_range=np.arange(min_bound[0]-margin, max_bound[0]+margin, dx)
y_range=np.arange(min_bound[1]-margin, max_bound[1]+margin, dy)
z_range=np.arange(min_bound[2]-margin, max_bound[2]+margin, dz)
x_grid, y_grid, z_grid = np.meshgrid(x_range, y_range, z_range, indexing='ij')
query_points = np.stack((x_grid, y_grid, z_grid), axis=-1).astype(np.float32)
nxr,nyr,nzr,_=np.shape(query_points)

The resulting Echo Superposition image looks like this:

It allows us to see where the shadows and overlapping areas are located, which here are limited to two lines along the ridges.
If we consider only the way the texture is projected into the new geometry, we see something that transforms the optical texture as follows:

Left : Texture in a geographical frame. Right: Texture projected in the radar antenna frame.

We can then have fun transforming this last image into a product that incorporates the echo superposition phenomenon.
The image is converted to grayscale. A mask is applied that retains only the pixels where the echo superposition values are strictly positive. The resulting intensity is then raised to a power equal to the inverse of the number of superimposed echoes.
Below, I show what is obtained by varying, in the previous case, the angle of incidence theta to 60° (middle), and 30°(right).
At high angles of incidence, the shadow zones lengthen. At low angles of incidence, more echoes are superimposed along the ridges.

Left: Texture in the geographical frame. Middle : Texture + shadow + Layover in the radar antenna image plane, at 60° incidence angle. Right : Texture + shadow + Layover in the radar antenna image plane, at 30° incidence angle.
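
For reference, a minimal sketch of the grayscale + mask + power transform described above, assuming texture_radar is the grayscale texture in the radar plane (see the earlier sketch):

# Keep only illuminated pixels, then raise the intensity to the power 1/N,
# N being the number of superimposed echoes in the pixel
mask = EchoOverlap > 0
result = np.zeros_like(texture_radar)
result[mask] = texture_radar[mask] ** (1.0 / EchoOverlap[mask])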

This is, of course, a very simplified view of what happens in practice, and a far cry from the radar radiometries we’d get. But I find that manipulating these kinds of transformations gives us a better grasp of the radar geometries of terrain with relief, especially the effects of superimposed echoes and shadows.

This code can be adapted to your needs and may be improved. Do not hesitate to contact me to get the entire Python notebook.


Elise Colin

Researcher with broad experience in Signal and Image processing, focusing on big data and AI aspects for Earth observation images, and medical images.