How do you stop losing your way and display a SAR image in the correct orientation?

Or: a first, very approximate georeferencing of an SLC image

Elise Colin
6 min read · Sep 17, 2023

SLC radar images are acquired in the antenna reference frame, known as “slant range-azimuth.” On the ground, these axes are not aligned with the north. Moreover, these two axes are not sampled with the same physical dimensions. This simply means that sometimes the SLC image of a duck might appear like this:

It is sometimes useful to reorient, even approximately, an image with the north axis oriented vertically and the west-east axis oriented horizontally, if only to be able to “find your way around”. This way, our duck will appear like this:

This step can be coded quite easily using basic metadata such as the angle of incidence, angle of heading (trajectory angle), and pixel dimensions. However, it’s a stage where it’s sometimes easy to get confused, mainly due to five little pitfalls:

  1. How the heading angle is defined.
  2. The difference between Ascending and Descending modes.
  3. The distinction between radar that looks “left” and radar that looks “right.”
  4. The way data is stored (typically: one row corresponds to one fixed azimuth, one column corresponds to one fixed distance).
  5. How data is displayed in matrix form (whether the y-axis is ‘Reversed’ by default or not).

Definitions

Heading angle

Heading is typically based on cardinal directions, so 0° (or 360°) indicates a direction toward true north, 90° true east, 180° true south, and 270° true west.
Let’s use the example of an acquisition over Paris in “Ascending” “Right” mode: In this case, the heading angle is 349° (which is equivalent to -11° with respect to North).
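As a side note, a small helper (my own, not something provided by the product metadata) converts a compass heading into this signed convention:

def signed_heading(heading_deg):
    # Map a compass heading in [0, 360) to a signed angle in [-180, 180)
    # relative to true north: 349 -> -11.
    return (heading_deg + 180.0) % 360.0 - 180.0

print(signed_heading(349.0))   # -11.0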

Ascending and Descending modes

A satellite is said to be in a descending orbit when it moves from the North Pole to the South Pole, and in an ascending orbit when it moves from the South Pole to the North Pole.

Left and Right

The terms “left” and “right” indicate the direction of the distance axis (slant range) in relation to the satellite’s trajectory. For instance, the diagram above illustrates the axes obtained for an ascending orbit, whether it’s looking to the right or left.

The way data is stored (Row = Azimuth, Column = Range)

Let’s examine the four potential acquisition configurations for our duck, which consistently maintains the same orientation with respect to the north and south.

When considering the image in the (range, azimuth) frame of reference, we obtain the following four “displays,” where each column represents an iso-range line and each row an iso-azimuth line:
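To build intuition for these four cases, you can emulate them with simple array flips. This is a rough, illustrative sketch of how the same ground scene ends up mirrored, not a substitute for the exact geometry:

import numpy as np

# A toy image stored as rows = azimuth, columns = range.
img = np.arange(12).reshape(3, 4)

# Rough illustration: for the same ground scene, switching the orbit
# direction mirrors the azimuth axis, and switching the look side
# mirrors the range axis relative to the ground.
ascending_right = img
ascending_left = np.fliplr(img)
descending_right = np.flipud(img)
descending_left = np.flipud(np.fliplr(img))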

How data is displayed in matrix form (whether the y-axis is ‘Reversed’ by default or not)

In Python, imshow() enables you to display an image stored in a 2D (or 3D) array within a rectangular region in data space. The orientation of the image typically follows these conventions:

For an array with a shape of (M, N), the first index corresponds to the vertical direction and the second to the horizontal direction. By default, the vertical axis is reversed, so that [0, 0] sits at the top-left corner, in line with matrix indexing conventions in mathematics and image indexing in computer graphics.
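You can check this convention by toggling imshow’s origin parameter on a toy array (a minimal, self-contained sketch):

import numpy as np
import matplotlib.pyplot as plt

B = np.arange(12).reshape(3, 4)   # B[0, 0] = 0

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(B, origin='upper')     # default: [0, 0] at the top-left
ax1.set_title("origin='upper' (default)")
ax2.imshow(B, origin='lower')     # [0, 0] at the bottom-left
ax2.set_title("origin='lower'")
plt.show()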

This diagram summarizes all the cases encountered in image representation:

An example of a processing chain using TerraSAR-X data

Let’s use an example of an image acquired by the TerraSAR-X sensor over Paris to illustrate this. In ascending mode, the heading angle is 350°, and in descending mode, it’s 190°. We’ll open this image using Python.
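One possible way to read the data (an assumption on my part; any reader that returns the complex SLC samples will do) is GDAL, which has a driver for the TerraSAR-X COSAR format. SLC samples are complex, so we work on the amplitude:

import numpy as np
from osgeo import gdal

ds = gdal.Open('IMAGE_HH_SRA.cos')        # hypothetical file name (COSAR raster)
slc = ds.ReadAsArray()                    # complex-valued array, rows = azimuth
A = np.abs(slc).astype(np.float32)        # amplitude image; float32 keeps OpenCV happy later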

First, let’s assume that the data is stored in a matrix A with dimensions nz x nr, where nz represents the number of azimuth lines, and nr represents the number of range columns.

  • A[0,0] contains the sample for the first azimuth row and the first range column.
  • A[nz-1,0] corresponds to the last azimuth row.
  • A[0,nr-1] corresponds to the last range column.
import numpy as np
import matplotlib.pyplot as plt

nz = A.shape[0]   # number of azimuth rows
nr = A.shape[1]   # number of range columns

plt.figure()
plt.imshow(A,vmin=0,vmax=400,cmap="gray")
plt.title('after opening')

The four parameters required to approximately resample an image to square pixels and reorient it north-up are: the pixel size along azimuth, the pixel size along slant range, the heading angle, and the mean incidence angle.

# ret: metadata dictionary parsed from the product annotation
daz = float(ret['projectedSpacingAzimuth'])          # azimuth pixel size (m)
dr = float(ret['slantRange'])                        # slant-range pixel size (m)
phi = float(ret['headingAngle'])                     # heading in degrees
theta = float(ret['incidenceAngle']) * np.pi / 180   # incidence in radians

(Another common pitfall: some functions, such as OpenCV’s, expect angles in degrees, while others, such as cosine and sine, expect radians.)
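A defensive habit is to make every conversion explicit, for instance with np.deg2rad (a small sketch using the heading from above):

phi_deg = 350.0                  # degrees: what cv2.getRotationMatrix2D expects
phi_rad = np.deg2rad(phi_deg)    # radians: what np.cos and np.sin expect
print(np.cos(phi_rad), np.sin(phi_rad))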

From the mean incidence angle, we can compute the size of the range pixel projected onto the ground, denoted dg. We then resample the image to square pixels using these new dimensions:


dg = dr / np.sin(theta)   # range pixel size projected onto the ground

import cv2

d = min(daz, dg)          # target square pixel size (one reasonable choice)
output = cv2.resize(A, (int(nr * dg / d), int(nz * daz / d)),
                    interpolation=cv2.INTER_LANCZOS4)

At this stage, we can reorient the image. Following the display convention described above, we first flip the vertical axis with np.flipud before applying the rotation. If the look direction is "LEFT", an additional left-right flip must also be applied:

output = np.flipud(output)

if ret['lookDirection'] == 'LEFT':
    output = np.fliplr(output)

Finally, we can apply a rotation to realign the image with a vertical axis aligned with the north:

(h, w) = output.shape[:2]
(cX, cY) = (w // 2, h // 2)
# grab the rotation matrix (applying the negative of the
# angle to rotate clockwise), then grab the sine and cosine
# (i.e., the rotation components of the matrix)
M = cv2.getRotationMatrix2D((cX, cY), -phi, 1.0)
cos = np.abs(M[0, 0])
sin = np.abs(M[0, 1])
# compute the new bounding dimensions of the image
nW = int((h * sin) + (w * cos))
nH = int((h * cos) + (w * sin))
# adjust the rotation matrix to take into account translation
M[0, 2] += (nW / 2) - cX
M[1, 2] += (nH / 2) - cY

output2=cv2.warpAffine(output, M, (nW, nH))

Of course, these lines of code need to be adapted to your specific data. Notably, for a more precise result, you can use a local incidence angle, as sketched below.
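For instance, assuming the metadata gives a local incidence angle for each range column (theta_loc below is an array I introduce for illustration), the uniform range resize can be replaced by an interpolation onto a uniform ground grid:

# theta_loc: assumed per-column incidence angles in radians, shape (nr,)
dg_loc = dr / np.sin(theta_loc)        # per-column ground pixel size
x = np.cumsum(dg_loc) - dg_loc[0]      # ground-range position of each column
x_new = np.arange(0.0, x[-1], d)       # uniform grid with the target spacing d

# Interpolate every azimuth row onto the uniform ground grid
# (the azimuth axis is still resized as before).
A_ground = np.stack([np.interp(x_new, x, row) for row in A])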

In the case of our image, a standard Python display will produce the following well-oriented image.

plt.figure()
plt.imshow(output2,vmin=0,vmax=400,cmap="gray")
plt.title('after scaling and rotation')

Bottom line:

Suppose we have stored the image data as a matrix A in Python, with nz azimuth rows and nr distance columns. Then, the process involves:

• Resampling the image to square pixels, using the incidence angle to project the range axis onto the ground.

• Applying an up-down flip (np.flipud).

• If the look direction is “left,” applying an additional left-right flip (np.fliplr).

• Applying a clockwise rotation by the heading angle (all four steps are gathered in the sketch below).
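As a recap, here is a minimal end-to-end sketch of these four steps gathered in one function. This is my own consolidation of the snippets above, with the same assumptions, notably the target pixel size d = min(daz, dg):

import numpy as np
import cv2

def slc_to_north_up(A, daz, dr, heading_deg, incidence_rad, look='RIGHT'):
    """Approximately reorient an SLC amplitude image to a north-up display."""
    nz, nr = A.shape
    dg = dr / np.sin(incidence_rad)                  # ground-projected range pixel size
    d = min(daz, dg)                                 # target square pixel size
    out = cv2.resize(A, (int(nr * dg / d), int(nz * daz / d)),
                     interpolation=cv2.INTER_LANCZOS4)
    out = np.flipud(out)                             # display-convention flip
    if look == 'LEFT':
        out = np.fliplr(out)                         # extra flip for left-looking radar
    h, w = out.shape[:2]
    M = cv2.getRotationMatrix2D((w // 2, h // 2), -heading_deg, 1.0)
    cos, sin = np.abs(M[0, 0]), np.abs(M[0, 1])
    nW, nH = int(h * sin + w * cos), int(h * cos + w * sin)
    M[0, 2] += nW / 2 - w // 2                       # keep the rotated image centered
    M[1, 2] += nH / 2 - h // 2
    return cv2.warpAffine(out, M, (nW, nH))

Called as slc_to_north_up(A, daz, dr, phi, theta, ret['lookDirection']), it reproduces the chain above.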


Elise Colin

Researcher with broad experience in signal and image processing, focusing on big data and AI aspects of Earth observation and medical imaging.