Geometry effects of a building in a SAR image, compared to an optical image
Definition of main axes and planes
The complexity of radar geometry comes from two facts: the radar is an active system, and the plane in which the image is formed is not the focal plane of the sensor. This raises two problems:
- Illumination by the sensor creates shadow areas: some faces are never seen by the sensor. (In optics, the sun plays the role of the source from which shadows must be computed; in radar, the source is the antenna.)
- Once the faces seen by the sensor have been determined, they are projected onto a plane containing the line-of-sight axis. In optics, by contrast, the image is projected onto the camera plane.
Whereas 3D rendering engines can work vectorially, projecting all faces onto the camera plane in increasing order of distance to the sensor, in radar this approach is not possible: the projection plane of the image is not the plane of the sensor, so hidden faces must be computed separately.
If we consider a flat earth model, we can define the following two reference frames:
The terrestrial reference frame. This is the classical cartographic reference frame (x, y, z), where x traditionally points East, y points North, and z points upwards, normal to the terrestrial plane assumed locally flat and horizontal.
The reference frame of the radar antenna. Three main axes are to be taken into account in the understanding of the radar geometry:
- The range axis, or line-of-sight axis. We will call this axis the k axis because it is also the direction of the wave vector.
- The azimuth axis, or trajectory axis of the sensor. This axis is horizontal (defined by the sensor velocity vector) and perpendicular to the range axis. It is also the axis of horizontal polarization. We will call this axis the H axis.
- The axis perpendicular to the two previous ones, which we will call radar height. It also corresponds to the vertical polarization axis. We will call this axis the V axis. But be careful, V is not collinear with z.
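To make these definitions concrete, here is a minimal sketch (my own illustrative conventions: flat earth, right-looking sensor, a heading measured from North, and an example incidence of 35°) that builds the (k, H, V) frame in the terrestrial (x, y, z) frame:

```python
import numpy as np

def radar_frame(incidence_deg, heading_deg=0.0):
    """Return unit vectors k (line of sight), H (azimuth), V (radar height)
    expressed in the terrestrial frame (x=East, y=North, z=up).
    Conventions (right-looking sensor) are illustrative assumptions."""
    theta = np.radians(incidence_deg)   # incidence: angle between k and the vertical
    psi = np.radians(heading_deg)       # sensor track direction from North
    z_up = np.array([0.0, 0.0, 1.0])
    # Azimuth axis: horizontal, along the sensor velocity
    H = np.array([np.sin(psi), np.cos(psi), 0.0])
    # Horizontal direction to the right of the track
    right = np.cross(H, z_up)
    # Line of sight: tilted from the vertical by the incidence angle, pointing down
    k = np.sin(theta) * right - np.cos(theta) * z_up
    # Radar height axis completes the right-handed frame (k, H, V)
    V = np.cross(k, H)
    return k, H, V

k, H, V = radar_frame(incidence_deg=35.0)
# V lies in the incidence plane, perpendicular to k: it is NOT collinear with z.
# It makes an angle of 90° - theta with the vertical (prints ~55 for 35° incidence)
print(np.degrees(np.arccos(V @ np.array([0.0, 0.0, 1.0]))))
```

For a near-nadir incidence, V is almost horizontal; only at grazing incidence does V approach the vertical z, which is why the V axis must not be confused with z.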
From these axes, we can define particular planes:
- The (Antenna, H, V) plane is the wave plane. It is the one in which we define the orientation of the polarization.
- The (Antenna, k, V) or (Antenna, k, z) plane: this is the incidence plane. It is the one in which we reason most often to understand the geometric effects particular to SAR: layover, shadows, etc.
- The (Antenna, k, H) plane: this is the range-azimuth plane: the one in which the SAR image is computed.
In optics, if a camera were placed at the position of the radar antenna, the image plane would be the wave plane. Shadows are independent of the position of the camera: they depend on the position of the sun. Moreover, optical shadow areas can still contain signal in the presence of ambient lighting!
In radar, this is not the case at all: the projection plane contains the direction of propagation of the wave, and the shadows are computed from the sensor itself.
Understanding a simple building
Let’s take a simple building as a generic example:
If we want to know what it looks like in an optical image, we project the 3D building model in the wave plane, which gives the classical view that can be obtained from a 3D rendering engine:
Generally, for an optical satellite, the viewing incidences are closer to the vertical (the nadir), and we mostly see the roof.
To have the vision of the radar, we must project the 3D model onto the image plane. But first, we must determine which faces are visible, i.e. in direct sight of the antenna. Then, several faces may be projected onto the same place in the image. For our previous building, we obtain the following image in the image plane:
We see that the pixels of the roof are superimposed sometimes with the yellow facade, sometimes with the blue facade.
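This superposition can be checked numerically. In the sketch below (made-up building coordinates, an assumed 35° incidence, flat earth), a few points of the incidence plane are projected onto the range axis; the top of the facade maps to a nearer range than the ground in front of the building:

```python
import numpy as np

# Illustrative sketch: project a few points of a simple building onto the
# range (k) axis. Coordinates are (ground range g, height z) in metres,
# all values made up for the example.
theta = np.radians(35.0)
# Unit line of sight in the incidence plane, looking down-range
k = np.array([np.sin(theta), -np.cos(theta)])

points = {
    "ground in front":  np.array([0.0, 0.0]),
    "foot of facade":   np.array([10.0, 0.0]),
    "top of facade":    np.array([10.0, 20.0]),
    "far edge of roof": np.array([25.0, 20.0]),
}

# Slant-range coordinate = projection onto k (up to a constant offset).
# The top of the facade gets a SMALLER range than the ground in front of
# the building: facade, roof and ground are superimposed (layover).
for name, p in points.items():
    print(f"{name:16s} -> range {p @ k:7.2f} m")
```

Reading the printed ranges, every ground point whose range falls between that of the facade top and that of the facade foot shares its range cell with a facade point, which is exactly the superposition seen in the image.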
Here is another example of a building, seen in a SAR image and in an optical view:
To understand the projection of the image as a whole, we must also include, in the computation of the visible faces and the layover, the ground elements in direct view and the other buildings.
Layover: what happens when you add a ground, then other buildings
To understand the layover phenomenon, it is useful to consider, for each azimuth position of the antenna, the intersection of the 3D model with the incidence plane.
- In a first step, we consider the intersection of the imaged 3D surface with the plane (Antenna, k, V).
- In a second step, we determine which faces are seen: since the wave propagates along the k axis, the faces in direct illumination of the radar are the first intersections of our 3D surface along the range axis.
- Finally, once these faces in direct view have been determined, we project them orthogonally onto the k axis, as in the following figure:
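These steps can be sketched for a height profile sampled in the incidence plane (a minimal sketch under assumptions: parallel rays, i.e. constant incidence, and a made-up profile with a single 15 m building):

```python
import numpy as np

# Illustrative profile in the incidence plane: ground range g, height z
theta = np.radians(35.0)
g = np.linspace(0.0, 60.0, 121)                      # ground range (m)
z = np.where((g >= 20.0) & (g <= 30.0), 15.0, 0.0)   # a 15 m "building"

# Coordinates of each sample in the radar frame of the incidence plane:
# r = projection onto k (range), v = projection onto V (cross-range)
r = g * np.sin(theta) - z * np.cos(theta)
v = g * np.cos(theta) + z * np.sin(theta)

# Visibility step: marching away from the sensor, a sample is directly
# illuminated iff its v coordinate reaches a new maximum; otherwise an
# earlier sample already intercepted that ray (radar shadow).
running_max = np.maximum.accumulate(v)
visible = v >= running_max

# The shadow behind the building has ground length ~ h * tan(theta)
shadow = g[~visible]
print(f"shadow over ground range [{shadow.min():.1f}, {shadow.max():.1f}] m")

# Projection step: the range coordinate of the visible samples is not
# monotonic in ground range (the building top maps nearer than the
# ground in front of it): that is the layover.
print("range is monotonic:", bool(np.all(np.diff(r[visible]) > 0)))
```

The running-maximum test is the parallel-ray analogue of the classical "maximum elevation angle" sweep used for shadow masks with a point source; it only holds when the profile is marched in order of increasing ground range.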
When several faces are superimposed during this projection, it is the phenomenon of layover. When the scene becomes more complicated, for example with other buildings, the interpretation of layover phenomena becomes more complicated, but the principle remains the same:
Here, the reasoning was done for a constant incidence. In practice, for very large areas, one would have to take into account the fact that the incidence varies along the swath. But the representation of our intersection in the (k, V) plane allows a relatively quick visual interpretation of the visible faces and the layover.
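As an order of magnitude, under the flat-earth model used above (neglecting Earth curvature, which is a rough approximation for a spaceborne case), the local incidence at ground range g for a sensor at altitude h is arctan(g/h). The altitude and swath values below are illustrative assumptions:

```python
import math

# Rough sketch: flat-earth local incidence across a swath.
# Altitude and ground ranges are made-up illustrative values.
altitude_km = 700.0
for ground_range_km in (400.0, 500.0, 600.0):
    theta = math.degrees(math.atan(ground_range_km / altitude_km))
    print(f"ground range {ground_range_km:5.0f} km -> incidence {theta:4.1f} deg")
```

Even this crude model shows the incidence changing by several degrees across a wide swath, which is why the constant-incidence reasoning only holds locally.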