Why a SAR image is not your usual optical one

And how to tell a radar image apart from an optical one with a few solid arguments

8 min read · Oct 2, 2025

When I show two satellite images side by side — one optical panchromatic, one radar — it’s surprisingly rare for people to confidently identify which one is the radar image. Even when they guess, very few can justify their choice with solid arguments.

Which one is the radar one?

So let’s explore together the key differences between Synthetic Aperture Radar (SAR) and optical imagery. By the end, you’ll have more than just a hunch — you’ll have six strong reasons to tell them apart.

1. Different slice of the spectrum

Optical sensors use visible light — the same radiation your eyes detect — while radar sensors operate in the microwave domain, with wavelengths from centimeters to meters. This single shift in spectrum leads to several important consequences:

First, each wavelength interacts with objects of a scale comparable to its size. Optical sensors, with their tiny hundreds-of-nanometer wavelengths, are sensitive to very fine structures like pigments and cellular details. Radar, with its much longer wavelengths, interacts with larger structures such as leaves, branches, or terrain roughness.

Second, the radar signal is sensitive to both the geometrical structure of objects (flat surfaces, orientation, multiple bounces such as the “double bounce” in urban areas) and to the dielectric properties of materials at microwave frequencies. The latter are strongly influenced by water content, meaning radar is naturally sensitive to moisture and humidity.

Third, radar’s long wavelengths can pass through clouds. Unlike optical satellites that sulk when the weather is bad, radar can image the Earth regardless of cloud cover or lighting conditions. This “all-weather, day-and-night” capability makes SAR invaluable for building reliable time series of observations, where consistency is everything.
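To get a feel for the scales involved, here is a minimal sketch computing wavelengths for common SAR bands (the band-center frequencies below are representative values chosen for illustration, not tied to any specific sensor):

```python
# Microwave wavelengths for common SAR bands.
# Frequencies are representative values (an assumption for illustration).
C = 299_792_458.0  # speed of light, m/s

bands_ghz = {"X": 9.6, "C": 5.405, "L": 1.275, "P": 0.435}

for band, f_ghz in bands_ghz.items():
    wavelength_cm = C / (f_ghz * 1e9) * 100
    print(f"{band}-band: {wavelength_cm:.1f} cm")
```

From centimeters (X-band) to nearly a meter (P-band): five to six orders of magnitude longer than the hundreds-of-nanometer wavelengths of optical sensors, which is why the two interact with such different structures.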

Different observation scales

2. Active vs passive sensing

Optical sensors are passive: they wait for sunlight to illuminate the scene and simply record what’s reflected back. Radar, by contrast, is an active system: it sends out its own signal and records the echoes.

This fundamental difference has two very practical consequences:

First, radar doesn’t depend on sunlight. It can acquire images in the middle of the night just as well as at noon. SAR satellites are basically insomniac night owls — they never need a coffee break.

Second, because the radar beam itself is the illumination source, shadows always align with the radar’s line of sight (the range direction). Unlike optical shadows that follow the sun’s position and shift during the day, radar shadows are geometry-driven and predictable. This makes SAR imagery both peculiar to interpret and incredibly consistent across acquisitions.

Passive (left) versus active (right) sensors

3. Coherent sensing: amplitude and phase

Unlike optical sensors, which record only the intensity of reflected light, radar systems capture the full complex signal: both amplitude and phase of the returning wave. This coherence has three powerful consequences:

First, because the phase of the electromagnetic field is preserved, SAR enables polarimetry — a technique that uses the complex measurements (sometimes expressed as a Jones vector) to reveal target structure and scattering mechanisms. With it, one can distinguish bare soil from vegetation, or natural surfaces from man-made objects.

Second, the recorded phase is the foundation of interferometry (InSAR). By comparing the phase between two SAR acquisitions, scientists can measure ground displacements on the order of centimeters — or even millimeters — across large areas. This makes SAR a unique tool for monitoring earthquakes, volcanic activity, landslides, or urban subsidence.
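The phase-to-displacement conversion at the heart of InSAR is a one-liner. As a rough sketch (sign conventions vary between processors; here positive phase is assumed to mean motion along the line of sight, and the C-band wavelength is a representative value):

```python
import numpy as np

# Sentinel-1-like C-band wavelength in metres (illustrative value).
wavelength = 0.0556

def phase_to_los_displacement(dphi_rad, wavelength_m):
    """Convert an unwrapped interferometric phase difference (radians)
    into metres of line-of-sight displacement. The factor 4*pi accounts
    for the two-way travel path of the radar signal."""
    return dphi_rad * wavelength_m / (4 * np.pi)

# One full 2*pi fringe corresponds to half a wavelength of motion:
print(phase_to_los_displacement(2 * np.pi, wavelength))  # ~0.0278 m
```

A single fringe in a C-band interferogram thus represents under three centimeters of ground motion, which is how centimeter-level (and, with time series, millimeter-level) monitoring becomes possible.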

Finally, coherence is also what gives rise to speckle, the grainy texture specific to radar imagery that we’ll revisit later.

4. An image built by an algorithm, not a snapshot

An optical image is essentially a photograph: light passes through a lens and is projected onto a sensor. A SAR image, on the other hand, is synthesized. As the radar moves along its orbit, it collects echoes over time, and sophisticated algorithms combine these signals to reconstruct a 2D image.
This has two major consequences:

First, spatial resolution is not determined by the physical size of a camera lens but by radar parameters such as frequency, bandwidth, and the length of the synthetic aperture. One point I cannot repeat enough: resolution and pixel size are not the same thing in SAR. Resolution is dictated by the sensor’s acquisition parameters, while pixel size is chosen later, during image synthesis and resampling.
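The slant-range resolution, for instance, is set by the transmitted chirp bandwidth alone, independently of the pixel grid the product is delivered on. A minimal sketch (the bandwidth and pixel spacings are illustrative values):

```python
C = 299_792_458.0  # speed of light, m/s

def slant_range_resolution(bandwidth_hz):
    """Slant-range resolution c/(2B), set by the chirp bandwidth."""
    return C / (2 * bandwidth_hz)

# A 100 MHz chirp gives ~1.5 m resolution...
res = slant_range_resolution(100e6)
# ...yet the same data may be delivered on very different pixel grids:
for pixel_spacing in (1.0, 1.5, 2.5):
    relation = "finer than" if pixel_spacing < res else "coarser than or equal to"
    print(f"pixel = {pixel_spacing} m is {relation} the {res:.1f} m resolution")
```

A 1 m pixel grid over a 1.5 m-resolution acquisition does not contain more information; it is simply oversampled.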

Same spatial resolution… for different pixel sizes

Second, unlike optical systems that capture an image directly projected onto the ground plane, radar images are first built in a very specific reference frame: range–azimuth coordinates. Range corresponds to the line of sight between the radar and the target (a distance measurement), and azimuth corresponds to the direction along the satellite’s motion. At this “Level-1” stage, the image is not yet a map of the Earth’s surface but rather a radar-centric view. Only later do we project it onto a geographic reference system, and this transformation introduces several geometry-specific distortions:

  • Layover: tall objects appear to lean toward the radar.
  • Foreshortening: slopes facing the radar look compressed.
  • Shadowing: areas hidden from the radar beam simply vanish.

This unique geometry makes SAR interpretation less intuitive than optical imagery.
This peculiar geometry also means that the incidence angle varies along the radar swath, which can have non-negligible radiometric consequences when comparing or interpreting backscatter intensities.

Note also that very strong reflectors (like metallic structures) can produce responses shaped by the radar’s point spread function (PSF). In SAR, this PSF often looks like a bright cross, while in optics it is closer to a smooth Gaussian spot.

Example of a cardinal sine (sinc) PSF visible in a SAR image (SETHI, Onera, X-band)
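The bright cross comes directly from the separable sinc shape of an ideal SAR PSF. A toy sketch (dimensions are arbitrary illustration units, not tied to any sensor):

```python
import numpy as np

# Toy 2-D SAR point spread function: a separable sinc in range and
# azimuth, whose sidelobes produce the bright "cross" around strong
# point targets.
x = np.linspace(-8, 8, 257)
psf_1d = np.sinc(x)                       # numpy's sinc is sin(pi x)/(pi x)
psf_2d = np.abs(np.outer(psf_1d, psf_1d))

# Sidelobes along the two axes decay slowly (~1/x), while values on
# the diagonals decay much faster (~1/x^2), hence the cross pattern.
center = len(x) // 2
print(psf_2d[center, center])  # mainlobe peak = 1.0
```

By contrast, an optical PSF is closer to a smooth Airy/Gaussian spot, with no such axis-aligned sidelobe structure.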

5. Speckle: the grainy fingerprint of radar

One of the most striking visual differences between radar and optical images is the grainy texture in SAR, known as speckle. Unlike optical “noise,” speckle is not an artifact of a faulty sensor but a direct result of radar’s coherent imaging process.
Here’s what happens: when a radar beam illuminates the ground, the wave is scattered by many small elements (leaves, soil particles, buildings, etc.). Because the radar is coherent, the scattered echoes interfere with each other. Sometimes they add up (constructive interference), sometimes they cancel out (destructive interference). The result is a random-looking salt-and-pepper pattern across the image.

This phenomenon has several important consequences:

  • Speckle reduces image readability at first glance. A smooth agricultural field may look noisy, making interpretation less intuitive than with optical imagery.
  • It is not just “noise”: speckle is part of the signal and contains information about the scattering process. In fact, statistical models of speckle are widely used to study surface properties or to detect changes and movements.
  • Mitigation is possible but never perfect: multi-looking, spatial filtering, or temporal averaging can reduce speckle, but always at the cost of resolution or detail.
  • Its statistics follow specific probability laws: the amplitude of speckle follows Rayleigh or Nakagami distributions, while the intensity is described by a Gamma law whose shape parameter is the number of looks L. For single-look complex (SLC) images, L = 1 and the intensity distribution reduces to an exponential. These probabilistic models are crucial for designing filters and for interpreting radar backscatter quantitatively.

So, while speckle may look like a flaw, it is actually a defining characteristic of SAR — and something every radar analyst must learn to live with (and sometimes even exploit).
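These statistics are easy to verify numerically. The sketch below simulates fully developed speckle as a sum of elementary scatterers with random phases (the scatterer counts and look number are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully developed speckle: each resolution cell sums many elementary
# scatterers with random phases. The resulting complex field is circular
# Gaussian, so amplitude is Rayleigh and intensity is exponential
# (a Gamma law with L = 1 look).
n_cells, n_scatterers = 100_000, 50
phases = rng.uniform(0, 2 * np.pi, size=(n_cells, n_scatterers))
field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatterers)
intensity = np.abs(field) ** 2

# For an exponential law, the standard deviation equals the mean,
# so the coefficient of variation is 1:
print(intensity.std() / intensity.mean())   # close to 1.0

# Multi-looking (averaging L independent looks) divides the variance
# by L, trading radiometric noise for spatial resolution:
L = 4
multilooked = intensity.reshape(-1, L).mean(axis=1)
print(multilooked.std() / multilooked.mean())  # close to 0.5
```

The second print shows the classic trade-off in action: four looks halve the relative fluctuation, at the cost of a coarser effective resolution.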

Speckle effects visible on agricultural crops (ICEYE SAR image)

6. Geometry is tricky

Optical images project the world onto a sensor in a way that closely matches human vision: objects are where you expect them, with shadows cast according to the Sun’s position. Radar geometry is very different.

In SAR, distances are measured in slant range (the direct line-of-sight between the antenna and the target) rather than in a vertical projection onto the ground. Combined with the radar’s side-looking geometry, this creates distortions that are unique to radar imaging.
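As a rough sketch of the difference, here is the flat-earth relation between slant range and ground range (a deliberate simplification: real processors use precise orbit data and a digital elevation model, and the altitude below is an illustrative value):

```python
import math

# Flat-earth conversion from slant range (line-of-sight distance) to
# ground range (across-track distance on the ground). Illustrative
# simplification: no Earth curvature, no topography.
def ground_range(slant_range_m, platform_height_m):
    return math.sqrt(slant_range_m**2 - platform_height_m**2)

# A side-looking radar at 700 km altitude measuring an 850 km slant range:
print(ground_range(850e3, 700e3) / 1e3)  # ~482 km across-track
```

Note that equal steps in slant range do not map to equal steps in ground range: the mapping stretches near-range pixels and compresses far-range ones, which is one root cause of foreshortening on slopes.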

That said, these distortions are mostly problematic in mountainous terrain. In agricultural regions, where the land is relatively flat, geometry effects are far less of a concern.

👉 If you want to dive deeper into radar distortions like layover, shadows, and echo superposition, I wrote a dedicated post here.

Conclusion: back to our two images

Let’s return to the starting point: two satellite images, one optical panchromatic, the other radar. Which one is which?

  • The radar image is the one on the left, and now we can prove it with arguments.
The left image is an ICEYE X-band image (Agricultural area and road network near Brawley and Westmorland, California, U.S.)
  • Zoom in, and you’ll see the characteristic speckle pattern, a direct signature of SAR’s coherent imaging.
  • In urban areas, buildings appear very bright. This is not a “color” effect but the result of geometric scattering mechanisms. Flat walls facing the radar and the classic double-bounce effect (between walls and the ground) create strong returns, making cities glow in radar images.
  • Field boundaries and roads, on the other hand, often appear dark. This is due to specular reflection: when the radar wave bounces away from the sensor rather than back toward it, little or no signal is recorded. In optical imagery, by contrast, roads usually appear bright.

On top of these clues, other signatures may appear depending on the image and acquisition mode:

  • Relief distortions: mountains or hills may look “tilted” toward the radar due to layover, or compressed by foreshortening.
  • The appearance of buildings: in SAR images, buildings often look unnaturally bright and sometimes distorted. This comes from strong double-bounce effects (between walls and the ground) and from the radar’s side-looking geometry, which makes vertical structures appear different from what we expect in optical imagery.
Optics vs Radar (TerraSAR-X, DLR) on Paris (France)
  • Shadows aligned with the radar axis: unlike optical shadows that follow the Sun, radar shadows are systematically aligned with the radar’s line of sight.

These effects are less obvious at first glance, but once you know what to look for, they provide additional solid arguments to distinguish a SAR image from an optical one.
Train yourself with these other examples and try to find as many reasons as possible to identify which image is the radar one:

Other comparison: optics vs radar (SETHI, Onera) over Toulouse, France
Other comparison: optics vs radar (SETHI, Onera) over Bretigny, France

In the end, SAR and optical are not rivals but complementary tools: one shows us the world as our eyes would see it, the other unveils the hidden structures, motions, and material properties invisible to visible light. Together, they give us a far richer understanding of our planet.

Written by Elise Colin

Researcher with broad experience in signal and image processing, focusing on big data and AI aspects of Earth observation and medical images.
