Measuring Movement in a Complex System via Dynamic Speckle: Two Totally Different Methods.

Elise Colin
Jul 28, 2024


Global Speed Measurement: The Runner's Analogy

Before delving into dynamic speckle, let me offer an analogy with measuring the overall speed of a group of runners.

Long Exposure: Capturing Motion as Blur

šŸ‡ Imagine a group of ten runners circling a stadium at a constant group speed. Each time they pass in front of a fixed camera, a photograph is taken with a long exposure. During this long exposure, the runners move, introducing blur into the captured image. This blur is directly related to the runnersā€™ speed: the faster they are, the more pronounced the blur. By repeating this measurement on each lap and analyzing the blur across several images, we can estimate the groupā€™s average speed. This process is analogous to dynamic speckle where speckle blur is measured using multiple repeated acquisitions over time; to infer information about movement. Within each photo, I can measure a blur. Between each photo, there is not necessarily an exact correlation between the positions of my runners; they may have changed places between laps.

Short Exposure: Capturing Motion as Displacement

šŸ¢ In this second scenario, we change the approach by using short exposure times, thus eliminating blur in the images. Instead of the blur, we take a burst of photos as the runners pass in front of the camera. Each image in the burst captures the runners at slightly different positions, thus maintaining sharpness and detail. By analyzing the runnersā€™ movements between each successive image in the burst, we can calculate their speed by correlating positions across the various shots of the fixed camera.

This is similar to a speckle analysis where each separate snapshot is a frozen speckle image. Because the global configuration changes slightly between burst images, we can estimate the group's velocity from the global temporal autocorrelation of the sequence.

Summary: In both cases, the goal is to measure the runners' speed, but the techniques differ in how the images are captured and analyzed. With long exposure, blur serves as a speed indicator, while with short exposure, it is the relative position in a series of sharp images that is used. This is exactly what happens with dynamic speckle!

  • In case number 1, it is the speckle contrast that helps us estimate movement: the blurrier the image, the lower the contrast, the higher the speed! (A minimal sketch of this contrast computation follows just after this list.)
  • In case number 2, it is comparisons between successive images that help us: temporal autocorrelation functions, Fujii's index, etc. Case number 2 is suited to slow speeds, when one does not want interminable exposure times while waiting for the runners to complete several laps of the stadium!
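For case number 1, here is a minimal sketch of the contrast computation, assuming a single speckle frame stored as a 2-D NumPy array; the window size is an arbitrary choice:

import numpy as np
from scipy.ndimage import uniform_filter

def local_speckle_contrast(img, win=7):
    # Local speckle contrast K = sigma / mu over a sliding window.
    # K close to 1: frozen, fully developed speckle (slow motion);
    # K well below 1: temporal blurring during the exposure (fast motion).
    img = img.astype(float)
    mu = uniform_filter(img, win)
    mu2 = uniform_filter(img**2, win)
    sigma = np.sqrt(np.maximum(mu2 - mu**2, 0))
    return sigma / (mu + 1e-12)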

ā˜šŸ½šŸ¤”What happens on radar?
In a time series where we wait several days or even months between acquisitions, the runners have time to change! The signals are not stationary, and dynamic speckle is then intended to detect activity or changes; one possible activity index is sketched below.
We can also look at blurring effects within a single image during the integration time; that's for another post.
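As an illustration (my own minimal sketch, not a full change-detection method), a per-pixel temporal coefficient of variation over a stack of co-registered amplitude images can serve as such an activity index:

import numpy as np

def temporal_cv(stack):
    # Per-pixel coefficient of variation sigma_t / mu_t over time.
    # `stack` is a hypothetical (T, H, W) array of co-registered
    # amplitude images; stable areas keep a low, speckle-driven CV,
    # while activity or change raises it.
    s = stack.astype(float)
    return s.std(axis=0) / (s.mean(axis=0) + 1e-12)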

How do I approach these two examples through simulation?

In the first case, one should imagine that the recorded signal can be decomposed into intermediate sub-states. Each sub-state corresponds to a set of scatterer positions, and each of these position states produces an intensity pattern. The camera integrates all these intensity patterns together. The final result is a speckle intensity pattern whose contrast is less than one.

import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft2, fftshift

# Number of scatterers to generate
N = 100

# Dimensions of the square (10x10); also used as the grid size in pixels
limit = 10

# Number of sub-states integrated by the camera during one exposure
num_images = 5

# Colors for each sub-state, in a blue gradient
colors = plt.cm.Blues(np.linspace(0.3, 1, num_images))

# Create a figure for the plots
fig, axes = plt.subplots(1, 2, figsize=(11, 4))
scatter_ax, intensity_ax = axes

scatter_ax.set_title('Superposition of Coordinates at Each Step')
scatter_ax.set_xlabel('x')
scatter_ax.set_ylabel('y')
scatter_ax.set_xlim(0, limit)
scatter_ax.set_ylim(0, limit)

# Initial random positions of the scatterers
x = limit * np.random.rand(N)
y = limit * np.random.rand(N)
# Spatial-frequency grid (limit x limit samples)
kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, limit), np.linspace(-np.pi, np.pi, limit))

intensity_sum = np.zeros((limit, limit))

# Loop over the sub-states integrated during the exposure
for i in range(num_images):
    # Small random displacement of every scatterer between sub-states
    x = x + 0.1 * np.random.randn(N)
    y = y + 0.1 * np.random.randn(N)

    # Display the scatterer positions for this sub-state
    scatter_ax.scatter(x, y, color=colors[i], alpha=0.6, s=40, label=f'Step {i+1}')

    # Coherent sum of the contributions of all scatterers
    Z = np.zeros((limit, limit), dtype=complex)
    for k in range(N):
        phase = 2 * np.pi * (kx * x[k] + ky * y[k])  # simplified phase model
        Z += np.exp(1j * phase)

    # The camera adds intensities (not fields) across sub-states
    intensity_sum += np.abs(fftshift(fft2(Z)))**2

scatter_ax.legend(loc='upper right', title="Steps", bbox_to_anchor=(1.25, 1))
scatter_ax.grid(True)
scatter_ax.set_xticks(np.arange(0, limit + 1, 1))
scatter_ax.set_yticks(np.arange(0, limit + 1, 1))
scatter_ax.axis('equal')

# Normalize the sum of intensities by the number of sub-states
final_intensity = intensity_sum / num_images

# Visualize the integrated (blurred) speckle pattern
intensity_ax.imshow(final_intensity, extent=(0, limit, 0, limit), cmap='gray',
                    vmin=np.min(final_intensity), vmax=np.max(final_intensity))
intensity_ax.set_title('Normalized Sum of Intensities')
intensity_ax.grid(False)

plt.show()
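To check the claim numerically, we can compute the global contrast of the integrated pattern (using final_intensity from the script above); it should fall below 1, approaching 1/√num_images when the sub-states are fully decorrelated:

# Global contrast of the integrated pattern (about 1 for frozen speckle)
K = final_intensity.std() / final_intensity.mean()
print(f"Contrast of the integrated pattern: K = {K:.2f}")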

In the second case, each image corresponds to a set of instantaneous positions. The next image corresponds to a set of positions very close to the previous one. However, the pattern resulting from the summation of contributions is very different! See the example below: positions that differ by only 5% of a pixel already produce visibly different patterns.

import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft2, fftshift

# Number of scatterers
N = 1000

# Dimensions of the square (10x10); also used as the grid size in pixels
limit = 10

# Number of snapshots to generate
num_images = 3

# Colors for each snapshot, in a blue gradient
colors = plt.cm.Blues(np.linspace(0.3, 1, num_images))

# Figure with scatterplots (top row) and speckle images (bottom row)
fig, axes = plt.subplots(2, num_images, figsize=(10, 6))

# Initial random positions of the scatterers
x = limit * np.random.rand(N)
y = limit * np.random.rand(N)
# Spatial-frequency grid (limit x limit samples)
kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, limit), np.linspace(-np.pi, np.pi, limit))

for i in range(num_images):
    # Small random displacement (5% of a pixel) between snapshots
    x = x + 0.05 * np.random.randn(N)
    y = y + 0.05 * np.random.randn(N)

    # Coherent sum of the contributions of all scatterers
    Z = np.zeros((limit, limit), dtype=complex)
    for k in range(N):
        phase = 2 * np.pi * (kx * x[k] + ky * y[k])
        Z += np.exp(1j * phase)

    # Instantaneous (frozen) speckle intensity for this snapshot
    I = np.abs(fftshift(fft2(Z)))**2

    # Scatterplot of the scatterer positions
    ax_scatter = axes[0, i]
    ax_scatter.scatter(x, y, color=colors[i], alpha=0.6, s=20)
    ax_scatter.set_xlabel('x')
    ax_scatter.set_ylabel('y')
    ax_scatter.set_xlim(0, limit)
    ax_scatter.set_ylim(0, limit)
    ax_scatter.set_title(f'Step {i+1}')
    ax_scatter.grid(True)
    ax_scatter.set_xticks(np.arange(0, limit + 1, 1))
    ax_scatter.set_yticks(np.arange(0, limit + 1, 1))
    ax_scatter.axis('equal')

    # Speckle intensity image
    ax_intensity = axes[1, i]
    ax_intensity.imshow(I, extent=(0, limit, 0, limit), cmap='gray')
    ax_intensity.set_title('Speckle Intensity Image')
    ax_intensity.grid(False)

# Adjust spacing between the plots
plt.tight_layout()
plt.show()
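To quantify how different these patterns are, one can correlate successive intensity images. A minimal sketch, assuming each I from the loop above has been kept (for instance appended to a hypothetical list called intensities):

import numpy as np

def frame_correlation(i1, i2):
    # Pearson correlation coefficient between two intensity patterns:
    # close to 1 for nearly identical speckle, near 0 once decorrelated.
    a = i1.ravel() - i1.mean()
    b = i2.ravel() - i2.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# e.g. frame_correlation(intensities[0], intensities[1])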

Speckle patterns are extremely sensitive to the slightest movements, aren't they?

Of course, this simulation would need to be richer, with more scatterers and more pixels, and a convolution kernel should be applied to simulate speckle grains larger than one pixel, as sketched below! Still, these first simplified simulations demonstrate the philosophical difference between the two approaches.
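By the convolution theorem, convolving with a kernel in one domain amounts to restricting the field to an aperture in the other. Here is a minimal sketch of that extension, with an arbitrary grid size and aperture radius:

import numpy as np
from numpy.fft import fft2

M = 256  # simulation grid in pixels (arbitrary choice)
rng = np.random.default_rng(0)

# Fully random phasor field, as if each cell held many unresolved scatterers
field = np.exp(1j * 2 * np.pi * rng.random((M, M)))

# Circular aperture: the smaller it is, the larger the speckle grains
yy, xx = np.mgrid[:M, :M] - M // 2
aperture = (xx**2 + yy**2) <= (M // 16) ** 2

# Sensor-plane intensity: grains now span several pixels (~ M / aperture diameter)
speckle = np.abs(fft2(field * aperture)) ** 2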

What seems important to remember is that calculating an activity index from the speckle contrast coefficient does not belong to the same domain of hypotheses as looking at frame-to-frame differences (the speckle autocorrelation function, Fujii's index, or tracking grains by optical flow):
  • In the first case, we hope that the speeds involved are high enough to observe several cycles of decorrelation within a single image; that is, we hope for blurring!
  • In the second case, the signals remain correlated from one frame to the next, and the ergodic hypothesis does not apply! (A sketch of these frame-to-frame indices follows this list.)
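Here is a minimal sketch of these frame-to-frame indices, assuming the frames are stacked in a (T, H, W) NumPy array:

import numpy as np

def temporal_autocorrelation(stack, lag=1):
    # Average Pearson correlation between frames separated by `lag`.
    a = stack[:-lag].reshape(len(stack) - lag, -1).astype(float)
    b = stack[lag:].reshape(len(stack) - lag, -1).astype(float)
    a -= a.mean(axis=1, keepdims=True)
    b -= b.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.sqrt((a**2).sum(axis=1) * (b**2).sum(axis=1))
    return float(np.mean(num / den))

def fujii_index(stack):
    # Fujii activity map: accumulated normalized absolute differences
    # between successive frames; large values indicate local activity.
    s = stack.astype(float)
    return (np.abs(np.diff(s, axis=0)) / (s[:-1] + s[1:] + 1e-12)).sum(axis=0)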

It is this second case that we have studied through the dynamics of our images of leaf sap:

Elise Colin, Enrique Garcia Caurel, Karine Adeline, et al. "Real-Time Observations of Leaf Vitality Extinction by Dynamic Speckle Imaging." Preprint (Version 1), Research Square, 24 July 2024. https://doi.org/10.21203/rs.3.rs-4777448/v1

For the second scenario, also look at
Erdmann, Simon, et al. "Dynamic speckle imaging of human skin vasculature with a high-speed camera." Optics Express 30.7 (2022): 11923–11943.

Written by Elise Colin

Researcher with broad experience in signal and image processing, focusing on big data and AI aspects of Earth observation and medical images.
