3D particle tracking using the defocusing principle
General Defocusing Particle Tracking (GDPT)
Machine learning and deep learning
Important practical aspects
References

3D particle tracking using the defocusing principle

Tracking particles is a common need in many fields. One example is fluid mechanics, where tracer particles are used to probe the velocity of fluid flows. When particles in a fluid are observed through an objective lens with a small depth of field (e.g. in a microscope), their images show different patterns and sizes depending on their distance from the focal plane of the objective lens.

We can get a first intuition by simply considering geometrical optics and the thin-lens approximation. Given a lens with focal length f, rays emitted by a point source lying on a plane at distance s_o from the lens (object plane) converge to a point on a plane at distance s_i (image plane), following the thin-lens equation

\Large\frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f}.

If we place a sensor at a fixed distance from the lens, only particles lying on the conjugate object plane will be in focus, whereas particles at other distances will appear as larger, dimmer disks. The size of the disk grows, and its intensity decreases, with the distance of the respective particle from the focal plane.
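The geometric picture above can be sketched in a few lines of code. The following is a minimal thin-lens model (with made-up focal length, aperture diameter, and distances, not values from any real setup) that computes the diameter of the defocus disk on a fixed sensor using similar triangles through the lens aperture:

```python
def image_distance(f, s_o):
    """Image-side distance s_i from the thin-lens equation 1/s_o + 1/s_i = 1/f."""
    return 1.0 / (1.0 / f - 1.0 / s_o)

def blur_disk_diameter(f, D, s_focus, s_o):
    """Geometric diameter of the defocus disk on the sensor for a point source
    at object distance s_o, with the sensor fixed at the image plane conjugate
    to s_focus, for a lens of aperture diameter D (similar triangles)."""
    sensor = image_distance(f, s_focus)   # fixed sensor position
    s_i = image_distance(f, s_o)          # where this source actually focuses
    return D * abs(sensor - s_i) / s_i

# Hypothetical numbers, in metres: f = 50 mm, aperture 20 mm, focus at 0.5 m.
f, D, s_focus = 0.05, 0.02, 0.5
print(blur_disk_diameter(f, D, s_focus, s_focus))  # 0.0 (source in focus)
print(blur_disk_diameter(f, D, s_focus, 0.45))     # > 0 (source defocused)
```

A source on the focal plane images to a point; sources off that plane produce disks whose diameter grows with defocus, as described above.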

In real optical systems the situation is clearly more complex, since diffraction and optical aberrations come into play; however, the general picture remains the same, and particle images take different shapes depending on the distance of the respective particles from the objective lens. In some cases, controlled optical aberrations are even introduced on purpose to read out the defocusing information more efficiently, as in Astigmatic Particle Tracking Velocimetry.

General Defocusing Particle Tracking (GDPT)

Many methods have been proposed to exploit the defocusing principle for 3D particle tracking. We consider here a method often referred to as General Defocusing Particle Tracking (GDPT). The main idea of GDPT is to rely on a reference set of experimental particle images at known depth positions, which is used to predict the depth position of measured particle images of similar shape. In the original formulation of the GDPT method (and in Method_1 of DefocusTracker), the similarity between measured and reference particle images is rated using the normalized cross-correlation function.
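The lookup step can be sketched as follows. This is a simplified illustration, not DefocusTracker's implementation: a measured particle image is compared against a calibration stack of reference images at known depths, and the depth of the best-matching reference (by zero-mean normalized cross-correlation) is returned, without the sub-step interpolation a real implementation would add. The Gaussian-blob calibration stack is entirely synthetic:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def estimate_depth(target, stack, z_ref):
    """Depth of the best-matching reference image and its similarity score."""
    scores = [ncc(target, ref) for ref in stack]
    return z_ref[int(np.argmax(scores))], max(scores)

def blob(sigma, n=32):
    """Synthetic 'particle image': a Gaussian blob of width sigma."""
    y, x = np.mgrid[:n, :n] - n // 2
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

# Toy calibration: blob width changes monotonically with depth (micrometres).
z_ref = np.linspace(-50, 50, 11)
stack = [blob(2 + 0.05 * (z + 50)) for z in z_ref]
z, score = estimate_depth(blob(2 + 0.05 * 80), stack, z_ref)  # target at z = 30
```

In a real measurement, the same comparison is repeated for every detected particle image, and the score doubles as a quality measure to reject poor matches.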

Machine learning and deep learning

The GDPT method can also be seen from a machine learning perspective. We define a certain calibration model (for instance, one based on the normalized cross-correlation) and train it on a training set of images. The training set consists of defocused particle images associated with the respective true values (i.e. the depth positions). Once the model has been trained, it can process an image to identify defocused particle images of the same kind and their respective positions. The same model can be trained on different images (for instance, with different particle sizes or image magnifications). This is the philosophy behind DefocusTracker.

More sophisticated algorithms can also be used for GDPT analysis, such as convolutional neural networks (CNNs). CNNs have been incredibly successful at many tasks in image analysis and pattern recognition; however, to be properly trained they need a large amount of training data. One of the aims of this website is to collect and make available a large database of training sets for the development and testing of CNNs and other deep learning architectures.
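As an illustration of what such an architecture might look like, here is a minimal PyTorch sketch of a CNN that regresses a scalar depth z from a single-channel particle image. The architecture is an assumption for illustration only (it is not taken from any published GDPT work), and the input is random noise standing in for real training images:

```python
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Toy CNN depth-regressor: particle image crop -> scalar depth z."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # global pooling -> (N, 16, 1, 1)
        )
        self.head = nn.Linear(16, 1)      # regression output: depth z

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = DepthNet()
batch = torch.randn(4, 1, 32, 32)   # four synthetic 32x32 particle crops
z_pred = model(batch)               # shape (4, 1): one depth per crop
```

Training such a model would minimize a regression loss (e.g. mean squared error) between z_pred and the known calibration depths; the quality of the result then depends heavily on the size and variety of the training set, which is exactly where a shared database helps.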

Important practical aspects

Experimentally, it is difficult to create a set of training images directly, since it is often impossible to know the particle positions a priori. In practice, the training set of images is often obtained by looking at static particles (e.g. particles sedimented on the bottom of a microchannel) and systematically moving the optics to different depth positions. This approach, however, introduces some subtle consequences that must be carefully considered to avoid mistakes.

First, the sign of the depth axis must be considered carefully. For instance, with an inverted microscope, the defocused particle images taken with the objective in its lowest position correspond to particles in the topmost position during the measurement. What counts is the relative distance between the objective and the particles.

Second, if the refractive index of the immersion medium of the lens (typically air) differs from the refractive index of the fluid where the particles are moving (typically water), the distance that the objective must travel, with fixed particles, to obtain a certain defocus pattern is different from the distance that a particle must travel, with a fixed objective, to obtain the same pattern. In practice, the difference can be approximated by a multiplicative factor equal to the refractive index of the fluid divided by the refractive index of the immersion medium (which is 1 in the case of air).
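The approximate correction above amounts to a simple rescaling. A minimal sketch, assuming an air objective and particles in water (the refractive index values are the usual textbook ones):

```python
n_fluid = 1.33      # water
n_immersion = 1.00  # air objective

# Objective travel used during calibration (micrometres, example value)
dz_objective = 10.0

# Approximate equivalent particle travel in the fluid during a measurement
dz_particle = dz_objective * n_fluid / n_immersion   # ~13.3 micrometres
```

In other words, the depth axis of a calibration acquired by scanning the objective must be stretched by n_fluid / n_immersion before it can be applied to particles moving in the fluid.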

Finally, one must be aware that the optics are not perfect, and there can be aberrations that modify the shape of particle images across the image sensor. This can also lead to bias errors that should be compensated as much as possible. A practical example is provided in the second Work Through Example (WTE2) of DefocusTracker.

References

R. Barnkob, C. J. Kähler, and M. Rossi. General defocusing particle tracking.
Lab Chip 15, 3556-3560 (2015)

R. Barnkob and M. Rossi. General Defocusing Particle Tracking: fundamentals and uncertainty assessment.
Exp Fluids 61, 110 (2020)