Scientists at the Indian Institute of Science (IISc), Bengaluru have shown how a brain-inspired image sensor can go beyond the diffraction limit of light to detect minuscule objects, such as cellular components or nanoparticles, that are invisible to current microscopes. Their technique, which combines optical microscopy with a neuromorphic camera and machine learning algorithms, is a major step forward in pinpointing objects smaller than 50 nanometers.
Since the invention of optical microscopes, scientists have strived to surpass a barrier called the diffraction limit: the microscope cannot distinguish between two objects if they are closer together than a certain distance, typically 200-300 nanometers. Efforts to overcome it have largely focused on either modifying the molecules being imaged or developing better illumination strategies, said Deepak Nair, Associate Professor at the Centre for Neuroscience (CNS), IISc, and corresponding author of the study.
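The article does not quote the formula behind this barrier, but the standard textbook statement is Abbe's diffraction limit; as a worked example (with illustrative values, not numbers from the study):

```latex
% Abbe's diffraction limit (standard textbook form; values illustrative)
d = \frac{\lambda}{2\,\mathrm{NA}}
\qquad\Longrightarrow\qquad
d \approx \frac{550\,\text{nm}}{2 \times 1.4} \approx 196\,\text{nm}
```

Here λ is the wavelength of light and NA the numerical aperture of the objective; for green light and a high-NA objective, the limit lands near the 200-300 nm range quoted above.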
Measuring roughly 40 mm in height by 60 mm in width by 25 mm in depth, the neuromorphic camera weighs about 100 grams. It mimics the way the human retina converts light into electrical impulses, and has several advantages over conventional cameras. In a typical camera, each pixel captures the intensity of light falling on it for the entire exposure time that the camera focuses on the object, and all these pixels are pooled together to reconstruct an image of the object. In a neuromorphic camera, each pixel operates independently and asynchronously, generating events, or spikes, only when the intensity of light falling on that pixel changes. This produces far sparser data than a traditional camera, which captures every pixel value at a fixed rate regardless of whether anything in the scene changes. The results are published in Nature Nanotechnology.
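To make the contrast with frame-based capture concrete, here is a minimal Python sketch of the event-generation principle. The log-intensity model, threshold value, and function name are illustrative assumptions, not the camera's actual circuitry:

```python
import numpy as np

def events_from_frames(frames, threshold=0.15):
    """Emit (t, y, x, polarity) events when a pixel's log-intensity
    changes by more than `threshold` since that pixel's last event.

    A simplified model of an event (neuromorphic) sensor: each pixel
    keeps its own reference level and fires asynchronously, so static
    regions of the scene produce no data at all.
    """
    eps = 1e-6
    ref = np.log(frames[0] + eps)           # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        fired = np.abs(diff) >= threshold   # pixels whose change crossed threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
        ref[fired] = log_i[fired]           # reset reference only where events fired
    return events
```

Unlike a frame, the output is a list of per-pixel change events, which is why the data stays sparse when the scene is mostly static.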
"Such neuromorphic cameras have a very high dynamic range which means that you can go from a very low-light environment to very high-light conditions. The combination of the asynchronous nature, high dynamic range, sparse data, and high temporal resolution of neuromorphic cameras make them well-suited for use in neuromorphic microscopy,” said Chetan Singh Thakur, Assistant Professor, Department of Electronic Systems, Engineering (DESE), IISc, and co-author.
In the current study, the group used the neuromorphic camera to pinpoint individual fluorescent beads smaller than the diffraction limit, by shining laser pulses at both high and low intensities and measuring the variation in the fluorescence levels.
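The article does not detail how these measurements are combined, but the idea of comparing the camera's response under the two illumination levels can be sketched as follows. The event format and the `modulation_map`/`phase_of_t` names are hypothetical, carried over from the sketch above, and stand in for the study's actual analysis:

```python
import numpy as np

def modulation_map(events, phase_of_t, shape):
    """Accumulate event counts separately for 'high' and 'low' laser
    phases and return their per-pixel difference.

    `events` is a list of (t, y, x, polarity) tuples; `phase_of_t`
    maps a timestamp to 'high' or 'low'. Pixels over a fluorescent
    emitter respond to the intensity switch, so they dominate the
    difference map, hinting at where the emitter sits.
    """
    high = np.zeros(shape)
    low = np.zeros(shape)
    for t, y, x, pol in events:
        if phase_of_t(t) == "high":
            high[y, x] += abs(pol)
        else:
            low[y, x] += abs(pol)
    return high - low
```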
To accurately locate the fluorescent particles within the frames, the team used two methods. The first was a deep learning algorithm, trained on about one and a half million simulated images that closely represented the experimental data, to predict where the centroid of the object could be, said Rohit Mangalwedhekar, former research intern at CNS and first author of the study.
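The study's actual network architecture and simulator are not described in the article; as a hedged illustration of the general recipe (train a regressor on simulated diffraction-limited spots to predict their centroids), a minimal PyTorch sketch might look like this, with all sizes, noise levels, and hyperparameters chosen arbitrarily:

```python
import numpy as np
import torch
import torch.nn as nn

def simulate_spot(size=32, sigma=2.0):
    """One simulated image: a diffraction-limited spot (2-D Gaussian
    PSF) at a random sub-pixel position, plus additive noise.
    All parameters here are illustrative, not the study's."""
    cy, cx = np.random.uniform(8, size - 8, 2)
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    img += np.random.normal(0, 0.05, img.shape)   # sensor noise
    return img.astype(np.float32), np.float32([cy, cx])

model = nn.Sequential(                            # tiny CNN centroid regressor
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
    nn.Linear(64, 2),                             # predicted (cy, cx)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):                          # toy training loop
    batch = [simulate_spot() for _ in range(64)]
    x = torch.tensor(np.stack([b[0] for b in batch])).unsqueeze(1)
    y = torch.tensor(np.stack([b[1] for b in batch]))
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

Because the network regresses a continuous (cy, cx) pair, it can place the centroid at sub-pixel precision, which is what allows localization below the diffraction limit.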
In biological processes like self-organisation, some molecules remain immobilised. The centre of such a molecule therefore needs to be located with the highest possible precision in order to understand the rules of thumb that allow the self-organisation. Using this technique, the team was able to closely track the movement of a fluorescent bead moving freely in an aqueous solution. The approach could therefore have widespread applications in precisely tracking and understanding stochastic processes in biology, chemistry, and physics, said the scientists.
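Once per-frame centroid estimates are available, following a single moving bead reduces to linking them over time. The study's linking scheme is not described in the article; a simple nearest-neighbour stand-in looks like this:

```python
import numpy as np

def link_track(localizations, max_jump=5.0):
    """Link per-frame centroid estimates into a single-particle track
    by nearest-neighbour matching between consecutive frames.

    `localizations` is a list of arrays of (y, x) candidates per frame.
    An illustrative stand-in for the tracking used to follow a bead
    diffusing in solution; `max_jump` caps implausibly large jumps.
    """
    track = [np.asarray(localizations[0][0], dtype=float)]
    for cands in localizations[1:]:
        cands = np.asarray(cands, dtype=float)
        d = np.linalg.norm(cands - track[-1], axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_jump:              # accept only plausible jumps
            track.append(cands[j])
    return np.array(track)
```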