Open Access Article
Eden Dotan,a Dana Yagoda-Aharoni,a Eli Shapirab and Natan T. Shaked*a
aDepartment of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel. E-mail: nshaked@tau.ac.il
bNational Neuromorphic Lab, Tel-Mond, Israel
First published on 23rd September 2025
We present a label-free imaging flow cytometry system that integrates a microfluidic chip, imaged by a motion-sensitive (event-based) camera, with an interferometric phase microscopy module using a simple frame-based camera. The event camera captures activity from the flowing cells, corresponding to thousands of frames per second, and triggers the much slower interferometric camera only when a rare cell requiring more sensitive analysis is detected, so that a single raw interferogram is acquired and analysed per rare cell, significantly reducing data volume. The raw interferometric data serves as input to a deep neural network for rare-cell classification. We demonstrate the use of this system to detect and grade rare cancer cells in blood, where the event camera rapidly distinguishes the rare cancer cells from the common white blood cells and the interferometric camera grades the cancer cell type (primary/metastatic), as a human model for detecting and grading circulating tumor cells in liquid biopsies. This hybrid approach enables efficient data acquisition, rapid processing, and high sensitivity with a greatly reduced computational load, and is expected to find various applications in detecting and processing rare cells in imaging flow cytometry.
Flow cytometry is a high-throughput technique capable of analysing up to hundreds of thousands of biological cells per second as they pass through laser beams. It can provide information on cell size and internal complexity using forward and side light scatter, respectively.5 While conventional flow cytometry lacks imaging capabilities, imaging flow cytometry combines high-throughput analysis with cellular imaging during flow, thus allowing morphological and functional profiling of single cells.6–8 Event cameras, which report only pixel-level brightness changes rather than full frames, are well suited for imaging flow cytometry since they generate events only when a cell passes through the sensor's field of view.3,4
To achieve molecular specificity and reliable identification in flow cytometry, fluorescence-based labelling with antibodies is typically required.9 This approach is widely used across immunology, cancer biology, and infectious disease research,10–15 but relies on the presence of distinct surface markers. Challenges arise when target cells lack unique antigens or share markers with other cell types. Moreover, attaching labels to the cell membrane may alter cell behaviour or compromise measurement integrity.16
Label-free imaging methods overcome these limitations by leveraging intrinsic optical properties of cells. One such parameter, the refractive index, reflects intracellular composition, including the cell dry mass and concentration. Interferometric phase microscopy (IPM) noninvasively measures the optical path delay (OPD) profile of cells, i.e., the axially integrated refractive-index profile, enabling analysis of cell morphology and subcellular contents without using external labelling agents.17–20 We have previously shown that IPM is sensitive enough to grade cancer cells (healthy/primary cancer/metastatic cancer),21 even during flow on the background of blood cells.22 IPM typically requires the acquisition of full-frame off-axis imaging interferograms, generally dictating a low throughput of tens to hundreds of frames per second when using low-cost cameras.
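For clarity, the OPD measured by IPM can be written in its standard form (notation assumed here, as the text does not define it explicitly):

\[ \mathrm{OPD}(x,y) = \int_0^{h(x,y)} \big[ n_\mathrm{c}(x,y,z) - n_\mathrm{m} \big]\, \mathrm{d}z, \]

where \(n_\mathrm{c}\) is the intracellular refractive index, \(n_\mathrm{m}\) is the refractive index of the surrounding medium, and \(h(x,y)\) is the local cell thickness; the OPD thus couples cell morphology (through \(h\)) with intracellular content (through \(n_\mathrm{c}\)).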
Several high-throughput quantitative phase imaging strategies have been developed for rapid imaging flow cytometry. For example, multi-ATOM enables blur-free quantitative phase imaging of over 10 000 cells per second using asymmetric detection and real-time FPGA processing,23 while Coherent-STEAM applies dispersive Fourier transform for high-speed, label-free phase mapping of cells in flow.24 However, these techniques require complicated optical setups, such as laser scanning modules, and meticulous alignments, which are not suitable for seamless integration into existing imaging flow cytometry setups.
In this paper, we propose a new concept for rapid and efficient quantitative label-free imaging and grading of rare cells within liquid biological samples. This approach utilizes a rapid event-based camera that acquires cells during flow in a microfluidic chip and triggers a slower interferometric frame-based camera when rare cells requiring more sensitive analysis are detected. We demonstrate this approach's potential for detecting circulating tumor cells (CTCs) in liquid biopsies obtained in routine lab tests by using a human model of blood spiked with colorectal cancer cells.
Finding cancer in its early, curable stages represents a critical unmet need in oncology. Certain cancers, such as pancreatic and colorectal cancers, can progress without noticeable symptoms, thereby bypassing early diagnosis.25,26 Liquid biopsies, such as peripheral routine blood samples, can be analysed to detect CTCs as a means of detecting and monitoring cancer. Currently, however, this procedure's sensitivity for analysing and grading the cells remains limited due to the extremely low concentration of CTCs; for example, there are typically 1–10 cells per 10 mL of peripheral blood.27–29
In our label-free imaging flow cytometry setup, detecting a cell can be accomplished by monitoring multiple cells during flow within a confined region and time window.3,30 Once a candidate cancer cell is detected by the event-based camera, the frame-based camera is immediately triggered, providing a more sensitive IPM analysis for cell grading. IPM cell grading utilises a convolutional neural network (CNN) trained to classify the rare cell types (primary/metastatic cancer). During inference, when IPM acquisition is triggered by the event camera, the network predicts the rare cell type directly from the raw interferograms, further increasing the process throughput.
Several sensor approaches have been previously explored to combine high-speed detection with high-resolution imaging. High-speed CMOS cameras can record at kilohertz rates, yet they require full-frame readout, quickly producing terabytes of data and requiring heavy post-processing, which limits real-time scalability. Hybrid sensors such as the DAVIS346,31 offer simultaneous events and frames but are constrained by a lower dynamic range and reduced spatial resolution, and their shared readout pathways further degrade temporal precision. These limitations underscore the need for our dual-sensor design, which preserves both sparse, low-latency detection and high-quality interferometric imaging.
The microfluidic chip is simultaneously imaged by two cameras: a rapid event camera and a conventional frame-based CMOS camera. The setup is built around an inverted microscope illuminated by a helium–neon (He–Ne) laser (λ = 632.8 nm, 17 mW), which provides the coherent light necessary for interferometric imaging. The laser beam is directed through the microfluidic chip, positioned on a three-axis microscope stage, and cells flow through the chip driven by a pressure-controlled pump system. The event camera (IDS, UE-39B0XCP-E) is used for rapid preliminary cell detection, and the frame-based camera (IDS, UI-3060CP-Rev.2) is used for further, more sensitive cell analysis via interferometric detection.

In this setup, the event camera acts as a real-time trigger, detecting rare, rapidly flowing cells based on localized bursts of events (sparse imaging). The event camera was configured with contrast thresholds of 80 for both ON and OFF events. When such a rare target cell is identified, the system initiates capture on the frame-based camera via the IPM module, recording off-axis imaging interferograms only upon these rare events. This permits a much slower acquisition frame rate, suitable for these rare events, with processing restricted to the area of the candidate cell.

The optical path includes a 60×, 0.85 numerical aperture (NA) infinity-corrected microscope objective (Newport), which is shared by both the event-based and frame-based cameras; a beam splitter positioned after the objective divides the magnified image between the two imaging channels. Tube lenses positioned in the event-based and frame-based camera beam paths create total magnifications of 77× and 70×, respectively. This dual-path design allows both cameras to image the same scene at magnifications appropriate for their respective sensing modalities.

The event camera captures the magnified image's response to motion via changes in brightness at the pixel level, and outputs a continuous stream of events indicating the presence of moving cells. In our case, the event stream is grouped into temporal windows of Δt = 1000 μs, from which we preliminarily detect rare cancer cells, which are larger than the background blood cells. This is done by converting the event stream into a binary image and applying morphological dilation to connect nearby events. The density-based spatial clustering of applications with noise (DBSCAN) algorithm is then applied to group events into clusters representing individual cells. A diameter-based threshold of 180 pixels filters out small clusters and retains only those corresponding to larger cells. This threshold was selected based on the minimum cancer cell diameter measured in the training dataset and is consistent with the morphological filtering parameters (kernel size 10 × 10 with 2 iterations). The value matches the chosen event accumulation time window, Δt, and would vary if a smaller temporal window were employed. Together with the selected temporal window, the morphological dilation reduces missed detections by ensuring that weak or fragmented events are aggregated into candidate cells. Once a candidate cancer cell is identified, its centroid is computed and projected onto the frame-based camera coordinate space using a homography transformation derived from prior calibration. Thus, the frame-based camera is activated only when it receives a trigger indicating that a rare cell has been detected by the event camera.
When multiple clusters exceed the threshold, each centroid is projected through the homography transformation to capture all relevant areas of interest simultaneously. Cell aggregates are filtered out using a maximum-size threshold and roundness criteria, as they are not considered reliable inputs for classification. Missed triggers of the interferometric camera are unlikely, as the inherent delay between event detection and trigger activation significantly exceeds the camera exposure time, ensuring readiness for the next trigger.
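As a concrete illustration of this detection-and-projection pipeline, the following is a minimal Python sketch using OpenCV and scikit-learn. The window length, dilation kernel, and diameter threshold follow the values stated above; the DBSCAN parameters, the maximum-diameter value, the event-array format, and all function names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the event-based candidate detection described above
# (binary accumulation -> dilation -> DBSCAN -> size filtering -> homography).
# Values marked "assumed" are illustrative; the rest follow the text.
import numpy as np
import cv2
from sklearn.cluster import DBSCAN

KERNEL = np.ones((10, 10), np.uint8)  # 10 x 10 dilation kernel (from text)
DILATE_ITERS = 2                      # 2 iterations (from text)
MIN_DIAMETER_PX = 180                 # minimum cancer-cell diameter (from text)
MAX_DIAMETER_PX = 400                 # aggregate-rejection threshold (assumed)

def detect_candidates(events_xy, sensor_shape, homography):
    """events_xy: (N, 2) integer (x, y) event coordinates accumulated over one
    Δt = 1000 µs window. homography: 3 x 3 calibration matrix. Returns candidate
    centroids projected into the frame-based camera's coordinate space."""
    # 1. Accumulate the window's events into a binary image.
    img = np.zeros(sensor_shape, np.uint8)
    img[events_xy[:, 1], events_xy[:, 0]] = 255

    # 2. Dilate to merge weak or fragmented events into contiguous blobs.
    img = cv2.dilate(img, KERNEL, iterations=DILATE_ITERS)

    # 3. Cluster the active pixels; eps/min_samples are assumed values.
    pts = np.column_stack(np.nonzero(img))  # (row, col) of active pixels
    if len(pts) == 0:
        return []
    labels = DBSCAN(eps=5, min_samples=10).fit_predict(pts)

    centroids = []
    for lbl in set(labels) - {-1}:          # label -1 marks DBSCAN noise
        cluster = pts[labels == lbl]
        extent = cluster.max(axis=0) - cluster.min(axis=0)
        # 4. Keep only clusters consistent with a single large (cancer) cell.
        if MIN_DIAMETER_PX <= extent.max() <= MAX_DIAMETER_PX:
            cy, cx = cluster.mean(axis=0)
            # 5. Project the centroid through the calibration homography.
            src = np.array([[[cx, cy]]], np.float32)
            dst = cv2.perspectiveTransform(src, homography)[0, 0]
            centroids.append(tuple(dst))
    return centroids

# Hypothetical usage within the acquisition loop (camera objects assumed):
# for each Δt window:
#     rois = detect_candidates(window_events, sensor_shape, H)
#     for (x, y) in rois:
#         trigger the frame-based camera on a 256 x 256 area around (x, y)
```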
The frame-based camera is positioned after a compact flipping interferometry (FI) module.32 The module enables off-axis interferometric imaging by creating an interference pattern between the sample image and its flipped version, thus allowing external interferometry that is mechanically stable. The FI module consists of a beam splitter that divides the magnified beam into two paths: one is directed toward a retro-reflector and the other toward a slightly tilted mirror. The interference between these two beams generates an off-axis image interferogram containing both the amplitude and the quantitative phase map of the sample, capturing the sample's morphology and content via its refractive-index variations, thus allowing sensitive quantitative imaging without chemical staining.
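In standard off-axis holography notation (assumed here, as the text does not write it explicitly), the intensity recorded by the frame-based camera takes the form

\[ I(x,y) = |E_s|^2 + |E_r|^2 + 2\,|E_s||E_r| \cos\!\big(2\pi f_x x + \Delta\varphi(x,y)\big), \]

where \(E_s\) is the sample beam, \(E_r\) is its flipped counterpart acting as the reference, \(f_x\) is the off-axis fringe spatial frequency set by the mirror tilt, and \(\Delta\varphi(x,y)\) is the quantitative phase difference from which the OPD map is reconstructed.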
For the actual CTC model, we used cancer-cell-spiked blood. Whole blood was obtained from the Israel National Blood Services, following the Tel Aviv University's IRB approval. First, red blood cells were depleted using a commercial depletion kit (EasySep, STEMCELL Technologies) following the manufacturer's protocol, leaving white blood cells (WBCs). To facilitate comparison between cancer cell types, two controlled samples were prepared: one spiked with SW480 cells and the other with SW620 cells, each at a 1 : 100 cancer cell to WBC ratio. These samples were then inserted into the microfluidic chip and imaged using the setup described above.
A MobileNetV2 CNN was trained to classify the two cancer cell types, with the acquired images divided into training and test sets at an 80 : 20 ratio, respectively. Since classification is done on the raw interferograms rather than on the OPD maps of the cells, the model was first trained on synthetic digital off-axis holograms having varying off-axis fringe spatial frequencies, designed to improve robustness against interference patterns, thereby enhancing training efficiency and classification accuracy.33 The model was trained using the Adam optimizer with a batch size of 32, 20 epochs, and a decreasing learning rate schedule to optimize convergence. All computations were performed on an Intel i7-10700 CPU. After training, the CNN can predict the cancer cell type based on the raw interferogram.
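As a concrete illustration, the following is a minimal PyTorch training sketch consistent with the reported hyperparameters (MobileNetV2, Adam optimizer, batch size 32, 20 epochs, decreasing learning-rate schedule). The placeholder dataset, the initial learning rate, and the StepLR schedule parameters are assumptions, as the text does not specify them.

```python
# Minimal training sketch matching the reported hyperparameters; the data
# pipeline, initial learning rate, and LR-decay schedule are assumed.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Placeholder data: real inputs are 256 x 256 raw-interferogram crops,
# here replicated to 3 channels to fit the MobileNetV2 stem.
x = torch.randn(64, 3, 256, 256)
y = torch.randint(0, 2, (64,))                    # 0 = SW480, 1 = SW620
train_loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

model = models.mobilenet_v2(num_classes=2)        # binary cancer-type grading
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)       # assumed LR
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)
criterion = nn.CrossEntropyLoss()

for epoch in range(20):                           # 20 epochs (from text)
    for batch, labels in train_loader:            # batch size 32 (from text)
        optimizer.zero_grad()
        loss = criterion(model(batch), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                              # decreasing LR schedule
```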
Fig. 2 shows examples of cancer cells imaged by the event camera (first row), their off-axis interferograms as imaged by the frame-based camera through the interferometric module (second row), and their extracted OPD maps (third row), obtained by applying an off-axis hologram reconstruction process34 (although the OPD maps are not used for cell classification).
The event data is processed to detect potential cancer cells based on spatial and temporal event patterns. Upon detection, a trigger signal activates the frame-based camera, which captures an off-axis hologram of the detected cell at a specific area of interest corresponding to the coordinates obtained from the event camera. This approach reduces the computational load by limiting processing to a 256 × 256-pixel region per cell, instead of the full 2-megapixel frame.
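The resulting per-cell reduction in processed data follows directly from these figures:

\[ \frac{256 \times 256}{2 \times 10^{6}} \approx 3.3\%, \]

i.e., roughly a 30-fold decrease in the number of pixels processed per triggered acquisition.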
Video S1 shows the system at work, with both cameras imaging the cells at different frame rates. Table 1 summarizes the classification performance on the test set using different imaging modalities: event-based data, amplitude images, OPD maps, and raw off-axis interferograms. As expected, interferogram-based classification achieved the highest accuracy owing to its ability to capture both the quantitative phase and amplitude information of the cell. Classification using the raw off-axis interferograms yielded perfect scores, which can be attributed to the rich information content of the holograms and to the use of a MobileNetV2 model pre-trained on a large synthetic dataset of the same cell types acquired with varying fringe frequencies (as detailed in the Methods); the 10-fold cross-validation ensured evaluation across diverse subsets, reducing the risk of overfitting and providing a robust estimate of model performance. In contrast, event-based data alone yielded limited performance, highlighting the importance of interferometric imaging, rather than the event-camera data alone, for cancer cell classification. The event-based imaging did not miss any cancer cell, but could not classify the cancer cell type with high accuracy, necessitating the interferometric-based classification. The average processing time required to detect a candidate cancer cell from the event stream was 2.05 ± 7.43 milliseconds per frame, enabling near-instant triggering of the frame-based interferometric camera.
Table 1 Classification performance on the test set for each imaging modality

| Data type | Accuracy | Label | Precision | Recall | F1-score |
|---|---|---|---|---|---|
| Off-axis interferogram | 1 | SW480 | 1 | 1 | 1 |
| | | SW620 | 1 | 1 | 1 |
| OPD | 0.94 | SW480 | 0.98 | 0.90 | 0.94 |
| | | SW620 | 0.91 | 0.98 | 0.95 |
| Amplitude | 0.90 | SW480 | 0.90 | 0.88 | 0.89 |
| | | SW620 | 0.89 | 0.91 | 0.90 |
| Event-based image | 0.90 | SW480 | 0.83 | 1 | 0.91 |
| | | SW620 | 1 | 0.80 | 0.89 |
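The F1-scores in Table 1 follow the standard harmonic-mean definition; for example, for the event-based SW480 row,

\[ F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.83 \times 1}{0.83 + 1} \approx 0.91, \]

consistent with the tabulated value.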
As part of our experimental validation, we tested the system on blood samples spiked with a known cancer cell type, each containing a defined ratio of cancer cells to white blood cells (1 : 100). The validation consisted of two independent experiments: blood spiked with SW620 cells and blood spiked with SW480 cells, enabling a controlled evaluation.
Table 2 presents the number of WBCs and cancer cells detected in each controlled sample, along with the corresponding classification accuracy of the CNN model when given the respective interferograms as input, after projecting the coordinates of the detected cancer cells to the interferogram coordinate space.
Table 2 Detection counts and CNN classification accuracy for the two controlled spiked-blood samples

| Sample label | CTCs detected | WBCs detected | Detected ratio (CTCs : WBCs) | Classification accuracy |
|---|---|---|---|---|
| SW480 | 80 | 11 252 | 0.71 : 100 | 100% |
| SW620 | 160 | 10 833 | 1.48 : 100 | 100% |
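As a consistency check, the detected ratios in Table 2 follow directly from the tabulated counts:

\[ \frac{80}{11\,252} \times 100 \approx 0.71, \qquad \frac{160}{10\,833} \times 100 \approx 1.48, \]

placing the recovered spike levels on the order of the nominal 1 : 100 spiking ratio.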
To further assess the classification performance based on the event data alone, the same cells identified in the event stream were also fed into a MobileNetV2 model trained on event-derived input. Notably, these cells were misclassified in approximately 38% of cases, suggesting that the event-based data lacks the robustness required for accurate classification of complex cell types and that the more sensitive IPM imaging is needed to grade the rare cells.
The processing time to detect a candidate cancer cell is influenced primarily by the number of activated pixels in each event frame, which scales with the spatial extent of the detected cells. In particular, larger cells generate denser event clusters, resulting in an increased computational load. Notably, all processing was performed on a standard CPU, suggesting that further latency reductions could be achieved through GPU acceleration or algorithmic optimization. These improvements would enhance real-time compatibility in high-throughput applications. Importantly, the trained model yielded accurate predictions for every controlled sample, correctly distinguishing the cancer cell types. This successful classification between primary and metastatic colorectal cancer cell lines provides meaningful morphological insights that could be used for diagnosing cancer from a routine blood test, for cancer monitoring, and for personalized treatment decisions.
In this proof-of-concept implementation, cell acquisition was performed in real time, but the triggering mechanism was applied offline to retrospectively identify and extract relevant frames containing rare cells from the recorded event data, effectively demonstrating the potential to reduce redundant data acquisition. Real-time triggering is possible in the future by opening a specific area of interest via the application programming interface (API) of the frame-based camera. The event camera processing window was configured to match the 30 frames-per-second acquisition rate of the frame-based camera; however, the system is designed to support faster cameras by adjusting Δt of the event-based camera accordingly, enabling high-throughput operation driven by the event-based trigger. Validation was performed through two independent spiking experiments at a 1 : 100 cancer-to-blood-cell ratio: one with SW620 cancer cells and another with SW480 cancer cells. These controlled experiments provide initial evidence of reproducibility and feasibility, while future clinical evaluation should assess performance at the lower CTC levels typically observed in blood (<10 cells per 10 mL). Beyond cancer detection in liquid biopsies, the suggested framework opens new possibilities for rare-event identification in biomedical and industrial applications, including stem cell studies for personalized medicine. The ability of the suggested approach to detect rare phenomena with minimal processing supports the development of compact, low-power diagnostic tools suitable for point-of-care or resource-limited settings, especially when label-free imaging is used. Apart from biomedical uses, the suggested dual-sensor approach could support various high-speed monitoring tasks, such as industrial quality control through vibration analysis, fluid-dynamics analysis, and automotive applications, by triggering high-resolution imaging only during relevant rare events, thereby reducing data load while capturing key information.
In summary, the proposed system offers a fast, label-free, and computationally efficient quantitative imaging flow cytometry method for rare-cell detection and classification, with strong potential to impact early cancer diagnostics and to contribute to broader rare-event sensing domains.
Data for this article, consisting of the training dataset for SW480 and SW620 cell classification using digital holography and event-based imaging, are available at Zenodo under the following DOI: https://doi.org/10.5281/zenodo.15723128.