Open Access Article
This Open Access Article is licensed under a
Creative Commons Attribution 3.0 Unported Licence

An AI-enabled tool for quantifying overlapping red blood cell sickling dynamics in microfluidic assays

Nikhil Kadivar a, Guansheng Li*b, Jianlu Zhengc, Ming Dao*c, George Em Karniadakis*ab and Mengjia Xu*d
aSchool of Engineering, Brown University, Providence, RI, USA. E-mail: george_karniadakis@brown.edu
bDivision of Applied Mathematics, Brown University, Providence, RI, USA. E-mail: guansheng_li@brown.edu
cDepartment of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. E-mail: mingdao@mit.edu
dDepartment of Data Science, Ying Wu College of Computing, New Jersey Institute of Technology, Newark, NJ, USA. E-mail: mengjia.xu@njit.edu

Received 5th February 2026 , Accepted 25th April 2026

First published on 1st May 2026


Abstract

Understanding sickle cell dynamics requires accurate identification of morphological transitions under diverse biophysical conditions, particularly in densely packed and overlapping cell populations. In microfluidic sickling assays, simple dilution to reduce overlap is often undesirable because it reduces statistical power per experiment, and does not eliminate aggregation-driven clustering under hypoxic conditions. Moreover, longitudinal and cyclic deoxygenation–reoxygenation studies require tracking large cell populations within a single field of view, as all cells in the sample may undergo cumulative history-dependent changes. These experimental constraints necessitate robust quantification directly in dense suspensions. Here, we present an automated deep learning framework that integrates AI-assisted annotation, segmentation, classification, and instance counting to quantify red blood cell (RBC) populations across varying density regimes in time-lapse microscopy data. Experimental images were annotated using the Roboflow platform to generate a labeled dataset for training an nnU-Net segmentation model. The trained network enables prediction of the temporal evolution of the sickle cell fraction, while a watershed algorithm separates overlapping cells to enhance quantification accuracy. Despite requiring only a limited amount of labeled data for training, the framework achieves high segmentation performance, effectively addressing challenges associated with scarce manual annotations and cell overlap. By quantitatively tracking dynamic changes in RBC morphology, this approach can more than double experimental throughput by enabling the use of densely packed cell suspensions, capture drug-dependent sickling behavior, and reveal distinct mechanobiological signatures of cellular morphological evolution.
Overall, this AI-driven framework establishes a scalable and reproducible computational platform for investigating cellular biomechanics and assessing therapeutic efficacy in microphysiological systems.


Introduction

Sickle cell disease (SCD) is a hereditary hemoglobinopathy characterized by the polymerization of hemoglobin S under deoxygenated conditions, leading to abnormal red blood cell (RBC) shape changes and altered microcirculatory flow.1–6 The morphological transformation of RBCs—from biconcave discocytes to sickled shapes—plays a critical role in the pathophysiology of vaso-occlusion, hemolysis, and impaired oxygen transport.7–9 Quantitative analysis of these morphological transitions is therefore central to understanding the biophysical mechanisms driving disease progression and evaluating therapeutic interventions.

Although experimental imaging and microfluidic assays have advanced considerably, achieving automated and quantitative classification of sickle cell morphologies continues to pose major analytical challenges.10–14 Conventional image analysis methods often rely on manual feature extraction, thresholding, or heuristic shape metrics that lack robustness across heterogeneous patient samples and varying imaging conditions.6,15–17 Moreover, such approaches rarely capture the temporal evolution of cell morphology under dynamically changing biophysical environments, such as drug treatment, oxygen level, and cell overlap.18–20 Recent advances in deep learning and image segmentation, particularly convolutional neural network (CNN)-based architectures, have revolutionized biomedical image analysis by enabling data-driven extraction of morphological and contextual features beyond traditional hand-crafted descriptors.21,22 CNNs have emerged as powerful tools for extracting biophysical and mechanistic information directly from biomedical imaging data, bridging the gap between visual pattern recognition and physiological interpretation.11,23–26 Among these, the U-Net architecture and its derivatives have become foundational in biomedical imaging owing to their encoder–decoder symmetry and skip connections, which facilitate precise localization of fine cellular boundaries while preserving global contextual information.27–30 The nnU-Net framework, in particular, has demonstrated exceptional adaptability and generalization across diverse biomedical datasets by automatically optimizing network configurations and preprocessing pipelines.31

Despite these advances, the application of deep learning frameworks to dynamic biophysical processes—such as RBC sickling dynamics and population-level morphological evolution—remains limited, particularly in addressing challenges like overlapping cells and accurate quantification within dense suspensions. In microfluidic sickling assays, simply diluting the sample to reduce cell overlap is often undesirable: while dilution can reduce overlap in static imaging experiments, it is not well suited for longitudinal sickling kinetics assays or cyclic hypoxia studies, which require tracking large cell populations over time under transient oxygenation–deoxygenation conditions where the effects can be cumulative and history dependent. Dilution also reduces the number of cells available for longitudinal tracking within a single field of view per experimental run. Moreover, under hypoxic conditions, sickled RBCs can adhere or aggregate due to altered membrane properties and enhanced cell–cell interactions, leading to persistent clustering that cannot be fully eliminated by lowering bulk concentration. These experimental constraints make it necessary to develop computational frameworks capable of resolving overlapping cells directly under dense, physiologically relevant conditions.

In this study, we present an automated deep learning framework for quantifying RBC sickling dynamics from experimental data (Fig. 1). The workflow integrates AI-assisted annotation in Roboflow with an enhanced nnU-Net architecture, followed by a watershed algorithm to achieve robust segmentation and classification of RBC morphologies. The model was trained on heterogeneous datasets derived from SCD patient samples, facilitating accurate identification and temporal characterization of red blood cell morphological dynamics. By integrating image-based segmentation with quantitative analysis of temporal morphological evolution, the framework effectively captures the progression of sickling driven by oxygen-dependent biophysical alterations, offering new insights into the mechanobiological mechanisms underlying RBC shape evolution. This AI-driven platform provides a scalable and reproducible computational tool for objective evaluation of cellular biomechanics and therapeutic responses within microphysiological systems.


image file: d6lc00108d-f1.tif
Fig. 1 Schematic representation of the AI-enhanced segmentation framework for quantifying RBC sickling dynamics in dense and overlapping fields. A subset of images from microfluidic experiments is sampled to obtain representative frames at user-defined intervals. These frames are subsequently annotated in Roboflow using AI-assisted manual labeling to generate instance masks of red blood cells (RBCs) labeled as healthy or sickled. The annotated images are used to train an enhanced nnU-Net segmentation model that automatically optimizes preprocessing and network configurations. The optimized model weights are then applied during inference to produce predicted segmentation masks for unseen experimental datasets. A marker-controlled watershed post-processing step refines and separates overlapping cells, enabling accurate instance counting and classification. Finally, the temporal evolution of the sickled fraction is quantified from videos of sickling dynamics.

Materials and methods

Preparation of RBC suspensions

Silicone elastomer base and curing agent (Sylgard 184) were obtained from Dow Chemical Company. Whole blood samples were collected from homozygous SCD patients at Massachusetts General Hospital under an Excess Human Material Protocol approved by the Partners HealthCare Institutional Review Board (IRB), with a waiver of informed consent. Following the pretreatment procedure previously described by our group,32 packed red blood cells (RBCs) were gently washed three times with phosphate-buffered saline (1× PBS; Sigma-Aldrich, St. Louis, MO, USA) by centrifugation at 1500 rpm for 3 min at room temperature. The washed RBCs were resuspended in PBS containing 1% (w/v) bovine serum albumin (BSA; EMD Millipore, Billerica, MA, USA) to achieve a hematocrit of 2%.

Double-layer microfluidic device

Microfluidic devices were fabricated following previously reported methods.11 Briefly, poly-dimethylsiloxane (PDMS) was prepared by mixing the elastomer base and curing agent in a 10:1 (w/w) ratio and curing the mixture overnight at 80 °C. Two PDMS layers—one forming the gas channel and the other the cell channel—were cast from silicon wafer molds and subsequently bonded to form a double-layer configuration.

Sickling kinetics assay

Resuspended RBCs were treated with osivelotor (formerly known as GBT021601; Pfizer, New York, NY, USA) at 0% (vehicle control) and 100% modification levels, based on a 1:1 molar ratio of osivelotor to total hemoglobin. The treated suspensions were incubated at room temperature for 1 h and stored at 4 °C until imaging. Brightfield videos were recorded using a high-resolution CMOS camera (The Imaging Source, Charlotte, NC, USA) mounted on an Olympus X71 inverted microscope (Olympus America, Breinigsville, PA, USA) equipped with a 60× oil-immersion objective lens (NA = 1.25). Recordings were acquired under ambient conditions at 4 frames per second with a resolution of 5472 × 3648 (RGB64). To induce hypoxia, a gas mixture of 2% O2 and 5% CO2 balanced with N2 was introduced into the upper gas channel intersecting the lower cell channel.

Ethics statement

Whole blood samples were collected from homozygous SCD patients at Massachusetts General Hospital under an Excess Human Material Protocol approved by the Partners HealthCare Institutional Review Board (IRB), with a waiver of informed consent. In vitro microfluidic experiments were conducted under an approved exempt protocol (Massachusetts Institute of Technology IRB protocol E-1523).

Data analysis and performance metrics

We manually annotated 17 microscopy frames in Roboflow to produce pixel-wise instance masks with three classes: background (0), healthy (1), and sickled (2). In total, the labeled dataset contained 4707 annotated cell instances, comprising 2526 healthy cells and 2181 sickled cells. These annotated frames were used to train nnU-Net in a 2D setup using 5-fold cross-validation, with four folds used for training and one fold used for validation in each run. Model performance was further evaluated on five independent experimental time-lapse videos (Videos S1–5) covering multiple cell-density regimes and drug conditions. For quantitative evaluation, we additionally created manual pixel-wise instance masks on a sparse set of frames from Videos S1–5: 13 frames per video sampled every 10 s (65 frames total). These annotations provide reference healthy/sickled masks and corresponding instance counts at discrete time points, enabling both pixel-level metrics and cell-level counting evaluation.

Segmentation accuracy (pixel-wise metrics)

We report foreground Dice/IoU computed on the union of both cell classes (healthy and sickled) against background. Let $P_H$ and $P_S$ denote the sets of pixels predicted as healthy and sickled, and $G_H$ and $G_S$ denote the corresponding ground-truth pixel sets. We define the predicted and ground-truth foreground sets as

$$P_{\mathrm{FG}} = P_H \cup P_S, \qquad G_{\mathrm{FG}} = G_H \cup G_S.$$

The Dice similarity coefficient (DSC) and intersection-over-union (IoU) for the foreground are then given by

$$\mathrm{DSC} = \frac{2\,|P_{\mathrm{FG}} \cap G_{\mathrm{FG}}|}{|P_{\mathrm{FG}}| + |G_{\mathrm{FG}}|}, \qquad \mathrm{IoU} = \frac{|P_{\mathrm{FG}} \cap G_{\mathrm{FG}}|}{|P_{\mathrm{FG}} \cup G_{\mathrm{FG}}|}.$$

Both metrics range from 0 (no overlap) to 1 (perfect agreement).
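A minimal sketch of these pixel-wise metrics, assuming integer label maps that follow the paper's class convention (0 = background, 1 = healthy, 2 = sickled); this is illustrative, not the authors' evaluation code:

```python
import numpy as np

def foreground_dice_iou(pred, gt):
    """Foreground Dice and IoU for integer label maps.

    Foreground is the union of both cell classes (labels 1 and 2)
    evaluated against background (label 0).
    """
    p_fg = pred > 0
    g_fg = gt > 0
    inter = np.logical_and(p_fg, g_fg).sum()
    union = np.logical_or(p_fg, g_fg).sum()
    dice = 2.0 * inter / (p_fg.sum() + g_fg.sum())
    iou = inter / union
    return dice, iou
```

Identical masks yield (1.0, 1.0); disjoint foregrounds yield (0.0, 0.0).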

Segmentation accuracy (instance-level performance: sensitivity and precision (selectivity))

To evaluate instance-level classification performance, each manually annotated RBC is treated as a ground-truth object. Predicted instances are first matched to ground-truth instances using greedy one-to-one nearest-neighbor matching of instance centroids. After matching, classification performance is computed by treating sickled cells as the positive class and healthy cells as the negative class.

For a given frame, we define:

• True positives (TP): ground-truth sickled cells that are matched and predicted as sickled.

• False negatives (FN): ground-truth sickled cells that are either unmatched (missed) or matched but predicted as healthy.

• False positives (FP): predicted sickled cells that are either unmatched or matched to a ground-truth healthy cell.

• True negatives (TN): ground-truth healthy cells that are matched and predicted as healthy.

Based on these definitions, we report sensitivity (recall), precision (selectivity), and F1 for sickled-cell identification (positive class), as these directly quantify sickled-cell detection and labeling accuracy relevant to sickling-ratio estimation. We compute:

$$\mathrm{Sensitivity} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}, \qquad \mathrm{Precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}.$$
Sensitivity measures the fraction of ground-truth sickled cells correctly identified as sickled, while precision reflects the reliability of sickled predictions. We additionally report the F1-score,
$$F_1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Sensitivity}}{\mathrm{Precision} + \mathrm{Sensitivity}},$$
which summarizes the balance between sensitivity and precision for sickled-cell identification.
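The matching and classification logic above can be sketched as follows. The gating radius `max_dist` is a hypothetical parameter (the paper does not report one), and this is a sketch rather than the authors' implementation:

```python
import numpy as np

def greedy_match(pred_centroids, gt_centroids, max_dist=15.0):
    """Greedy one-to-one nearest-neighbor matching of instance centroids.

    Returns (pred_idx, gt_idx) pairs; max_dist is an illustrative
    gating radius in pixels.
    """
    pairs = []
    if len(pred_centroids) == 0 or len(gt_centroids) == 0:
        return pairs
    P = np.asarray(pred_centroids, float)
    G = np.asarray(gt_centroids, float)
    d = np.linalg.norm(P[:, None, :] - G[None, :, :], axis=-1)
    while True:
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if not np.isfinite(d[i, j]) or d[i, j] > max_dist:
            break
        pairs.append((i, j))
        d[i, :] = np.inf  # each prediction and ground truth used once
        d[:, j] = np.inf
    return pairs

def sickled_metrics(pred_labels, gt_labels, pairs, n_pred, n_gt):
    """Sensitivity, precision, F1 with sickled as the positive class.

    pred_labels / gt_labels map instance index -> 'sickled' or 'healthy'.
    TP: matched pairs where both are sickled; FN: remaining GT sickled
    (unmatched or matched-but-healthy); FP: remaining predicted sickled.
    """
    tp = sum(1 for i, j in pairs
             if gt_labels[j] == 'sickled' and pred_labels[i] == 'sickled')
    fn = sum(1 for j in range(n_gt) if gt_labels[j] == 'sickled') - tp
    fp = sum(1 for i in range(n_pred) if pred_labels[i] == 'sickled') - tp
    sens = tp / (tp + fn) if tp + fn else 0.0
    prec = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    return sens, prec, f1
```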

Accuracy of sickled-fraction dynamics

Beyond segmentation and per-cell classification accuracy, the primary biological readout of this study is the time-resolved sickled-cell fraction. For each frame at time t, the sickled fraction r(t) is defined as
$$r(t) = \frac{n_{\mathrm{sickled}}(t)}{n_{\mathrm{healthy}}(t) + n_{\mathrm{sickled}}(t)},$$
where $n_{\mathrm{healthy}}(t)$ and $n_{\mathrm{sickled}}(t)$ denote the counted numbers of healthy and sickled RBC instances, respectively.

Mean absolute error (MAE)

To quantify the agreement between automated prediction and manual counting, we computed the mean absolute error (MAE) of the sickled-cell fraction over 13 sampled frames corresponding to t = 0, 10, 20, …, 120 s. The MAE was then calculated as
$$\mathrm{MAE} = \frac{1}{13} \sum_{k=1}^{13} \left| r(t_k) - r_{\mathrm{man}}(t_k) \right|,$$
where $r(t_k)$ and $r_{\mathrm{man}}(t_k)$ denote the predicted and manually measured sickled-cell fractions at time $t_k$, respectively.
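Both readouts reduce to a few lines; a minimal sketch (not the authors' analysis code):

```python
import numpy as np

def sickled_fraction(n_healthy, n_sickled):
    """r(t) = n_sickled / (n_healthy + n_sickled) for one frame."""
    total = n_healthy + n_sickled
    return n_sickled / total if total else 0.0

def mae_sickled_fraction(r_pred, r_manual):
    """Mean absolute error between predicted and manual sickled-cell
    fractions evaluated at the same sampled time points
    (13 frames at t = 0, 10, ..., 120 s in the paper)."""
    r_pred = np.asarray(r_pred, float)
    r_manual = np.asarray(r_manual, float)
    return float(np.mean(np.abs(r_pred - r_manual)))
```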

Overlap rate

To quantitatively characterize the severity of cell overlap in each frame and to enable overlap-stratified evaluation of segmentation performance, we defined the frame-wise overlap rate as
$$R_{\mathrm{overlap}}(t) = \frac{N_{\mathrm{overlap}}(t)}{N_{\mathrm{total}}(t)},$$
where $N_{\mathrm{overlap}}(t)$ denotes the number of cells involved in overlapping regions at frame $t$, and $N_{\mathrm{total}}(t)$ denotes the total number of cells in the same frame. Under this definition, $R_{\mathrm{overlap}}(t) = 0$ indicates that no cells are overlapping, whereas $R_{\mathrm{overlap}}(t) = 1$ indicates that all cells in the frame are involved in overlap.
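One plausible operationalization of this ratio, assuming per-cell boolean instance masks and counting a cell as "overlapping" when its mask intersects any other cell's mask (the paper does not give an explicit rule):

```python
import numpy as np

def overlap_rate(instance_masks):
    """R_overlap = N_overlap / N_total for one frame.

    instance_masks: list of boolean arrays, one per cell. A cell counts
    as overlapping here if its mask intersects any other cell's mask --
    an illustrative criterion, not necessarily the paper's.
    """
    n = len(instance_masks)
    if n == 0:
        return 0.0
    overlapping = [False] * n
    for i in range(n):
        for j in range(i + 1, n):
            if np.logical_and(instance_masks[i], instance_masks[j]).any():
                overlapping[i] = overlapping[j] = True
    return sum(overlapping) / n
```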

Data annotation and labeling

Before performing sickle cell classification on the experimental videos, a representative subset of 17 image frames was annotated to construct the training dataset for the nnU-Net segmentation model. The Roboflow platform33 was employed for data annotation and labeling, providing an efficient interface for segmentation mask generation and dataset management. Fig. 2 illustrates the annotation workflow implemented in Roboflow.
image file: d6lc00108d-f2.tif
Fig. 2 Workflow for dataset acquisition and annotation using the Roboflow platform.33 Starting from microscopy frames (data collection), cell classes are defined (healthy and sickled) and used to guide annotation in Roboflow via AI-assisted manual labeling. The resulting labeled dataset (dataset output) provides class-specific masks for downstream model training and evaluation, with background in black, healthy cells in green, and sickled cells in red.

Data collection

First, 17 representative image frames were extracted from the experimental videos and uploaded to the annotation platform. Owing to variability in experimental conditions—including cell overlap rate, cell type, sickling duration, and imaging quality—the selected frames exhibited substantial heterogeneity.

Object definition and annotation

Annotation categories were defined for three classes: healthy cells, sickle cells, and background. Creating labeled datasets for cellular segmentation is typically labor-intensive and time-consuming. To streamline this process, AI-assisted annotation tools, including the Segment Anything Model (SAM 2)34 integrated within Roboflow, were utilized to facilitate rapid and consistent mask generation across diverse imaging conditions. Using a semi-automated workflow, the annotator selected individual target cells, after which the SAM-based segmentation assistant automatically generated precise masks delineating cell boundaries. Each red blood cell was subsequently reviewed and assigned to the appropriate class to ensure comprehensive and accurate classification. The resulting masks captured detailed RBC morphologies across representative frames spanning multiple time points across a few experiments.

Dataset output

The finalized annotations were exported as per-pixel label maps and converted into the nnU-Net data format, with label indices defined as background = 0, healthy = 1, and sickled = 2. Although the present work focuses on two morphological classes of RBCs, the same annotation and training framework can be readily extended to additional morphological or cellular classes for broader biological imaging applications.
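The conversion to integer label maps can be sketched as below. The color tests assume the Fig. 2 color scheme (black background, green healthy, red sickled) and RGB channel order; the actual Roboflow export format may differ, so this is illustrative only:

```python
import numpy as np

# Label indices used by the nnU-Net dataset in this work:
BACKGROUND, HEALTHY, SICKLED = 0, 1, 2

def rgb_mask_to_labels(rgb):
    """Convert a color-coded annotation mask (H x W x 3, RGB order)
    to an integer label map.

    Assumes green = healthy, red = sickled, black = background;
    adapt the channel thresholds to the actual export.
    """
    r, g = rgb[..., 0], rgb[..., 1]
    labels = np.full(rgb.shape[:2], BACKGROUND, dtype=np.uint8)
    labels[(g > 127) & (r < 128)] = HEALTHY
    labels[(r > 127) & (g < 128)] = SICKLED
    return labels
```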

Training and inference

To enable robust and reproducible segmentation of the RBCs, the training dataset was augmented using random flips and rotations, preserving cellular morphology while improving robustness to orientation variability. As a preprocessing step, all frames were converted to grayscale and contrast-normalized using contrast-limited adaptive histogram equalization (CLAHE), implemented via OpenCV,35 to mitigate illumination non-uniformity in microfluidic recordings. Building on this standardized training set, we employed the nnU-Net framework, a self-configuring deep learning system that automatically adapts preprocessing, architecture, and training hyperparameters to a given biomedical dataset.31 nnU-Net builds upon the well-established U-Net architecture27 but extends it with dynamic configuration of parameters such as input normalization, patch size, and network depth without requiring manual tuning. This adaptability makes it particularly suited for biomedical imaging tasks where dataset size, contrast, and scale can vary substantially.

Training

We trained an nnU-Net segmentation model in a 2D configuration using 5-fold cross-validation to segment experimental microscopy frames into healthy and sickle RBC classes. In each fold, four folds were used for training and one for validation, producing fold-specific model checkpoints that can be combined as an ensemble at inference. Despite the small training set (17 annotated frames), the model achieved strong segmentation performance, underscoring the efficiency of the AI-assisted labeling workflow and the generalization capability of nnU-Net for our biophysical imaging task. Unless otherwise specified, we used the default nnU-Net v2 training configuration,31 which automatically infers key preprocessing and training parameters (e.g., network depth, patch size, batch size, and sampling strategy) from the dataset properties and available GPU/CPU memory. Training followed the standard nnU-Net training recipe, using stochastic gradient descent with Nesterov momentum and a polynomial learning-rate schedule, together with a composite DSC + cross-entropy loss. Inputs were Z-score normalized on a per-image basis, and patches were sampled with foreground oversampling to ensure adequate representation of RBC pixels. The best model checkpoint for each fold was selected automatically based on validation score.

Inference

After training, we developed a standardized nnU-Net inference pipeline to segment healthy and sickle RBCs in microscopy frames extracted from experimental videos. These frames may correspond to time points not included in training or may originate from new microfluidic experiments not represented in the annotated dataset. The inference workflow converts experiment videos into nnU-Net-compatible inputs and produces per-frame label maps efficiently and reproducibly. The procedure is summarized below.

1. Frame extraction and input standardization: raw experimental videos were decoded using OpenCV (cv2) and converted into individual image frames. During this step, frames were preprocessed (grayscale conversion and CLAHE-based contrast normalization) and resized to match the training resolution (1000 × 1000 pixels for the trained model). Frames were then saved in the nnU-Net format using the required naming convention, ensuring direct compatibility with nnU-Net inference.

2. Model inference: the trained nnU-Net model was subsequently applied to the extracted frames to generate segmentation masks identifying healthy and sickled RBCs.

3. Output generation: predicted segmentations were exported as PNG label maps with integer encodings corresponding to background (0), healthy (1), and sickle (2) regions, providing standardized outputs for downstream quantitative and morphological analysis.

Inference was executed using GPU hardware, enabling fast batch processing of several video datasets with reproducible outputs. Before inference, we resized every frame to the same spatial resolution used during training (1000 × 1000 pixels); empirically, this improved generalization accuracy and was therefore adopted as the default preprocessing setting. The predicted masks were subsequently post-processed for cell counting and sickled fraction estimation as described in the next section.

Watershed-based separation of overlapping cells

In dense suspensions, touching or overlapping RBCs are frequently merged into a single connected region within a class mask, which biases downstream quantification. To improve instance-level quantification, we implemented a watershed method to separate overlapping cells within each class mask.36,37 Importantly, nnU-Net provides reliable segmentation of overlapping cells, including heterogeneous assemblies of healthy and sickle cells. Homogeneous overlapping cells are subsequently separated using a watershed technique.

Fig. 3 illustrates the complete workflow of the watershed method for separating overlapping cells. The left panel shows the segmentation results obtained from the nnU-Net prediction, in which healthy cells (green) and sickle cells (red) are masked. The yellow bounding box highlights an example of overlapping cells identified in the prediction, together with a zoomed-in view for clarity. The middle panel presents the key steps of the watershed-based separation procedure:

Mask refinement. For each frame, the nnU-Net label map was first converted into class-specific binary masks corresponding to healthy and sickle red blood cells (RBCs). Small spurious regions were removed using an area-based connected-component filter with a minimum object-size threshold, thereby suppressing segmentation noise while preserving valid cell regions.

Seed point detection. A smoothed distance transform was computed within each class-specific mask, in which each individual cell center appears as a local maximum; these maxima served as seed points (markers) for boundary propagation. To control marker generation, local maxima were retained only if their distance-transform values exceeded a relative peak-height threshold (set as a fraction of the maximum distance value within the merged region). Additionally, a minimum inter-peak distance constraint was enforced to suppress closely spaced maxima and prevent over-segmentation within a single cell.

Shared-edge detection. From these markers, virtual boundaries expand outward until neighboring regions meet, corresponding to the shared edges of overlapping cells and thereby delineating their common boundaries.

Separation of overlapping cells. The overlapping cells were subsequently split along the detected boundaries, dividing the merged region into distinct cell instances.

Label assignment. Finally, each detected cell was assigned a unique label, enabling robust cell-by-cell quantification across densely packed experimental frames.

For completeness, the SI provides a sensitivity analysis of these marker-generation hyperparameters and identifies a stable operating range that balances under- and over-segmentation in dense suspensions. The right panel of Fig. 3 shows the final segmentation results, demonstrating successful separation of overlapping cells.


image file: d6lc00108d-f3.tif
Fig. 3 Overview of the watershed pipeline for separating overlapping RBCs. The left panel shows the nnU-Net segmentation result, where overlapping same-class cells may appear as merged regions. The middle panel summarizes the marker-controlled watershed steps, and the right panel shows the instance-separated output after watershed. Green indicates healthy cells, red indicates sickle cells, and yellow outlines highlight regions that were successfully split.

Limitations of existing microfluidic sickling quantification approaches

Accurate quantification of sickling dynamics is critical for drug screening, mechanobiology studies, and high-throughput assay development.38,39 However, previous studies have been limited by small fields of view (FOV), which restricted the ability to quantify sickling kinetics. While expanding the FOV in microscopy does not inherently cause cell overlap, it often increases cell density within the image, resulting in greater spatial overlap and analytical challenges that render many commonly used methods unreliable or impractical.19,40 Manual counting can be feasible for sparse fields of view, but it quickly becomes time-consuming, subjective, and increasingly error-prone as cell density increases, severely limiting assay throughput and hindering systematic studies such as multi-condition drug screening or repeated trials.41 Shape-descriptor methods, which summarize connected regions using global geometric features and implicitly assume a one-object–one-shape relationship, break down under overlap: multiple cells merge into a single connected component with no visible internal boundaries, leading to loss of object identity and unreliable instance separation.42,43 Threshold- or intensity-based methods rely on intensity contrast to define object boundaries, but overlapping cells often exhibit continuous or additive intensity profiles that eliminate contrast at contact regions, causing multiple cells to merge into a single foreground region.44 Collectively, these limitations motivate the development of segmentation and quantification methods that (i) remain robust under dense and overlapping conditions, (ii) provide instance-level identification and counting rather than region-level segmentation alone, and (iii) enable reproducible, time-resolved analysis suitable for high-throughput microfluidic assays.45,46

Temporal evolution of the sickle cell fraction across different cell overlap rates

To evaluate the predictive accuracy of the proposed framework, we performed controlled experiments under varying cell overlap rates. The corresponding patient sample parameters are summarized in Table 1 (patient I). Using the developed approach, we quantified the time-dependent evolution of the sickle-cell fraction across different suspension densities, as shown in Fig. 4. Representative examples corresponding to cell overlap rates of 0.064 (Video 1), 0.271 (Video 2), and 0.501 (Video 3) are provided in Fig. 4(A–C), respectively. For each cell overlap level, the left column shows the original image and the middle column shows the segmentation overlay after nnU-Net and watershed-based instance separation; the top and bottom rows correspond to t = 0 s and t = 120 s, respectively. In the overlays, green denotes healthy cells, red denotes sickle cells, and yellow highlights overlapping regions that are separated by the watershed post-processing. The rightmost panel illustrates the temporal evolution of sickling dynamics under different suspension densities, comparing manually quantified results with predictions generated by the combined nnU-Net and watershed framework. While Fig. 4 confirms accurate recovery of the sickle-cell fraction over time, Fig. S1 (SI) further demonstrates that watershed-based instance separation improves cell counting accuracy compared with nnU-Net segmentation alone. Overall, the sickling dynamics predicted by nnU-Net augmented with watershed post-processing exhibit strong agreement with manual quantification, demonstrating the accuracy and robustness of the proposed method across a wide range of cell overlap rates. To avoid overlapping RBCs that can reduce counting accuracy, experiments typically use less densely packed suspensions (e.g., Fig. 4B). To further assess the practical overlap-rate limit of the proposed framework, we evaluated a video with an overlap rate of 0.923, as shown in Fig. S3 (SI, section 3).
The predicted counts of both healthy and sickled cells remained close to the corresponding manual counts, demonstrating the robustness of the method under extremely high-overlap conditions. We did not consider higher overlap rates, since manual counting itself becomes ambiguous and unreliable when the degree of overlap is further increased. Therefore, the proposed method can achieve a practical overlap-rate threshold comparable to that of manual counting, which is approximately 0.9. By allowing the use of denser, overlapping suspensions (e.g., Fig. 4C), our method increases the experimental throughput 2.5-fold.
Table 1 Clinical information about samples in the experiment

Sample     | T (°C) | PO2 (mmHg) | MCV (μm3) | MCHC (g dL−1) | HbS (%)
Patient I  | 25     | 15.2       | 88.5      | 36.8          | 86.3
Patient II | 25     | 15.2       | 84.6      | 35.4          | 85.4



image file: d6lc00108d-f4.tif
Fig. 4 (A–C) Representative sickling dynamics at different cell suspension densities from the same patient, shown at t = 0 s (top row) and t = 120 s (bottom row). For each condition, the left column shows the original micrograph, the middle column displays the nnU-Net prediction with watershed-based instance separation (green: healthy; red: sickle; yellow: overlapping regions separated by watershed), and the right column compares the predicted sickle-cell fraction (solid blue) with manual counts (red dashed).

Temporal evolution of the sickle cell fraction under different drug treatments

We also tested the proposed framework with and without hemoglobin modification by osivelotor, as shown in Fig. 5, which presents time-resolved snapshots of sickling dynamics observed in vitro at 0% (Video 4) and 100% (Video 5) hemoglobin modification, with overlap rates of 11.4% and 14.3%, respectively. The corresponding patient sample parameters are summarized in Table 1 (patient II). Under 0% hemoglobin modification (Fig. 5A), at the onset of the experiment (t = 0 s), the majority of red blood cells (RBCs) exhibited the characteristic biconcave morphology associated with normoxic conditions. As hypoxia (2% oxygen) progressed over time to t = 120 s, the RBCs underwent morphological transformations into elongated and crescent-shaped forms, representing various sickling states.4
Fig. 5 Effect of hemoglobin modification on RBC sickling dynamics. (A and B) Time-resolved micrographs of RBCs under 2% O2 with (A) 0% and (B) 100% hemoglobin modification by osivelotor at t = 0 s and t = 120 s (left panels). Comparison between predicted and manually counted sickling dynamics under the two hemoglobin modification levels (right panels).

In contrast, under 100% hemoglobin modification (Fig. 5B), the RBC morphology at t = 120 s remained largely similar to that at t = 0 s, indicating a substantial inhibition of sickling. The right panel compares the predicted sickled fraction from our computational method with that obtained by manual counting. Under 0% hemoglobin modification, the final sickled fractions were approximately 94.0% (prediction) and 95.4% (manual), whereas under 100% modification, the fractions declined markedly to 20.9% (prediction) and 21.0% (manual). The close agreement between the two measurements demonstrates that the proposed framework accurately reproduces experimental observations, confirming its robustness and reliability in quantifying sickling dynamics.

Comparison with prior label-free deep learning approaches

As a baseline, we compare our method with label-free pipelines that infer cell phenotype without pixel-wise supervision.47,48 We first examined LANCE, a label-free live apoptotic and necrotic cell explorer based on convolutional neural network image analysis.47 However, its preprocessing assumptions (illumination, contrast, cell size) did not generalize reliably to our microscopy data, particularly in dense and heterogeneous fields of view. We therefore turned to the fully learned framework of Piansaddhayanon et al.48 as the label-free baseline for quantitative comparison, in which a Faster R-CNN detector with a ResNet-50 backbone49 proposes candidate cell bounding boxes and a downstream ConvNeXt classifier50 assigns each crop a healthy/sickled label (Fig. S2, SI). Further details of the label-free training protocol are provided in the SI section 2. Fig. 6(A–C) compares sickled-cell counts over time from manual counting, the proposed nnU-Net + watershed framework, and the label-free (R-CNN + ConvNeXt) baseline for three representative cases (Video 1, Video 2 and Video 3). Across all three cases, nnU-Net + watershed agrees closely with the manual trajectory, whereas the label-free (R-CNN + ConvNeXt) method consistently underestimates sickled-cell counts. This discrepancy becomes more pronounced in denser, overlapping fields of view, consistent with missed detections and localization ambiguity in crowded scenes.
Fig. 6 Comparison of sickled cell counts over time obtained by manual counting, the nnU-Net + watershed method, and the label-free method across three representative cases (A–C). The nnU-Net + watershed method shows closer agreement with manual counting, whereas the label-free method tends to underestimate the sickled cell count.

Quantitative performance of the segmentation and counting pipeline

Pixel-wise segmentation performance

To assess the robustness of the proposed segmentation framework during dynamic sickling progression, we evaluated the time-dependent DSC and IoU for five representative cases. As shown in Fig. 7, both metrics remained consistently high throughout the observation period, with DSC values generally around 0.90 or above and IoU values also maintained at high levels. Since Dice and IoU quantify the overlap between the predicted masks and the reference annotations, these results indicate strong agreement in both the extent and localization of the segmented cell regions. Although modest fluctuations were observed among different cases and time points, no systematic deterioration was found as time progressed. The combination of high overlap accuracy and limited temporal variation demonstrates that the proposed model provides stable and reliable segmentation performance, supporting its use for downstream quantification of sickled-cell dynamics in dense and heterogeneous RBC suspensions.
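For reference, both overlap metrics follow their standard definitions and can be computed directly from binary masks. The sketch below is illustrative, not the repository's exact implementation:

```python
import numpy as np

def dice_iou(pred, ref):
    """Dice similarity coefficient (DSC) and intersection-over-union (IoU)
    for a pair of binary masks: DSC = 2|P∩R| / (|P| + |R|), IoU = |P∩R| / |P∪R|."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    size_sum = pred.sum() + ref.sum()
    dice = 2.0 * inter / size_sum if size_sum else 1.0  # two empty masks agree
    iou = inter / union if union else 1.0
    return dice, iou
```

Because DSC weights the intersection twice, it is always at least as large as IoU on the same mask pair, which is why the Dice curves in Fig. 7A sit slightly above the IoU curves in Fig. 7B.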
Fig. 7 Time-dependent segmentation performance for five representative cases. (A) Dice score and (B) IoU are plotted versus time (0–120 s) for cases 1–5. The results demonstrate stable and accurate performance across time, although modest variations are observed among different cases.

Instance-level evaluation of segmentation performance using sensitivity and selectivity

To further evaluate the reliability of the proposed framework at the instance level, we quantified the temporal evolution of sensitivity, selectivity (precision), and F1 score for five representative cases, together with the case-wise MAE of the predicted sickled fraction (Fig. 8). As shown in Fig. 8A–C, all three classification metrics remained consistently high over the full observation period, indicating that the model was able to correctly identify sickled cells with few missed detections and few false positive assignments. In particular, the high sensitivity suggests that most true sickled cells were successfully detected, while the high selectivity indicates that the predicted sickled cells were generally reliable. The correspondingly high F1 scores further confirm a favorable balance between detection completeness and classification precision. In addition, the MAE values in Fig. 8D remained low across all five cases, demonstrating that the predicted sickled fractions were in close agreement with manual counting. Although modest variations were observed among cases, especially under more challenging image conditions, no systematic temporal degradation was found. Together, these results show that the proposed nnU-Net–plus–watershed framework provides stable and reliable instance-level classification and counting performance throughout the dynamic sickling process.
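The instance-level metrics reported in Fig. 8 follow standard detection-metric definitions; a minimal sketch (not the paper's exact code), given matched true positives (TP), false positives (FP), and false negatives (FN), is:

```python
def instance_metrics(tp, fp, fn):
    """Sensitivity (recall), selectivity (precision), and F1 score
    from instance-level detection counts; standard definitions."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # detected / all true cells
    selectivity = tp / (tp + fp) if (tp + fp) else 0.0  # correct / all predicted
    s = sensitivity + selectivity
    f1 = 2.0 * sensitivity * selectivity / s if s else 0.0  # harmonic mean
    return sensitivity, selectivity, f1

def mae(pred_fracs, true_fracs):
    """Mean absolute error between predicted and manually counted
    sickled fractions over the frames of one case."""
    return sum(abs(p - t) for p, t in zip(pred_fracs, true_fracs)) / len(pred_fracs)
```

High sensitivity with lower selectivity would indicate over-calling of sickled cells, and the reverse would indicate missed detections; the F1 score summarizes this trade-off per frame.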
Fig. 8 Quantitative evaluation of segmentation performance in five representative cases. (A–C) Temporal profiles of sensitivity, selectivity, and F1 score from 0 to 120 s for cases 1–5. (D) MAE summarized for each case. The model maintains stable performance over time, while minor variations across cases reflect differences in image complexity and segmentation difficulty.

Sensitivity analysis of watershed hyperparameters

To assess the sensitivity of watershed-based instance separation to its key hyperparameters, we performed a systematic analysis of the marker-generation stage using a dense cell suspension with case 3 (Fig. 9). The analysis was conducted at two representative time points: t = 0 s, when the population is dominated by healthy cells, and t = 120 s, when sickled cells are predominant. Fig. 9A schematically illustrates the marker-generation process underlying watershed post-processing. First, a distance transform is computed, yielding a scalar field whose local maxima correspond to candidate cell centers. Prior to smoothing, multiple nearby local maxima (e.g., P1, P2, and P3) may arise within partially overlapping or elongated regions due to boundary irregularities and segmentation noise. In this example, P1 and P2 correspond to closely spaced peaks within a single cell, whereas P3 represents a distinct neighboring cell. Gaussian smoothing of the distance transform suppresses minor local extrema and consolidates redundant peaks, preserving dominant maxima (e.g., P1 and P3) while eliminating spurious peaks such as P2. A relative peak-height threshold H is subsequently applied to reject shallow maxima, followed by enforcing a minimum inter-peak distance to suppress closely spaced markers and prevent over-segmentation. Together, these steps transform a noisy distance landscape into a sparse, well-separated set of markers that reliably represent individual cell instances prior to watershed partitioning.
Fig. 9 Sensitivity analysis of watershed hyperparameters for marker-based instance separation (case 3, N = 417 RBCs). (A) Schematic illustration of the watershed marker-generation process for overlapping cells, including peak detection before and after Gaussian smoothing of the distance transform, application of a peak-height threshold H for marker generation, and enforcement of a minimum inter-peak distance. (B and C) Absolute cell-counting error relative to manual counting for healthy (green) and sickled (red) cells as key watershed parameters are varied: distance-transform smoothing parameter (left), peak-height threshold for marker generation (middle), and minimum inter-peak distance (right). Results at t = 0 s (B) and t = 120 s (C) highlight distinct sensitivity regimes for healthy versus sickled RBC populations. Solid curves correspond to the full nnU-Net + watershed pipeline, while dashed horizontal lines indicate the baseline counting error obtained using nnU-Net segmentation without watershed post-processing.

For each hyperparameter sweep (Fig. 9B and C), we quantified the absolute cell-counting error relative to manual annotation for healthy and sickled cells separately and compared performance against the nnU-Net-only baseline. Consistent with class prevalence, counting error at t = 0 s is dominated by healthy cells, whereas at t = 120 s it is dominated by sickled cells. Excessive distance-transform smoothing suppresses valid markers and leads to under-segmentation, resulting in a rapid increase in counting error, while mild smoothing yields a broad low-error regime. A similar transition is observed for the relative peak-height threshold, with errors remaining low over an intermediate range before increasing sharply as valid peaks are rejected. In contrast, the minimum inter-peak distance exhibits a non-monotonic dependence, reflecting the trade-off between over-segmentation at small values and merged instances at large values. Importantly, within these stable intermediate parameter ranges, the nnU-Net + watershed pipeline consistently achieves substantially lower counting errors than nnU-Net alone for the dominant cell class at each time point, demonstrating robust and improved instance-level counting in dense, overlapping cell populations.
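The marker-generation and splitting steps described above can be sketched with SciPy and scikit-image as follows. The parameter names (sigma, rel_height, min_dist) are illustrative stand-ins for the three hyperparameters swept in Fig. 9, not the repository's exact interface:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_instances(mask, sigma=2.0, rel_height=0.3, min_dist=7):
    """Marker-controlled watershed split of one class mask (bool array,
    e.g. all sickled-cell pixels). Returns an integer label image whose
    maximum value is the instance count."""
    # 1) Distance transform: local maxima approximate cell centers.
    dist = ndi.distance_transform_edt(mask)
    # 2) Gaussian smoothing consolidates redundant nearby peaks.
    dist = ndi.gaussian_filter(dist, sigma=sigma)
    # 3) Keep peaks above a relative height H and enforce a minimum
    #    inter-peak distance to avoid over-segmentation.
    coords = peak_local_max(dist, min_distance=min_dist,
                            threshold_rel=rel_height, labels=mask)
    markers = np.zeros(mask.shape, dtype=np.int32)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # 4) Watershed on the inverted distance map, restricted to the mask.
    return watershed(-dist, markers, mask=mask)
```

Running this separately on the healthy and sickled class masks yields per-class instance counts for each frame; the sensitivity regimes in Fig. 9B and C correspond to varying one of the three keyword arguments while holding the others fixed.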

Automated and reproducible workflow

All scripts, along with the trained model weights, are publicly available at: https://github.com/nikhil-kadivar/rbc-sickling-dynamics.

The pipeline requires Python (3.9), PyTorch, nnU-Net v2, NumPy, SciPy, scikit-image, OpenCV, and Matplotlib. A minimal requirements.txt and environment setup instructions are provided in the repository to ensure easy installation and reproducibility. A central design objective of this tool is to provide a simple, modular, and fully reproducible workflow that can be executed with minimal setup.
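For illustration, a minimal requirements.txt covering the dependencies listed above might look like the following (package names as published on PyPI; versions are left unpinned here, so consult the repository's own file for the tested pins):

```
torch
nnunetv2
numpy
scipy
scikit-image
opencv-python
matplotlib
```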

# 1) Extract frames
python extract_frames.py --video /path/to/video.mp4 \
    --all-frames          # OR
    --every-n-frames N    # OR
    --every-sec N

# 2) Run nnU-Net inference to produce PNG mask files (0 = bg, 1 = healthy, 2 = sickle)
python nnunet_infer.py

# 3) Count, watershed-split, and visualize
python count_and_visualize.py
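As an illustration of the three frame-sampling options, a hypothetical helper (not necessarily how extract_frames.py is implemented) that maps the flags to the frame indices to keep:

```python
def frame_indices(n_total, every_n=None, every_sec=None, fps=30.0):
    """Select frame indices from a video with n_total frames, mirroring
    --all-frames (default), --every-n-frames N, and --every-sec N.
    In practice fps would be read from the video file itself."""
    if every_n is not None:
        return list(range(0, n_total, every_n))
    if every_sec is not None:
        step = max(1, round(fps * every_sec))
        return list(range(0, n_total, step))
    return list(range(n_total))  # --all-frames
```

Sampling every N seconds rather than every N frames keeps the temporal resolution of the sickled-fraction curves comparable across videos recorded at different frame rates.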

Each stage of the workflow is fully modular and parameterized, enabling users to adjust the frame sampling rate, output resolution, and processing mode without modifying the underlying code. The pipeline can be executed sequentially across multiple experiments, producing standardized and reproducible sickled ratio estimates alongside visual summaries that provide an effective means for validating the predictions. To maximize computational efficiency, the counting and visualization procedures are parallelized, while nnU-Net inference supports both multi-GPU and multi-CPU configurations. This design ensures that the same codebase can scale seamlessly from a standard laptop to a high-performance computing workstation without additional modification. The resulting outputs include per-frame segmentation overlays, labeled masks, test videos, and tabulated sickling statistics, collectively providing transparent and traceable documentation of the analysis process. Moreover, class definitions and threshold parameters are easily configurable, allowing rapid customization of the workflow for different red blood cell morphologies or related cellular imaging applications. Collectively, these features establish a robust, extensible, and fully reproducible computational framework that converts experimental videos into time-resolved, quantitative maps of sickling dynamics—executable through only a few terminal commands.
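The core of the counting step can be sketched as follows, using the mask convention above (0 = background, 1 = healthy, 2 = sickle). This simplified version counts connected components per class and omits the watershed split of touching cells performed by the full pipeline:

```python
import numpy as np
from scipy import ndimage as ndi

def sickled_fraction(mask):
    """Per-frame sickled-cell fraction from a labeled class mask.
    Connected components approximate cell instances; in the full
    pipeline each component is further split by watershed."""
    _, n_healthy = ndi.label(mask == 1)
    _, n_sickle = ndi.label(mask == 2)
    total = n_healthy + n_sickle
    return n_sickle / total if total else 0.0
```

Applying this to every extracted frame produces the time-resolved sickled-fraction curves compared against manual counts in Figs. 4–6.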

Conclusion and discussion

We developed and validated an automated, modular, and reproducible computational framework to quantify sickling dynamics in densely packed and overlapping RBC suspensions from microfluidic time-lapse videos. The workflow is designed to ensure reproducibility and methodological standardization at every stage—from data preparation to quantitative analysis—by integrating AI-assisted annotation through the Roboflow platform, robust segmentation using a two-dimensional nnU-Net model, and marker-controlled watershed post-processing for accurate instance separation and cell counting in crowded fields of view. Notably, despite being trained on a limited number of annotated frames, the framework achieves high-fidelity segmentation of overlapping cells and exhibits strong agreement with manual quantification, demonstrating robustness to dense cellular arrangements. Experimental validation further confirms that the automatically estimated sickle-cell fractions closely match manual measurements at different levels of hemoglobin modification by the anti-sickling drug osivelotor and reliably capture the temporal progression of sickling. We further assessed the framework using both pixel-level and instance-level evaluation metrics, and the consistently favorable results in both analyses demonstrate that the proposed method achieves robust and reliable performance from segmentation quality to cell-level identification. Importantly, the entire pipeline—from frame extraction to quantitative output—can be executed through a minimal three-command interface, enabling straightforward repetition, scalability, and reproducible deployment across computational environments and experimental datasets. By enabling the use of densely packed, overlapping RBC suspensions, this method can more than double the experimental throughput.

Although the training dataset consisted of 17 manually annotated frames, these frames were intentionally selected to represent heterogeneous experimental conditions, including multiple cell-density regimes and hemoglobin-modification states. Model robustness was further validated on independent time-lapse videos (Videos S1–5) spanning sparse and highly dense suspensions. These results demonstrate consistent segmentation and quantification performance across varied experimental conditions. A central design objective of this study was to develop a data-efficient analytical pipeline. Manual pixel-wise annotation of densely overlapping cells is labor-intensive and constitutes a major bottleneck in biomedical image analysis. Therefore, minimizing the annotation burden while maintaining reliable instance-level segmentation was a key consideration. The strong performance achieved with a limited training set highlights the adaptability of nnU-Net and the effectiveness of the preprocessing strategy. From a practical perspective, this data efficiency enhances transferability. Because the annotation requirement is modest, the model can be readily fine-tuned using a small number of representative images acquired from different microscopes or related cellular systems. Moreover, the modular and reproducible codebase facilitates straightforward retraining and adaptation to similar dense-cell segmentation problems. As nnU-Net is a self-configuring architecture designed to adapt to new datasets with minimal manual tuning, the proposed framework is expected to generalize effectively across comparable imaging setups.

Beyond sickle cell disease, the framework can be readily extended to other biomedical imaging applications involving dynamic changes in object morphology. Its combination of AI-assisted segmentation and quantitative analysis is applicable to studies of RBC deformability in malaria or spherocytosis, as well as investigations of cellular shape evolution, adhesion, or migration in microfluidic and organ-on-chip systems. More broadly, the open-source and modular architecture provides a generalizable foundation for automated, data-driven characterization of cellular biomechanics and therapeutic responses. Looking forward, this framework establishes a foundation for integrating machine learning with experimental biophysics to enable automated, quantitative characterization of cell mechanics. Future extensions may incorporate temporal tracking, three dimensional volumetric analysis, or multimodal imaging data to capture complex morphological alterations under varying biochemical and biomechanical stimuli. By bridging data-driven models with experimental platforms, this approach can accelerate discovery in hematology, mechanobiology, and therapeutic screening, advancing the broader goal of interpretable and reproducible AI in biomedical research.

Author contributions

N. K.: conceptualization, formal analysis, methodology, data curation, software, validation, visualization, writing – original draft. G. L.: conceptualization, data curation, formal analysis, writing – original draft, visualization, validation. J. Z.: performed all experiments, formal analysis, contributed analytic tools, and wrote the paper. M. D.: conceptualization, project management, formal analysis, provided experiment-related sources, and wrote the paper. G. E. K.: conceptualization, project management, formal analysis, supervision, and wrote the paper. M. X.: conceptualization, formal analysis, and wrote the paper.

Conflicts of interest

All authors declare no conflict of interest.

Data availability

All scripts, along with the trained model weights, are publicly available at https://github.com/nikhil-kadivar/rbc-sickling-dynamics.

Supplementary information: additional figures and analysis for watershed post-processing. See DOI: https://doi.org/10.1039/d6lc00108d.

Acknowledgements

This work was supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under grant number NIH R01HL154150-06. J. Z. and M. D. also acknowledge partial support from the NIH under grant number R01HL158102. We thank Dr. John M. Higgins for providing the sickle cell samples. High-performance computing resources were provided by the Center for Computation and Visualization at Brown University. The authors thank Pfizer for providing the osivelotor used in this study through a Pure Compound Grant.

References

  1. V. Munoz, P. A. Thompson, J. Hofrichter and W. A. Eaton, Nature, 1997, 390, 196–199 CrossRef CAS PubMed.
  2. W. A. Eaton and J. Hofrichter, Adv. Protein Chem., 1990, 40, 63–279 CrossRef CAS PubMed.
  3. T. Ye, L. Peng and G. Li, Biomech. Model. Mechanobiol., 2019, 18, 1821–1835 CrossRef PubMed.
  4. G. Li, Y. Qiang, H. Li, X. Li, M. Dao and G. E. Karniadakis, Biophys. J., 2023, 122, 2590–2604 CrossRef CAS PubMed.
  5. G. Li, Y. Qiang, H. Li, X. Li, P. A. Buffet, M. Dao and G. E. Karniadakis, PLoS Comput. Biol., 2023, 19, e1011223 CrossRef CAS PubMed.
  6. A. Dorken-Gallastegi, Y. Lee, G. Li, H. Li, L. Naar, X. Li, T. Ye, E. Van Cott, R. Rosovsky and D. Gregory, et al., iScience, 2023, 26, 107202 CrossRef CAS PubMed.
  7. G. Li, T. Ye, S. Wang, X. Li and R. U. I. Haq, Phys. Fluids, 2020, 32, 031903 CrossRef CAS.
  8. P. Sundd, M. T. Gladwin and E. M. Novelli, Annu. Rev. Pathol.: Mech. Dis., 2019, 14, 263–292 CrossRef CAS PubMed.
  9. G. Li, H. Li, P. Alioune Ndour, M. Franco, X. Li, I. MacDonald, M. Dao, P. A. Buffet and G. E. Karniadakis, Comput. Biol. Med., 2024, 182, 109198 CrossRef CAS PubMed.
  10. M. Xu, D. P. Papageorgiou, S. Z. Abidi, M. Dao, H. Zhao and G. E. Karniadakis, PLoS Comput. Biol., 2017, 13, e1005746 CrossRef.
  11. Y. Qiang, M. Xu, M. P. Pochron, M. Jupelli and M. Dao, Front. Phys., 2024, 12, 1331047 CrossRef PubMed.
  12. L. Alzubaidi, M. A. Fadhel, O. Al-Shamma, J. Zhang and Y. Duan, Electronics, 2020, 9, 427 CrossRef CAS.
  13. M. Darrin, A. Samudre, M. Sahun, S. Atwell, C. Badens, A. Charrier, E. Helfer, A. Viallat, V. Cohen-Addad and S. Giffard-Roisin, Sci. Rep., 2023, 13, 745 CrossRef CAS PubMed.
  14. S. Wang, T. Ye, G. Li, X. Zhang and H. Shi, PLoS Comput. Biol., 2021, 17, e1008746 CrossRef CAS PubMed.
  15. V. Carvalho, I. M. Gonçalves, A. Souza, M. S. Souza, D. Bento, J. E. Ribeiro, R. Lima and D. Pinho, Micromachines, 2021, 12, 317 CrossRef PubMed.
  16. S. Tavakoli, A. Ghaffari, Z. M. Kouzehkanan and R. Hosseini, Sci. Rep., 2021, 11, 19428 CrossRef CAS PubMed.
  17. S. M. Kiu and Y. C. Wang, BioMedInformatics, 2022, 2, 234–243 CrossRef.
  18. S. M. Recktenwald, K. Graessel, F. M. Maurer, T. John, S. Gekle and C. Wagner, Biophys. J., 2022, 121, 23–36 CrossRef CAS PubMed.
  19. Y. Alapan, Y. Matsuyama, J. Little and U. Gurkan, Technology, 2016, 4, 71–79 CrossRef CAS PubMed.
  20. G. Li, T. Ye, Z. Xia, S. Wang and Z. Zhu, Int. J. Eng. Sci., 2023, 191, 103901 CrossRef CAS.
  21. G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken and C. I. Sánchez, Med. Image Anal., 2017, 42, 60–88 CrossRef PubMed.
  22. S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz and D. Terzopoulos, IEEE Trans. Pattern Anal. Mach. Intell., 2021, 44, 3523–3542 Search PubMed.
  23. X. Liu, L. Song, S. Liu and Y. Zhang, Sustainability, 2021, 13, 1224 CrossRef.
  24. M. E. Rayed, S. S. Islam, S. I. Niha, J. R. Jim, M. M. Kabir and M. Mridha, Inform. Med. Unlocked, 2024, 47, 101504 CrossRef.
  25. R. R. Mehdi, N. Kadivar, V. Serpooshan, K. J. Myers, G. Karniadakis and R. Avazmohammadi, International Conference on Functional Imaging and Modeling of the Heart, 2025, pp. 420–429 Search PubMed.
  26. S. Neelakantan, M. Ismail, N. Kadivar, E. McGinn, L. Loza, K. J. Myers, B. J. Smith, R. Rizi, G. Karniadakis and R. Avazmohammadi, Acta Biomater., 2026, 210, 121–132 CrossRef PubMed.
  27. O. Ronneberger, P. Fischer and T. Brox, International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015, pp. 234–241 Search PubMed.
  28. Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh and J. Liang, International Workshop on Deep Learning in Medical Image Analysis, 2018, pp. 3–11 Search PubMed.
  29. Q. Zhang, K. Sampani, M. Xu, S. Cai, Y. Deng, H. Li, J. K. Sun and G. E. Karniadakis, Transl. Vis. Sci. Technol., 2022, 11, 7–7 CrossRef.
  30. R. R. Mehdi, N. Kadivar, T. Mukherjee, E. A. Mendiola, A. Bersali, D. J. Shah, G. Karniadakis and R. Avazmohammadi, Adv. Sci., 2025, e06933 CrossRef CAS PubMed.
  31. F. Isensee, P. F. Jaeger, S. A. Kohl, J. Petersen and K. H. Maier-Hein, Nat. Methods, 2021, 18, 203–211 CrossRef CAS PubMed.
  32. E. Du, M. Diez-Silva, G. J. Kato, M. Dao and S. Suresh, Proc. Natl. Acad. Sci. U. S. A., 2015, 112, 1422–1427 CrossRef CAS.
  33. B. Dwyer, J. Nelson and J. Solawetz, et al., Roboflow, 2026, https://roboflow.com/ Search PubMed.
  34. N. Ravi, V. Gabeur, Y.-T. Hu, R. Hu, C. Ryali, T. Ma, H. Khedr, R. Rädle, C. Rolland and L. Gustafson, et al., arXiv, 2024, preprint, arXiv:2408.00714,  DOI:10.48550/arXiv.2408.00714.
  35. G. Bradski, Dr. Dobb's Journal: Software Tools for the Professional Programmer, 2000, vol. 25, pp. 120–123 Search PubMed.
  36. S. Beucher and F. Meyer, Mathematical Morphology in Image Processing, CRC Press, 2018, pp. 433–481 Search PubMed.
  37. S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart and T. Yu, PeerJ, 2014, 2, e453 CrossRef PubMed.
  38. F. F. Costa and N. Conran, Sickle cell anemia: From basic science to clinical practice, Springer, 2016 Search PubMed.
  39. B. Metaferia, T. Cellmer, E. B. Dunkelberger, Q. Li, E. R. Henry, J. Hofrichter, D. Staton, M. M. Hsieh, A. K. Conrey and J. F. Tisdale, et al., Proc. Natl. Acad. Sci. U. S. A., 2022, 119, e2210779119 CrossRef CAS PubMed.
  40. Q. Guo, S. P. Duffy, K. Matthews, A. T. Santoso, M. D. Scott and H. Ma, J. Biomech., 2014, 47, 1767–1776 CrossRef PubMed.
  41. J. C. Caicedo, J. Roth, A. Goodman, T. Becker, K. W. Karhohs, M. Broisin, C. Molnar, C. McQuin, S. Singh and F. J. Theis, et al., Cytometry, Part A, 2019, 95, 952–965 CrossRef PubMed.
  42. E. Meijering, IEEE Signal Process. Mag., 2012, 29, 140–145 Search PubMed.
  43. V. Ulman, M. Maška, K. E. Magnusson, O. Ronneberger, C. Haubold, N. Harder, P. Matula, P. Matula, D. Svoboda and M. Radojevic, et al., Nat. Methods, 2017, 14, 1141–1152 CrossRef CAS PubMed.
  44. T. Wan, S. Xu, C. Sang, Y. Jin and Z. Qin, Neurocomputing, 2019, 365, 157–170 CrossRef.
  45. C. Stringer, T. Wang, M. Michaelos and M. Pachitariu, Nat. Methods, 2021, 18, 100–106 CrossRef CAS PubMed.
  46. E. Moen, D. Bannon, T. Kudo, W. Graf, M. Covert and D. Van Valen, Nat. Methods, 2019, 16, 1233–1246 CrossRef CAS PubMed.
  47. E. B. Hartnett, M. Zhou, Y.-N. Gong and Y.-C. Chen, Anal. Chem., 2022, 94, 14827–14834 CrossRef CAS PubMed.
  48. C. Piansaddhayanon, C. Koracharkornradt, N. Laosaengpha, Q. Tao, P. Ingrungruanglert, N. Israsena, E. Chuangsuwanich and S. Sriswasdi, Sci. Data, 2023, 10, 570 CrossRef CAS PubMed.
  49. K. He, X. Zhang, S. Ren and J. Sun, Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778 Search PubMed.
  50. Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell and S. Xie, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2022, pp. 11976–11986 Search PubMed.

Footnote

These authors contributed equally to this work.

This journal is © The Royal Society of Chemistry 2026