Open Access Article
This Open Access Article is licensed under a Creative Commons Attribution-Non Commercial 3.0 Unported Licence

Automated fluorescence image stitching for high-throughput and digital microfluidic biosensors

Zhiqiang Yana, Yulin Renbcd, Jaromír Jarušeke, Jan Brodskýe, Imrich Gablech*e, Haoqing Zhang*bcd and Pavel Neuzil*ef
aSchool of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, P.R. China
bThe Key Laboratory of Biomedical Information Engineering of the Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P.R. China. E-mail: zhanghaoqing@xjtu.edu.cn
cBioinspired Engineering and Biomechanics Center (BEBC), Xi'an Jiaotong University, Xi'an 710049, P.R. China
dState Industry-Education Integration Center for Medical Innovations at Xi'an Jiaotong University, P.R. China
eDepartment of Microelectronics, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technická 3058/10, Brno, 616 00, Czech Republic. E-mail: imrich.gablech@vutbr.cz
fMinistry of Education Key Laboratory of Micro and Nano Systems for Aerospace, School of Mechanical Engineering, Northwestern Polytechnical University, 127 West Youyi Road, Xi'an, Shaanxi 710072, P.R. China. E-mail: pavel.neuzil@nwpu.edu.cn

Received 21st October 2025, Accepted 27th October 2025

First published on 7th November 2025


Abstract

Fluorescence imaging underpins digital PCR (dPCR), microarrays, and microfluidic biosensors, yet precise image integration remains a technical bottleneck when the sample area exceeds the microscope field of view. Current stitching methods often rely on fiducial markers or manual tuning, limiting automation and robustness, particularly in portable or point-of-care devices. We present a marker-free image stitching algorithm that combines partition-detection-based registration with mask-based illumination correction. The algorithm aligns frames using intrinsic structural features and compensates for brightness inconsistencies in an adaptive manner, without requiring platform-specific parameter tuning. Application to three dPCR systems, including droplet- and chip-based formats, showed an increased number of matched feature points within overlapping regions, improving the reliability of image stitching. In addition, it enhanced intensity uniformity by ≈ 29.6% compared with conventional methods. The proposed algorithm was further validated on microarrays and bead-based chips, demonstrating consistent stitching accuracy and signal integrity across different modalities. This generalized and automation-compatible solution supports high-throughput microfluidic imaging, quantitative bioanalysis, and integration with artificial intelligence-enabled diagnostic workflows.


Introduction

Fluorescence imaging is essential for detecting and quantifying biomolecules in microfluidic-based lab-on-chip systems for biosensing applications in clinical diagnostics, pandemic screening, proteomics, genomics, and environmental monitoring.1–3 Array-based microfluidic biosensors benefit from high sensitivity and the ability to perform absolute quantification of biomolecules.4 Digital polymerase chain reaction (dPCR)5 achieves this by partitioning a reaction into thousands of independent sub-reactions and determining the original number of nucleic acid copies from the number of partitions showing amplification. The term partition refers to either a droplet in droplet digital PCR (ddPCR) or a chamber in chip digital PCR (cdPCR). This strategy improves precision in detecting low-abundance targets6 and is valuable in cancer diagnostics, infectious disease monitoring, and gene copy number analysis.7,8 Single-channel or multiplexed fluorescence detection further enhances dPCR performance.9
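The copy-number calculation behind this partitioning strategy follows standard Poisson statistics; the short sketch below illustrates the relation (an illustrative sketch only, not code from the systems discussed here):

```python
import math

def copies_per_partition(n_total, n_positive):
    """Mean copies per partition (lambda) from the positive count.

    A negative partition received zero copies, so
    P(negative) = exp(-lambda), giving lambda = -ln(1 - k/n).
    """
    if not 0 <= n_positive < n_total:
        raise ValueError("need 0 <= positives < total partitions")
    return -math.log(1.0 - n_positive / n_total)

# Example: 20,000 partitions, 7,000 of which show amplification.
lam = copies_per_partition(20000, 7000)
```

Multiplying the estimated mean by the number of partitions and dividing by the sampled volume then yields the original target concentration.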

These fluorescence-based readouts rely on precise and automated fluorescence imaging,10,11 yet the limited field of view in optical setups necessitates multi-frame image stitching for complete dataset reconstruction.12,13 Challenges such as repetitive partition structures, brightness inconsistencies, and automation limitations hinder scalability, particularly in high-throughput and portable applications.14 Moreover, compact and portable dPCR systems exacerbate these challenges through variable illumination conditions, non-uniform illumination, field misalignment, and stitching errors that compromise accuracy and reproducibility.15 These challenges reduce imaging effectiveness in single-cell analysis, point-of-care (POC) diagnostics, and biosensor imaging.5,16

Modern high-end microscopes address some of these limitations with motorized stages that provide integrated stitching functions and advanced illumination correction. However, such tools remain hardware-specific, costly, and unsuitable for compact or low-cost dPCR platforms. Our approach differs in being fully software-based, marker-free, and platform-independent, designed to operate even in resource-limited or portable systems without reliance on dedicated hardware. The algorithm therefore complements rather than competes with commercial solutions, extending advanced stitching capabilities to broader biosensing scenarios.

Our earlier work17 introduced a silicon cdPCR chip design with alignment marks and described a fiducial-based fluorescence image processing workflow, including image stitching, rotation correction, and reaction well identification. That study was chip-centric, with emphasis on hardware layout and silicon-based device fabrication. In contrast, the present work addresses the broader computational bottleneck by introducing a marker-free fluorescence image stitching algorithm. Instead of relying on alignment marks or block segmentation, the proposed method employs partition–detection encoding for robust registration, incorporates optional rotation invariance, and introduces a global mask-based brightness correction that improves intensity uniformity by ≈ 29.6% compared with adaptive histogram equalization. Unlike the previous study, which focused on in-house-made silicon cdPCR chips, the algorithm presented here is validated across multiple dPCR platforms, such as droplet, chip-based, and commercial QuantStudio, as well as microarrays and bead-based chips, establishing its generalizability. This shift from a hardware-specific workflow to an algorithm-centric framework makes the current study relevant for portable, POC, and artificial intelligence (AI)-integrated biosensing platforms.

Fluorescence-based imaging in droplet microfluidics relies on advanced image processing techniques to ensure accurate signal quantification. Among these, effective image stitching plays a crucial role in improving fluorescence signal integrity, thereby enabling precision diagnostics and fully automated workflows.18–20 Typically, multi-frame microscopic image stitching involves two major steps: image registration and image blending.21,22

Image registration: existing microscopic image registration methods can be broadly categorized into three classes: feature-based, region-based, and marker-based approaches. Feature-based methods employ algorithms such as scale-invariant feature transform (SIFT)23 and speeded-up robust features (SURF)24 to match control points within overlapping regions and to estimate the transformation matrices between adjacent images. Highly regular, close-packed structures of dPCR chips and microarrays often generate false correspondences, increasing registration failure rates. Region-based methods, such as phase correlation,25,26 have been designed specifically for microscopic images; nevertheless, these techniques are generally restricted to pure translational alignment, lack robustness to rotation, and often require manual intervention, factors that limit their applicability to future miniature optical systems. Marker-based registration,17 on the other hand, depends on predefined fiducial points, which are susceptible to manufacturing variations and environmental disturbances.

Image blending aims to generate a seamless panoramic microscopic image based on the registration results.21,22 Multi-band blending27 is a widely used approach that merges overlapping regions through multiresolution decomposition. Unfortunately, its correction is confined to local overlaps and does not compensate for global illumination inconsistencies. Exposure fusion28 addresses global exposure imbalance but neglects intra-image non-uniformities. Alternatively, block-based methods segment a microscopic image into several subregions and process each under the assumption of locally uniform illumination.10 Other techniques, such as adaptive histogram equalization (AHE),29 linear blending,26 and white top-hat transformation,14 also enhance image uniformity to some extent.

Based on the above review, a marker-free, automated, and scalable image stitching framework would be beneficial for enhancing fluorescence imaging in microfluidic biosensing. Nevertheless, current approaches only partially address these needs, particularly in the context of portable and miniaturized microfluidic systems.

Recently, deep-learning-based approaches, particularly convolutional neural networks, have been explored for automated image stitching, segmentation, and noise reduction, improving the precision of biosensing platforms.30,31 Motivated by these advancements, this study introduces a partition-detection-based image stitching approach that enhances fluorescence imaging by addressing key limitations in alignment accuracy, brightness uniformity, and automation. The approach is demonstrated using dPCR as a model and can be extended to microarrays, droplet-based microfluidic chips, and other fluorescence biosensing platforms by leveraging spatial partition distributions for marker-free registration. A mask-based illumination correction algorithm ensures uniform fluorescence intensity, improving signal quantification and image clarity. Integrating image registration, brightness correction, and data analysis into a unified workflow enhances efficiency and adaptability across biosensing applications. The method supports single-molecule fluorescence sensors, electrochemical biosensors with optical readouts, and AI-driven biosensing platforms, strengthening its role in clinical diagnostics, genomics, and bio-photonics. Validation on dPCR chips confirms its effectiveness in refining fluorescence imaging workflows for high-throughput research, medical diagnostics, and portable lab-on-a-chip devices.

Materials and methods

The stitching algorithm

We created the algorithm using a MATLAB script, which consists of four main steps: partition detection, image registration, illumination correction, and final image reconstruction. Fig. 1 provides a schematic overview of the method.
image file: d5ra08092d-f1.tif
Fig. 1 Schematic diagram of the proposed method; the dPCR image was generated using a dPCR emulation tool.32 (a) The dPCR chip is divided into four sections for imaging. (b) Partition detection. (c) Partition matching is achieved through the spatial encoding of the detected partitions. (d) Illumination non-uniformity correction: the red circles highlight the comparison before and after correction. (e) Image stitching and dPCR result analysis.

The first step is partition detection (Fig. 1a): an object detection algorithm is applied to localize all partitions in the dPCR images. Let Q denote the number of images to be stitched and Ni the number of detected partitions in image i, where i ∈ {1,…,Q}. The index of a detected partition is denoted as j, where j ∈ {1,…,Ni}. The location of partition j in image i is represented as Pi,j = (xi,j, yi,j), and its fluorescent intensity is extracted as Gi,j (Fig. 1b).

The second step is image registration (Fig. 1c): the coordinates of the detected partitions are used to calculate the transformation Tp,q between consecutive images, where p and q represent the image indices. The registration is based on matching partitions in the overlapping regions of two consecutive images. This process is repeated (Q − 1) times to compute all the necessary transformation matrices.

The third step is illumination non-uniformity correction (Fig. 1d): many dPCR systems face challenges in maintaining consistent illumination across multi-frame images. An algorithm was developed to ensure global uniformity of fluorescent intensity through a series of non-uniformity correction masks.

The final step is image stitching and data analysis (Fig. 1e): using the transformation matrices Tp,q, and correction masks Mi, the fluorescence images are processed to generate the final panoramic dPCR image Iall. This panoramic image is then used for generic dPCR result analysis.

This four-step process integrates partition detection, image registration, illumination correction, and stitching into a unified workflow, enabling accurate and seamless multi-frame image analysis.

Fluorescence data extraction from dPCR systems typically requires capturing two images concurrently: one under white light, in which all partitions are visible, and another under excitation light, in which only positive partitions are visible. The white-light images are used to locate the partitions and estimate the transformation matrix between images. Illumination non-uniformity correction is applied to the fluorescence images, which are then stitched together for the final dPCR analysis.

Compared to existing microarray fluorescence image stitching and analysis methods, the proposed approach is characterized by two main features:

1. The image registration is based on partition detection results rather than on pixel-value matching in overlapping image regions, which is commonly used in existing techniques. The stitching algorithm aligns frames through intrinsic structural features and adaptively compensates for brightness inconsistencies without platform-specific parameter tuning. This approach exhibits strong resilience to illumination non-uniformity, image noise, optical system defects, and chamber deformation, whereas conventional pixel-based registration methods tend to be less robust under such conditions.

2. Illumination non-uniformity correction is performed globally, enhancing the contrast between positive and negative wells across the entire chip.

Image registration

Partition detection. In this study, we used three types of dPCR systems to validate our image stitching method: the droplet dPCR (ddPCR) developed by Xi'an Jiaotong University (XJTU)33 (Fig. 2a), the cdPCR system developed by Northwestern Polytechnical University (NPU)17,20,32,34,35 (Fig. 2b), and QuantStudio cdPCR chip36 (Fig. 2c).
image file: d5ra08092d-f2.tif
Fig. 2 Partition detection results for different dPCR systems and pictures of the respective chips. The red dots and boxes indicate the location of the detected partitions. (a) XJTU ddPCR,33 (b) NPU cdPCR,35 and (c) QuantStudio cdPCR.36

The optical configurations of the three dPCR systems used in this study are summarized in Table 1. We adapted partition detection techniques to the different chip architectures: the circle Hough transform (CHT) for the QuantStudio chips, region segmentation for the NPU chip, and You Only Look Once (YOLO) v5,37 chosen for its detection robustness, for the XJTU system. The minimum detectable partition sizes for the CHT algorithm, region segmentation, and YOLO v5 are ≈ 11 × 11, 8 × 8, and 32 × 32 pixels, respectively.

Table 1 Optical and imaging parameters of the three dPCR datasets. 6-FAM stands for 6-carboxyfluorescein
Parameter | XJTU dPCR | NPU dPCR and QuantStudio dPCR
Camera model | DMK 72BUC02, CMOS | Canon EOS 70D, CMOS
Aperture value | f/1.4 |
Resolution (pixels) | 2752 × 2208 | 5472 × 3648
Exposure time | Fluorescence: 30 s; bright-field: 1/30 s |
Lens | F52D09, 51.9 mm | 5×
Dye | 6-FAM | 6-FAM


Feature descriptor. A series of dPCR images is registered by identifying and matching partitions that appear in consecutive image pairs. Partition matching relies on the spatial topological distribution of neighboring partitions, which serves as a unique footprint for each partition.

This approach requires two essential conditions for the registration algorithm: (1) images must contain overlapping regions, and (2) overlapping regions must include partitions with distinctive features. Most image stitching algorithms, including those beyond dPCR applications, depend on the first condition. Distinctiveness in partition features is usually satisfied in ddPCR systems because droplets often display random distributions that introduce irregularities useful for registration. Minor variations in droplet position, size, or local arrangement provide uniqueness that supports robust alignment. In chip-based dPCR, distinctive features are typically found at block boundaries or near structural irregularities, ensuring reliable registration (Fig. 3a). Perfectly uniform droplet arrays may lack sufficient diversity, reducing registration performance. Additional preprocessing or fiducial references are required under such conditions to achieve accurate stitching.


image file: d5ra08092d-f3.tif
Fig. 3 Partition-encoding-based dPCR image registration method. (a) Left: the spatial distribution of nearby partitions is extracted and used as a “footprint” for each partition. Right: neighborhood of four partitions. (b) The partitions' distribution is encoded into 49 positive influence values as a feature descriptor. (c) Feature descriptors extracted from two images are matched. Examples of matching results are shown for (i) XJTU dPCR, (ii) NPU dPCR, and (iii) QuantStudio dPCR.

The dPCR image registration problem in this study can be viewed as a point-set registration problem, solved by encoding the spatial distribution of partitions. We designed a feature descriptor customized for dPCR to generate a set of feature values for any given partition j (Fig. 3b).

First, a neighborhood radius lp is defined and centered on partition j. Partitions within this range are considered neighboring, and their indices form a set SMi,j. The parameter lp is set to eight times the average distance between adjacent partitions, ensuring that a sufficient number of neighboring partitions are included in the feature descriptors.

Next, we defined a 7 × 7 square grid of node points centered on partition (i, j); the set of 49 node-point indices in the grid is denoted as SNi,j = {1,2,…,49}. The number of nodes can be increased for improved uniqueness or decreased for faster processing.

Subsequently, we calculated the influence values of all nearby partitions SMi,j on the 49 defined nodes SNi,j, resulting in a total of 49 × |SMi,j| influence values, where |·| denotes the cardinality of a set. The influence value of partition m on node point n is calculated by:

 
V(m, n) = exp(−‖Pi,m − Pi,j,n‖2²/(2σ²)) (1)
Here, σ is the standard deviation of the Gaussian distribution function; in this study, σ is determined by the neighborhood radius as σ = lp/3. ‖Pi,m − Pi,j,n‖2 represents the Euclidean distance between a neighboring partition Pi,m and a grid node Pi,j,n, where m ∈ SMi,j and n ∈ SNi,j.

Finally, these influence values are summed at each of the 49 grid nodes to obtain the 49 feature values Fi,j. The combination of these values forms the feature descriptor for partition (i, j):

 
Fi,j = { Σm∈SMi,j V(m, n) : n ∈ SNi,j } (2)

Evidently, the distinctiveness of a partition's feature descriptor Fi,j depends on the uniqueness of the surrounding partition distribution. Highly uniform partition arrangements, such as droplet arrays with regular and repetitive patterns, cause Fi,j to lose uniqueness, making the registration algorithm inapplicable.
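As an illustration, the descriptor of eqns (1) and (2) can be sketched as follows. The node-grid geometry (7 × 7 nodes spread uniformly over [−lp, lp]² around the partition) is an assumption, since the text does not specify the node spacing:

```python
import math

def descriptor(center, neighbors, lp, grid_n=7):
    """Spatial-encoding feature descriptor (sketch of eqns (1)-(2)).

    center    -- (x, y) of partition j
    neighbors -- candidate neighbor coordinates; only those within
                 radius lp of the center (the set SM) contribute
    lp        -- neighborhood radius; sigma = lp / 3 as in the text
    """
    sigma = lp / 3.0
    # Assumed grid geometry: grid_n x grid_n nodes over [-lp, lp]^2.
    ticks = [-lp + 2 * lp * k / (grid_n - 1) for k in range(grid_n)]
    nodes = [(center[0] + dx, center[1] + dy) for dy in ticks for dx in ticks]
    feats = []
    for nx, ny in nodes:
        s = 0.0
        for mx, my in neighbors:
            if math.hypot(mx - center[0], my - center[1]) <= lp:  # SM_{i,j}
                d2 = (mx - nx) ** 2 + (my - ny) ** 2
                s += math.exp(-d2 / (2 * sigma ** 2))  # eqn (1)
        feats.append(s)  # eqn (2): influences summed per node
    return feats
```

Because the descriptor depends only on offsets from the partition center, it is invariant to translation of the whole neighborhood, which is what makes it usable for matching across frames.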

Transformation matrix estimation. Feature descriptors of all partitions are compared using a feature-matching algorithm based on Euclidean distance. A partition in image Ip matches a partition in image Iq when the Euclidean distance between their descriptors is smaller than that to any other descriptor in Iq. Because some partitions generate incorrect pairings, a random sample consensus (RANSAC) algorithm38 is applied to reject outliers and estimate the transformation between the two images (Fig. 3c).

The transformation is usually rigid, meaning translation and rotation suffice to align each image pair. However, radial lens distortion and optical path errors, which occur more frequently in portable devices, can make a rigid transformation insufficient to describe the relationship between two images. In such cases, affine, similarity, or three-dimensional projection transformations are applied to achieve higher registration accuracy.
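A minimal sketch of the matching-plus-RANSAC stage, assuming a pure-translation model for brevity (the actual algorithm also supports rigid, affine, and projective transformations):

```python
import math
import random

def match_and_register(desc_p, desc_q, pts_p, pts_q, iters=200, tol=2.0):
    """Nearest-descriptor matching followed by a minimal RANSAC loop.

    desc_p/desc_q -- feature descriptors per partition in images p and q
    pts_p/pts_q   -- corresponding partition coordinates
    Returns the best translation (tx, ty) and its inlier count.
    """
    def d(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Pair every partition in p with its nearest descriptor in q.
    pairs = [(i, min(range(len(desc_q)), key=lambda k: d(f, desc_q[k])))
             for i, f in enumerate(desc_p)]

    rng = random.Random(0)
    best_t, best_inliers = (0.0, 0.0), -1
    for _ in range(iters):
        i, j = rng.choice(pairs)  # one pair fixes a translation hypothesis
        tx, ty = pts_q[j][0] - pts_p[i][0], pts_q[j][1] - pts_p[i][1]
        inliers = sum(1 for a, b in pairs
                      if math.hypot(pts_q[b][0] - pts_p[a][0] - tx,
                                    pts_q[b][1] - pts_p[a][1] - ty) < tol)
        if inliers > best_inliers:
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers
```

Incorrect pairings simply fail to gather inliers, so the consensus translation is recovered even when a fraction of matches are wrong.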

Rotation invariance. Specific dPCR systems may experience field-of-view rotation when capturing different chip sections due to hardware limitations or specific design requirements.14 This rotation can cause inconsistencies in the previously mentioned feature descriptor. An optional module enhances rotation invariance for all partition feature descriptors. The feature direction angle of partition (i, j) is determined after identifying the relative positions of its neighboring partitions, ensuring consistency in registration despite rotational variations.
 
image file: d5ra08092d-t3.tif(4)

After that, we rotate the coordinates of its nearby partitions as calculated by:

 
P′i,m = R(−θ)(Pi,m − Pi,j) + Pi,j, where R(·) denotes the 2D rotation matrix and θ is the feature direction angle of eqn (4) (5)

Afterward, the rotated coordinates P′i,m of the nearby partitions replace Pi,m in eqn (1) and (2) for feature descriptor generation, ensuring that the descriptor remains consistent when the images are rotated. An example of rotation invariance is shown in Fig. 4: although the image is rotated, the two matched partitions have nearly identical feature descriptors after alignment.


image file: d5ra08092d-f4.tif
Fig. 4 Rotation invariance of the proposed spatial encoding method. (a) Bright-field XJTU dPCR image to be stitched, with a magnified view showing the neighborhood of a selected partition. (b) Spatial distribution of reaction chambers within the neighborhood. (c) The same distribution after rotation by the estimated feature angle. (d) The generated feature descriptor.

The feature direction angle θ depends on the completeness of detected neighboring partitions. Implementing rotation invariance may reduce the uniqueness of partitions near image edges, which can affect registration accuracy. The rotation invariance module is therefore optional in this study, and its performance is evaluated in Section 3.3.
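The canonicalization idea can be sketched as follows. The feature direction angle used here (the angle of the mean offset vector of the neighborhood) is a hypothetical stand-in for eqn (4), whose exact definition is not reproduced in this text:

```python
import math

def rotate_about(p, c, theta):
    """Rotate point p about center c by angle theta (radians)."""
    dx, dy = p[0] - c[0], p[1] - c[1]
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (c[0] + cos_t * dx - sin_t * dy,
            c[1] + sin_t * dx + cos_t * dy)

def canonical_angle(center, neighbors):
    """Hypothetical stand-in for eqn (4): the angle of the mean
    offset vector of the neighborhood."""
    mx = sum(p[0] - center[0] for p in neighbors) / len(neighbors)
    my = sum(p[1] - center[1] for p in neighbors) / len(neighbors)
    return math.atan2(my, mx)

def canonicalise(center, neighbors):
    """Rotate neighbors by -theta so the descriptor input becomes
    orientation-independent (the role of eqn (5))."""
    theta = canonical_angle(center, neighbors)
    return [rotate_about(p, center, -theta) for p in neighbors]
```

Rotating the whole neighborhood by any angle before canonicalization yields the same canonical coordinates, so descriptors built from them match across rotated frames.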

Illumination non-uniformity correction

Non-uniform illumination across the dPCR chip prevents direct quantification of partition fluorescence from raw grayscale pixel values. The issue becomes more pronounced when the chip is imaged in separate sections, because the same partition may appear with significantly different grayscale values in different images (Fig. 5a). Image processing techniques such as AHE29 can partially mitigate this effect, yet a dedicated global illumination correction method has remained unavailable. This study introduces a multi-frame dPCR illumination correction algorithm based on partition detection, which automatically generates correction masks and applies them to all pixels of the acquired images.
image file: d5ra08092d-f5.tif
Fig. 5 Example of illumination non-uniformity correction. (a) Two consecutive dPCR images (up) are individually corrected using custom correction masks, resulting in corrected images (down). (b) Stitched images and their partitions, grayscale values distribution with no brightness correction, generic self-adaptive correction, and our mask-based correction.

Illumination correction is performed after partition detection. All captured fluorescence images are first converted to grayscale, and the grayscale values of all detected partitions are collected. These values are then fitted with a Gaussian distribution function, yielding a global mean grayscale value Gall and a standard deviation σall, which together quantify the degree of illumination non-uniformity across the dataset.

The correction mask is constructed as a two-dimensional matrix with the same dimensions as the captured dPCR image. Gall is used as the reference value to calculate a correction coefficient for each pixel. The procedure for calculating these coefficients in a single image follows the method described in our earlier work.34 First, correction coefficients Ri,j are computed for all partitions, as given by:

 
Ri,j = Gi,j/Gall (6)

The correction coefficients for all pixels in the image are obtained through bilinear interpolation, based on Ri,j and the corresponding pixel coordinates Pi,j. These coefficients are then used to generate a raw image correction mask. A 2D Gaussian smoothing algorithm, applied with a window size of five pixels, refines this mask, producing the final image-correction mask Mi. The corrected image Īi is derived by dividing the original image by the correction mask (Fig. 5a), after which the corrected fluorescence intensity values i,j are re-extracted.
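A simplified sketch of the mask-based correction is given below. Nearest-partition assignment of the coefficients stands in for the bilinear interpolation and 5-pixel Gaussian smoothing used in the actual algorithm:

```python
def correction_mask(shape, partitions, intensities):
    """Build a per-pixel correction mask (simplified sketch).

    Each partition gets a coefficient R = G / G_all (eqn (6)); here
    every pixel takes the coefficient of its nearest partition, a
    stand-in for the bilinear interpolation and Gaussian smoothing
    of the actual algorithm.
    """
    g_all = sum(intensities) / len(intensities)
    h, w = shape
    mask = [[1.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            k = min(range(len(partitions)),
                    key=lambda i: (partitions[i][0] - x) ** 2
                                + (partitions[i][1] - y) ** 2)
            mask[y][x] = intensities[k] / g_all
    return mask

def apply_mask(img, mask):
    """Corrected image = original image / correction mask."""
    return [[px / m for px, m in zip(ri, rm)] for ri, rm in zip(img, mask)]
```

Dividing by the mask pulls bright regions down and dim regions up toward the global mean, flattening the illumination profile.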

After applying the correction, a uniform panoramic dPCR image is constructed by stitching the series of images along the seams identified during the image registration process (Fig. 5b). The grayscale distribution plot demonstrates that mask-based brightness non-uniformity correction provides a more apparent distinction between negative and positive partition grayscale distributions compared to no correction and self-adaptive correction. Section 3.1 provides a detailed evaluation of the algorithm's performance in correcting illumination non-uniformity.

Since the primary objective of dPCR is to determine the nucleic acid copy number by analyzing the fluorescence intensity distribution of partitions, constructing a full panoramic image is not always required. Once transformations between images are defined and corrections are applied, overlapping fluorescence data from partitions observed in multiple frames can be extracted and combined by averaging or computing the median value, ensuring accurate quantification without full-image reconstruction.
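A minimal sketch of this panorama-free extraction, assuming a hypothetical data layout in which each partition id maps to its per-frame corrected intensities:

```python
from statistics import median

def merge_partition_intensities(observations):
    """Combine per-frame intensities of each partition.

    observations maps a partition id to the corrected intensities
    measured in every frame where it appears (hypothetical layout);
    the median is robust to a single badly illuminated frame.
    """
    return {pid: median(vals) for pid, vals in observations.items()}

merged = merge_partition_intensities({"p1": [10.0, 12.0, 30.0],
                                      "p2": [8.0]})
```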

Results

Image stitching

We tested our stitching algorithm using three datasets (Fig. 6). In each dataset, the images were stitched together along the seams (Fig. 6a). We also present panoramic dPCR images generated from multiple images (Fig. 6b) for the XJTU ddPCR, NPU cdPCR, and QuantStudio cdPCR systems. Based on the experimental results, our algorithm successfully generated panoramic dPCR images across the three different chip types.
image file: d5ra08092d-f6.tif
Fig. 6 Stitched dPCR images. The black area of the chip image has no partitions. (a) Seams in the stitched dPCR chip images. (b) Panoramic dPCR images.

We compared our partition-encoding-based feature descriptor with the SURF and Harris feature descriptors using the NPU dPCR system as an example (Fig. 7a). 125, 63, and 0 feature point pairs were successfully matched using our descriptor, the SURF descriptor, and the Harris detector, respectively. These results indicate that our method substantially increases the number of matched feature points within overlapping regions, thereby improving the reliability of the stitching process. Furthermore, we compared our dPCR image registration method with a region-matching-based registration method17 and a SURF-feature-based method in terms of processing time, implementation approach, and applicability, as summarized in Table 2. All experiments were conducted on a workstation with an AMD Ryzen 9 5900HX processor (8 cores, 16 threads, 3.30 GHz clock speed) and 32 GB of RAM.


image file: d5ra08092d-f7.tif
Fig. 7 (a) Compares our partition-encoding-based registration method with the SURF-feature-based and Harris-feature-based methods. (b) An example of stitching a pair of dPCR images with a ≈ 20° rotation.
Table 2 Comparison of dPCR image registration methods in terms of processing time, automation, rotation invariance, robustness to non-uniform illumination, and supported transformation matrix
Registration method | Processing time | Supported transformation matrix
Our method | 769 ms | 3D projection
Region matching17 | 14,475 ms | 2D rigid
SURF feature24 | 552 ms | 3D projection


Although the presented figures show datasets with a limited number of stitched frames for clarity, the algorithm has also been applied to larger-scale reconstructions involving more than 40 frames. Performance remained stable, with no noticeable loss of accuracy and a computation time that grew only in proportion to the number of frames. Scalability therefore follows a linear trend, demonstrating suitability for high-throughput imaging workflows that require 10 to 50 or more stitched images.

Furthermore, we compared our method with several commonly used image stitching software tools, including Hugin and ImageJ.25 For the XJTU ddPCR and QuantStudio cdPCR systems, all three methods (our proposed approach, Hugin, and ImageJ) successfully achieved image registration and blending with only negligible alignment errors. Successful blending of the NPU cdPCR images was achieved exclusively with our method.

Rotation invariance

We conducted a series of tests on XJTU dPCR bright-field images to evaluate the rotation invariance of the proposed feature descriptor, where the rotation angle between a pair of consecutive images varied from ≈ −45° to ≈ 45° in ≈ 10° increments. Results (Table 3) show that each pair of images was successfully stitched. The estimation error is ≈ 0.091°, which is negligible. An example of stitching a pair of images with a 20° rotation angle is shown in Fig. 7b.
Table 3 Comparison of true and estimated rotation angles
Rotation angle (°) −45.00 −35.00 −25.00 −15.00 −5.00 5.00 15.00 25.00 35.00 45.00
Estimated angle (°) −44.92 −34.99 −25.03 −14.95 −4.97 5.13 15.09 25.03 34.84 44.96
Absolute error (°) 0.08 0.01 0.03 0.05 0.03 0.13 0.09 0.03 0.16 0.04


Illumination non-uniformity correction

We validated our mask-based illumination non-uniformity correction algorithm using the QuantStudio dPCR system and compared it with the AHE algorithm (Fig. 8a). The results show that the Gaussian-fit σall of positive partitions was ≈ 3.38, ≈ 2.57, and ≈ 1.57 under no correction, AHE-based correction, and our mask-based correction, respectively. The AHE-based correction reduced σall by ≈ 24.0%; our method achieved an additional ≈ 29.6% reduction, for an overall reduction of ≈ 53.6%.
image file: d5ra08092d-f8.tif
Fig. 8 Grayscale images of partitions and corresponding grayscale value distributions for two platforms: (a) QuantStudio dPCR chip and (b) NPU dPCR chip. The left panels show the raw images before illumination correction, and the right panels show the corrected images using the proposed mask-based algorithm. Blue curves represent the original grayscale distributions, and red curves represent fitted Gaussian profiles. Correction reduces brightness non-uniformity, yielding narrower and more symmetric distributions.
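The percentages quoted above are mutually consistent when each reduction is referenced to the uncorrected σall, as a quick arithmetic check shows:

```python
sigma_raw, sigma_ahe, sigma_mask = 3.38, 2.57, 1.57

# Each reduction is expressed relative to the uncorrected sigma:
ahe_gain   = (sigma_raw - sigma_ahe) / sigma_raw    # ≈ 24.0%
extra_gain = (sigma_ahe - sigma_mask) / sigma_raw   # ≈ 29.6%
total_gain = (sigma_raw - sigma_mask) / sigma_raw   # ≈ 53.6%
```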

Furthermore, our correction method was applied to an NPU dPCR chip with an actual average copy number of 0.6075 per reaction chamber (Fig. 8b). Without illumination correction,17 the detected value was 0.5744. After applying global illumination correction, the detected value was 0.6131, reducing the relative detection error from ≈ 5.45% to ≈ 0.92% (≈ 4.53 percentage points).
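Expressed as relative errors against the true copy number, these values correspond to:

```python
true_cn, uncorrected_cn, corrected_cn = 0.6075, 0.5744, 0.6131

err_before = abs(uncorrected_cn - true_cn) / true_cn  # ≈ 5.45%
err_after  = abs(corrected_cn - true_cn) / true_cn    # ≈ 0.92%
gain       = err_before - err_after                   # ≈ 4.53 percentage points
```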

Discussion

The proposed fluorescence image stitching method significantly improves registration accuracy and global brightness uniformity, making it highly suitable for biosensing applications, particularly in dPCR and microarray imaging. Validation across multiple dPCR platforms confirms its adaptability and robustness, addressing key challenges in fluorescence-based bioanalysis.

Pixel-matching approaches, such as the SURF method, often produce false matches due to anomalous pixels, non-uniform illumination, and the repetitive partition layout in dPCR and microarray images. The spatial partition encoding strategy used in our algorithm achieves precise alignment and substantially increases the number of matched feature points within overlapping regions. Stitching reliability improves accordingly, which is especially important for dPCR systems where misalignment distorts fluorescence quantification and reduces the accuracy of DNA copy number estimation. The region-based matching algorithm17 delivers comparable accuracy but requires ≈ 18.82 times longer processing, making it unsuitable for high-throughput scenarios.

The algorithm performs best when overlapping regions include distinctive spatial features. ddPCR systems often provide sufficient irregularity through random droplet distributions, while cdPCR chips contain structural boundaries that serve as reliable landmarks. Perfectly uniform droplet arrays represent a special case where feature diversity is limited, and preprocessing or fiducial references may be necessary to achieve accurate alignment.

Droplet displacement during imaging is a potential limitation, particularly in field or point-of-need applications. Minor positional shifts between frames can reduce stitching accuracy, although the partition-based approach remains largely tolerant to such variations because the feature descriptor depends only on the local spatial distribution of droplets or chambers. The algorithm also incorporates a random sample consensus (RANSAC) procedure that automatically removes invalid feature correspondences resulting from small displacements. Larger droplet movements may degrade registration accuracy, but these effects can be mitigated through basic image stabilization or frame selection prior to stitching.
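The RANSAC step can be sketched as follows. This minimal translation-only version (the actual pipeline estimates a full geometric transform) hypothesises a shift from a single random correspondence and keeps the largest consensus set, so correspondences corrupted by droplet displacement are rejected as outliers.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, thresh=2.0, seed=0):
    """Minimal RANSAC sketch for a translation model: repeatedly
    hypothesise a shift from one random correspondence and keep the
    shift that explains the most point pairs within `thresh` pixels."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        shift = dst[i] - src[i]                    # model from one sample
        residual = np.linalg.norm(src + shift - dst, axis=1)
        inliers = residual < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers for the final estimate
    best_shift = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_shift, best_inliers
```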

The brightness correction algorithm outperforms the commonly used AHE method.29 The σ value of fluorescence intensity across stitched images was reduced from ≈ 3.38 in uncorrected images to ≈ 2.57 with the AHE method and further to ≈ 1.57 with the proposed method, a ≈ 29.6% improvement in fluorescence intensity uniformity over AHE when normalized to the uncorrected σ. This correction level is beneficial for accurate fluorescence signal quantification, particularly in microarray and dPCR applications where signal intensity variations can compromise the reliability of quantitative analyses.
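A minimal flat-field sketch shows why background-normalized correction narrows the intensity distribution: dividing out a smooth illumination estimate removes the vignetting component of σ while preserving the mean signal. The division-by-background model below is a generic stand-in for illustration, not the mask-based algorithm proposed here.

```python
import numpy as np

def flat_field_correct(img, background, eps=1e-6):
    """Generic flat-field correction sketch: divide by a smooth
    background (illumination) estimate, then rescale so the mean
    intensity of the image is preserved."""
    corrected = img / np.maximum(background, eps)  # undo vignetting
    return corrected * img.mean() / corrected.mean()
```

In practice the background would be estimated from the image itself (e.g. by heavy smoothing of a partition-free mask), but the effect is the same: the corrected image's intensity σ approaches the noise floor rather than the illumination gradient.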

Applicability extends beyond dPCR but remains most effective in partition-based microfluidic platforms. Systems that contain repetitive spatial arrangements, such as microarrays or bead-based chips, benefit from the same partition–detection encoding and mask-based illumination correction. Platforms lacking defined spatial repetition are less suited to this approach, although future adaptations may enable broader use. The algorithm should therefore be considered primarily optimized for partitioned architectures, with potential extension to biosensors that share similar spatial layouts.

These improvements make the method suitable for integration into portable and POC diagnostic systems, where hardware-based stitching and illumination correction are not available. Compatibility with software-driven workflows also facilitates incorporation into AI-assisted biosensing platforms without excessive computational demand. Future work will explore adaptive enhancements that extend applicability and enable real-time operation in resource-limited settings.

Conclusion

This study introduces a partition-detection-based fluorescence image stitching method for biosensing applications, integrating marker-free image registration with global brightness correction. The algorithm ensures high registration accuracy and fluorescence intensity uniformity, improving data reliability in dPCR, microarrays, and fluorescence-based biosensors. Validation across multiple dPCR platforms confirms its effectiveness in refining fluorescence imaging workflows for high-throughput research, diagnostics, and portable lab-on-a-chip devices.

Beyond dPCR, the method is applicable to biosensor platforms requiring precise fluorescence imaging, including protein biochips, electrochemical biosensors, and AI-driven biosensing platforms. Consistent fluorescence quantification across various microfluidic architectures enhances its applicability in both research and clinical settings. The ability to process fluorescence images efficiently in portable and high-throughput applications strengthens its role in next-generation biosensor development.

Future advancements in deep learning are expected to further enhance fluorescence image stitching. AI-assisted object detection and real-time adaptive correction algorithms could increase automation and accuracy, reducing manual calibration requirements. Optimization for real-time processing is likewise expected to improve compatibility with handheld diagnostic devices, extending the application range in POC testing and remote healthcare monitoring.

Conflicts of interest

There are no conflicts to declare.

Data availability

The datasets generated and analyzed during the current study are available from the corresponding authors upon reasonable request.

Acknowledgements

Haoqing Zhang was supported by grant no. 62301412 from the Natural Science Foundation of China, grant no. 2023-JC-QN-0130 from the Natural Science Basic Research Program of Shaanxi Province, P. R. China, and grant no. 2023M732815 and no. 2024T170723 from the P. R. China Postdoctoral Science Foundation. We thank Xi'an Jiaotong University for the XJTU dPCR chip and Charles University in Prague, Czech Republic for the NAICA dPCR chip.

References

  1. Y. S. Lee, J. W. Choi, T. Kang and B. G. Chung, BioChip J., 2023, 17, 112–119.
  2. C. Murtin, C. Frindel, D. Rousseau and K. Ito, Comput. Biol. Med., 2018, 92, 22–41.
  3. L. Zhang, R. Parvin, Q. Fan and F. Ye, Biosens. Bioelectron., 2022, 211, 114344.
  4. M. Luo, H. Yukawa and Y. Baba, Lab Chip, 2022, 22, 2223–2236.
  5. B. Vogelstein and K. W. Kinzler, Proc. Natl. Acad. Sci. U. S. A., 1999, 96, 9236–9241.
  6. H. Zhang, L. Cao, J. Brodsky, I. Gablech, F. Xu, Z. Li, M. Korabecna and P. Neuzil, TrAC, Trends Anal. Chem., 2024, 117676.
  7. W. Bu, W. Li, J. Li, T. Ao, Z. Li, B. Wu, S. Wu, W. Kong, T. Pan, Y. Ding, W. Tan, B. Li, Y. Chen and Y. Men, Sens. Actuators, B, 2021, 348, 130678.
  8. R. Nyaruaba, C. Mwaliko, D. Dobnik, P. Neužil, P. Amoth, M. Mwau, J. Yu, H. Yang and H. Wei, Clin. Microbiol. Rev., 2022, 35, e00168–00121.
  9. M. Gaňová, H. Zhang, H. Zhu, M. Korabečná and P. Neužil, Biosens. Bioelectron., 2021, 181, 113155.
  10. K. Wang, B. Sang, L. He, Y. Guo, M. Geng, D. Zheng, X. Xu and W. Wu, Analyst, 2022, 147, 3494–3503.
  11. S. Zhou, T. Gou, J. Hu, W. Wu, X. Ding, W. Fang, Z. Hu and Y. Mu, Biosens. Bioelectron., 2019, 128, 151–158.
  12. Z. Beini, C. Xuee, L. Bo and W. Weijia, IEEE Access, 2021, 9, 74446–74453.
  13. H. Yang, J. Yu, L. Jin, Y. Zhao, Q. Gao, C. Shi, L. Ye, D. Li, H. Yu and Y. Xu, Analyst, 2023, 148, 239–247.
  14. M. Jiang, P. Liao, Y. Sun, X. Shao, Z. Chen, P. Fei, J. Wang and Y. Huang, Lab Chip, 2021, 21, 2265–2271.
  15. A. A. Kojabad, M. Farzanehpour, H. E. G. Galeh, R. Dorostkar, A. Jafarpour, M. Bolandian and M. M. Nodooshan, J. Med. Virol., 2021, 93, 4182–4197.
  16. G. Pohl and I.-M. Shih, Clin. Microbiol. Rev., 2004, 4, 41–47.
  17. H. Li, H. Zhang, Y. Xu, A. Tureckova, P. Zahradník, H. Chang and P. Neuzil, Sens. Actuators, B, 2019, 283, 677–684.
  18. G. T. Flaman, N. D. Boyle, C. Vermelle, T. A. Morhart, B. Ramaswami, S. Read, S. M. Rosendahl, G. Wells, L. P. Newman and N. J. A. C. Atkinson, Anal. Chem., 2023, 95, 4940–4949.
  19. J. Brodský, I. Gablech, H.-h. Yu, J. Y. Ying and P. Neužil, Sens. Actuators, B, 2024, 421, 136535.
  20. J. Zheng, T. Cole, Y. Zhang, D. Yuan and S.-Y. Tang, Lab Chip, 2024, 24, 244–253.
  21. T. Gou, J. Hu, S. Zhou, W. Wu, W. Fang, J. Sun, Z. Hu, H. Shen and Y. Mu, Analyst, 2019, 144, 3274–3281.
  22. F. Y. Otuboah, Z. Jihong, Z. Tianyun and C. Cheng, Optik, 2019, 179, 1071–1083.
  23. D. G. Lowe, Int. J. Comput. Vis., 2004, 60, 91–110.
  24. H. Bay, A. Ess, T. Tuytelaars and L. Van Gool, Comput. Vis. Image Underst., 2008, 110, 346–359.
  25. S. Preibisch, S. Saalfeld and P. Tomancak, Bioinformatics, 2009, 25, 1463–1465.
  26. J. Hu, L. Chen, P. Zhang, K. Hsieh, H. Li, S. Yang and T.-H. Wang, Lab Chip, 2021, 21, 4716–4724.
  27. P. J. Burt and E. H. Adelson, ACM Trans. Graphics, 1983, 2, 217–236.
  28. Z. G. Li, J. H. Zheng and S. Rahardja, IEEE Trans. Image Process., 2012, 21, 4672–4676.
  29. S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman and K. Zuiderveld, Comput. Vis. Graph. Image Process., 1987, 39, 355–368.
  30. Z. Wu, Y. Bai, Z. Cheng, F. Liu, P. Wang, D. Yang, G. Li, Q. Jin, H. Mao and J. Zhao, Biosens. Bioelectron., 2017, 96, 339–344.
  31. D. M. D. Siu, K. C. M. Lee, B. M. F. Chung, J. S. J. Wong, G. Zheng and K. K. Tsia, Lab Chip, 2023, 23, 1011–1033.
  32. H. Zhang, Z. Yan, X. Wang, M. Gaňová, M. Korabečná, P. Zahradník, H. Chang and P. Neuzil, Sens. Actuators, B, 2022, 358, 131527.
  33. Y. Ren, J. Ji, H. Zhang, L. Cao, J. Hu, F. Xu and Z. Li, Lab Chip, 2023, 23, 2521–2530.
  34. Z. Yan, H. Zhang, X. Wang, M. Gaňová, T. Lednický, H. Zhu, X. Liu, M. Korabečná, H. Chang and P. Neužil, Lab Chip, 2022, 22, 1333–1343.
  35. X. Liu, X. Wang, H. Zhang, Z. Yan, M. Gaňová, T. Lednický, T. Řezníček, Y. Xu, W. Zeng and M. Korabečná, Biosens. Bioelectron., 2023, 232, 115319.
  36. M. Laig, C. Fekete and N. Majumdar, in Quantitative Real-Time PCR: Methods and Protocols, Springer, 2019, pp. 209–231.
  37. M. Hussain, IEEE Access, 2024, 12, 42816–42833.
  38. M. A. Fischler and R. C. Bolles, Commun. ACM, 1981, 24, 381–395.

This journal is © The Royal Society of Chemistry 2025