An image-to-answer algorithm for fully automated digital PCR image processing

The digital polymerase chain reaction (dPCR) is an irreplaceable variant of PCR techniques due to its capacity for absolute quantification and detection of rare deoxyribonucleic acid (DNA) sequences in clinical samples. Image processing methods, including micro-chamber positioning and fluorescence analysis, determine the reliability of the dPCR results. However, typical methods place stringent demands on the chip structure, chip filling, and light intensity uniformity. This research developed an image-to-answer algorithm that requires only a single fluorescence image capture and removes known image-related errors. We applied the Hough transform to identify partitions in the images of dPCR chips, the 2D Fourier transform to rotate the image, and the 3D projection transformation to locate and correct the positions of all partitions. We then calculated each partition's average fluorescence amplitude and generated a 3D fluorescence intensity distribution map of the image. We subsequently corrected the fluorescence non-uniformity between partitions based on this map and obtained statistics of the partition fluorescence intensities. We validated the proposed algorithm using different contents of the target DNA. The proposed algorithm is robust to dPCR chip structural damage and light intensity non-uniformity. It also provides a reliable alternative for analyzing the results of chip-based dPCR systems.

Supplementary Section B: The workflow of the proposed algorithm.
The workflow of the proposed algorithm, comprising 3 sections, is shown in Fig S 2. Fig S 2 The proposed algorithm is split into 3 sections. The 1st section consists of image skew correction, well location correction, and extraction of the fluorescence amplitude of each well. The 2nd section generates a global light non-uniformity correction mask to remove the influence of non-uniform light distribution on the dPCR results. The 3rd section performs the image processing of the 1st section on the dPCR images after thermal cycling and applies the light non-uniformity correction mask of the 2nd section, constructing the well count as a function of fluorescence intensity.
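The three-stage structure above can be sketched as a minimal pipeline. This is an illustrative skeleton only, not the authors' implementation: the function names, the 4×4 well layout, and the divide-by-field correction mask are all assumptions made for the sketch.

```python
import numpy as np

def extract_well_amplitudes(image: np.ndarray) -> np.ndarray:
    """Stage 1 stand-in: reduce the image to per-well mean fluorescence.
    Here simplified to block-averaging into an assumed 4x4 well grid."""
    h, w = image.shape
    rows, cols = 4, 4  # hypothetical chip layout for this sketch
    return image[:h - h % rows, :w - w % cols] \
        .reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))

def build_light_correction_mask(reference: np.ndarray) -> np.ndarray:
    """Stage 2: a mask that flattens a non-uniform illumination field
    (simple divide-by-field correction on a reference acquisition)."""
    return reference.mean() / reference

def analyze(image: np.ndarray, mask: np.ndarray, n_bins: int = 16):
    """Stage 3: apply the mask to per-well amplitudes, then histogram
    the well counts as a function of fluorescence."""
    corrected = extract_well_amplitudes(image) * mask
    counts, _edges = np.histogram(corrected, bins=n_bins)
    return corrected, counts

# Usage: a synthetic 64x64 image with a vertical illumination gradient.
img = np.outer(np.linspace(0.5, 1.5, 64), np.ones(64)) * 100.0
ref = extract_well_amplitudes(img)            # stage-1 output on a blank chip
mask = build_light_correction_mask(ref)       # stage-2 mask
corrected, counts = analyze(img, mask)        # stage-3 result
```

Because the mask is built from the same illumination field, the corrected per-well amplitudes become uniform, which is the intent of the stage-2 correction.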
Supplementary Section C: Details of radial lens distortion correction algorithm.
Radial lens distortion is a common form of image distortion. Here, we take an image (Fig S 3A) of a dPCR chip with obvious lens distortion, captured with a smartphone (Huawei P40), as an example to demonstrate our method. In this image, the pincushion distortion is radially symmetric, and the center of distortion coincides with the center of the image.
Firstly, we build a polar coordinate system and calculate the polar coordinates of all pixels in the original image, as shown in Fig S 3B. The corrected coordinates of these pixels are then calculated as shown in Fig S 3C. The polar angle θ does not change after correction; the polar radius r_i of pixel i is corrected based on the distortion equation s_i = r_i(1 + k·r_i²), where s_i is the corrected polar radius of pixel i and k is the radial lens distortion correction coefficient. Then, we build a rectangular coordinate system as shown in Fig S 3D and convert the corrected coordinates of the pixels into rectangular coordinates. A cubic spline interpolation algorithm is then employed to calculate the gray values of all pixels in the corrected image. Finally, the corrected image is generated from these pixels. To measure the performance of the distortion correction and determine the optimal correction coefficient k, the upper boundary of the dPCR chip is chosen as a standard. Firstly, 5 uniformly distributed wells located on the upper boundary are manually picked for quadratic polynomial fitting, which yields the quadratic coefficient α. Then, a series of k values is used for distortion correction; the k whose corrected boundary has the minimal absolute value of α is considered optimal. The boundary curves before and after distortion correction are shown in Fig S 4. We compared the quadratic coefficient α before and after distortion correction, as shown in Table S 1. In theory, the correction ratios of all 4 boundaries should be very close; however, due to errors in the optical path, they differ from each other. Since we chose the upper boundary to measure the degree of distortion, its correction ratio is higher than that of the other 3 boundaries.
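A correction of this kind can be sketched as follows. This is a hedged illustration, not the authors' code: it assumes the common single-coefficient radial model s = r(1 + k·r²) with the radius normalised by the half-diagonal, uses SciPy's cubic-spline resampling (`scipy.ndimage.map_coordinates`, order 3) in place of the paper's cubic spline step, and approximates the inverse of the model by applying it to the output radius, a standard small-distortion approximation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def undistort(image: np.ndarray, k: float) -> np.ndarray:
    """Correct radial distortion under the single-coefficient model
    s = r * (1 + k * r**2), with r measured from the image centre and
    normalised by the half-diagonal. Cubic-spline resampling (order=3).
    For each output pixel, the source image is sampled at a radius given
    by applying the model to the output radius (inverse approximation)."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = yy - cy, xx - cx
    r = np.hypot(dx, dy) / np.hypot(cx, cy)   # normalised radius
    scale = 1.0 + k * r**2                    # radial scale factor
    src_y = cy + dy * scale                   # sampling coordinates in
    src_x = cx + dx * scale                   # the distorted source image
    return map_coordinates(image, [src_y, src_x], order=3, mode='nearest')

def quad_coeff(xs, ys) -> float:
    """Quadratic coefficient of a least-squares parabola through
    boundary-well centres; |alpha| -> 0 means the boundary is straight."""
    return np.polyfit(xs, ys, 2)[0]

# Usage: with k = 0 the mapping is the identity, so the image is unchanged;
# scanning k and minimising |quad_coeff| on boundary wells selects the
# optimal coefficient, as in the text.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
same = undistort(img, 0.0)
alpha = quad_coeff(np.arange(10.0), 2.0 * np.arange(10.0)**2 + 3.0)
```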

Fig S 5 Schematic diagram of boundary line fitting algorithm. (A) Filtered upper boundary wells. (B) Lines with a different number of support points. (C, D) Fitted upper boundary line using RANSAC method and direct least square method.
To fit boundary lines from boundary wells that include many wrongly detected/filtered ones, we propose an algorithm based on random sample consensus (RANSAC).
Here, we take the upper boundary as an example to demonstrate it. The filtered upper boundary wells are shown in Fig S 5A.
We randomly choose five wells and fit a line to them using the least squares method. The number of wells located on this line is then counted. Whether a well lies on the line is judged by its vertical distance to the line: if the distance is below 5 pixels (a threshold that can be adjusted for different cases), the well is regarded as located on the line.
Subsequently, the above two steps are repeated 2000 times. Lines with different numbers of support points are shown in Fig S 5B. The line with the maximum number of support points is considered optimal. Finally, the upper boundary (Fig S 5C) is determined by least-squares fitting of all wells located on this optimal line. Compared with direct least-squares fitting (Fig S 5D), this algorithm eliminates the effect of a few wrongly detected/filtered wells on the boundary location.
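The sampling, counting, and refitting steps described above can be sketched as follows. The function name and the synthetic well coordinates are illustrative; the 5-well sample size, the 5-pixel vertical-distance threshold, and the 2000 iterations follow the text.

```python
import numpy as np

def ransac_line(xs, ys, n_iter=2000, sample=5, thresh=5.0, seed=0):
    """Robustly fit a line y = a*x + b to boundary wells:
    repeatedly sample 5 wells, least-squares fit, count wells whose
    vertical distance to the line is below `thresh` pixels, keep the
    largest support set, then refit on all of its inliers."""
    rng = np.random.default_rng(seed)
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    best_inliers = np.zeros(len(xs), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(xs), size=sample, replace=False)
        a, b = np.polyfit(xs[idx], ys[idx], 1)
        inliers = np.abs(ys - (a * xs + b)) < thresh  # vertical distance
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares fit over all wells on the optimal line.
    a, b = np.polyfit(xs[best_inliers], ys[best_inliers], 1)
    return a, b, best_inliers

# Usage: 40 wells on y = 0.1*x + 20 with noise, plus 6 gross outliers
# standing in for wrongly detected/filtered wells.
rng = np.random.default_rng(1)
x = np.arange(40.0)
y = 0.1 * x + 20.0 + rng.normal(0.0, 0.5, 40)
y[:6] += 80.0                      # wrongly detected wells
a, b, inl = ransac_line(x, y)
```

A direct least-squares fit over all 40 points would be pulled far off by the 6 outliers; the support-counting step is what removes their influence.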

Supplementary Section E: Details of projection transform.
A 2D fluorescence image is a projection of the real dPCR chip. From the locations of the 4 corner points of the chip, the projection transformation matrix T between the image and the chip can be calculated. The wells on the dPCR chip are pre-defined and precisely manufactured; therefore, the locations of all wells can also be calculated based on T, as shown in Fig S 6A. This process is also schematically illustrated in Fig S 6B. Compared with the conventional 2D transform method, our method performs better when the chip's surface is not perpendicular to the optical path, which may occur due to hardware assembly errors, optical instrument fabrication errors, or human operation errors. The bias between the theoretical position and the real position increases rapidly with the z-axis skew angle (α). The 2D and 3D correction methods are compared in Fig S 7. Fig S 7 Comparison of the 2D and 3D correction methods.
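Estimating T from 4 corner correspondences and projecting pre-defined well positions can be illustrated with the standard direct linear transform (DLT). This is a generic sketch of that textbook construction, not the authors' implementation; the corner coordinates below are made up for the example.

```python
import numpy as np

def homography(src_pts, dst_pts):
    """Projection transformation matrix T mapping chip-plane points to
    image points, estimated from 4 corner correspondences via the
    direct linear transform (null space of the stacked constraints)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    T = Vt[-1].reshape(3, 3)
    return T / T[2, 2]                 # fix the projective scale

def project(T, pts):
    """Map chip-plane well coordinates into the image (homogeneous
    coordinates followed by perspective division)."""
    pts = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    q = pts @ T.T
    return q[:, :2] / q[:, 2:3]

# Usage: chip corners as a unit square, observed at four image points
# of a skewed view; all pre-defined wells then follow from T.
chip_corners = [(0, 0), (1, 0), (1, 1), (0, 1)]
image_corners = [(10, 12), (110, 8), (118, 105), (6, 98)]
T = homography(chip_corners, image_corners)
back = project(T, chip_corners)        # should reproduce the image corners
center = project(T, [(0.5, 0.5)])      # predicted position of a centre well
```

Because the well layout on the chip is precisely manufactured, a single T computed from the corners suffices to place every well in the image, which is what makes the single-image workflow possible.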