DOI: 10.1039/C4RA08110B (Paper)
RSC Adv., 2014, 4, 52727-52733
A rapid and effective vignetting correction for quantitative microscopy†
Received 4th August 2014, Accepted 14th October 2014
First published on 14th October 2014
Abstract
Images acquired using optical microscopes are inherently subject to vignetting effects due to imperfect illumination and image acquisition. Such vignetting effects hamper the accurate extraction of quantitative information from biological images, leading to less effective image segmentation and increased noise in the measurements. Here, we describe a rapid and effective method for vignetting correction, which generates an estimate of a correction function from the background fluorescence without the need to acquire additional calibration images. We validate the usefulness of this algorithm using artificially distorted images as a gold standard for assessing the accuracy of the applied correction and then demonstrate that this correction method enables the reliable detection of biologically relevant variation in cell populations. A simple user interface called FlattifY was developed and integrated into the image analysis platform YeastQuant to facilitate the easy application of vignetting correction to a wide range of images.
Introduction
Biological research increasingly relies on imaging cellular processes to extract quantitative information using fluorescence microscopy. However, image acquisition by microscopy has inherent limitations due to imperfect illumination of the specimen and optical aberration in the objectives. These deviations result in reduced intensity at the periphery of the images, a phenomenon generally referred to as vignetting. As such, vignetting reduces the overall intensity of objects while increasing noise. Moreover, vignetting effects can significantly affect image segmentation and thus reduce the number of measured objects. In particular, when single cells need to be tracked over multiple frames of a time-lapse experiment, a high efficiency of segmentation is critical for reliable analysis. Strong vignetting effects might therefore hinder the reliable discrimination of subtle phenotypes when comparing quantitative readouts in different samples, or confound the detection of intrinsic cell-to-cell variability. Indeed, dissecting the contribution of the various sources of biological variation of different cellular readouts has recently attracted significant interest,1–4 but requires effective methods to minimize the technical noise in these measurements. Thus, when using quantitative microscopy, images have to be computationally corrected by applying a correction function to reverse vignetting effects before quantitative information can be reliably extracted.
Different approaches have been taken to derive correction functions that reverse the effects of vignetting. For example, images of a uniformly fluorescent sample acquired under the same conditions as the actual experiment can be used to experimentally determine the optical aberration. Although this technique allows very accurate correction of the images, the need to acquire calibration images for each illumination setting often adds undesired experimental complications, especially when using complex imaging devices, such as microfluidic chips. Therefore, correction of vignetting is mostly performed a posteriori by estimating the correction function from the information present in the acquired images, and a number of different approaches to correct for vignetting have been proposed. Often, these methods are based on a strong prior, for example on the shape of the correction function.5,6 Other approaches have aimed to correct vignetting by segmenting the images into background and objects, which can subsequently be used to derive a correction function.7–11 Similarly, these approaches typically rely on priors on the relative intensity of background and objects,11 and often fail for images in which a large fraction of the image is covered by the objects under study.8,9
Here, we describe a new implementation of a vignetting correction that is based on estimating the correction function from the intensity of the image background. In particular, we use a simple filtering approach based on assessing the variation of pixel intensities across multiple images to identify regions of the images that contain only background information, without assuming any a priori information about the nature of the imaged objects, before a correction function is calculated. We apply our vignetting correction to a set of in silico generated images to quantitatively assess the accuracy of the technique and compare the performance of this algorithm with an available open source solution. We also apply the image correction to typical biological images and characterize its effect on image segmentation and object quantification. We demonstrate that our vignetting correction reduces technical noise in the images and therefore allows additional biological information to be extracted more effectively from the corrected images.
Results and discussion
Implementation of the correction algorithm
Vignetting is inherent to the imaging process and thus equally affects pixel intensities within the background as well as within objects. However, image intensities within objects can fluctuate substantially due to biological variation, which may hinder the accurate estimation of a vignetting correction based on intensities derived from the imaged objects. In contrast, variations of background intensities across the entire image are exclusively caused by vignetting effects and technical (Gaussian) noise, and the background is thus often the preferred basis for vignetting correction.7,10
To identify regions of the images that only contain background information, we used several images taken at different positions of the specimen, assuming that objects are randomly distributed within the field of view. This assumption is best fulfilled when imaging many, relatively small objects, such as yeast cells, but may preclude the effective application of this method for large objects imaged at high magnification, such as mammalian cells.
Thus, for a set of images
$I_{(1\ldots i)} = [I_1(x, y), \ldots, I_i(x, y)]$ (1)
we calculated the coefficient of variation (CV) of the fluorescence intensities for each pixel across multiple images, $\mathrm{CV}_I(x, y)$. Following our assumption, pixels that contain only background information are characterized by a low $\mathrm{CV}_I(x, y)$, while a high $\mathrm{CV}_I(x, y)$ is likely caused by pixels (partially) containing information from objects. Visual inspection of histograms of the $\mathrm{CV}_I(x, y)$ confirmed a relatively broad distribution with a distinct population of low $\mathrm{CV}_I(x, y)$, and fitting a Gaussian distribution with mean $\mu_{\mathrm{fit}}^{\mathrm{CV}}$ and standard deviation $\sigma_{\mathrm{fit}}^{\mathrm{CV}}$ to this population efficiently separates pixels containing only background information from all others. Importantly, using the $\mathrm{CV}_I(x, y)$ yielded a much better separation of background and non-background pixels than using the mean intensity (ESI Fig. 1†). Thus, we chose all pixels with a $\mathrm{CV}_I(x, y)$ lower than a certain threshold (cutoff) for further analysis.
Thus, pixels with

$\mathrm{CV}_I(x, y) < \mathrm{cutoff} = \mu_{\mathrm{fit}}^{\mathrm{CV}} + n\,\sigma_{\mathrm{fit}}^{\mathrm{CV}}$ (2)

were assumed to contain only background information, and the image

$I_{\mathrm{cf}}(x, y) = \mathrm{mean}\left(I_{1\ldots i}(x, y) \,\middle|\, \mathrm{CV}_I(x, y) < \mathrm{cutoff}\right)$ (3)

contains an incomplete representation of the background intensity. To efficiently interpolate the undefined pixels, we randomly sampled 500 pixels fulfilling condition (2) and applied a lowess regression model to derive an estimate of the background intensity. Subsequent scaling yields the correction function, CorrFunct, with
$\mathrm{CorrFunct}(x, y) = I_{\mathrm{cf}}(x, y)/\max(I_{\mathrm{cf}})$ (4)
Finally, division of the original images by CorrFunct corrects for vignetting effects while preserving the overall intensity levels of the images.
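As a concrete illustration of eqn (1)–(4), the following MATLAB sketch walks through the complete correction pipeline. It is a minimal sketch under our own assumptions: the variable names are hypothetical, the Gaussian fit to the low-CV population is approximated by simple moments with an assumed multiplier n = 3, and the 2-D lowess surface uses the Curve Fitting Toolbox `fit` function as a stand-in for the published lowess regression.

```matlab
% Minimal sketch of the vignetting correction (eqn (1)-(4)); 'stack' is
% assumed to be a rows-by-cols-by-i double array of i images taken at
% different stage positions, and 'rawImage' a frame to be corrected.
cv = std(stack, 0, 3) ./ mean(stack, 3);      % CV_I(x, y) across the stack

% Approximate the Gaussian fit to the low-CV population by the moments of
% the lower half of the CV values (the published fit may differ).
lowCV  = cv(cv < median(cv(:)));
cutoff = mean(lowCV) + 3*std(lowCV);          % n = 3 is an assumed multiplier
bgMask = cv < cutoff;                         % background-only pixels, eqn (2)

Icf = mean(stack, 3);                         % eqn (3), defined only where
Icf(~bgMask) = NaN;                           % the background mask holds

% Interpolate the background from 500 random background pixels with a
% 2-D lowess surface (requires the Curve Fitting Toolbox).
[r, c] = find(bgMask);
k  = randperm(numel(r), min(500, numel(r)));
z  = Icf(sub2ind(size(Icf), r(k), c(k)));
sf = fit([c(k), r(k)], z, 'lowess');

[X, Y]    = meshgrid(1:size(Icf, 2), 1:size(Icf, 1));
bgEst     = sf(X, Y);                         % smooth background estimate
corrFunct = bgEst ./ max(bgEst(:));           % eqn (4): scale to a maximum of 1

corrected = double(rawImage) ./ corrFunct;    % division reverts the vignetting
```

Because the correction function is scaled to a maximum of one, the final division preserves the overall intensity levels, as noted above.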
An important advantage of this algorithm is the lack of assumptions on the relative intensities of objects and background. Typical images from fluorescence microscopy experiments are characterized by bright objects and rather low background intensity. However, for live cell microscopy, the use of fluorescent dyes in the medium can also help to simplify image segmentation, yielding images with dark objects and bright background intensity.12,13 In contrast, objects in bright field images display regions of both higher and lower intensity than the background. Thus, different algorithms have been proposed that allow vignetting correction for these different scenarios.10 Importantly, our algorithm can readily be applied to all these different images without adjustments and should thus correct vignetting effects equally well across a wide range of images (see ESI Fig. 2† for an example).
Testing of the vignetting correction on artificial images
To provide a quantitative assessment of the accuracy of our algorithm, we tested whether the method is capable of retrieving an appropriate correction function from in silico generated images that were artificially distorted using a user-defined, known aberration. We simulated a typical biological experiment, in which the absolute and relative fluorescence intensity of objects in two channels is determined. Such measurements are frequently used in ratiometric measurements using biosensors, for example, employing FRET approaches.14 We thus chose pairs of images with small circular objects of defined mean intensity and distribution in both images and applied two different distortions to the images (Fig. 1A). Various parameters were then analyzed on the original images, after distortion and on the corrected images (Fig. 1B and C, ESI Fig. S3†). As expected, distortion of the images leads to reduced average intensity of the objects (ESI Fig. S3†) and drastically increases the noise in the ratiometric measurements (Fig. 1B and C). However, application of our vignetting correction readily recovered the uniform appearance of the images (Fig. 1A) and effectively restored quantitative measures of object parameters. Importantly, the quality of correction was independent of the number of objects contained in the simulated images, and the correction performed comparably to the popular, publicly available software package CellProfiler,10 suggesting its usefulness for correcting vignetting effects (Fig. 1B and C, ESI Fig. S3A and C†).
Fig. 1 Application of vignetting correction restores object intensity and variability on artificially distorted images. (A) Examples of in silico generated paired images representing two illuminations (Y and C) of the same objects before and after artificial distortion and following correction using our vignetting correction. (B and C) Quantitative analysis of the effect of vignetting and its correction. Images as in (A) were analysed following vignetting correction using our algorithm and CellProfiler software for comparison. (B) The ratio of object intensities derived from the two illuminations was determined on the original images, after distortion and after subsequent correction, and is displayed as mean ± SD for images containing 70 or 700 objects. (C) The coefficient of variation of the data from (B) is shown to better illustrate the effect of vignetting on noise. See ESI Fig. S3† for more details.
However, applying this quantitative assessment of our correction algorithm is only suitable when object intensities are known, which is hardly the case for real biological experiments. We therefore sought a measure that is independent of object segmentation, and thus of prior knowledge of object intensities, and simulated the appearance of partially overlapping images. In the absence of vignetting, the overlapping region of two such images should yield an identical representation of the field of view, with the exception of inevitable random Gaussian noise (Fig. 2A). However, vignetting effects are strongly position dependent, causing strong differences in the appearance of the overlapping region of the two images (Fig. 2B), which were effectively removed by vignetting correction (Fig. 2C). We used the mean squared difference of pixel intensities to quantitatively assess the difference in pixel intensities due to vignetting under these conditions. Thus, for two images I1 and I2 of size sx × sy that overlap over a region of m × n pixels, this measure, hereafter referred to as DiffScore, is given by
$\mathrm{DiffScore} = \frac{1}{mn}\sum_{x=1}^{m}\sum_{y=1}^{n}\left[I_1(s_x - m + x,\, s_y - n + y) - I_2(x, y)\right]^2$ (5)
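A minimal MATLAB sketch of this computation, assuming the overlap geometry of Fig. 2A (the bottom-right corner of I1 coinciding with the top-left corner of I2):

```matlab
% Sketch of eqn (5): mean squared difference over the m-by-n pixel overlap
% of two images I1 and I2; indexing assumes the Fig. 2A layout.
ov1 = I1(end-n+1:end, end-m+1:end);   % overlap region within image 1
ov2 = I2(1:n, 1:m);                   % overlap region within image 2
diffScore = mean((ov1(:) - ov2(:)).^2);
```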
Fig. 2 Application of vignetting correction to partially overlapping images allows for a quantitative assessment of image correction. (A–C) Examples of partially overlapping images before (A) and after (B) artificial distortion and following vignetting correction (C). Relevant parameters for calculation of the DiffScore are shown in (A). (D) The DiffScore was calculated for three independent sets of images as in (A–C) containing different numbers of objects. The obtained DiffScore was normalized relative to the distorted images and displayed as mean ± SEM. Vignetting correction was also applied using CellProfiler for comparison.
Indeed, calculation of the relative DiffScore on the original images and after distortion revealed a strong increase of the relative DiffScore upon artificial distortion of the images, which was almost completely reversed upon application of our vignetting correction method. Similar to our previous analysis, our method performed comparably to the CellProfiler software (Fig. 2D).
Effect of vignetting correction on real biological data
Having established the usefulness of our technique on in silico generated data, we aimed to further test the effect of vignetting correction on a large number of parameters of real images. Therefore, cells expressing a GFP-fusion of the highly abundant cytoplasmic protein Cdc19 were loaded into a microfluidic chip and sets of partially overlapping images were recorded. In addition, a fluorescent dye (with fluorescence emission in the Cy5 channel) was added to the growth medium to facilitate segmentation of the cells without using information from the GFP intensity. Visual inspection confirmed the uniform appearance of dye and GFP intensities across the entire images upon application of our correction method (Fig. 3A), which also reduced the relative DiffScore in both the Cy5 (Fig. 3B) and the GFP images (Fig. 3C). Similarly, application of the correction algorithm significantly increased the number of segmented cells (Fig. 3D). Although the effect on the improvement of object detection may seem small for these images, small improvements in image segmentation are sufficient to significantly enhance the faithful tracking of individual cells through multiple frames in a time-lapse analysis. Importantly, the method also significantly increased the average GFP intensity (Fig. 3E and ESI Fig. S4A†) and reduced the coefficient of variation of the cellular GFP intensity (ESI Fig. S4B†), confirming that the application of our correction method effectively reduces one source of noise in biological images and might thus help to reveal weak, but biologically significant, phenotypes that are otherwise confounded by vignetting effects.
Fig. 3 Vignetting correction significantly affects critical parameters of image analysis in a typical quantitative microscopy experiment. (A) Partially overlapping images of cells expressing GFP loaded in a microfluidic chip and incubated in media containing a fluorescent dye (imaged in the Cy5 channel) before and after vignetting correction. (B and C) The DiffScore was calculated for three independently analyzed sets of partially overlapping images before and after vignetting correction and displayed as the mean ± SEM following normalization relative to the uncorrected images for the Cy5 (B) and GFP (C) channels. (D) Three independent sets of images were segmented using the Cy5 image and the effect of vignetting correction on the percentage of correctly segmented cells is shown as mean ± SEM. (E) The distribution of GFP intensities of cells before and after vignetting correction is shown as a box-plot. Note that segmentation was performed using the corrected Cy5 image for better comparison.
Dissection of sources of noise in biological images
Therefore, we chose to use our method to dissect different sources of gene expression noise in cells exposed to varying levels of salt stress. When exposed to high osmolarity by addition of NaCl to the growth medium, yeast cells elicit a complex cellular response that is orchestrated by a classical MAP kinase pathway leading to the transient activation of the MAP kinase Hog1.15 This response involves activation of a transcriptional program, which can be readily monitored by using transcriptional reporters driving the expression of fluorescent proteins under the control of the STL1 promoter.3,4,16 Interestingly, analysis of cells expressing the STL1-qV reporter (a translational fusion of four copies of the fluorescent protein Venus under the control of the STL1 promoter) revealed a high cell-to-cell variation. Specifically, only a fraction of cells efficiently express qV at low salt concentrations. The fraction of expressing cells increases in a concentration-dependent manner and is caused by the transient nature of Hog1 activation, which precludes overcoming a repressive chromatin state and thus prevents efficient induction of transcription in all cells.13 Bimodality in gene expression using fluorescent reporters is best observed by flow cytometry, which allows a large number of cells to be measured after full maturation of the fluorophore. However, this method does not allow following the evolution of fluorescence intensity in single cells throughout the time course of the entire experiment, which can only be provided by live cell microscopy at the expense of largely reduced numbers of analyzed cells.
Therefore, we set out to analyze the expression of qV emanating from an STL1 promoter by live cell microscopy. In such an experiment, fluorescence intensity of the reporter construct is not only determined by concentration-dependent, intrinsic cell-to-cell variation in the kinetics of gene expression, but also influenced by the dynamics of maturation of the fluorophore and technical noise in the detection, including vignetting effects.
We followed cells expressing the STL1-qV reporter over time upon addition of different concentrations of salt and analyzed the data before and after vignetting correction. As expected, this analysis revealed rapid and strong salt-dependent induction of the fluorescent reporter. Yet, the mean intensity was largely unchanged upon application of vignetting correction (Fig. 4A). To assess cell-to-cell variation of reporter gene expression, we calculated histograms of fluorescence intensities for all conditions (Fig. 4B). At higher salt concentrations, all cells displayed uniform expression of the reporter (0.2 M and 0.4 M salt, Fig. 4C and data not shown), with populations being best described by a single Gaussian distribution. At lower salt concentrations, we detected several populations of cells expressing the reporter with different efficiencies. As expected, a fraction of cells failed to induce the transcriptional reporter at very low salt concentrations.3,4,16 Surprisingly, however, the cells expressing the reporter could be further discriminated into low and high expressing cells. Similarly, at intermediate salt concentrations (0.15 M), most cells significantly induced the expression of the reporter construct, but analysis of the histograms again suggested two distinct populations of cells, which induce the marker with higher or lower efficiency.
Fig. 4 Effect of vignetting correction on the analysis of gene expression noise using a fluorescent reporter construct. (A) Cells expressing the pSTL1-qV reporter were grown in well slides, treated with the indicated salt concentration and expression of the reporter construct was followed over time. The mean fluorescence intensity obtained before (solid lines) and after vignetting correction (dashed lines) is shown as a function of time. (B) Example of data obtained from analysing recordings before and after vignetting correction. The histograms of reporter gene expression are shown together with the optimal fit using a sum of three Gaussian curves. (C) Bubble plot of reporter gene expression indicating optimal fits of the obtained histograms. See Materials and methods for details.
Importantly, correcting for vignetting effects before image analysis revealed even better discrimination of the different subpopulations. While all other parameters were kept constant, vignetting correction allowed a more reliable dissection of the subpopulations that contribute to the measured histograms. Moreover, vignetting correction led to a better separation of the subpopulations. Thus, we conclude that our vignetting correction helps to reduce noise in quantitative imaging experiments and can therefore be used to more efficiently extract new biologically relevant information from such experiments. Interestingly, previous experiments using flow cytometry have only identified two distinct populations of expressing and non-expressing cells at low salt concentrations. However, these measurements were performed upon treatment with the translation inhibitor cycloheximide (CHX) to allow complete maturation of the fluorophores. It therefore seems possible that the different outcome of experiments using these different techniques is caused by an increased auto-fluorescence of cells treated with CHX, which may mask small differences in reporter gene expression.
While the underlying mechanism leading to this unexpected cell-to-cell variation in reporter gene expression remains to be identified, it seems likely that several stable chromatin states may exist at the promoter of the reporter gene that can differentially affect gene expression in response to salt stress. Indeed, the activation of transcription is a tightly regulated, multi-step process that is controlled by salt-dependent activation of Hog1 signaling,15 but is likewise subject to other environmental inputs, such as glucose repression.17
Implementation of the algorithm and integration into YeastQuant
The described algorithm was encoded in the widely distributed MATLAB computing software. To facilitate easy application of vignetting correction to a wide range of images, we have developed a simple user interface, called FlattifY, which guides the user through the necessary steps for vignetting correction (Fig. 5A). After selecting the folder containing the raw images and entering a search string common to all files that need to be corrected, a small number of files can be selected to define the training set, which is used to calculate the correction function. Similarly, all files that need to be corrected can be selected and are displayed in the corresponding file list. If the files for correction are contained in multiple folders, this process is simply repeated until all files have been selected and included in the file lists. Then, the correction function is calculated and displayed. Selecting specific files in the current folder allows previewing the image before and after correction. Finally, applying the correction function will correct all files from the respective list and save them under the same name. In addition, a copy of the files before correction is kept.
Fig. 5 Graphical user interface to apply vignetting correction to biological images and integration of the algorithm into YeastQuant. (A) Screen shot of a typical analysis of images using the FlattifY program to correct images. (B) Screen shot of the FileMaker database for YeastQuant. Only a part of the analysis section of the database is shown to illustrate the integration of the vignetting correction algorithm into this software platform. See text for details.
To integrate vignetting correction with image segmentation and analysis, we have incorporated the algorithm into the recently developed YeastQuant software package.13 This software package combines a FileMaker database with MATLAB-based image analysis routines. For image analysis, all necessary data are entered into the FileMaker database, which connects to MATLAB to start image analysis. For illuminations that should be subjected to vignetting correction using the FlattifY algorithm, vignetting correction can be activated in the analysis tab (Fig. 5B) and image correction is automatically executed before image segmentation.
Conclusions
The algorithm described here provides a novel implementation of a vignetting correction for quantitative microscopy. The proposed non-parametric algorithm is based on estimating a correction function solely from the background intensity. We suggest a simple filtering method that allows identification of regions of the images that contain only background information by comparing the variation of pixel intensities across images taken from a small number (typically 4–6) of different positions of the samples, without any other prior knowledge about the objects, such as shape or intensity. Therefore, this algorithm is readily applicable to images from both fluorescence and bright field microscopy, irrespective of the relative intensity of objects and background. The only prerequisite for applying our correction algorithm, obtaining images from different positions of the sample, is almost always fulfilled in quantitative microscopy experiments, as multiple positions are typically used to maximize the number of cells followed in a given experiment, or to simultaneously compare different genotypes. In fact, when applying the algorithm to our in silico generated test sets, a minimum of three images in the training set was sufficient to obtain a satisfactory correction (data not shown). Together with the intuitive user interface and integration into the YeastQuant software, the algorithm therefore provides a straightforward and simple method applicable to a wide variety of microscopy experiments. Using a framework to quantitatively assess the performance of the correction algorithm, we provide evidence that the algorithm successfully reduces technical noise in the measurements and allows novel biological information to be extracted efficiently. Importantly, these data underline the power of quantitative microscopy to study the dynamics and cell-to-cell variation in biological systems and highlight the benefits of the often overlooked vignetting correction for the reliable extraction of quantitative information from microscopy images.
Materials and methods
Yeast culture
Yeast strains are listed in ESI Table 1.† Cells were grown in synthetic medium (SD) as described.4 Saturated overnight cultures were diluted to OD600 0.05 and grown for at least four hours before the start of the experiment. For the experiment presented in Fig. 3, cells were loaded into microfluidic chips (Cellasics Y04C, Millipore Corp.) following the manufacturer's recommendations and imaged while continuously providing fresh SD medium containing fluorescently labelled dextran (dextran-conjugated Alexa 680, MW = 3000, Invitrogen, 1 μg ml−1). For the experiment presented in Fig. 4, well slides (MGB096-1-2LG, Matrical Bioscience) were coated by incubation with Concanavalin A (Sigma, 1 mg ml−1 in PBS) and rinsed with SD. Following mild sonication (1 min), 200 μl of cells were immobilized in the well slides and imaged following stimulation by adding 100 μl of NaCl adjusted to the appropriate concentration in SD.
Microscopy setup
Images were acquired on automated inverted fluorescence microscopes (Ti-Eclipse, Nikon) in an incubation chamber set at 30 °C using a 60× objective lens and a CCD camera (Orca Flash 4.0, Hamamatsu Photonics). Microscopes were controlled using Micromanager software.18 For the experiment presented in Fig. 3, imaging was performed using a pE2 LED light source (CoolLED) and appropriate filter sets (GFP: F49-470 and F47-525; Cy5: F39-651, F37-684, AHF Analysentechnik AG). For the experiment presented in Fig. 4, a SpectraX LED light source (Lumencor) was used and images were taken using relevant excitation and emission filters (F49-500 and F47-535, respectively, AHF Analysentechnik AG) to visualize YFP.
In silico generation of test images
Test images for the analysis shown in Fig. 1 were simulated with a size of 2048 × 2048 pixels by randomly placing non-overlapping circular objects with a diameter of 50 pixels and object intensity following a normal distribution with mean 1600 and standard deviation of 400. Object positions were saved and used for subsequent analysis to avoid the need for image segmentation prior to object quantification. Background intensity, represented by a normal distribution with a mean of 200 and a standard deviation of 50, was added to each pixel. To artificially distort images to resemble vignetting effects, images were multiplied with a distortion function following a two-dimensional Gaussian distribution,

$D(x, y) = \exp\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}\right)$

with x0 = 800 and y0 = 800 for illumination “C”, and x0 = 1200 and y0 = 500 for illumination “Y”; $\sigma^2 = 2 \times 10^6$ for both illuminations.
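For illustration, the distortion for illumination “C” could be generated as follows in MATLAB; 'testImage' is a hypothetical simulated frame, and the unit-amplitude form of the Gaussian is our assumption:

```matlab
% Sketch of the artificial distortion: a two-dimensional Gaussian centred
% at (x0, y0), multiplied onto the simulated image.
sz = 2048; x0 = 800; y0 = 800; sigma2 = 2e6;  % parameters for illumination "C"
[X, Y] = meshgrid(1:sz, 1:sz);
D = exp(-((X - x0).^2 + (Y - y0).^2) / (2*sigma2));
distorted = testImage .* D;                   % apply the vignetting-like falloff
```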
Test images for the analysis shown in Fig. 2 were simulated with a size of 3072 × 3072 pixels, 300 objects were randomly placed on the image with an intensity of 300, and noise was added as described for Fig. 1. The resulting image was divided into two images of size 2048 × 2048 pixels, with an overlap of 1024 × 1024 pixels, which were separately distorted as described for Fig. 1, illumination “C”.
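The overlapping pair can then be obtained by tiling, as in this sketch ('master' stands for the hypothetical 3072 × 3072 simulated image):

```matlab
% Split one master image into two 2048-by-2048 tiles sharing a
% 1024-by-1024 corner overlap.
I1 = master(1:2048, 1:2048);          % top-left tile
I2 = master(1025:3072, 1025:3072);    % bottom-right tile
% Rows/columns 1025:2048 of I1 and 1:1024 of I2 cover the same region.
```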
Image and data analysis
Image analysis for the experiments shown in Fig. 3 and 4 was performed as described using YeastQuant.13 The efficiency of image segmentation was calculated by comparing the number of objects obtained by automated segmentation to manual counting. Data are shown as the mean ± SEM for three independently analyzed data sets. For the analysis of cell populations in Fig. 4, histograms were calculated from the average intensity of the brightest 500 pixels of each cell that was detected in each condition and time point. Histograms were calculated for log-transformed intensities and fitted to a sum of three Gaussian distributions. In Fig. 4C, circle position corresponds to the mean value of each single Gaussian, and circle size corresponds to the relative area that each single Gaussian distribution contributes to the fitted curve.
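A minimal MATLAB sketch of this fitting step, assuming 'cellIntensity' holds the per-cell values; the built-in 'gauss3' library model of the Curve Fitting Toolbox is used as a stand-in for the published fitting routine, and the bin count is our own choice:

```matlab
% Histogram of log-transformed intensities fitted with a sum of three
% Gaussians, as used for the population analysis in Fig. 4.
logI = log10(cellIntensity(:));
[counts, centers] = hist(logI, 50);           % 50 bins is an assumed choice
f = fit(centers(:), counts(:), 'gauss3');     % a1*exp(-((x-b1)/c1)^2) + two more terms
plot(f, centers(:), counts(:));               % visual check of the mixture fit
```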
System requirements and code availability
FlattifY requires MATLAB 2012 or higher on Mac, PC or Linux. YeastQuant V8 requires FileMaker 13 and MATLAB 2012 or higher. Code can be downloaded at http://www.bc.biol.ethz.ch/research/peter/ResearchTools and http://www.unil.ch/quantitativesignaling/software.
Acknowledgements
The authors would like to thank members of the Pelet and Peter labs, Kevin Smith and Peter Horvath for helpful discussions and comments on the manuscript and Thibault Courtheoux for bright field images. Work in the Pelet lab is supported by the Swiss National Science Foundation (SNF), and the Peter lab is funded by the European Research Council (ERC), the Swiss National Science Foundation (SNF), the Swiss Initiative in Systems Biology SystemsX (RTD projects YeastX and LiverX) and the ETH Zürich.
References
- A. Sanchez, S. Choubey and J. Kondev, Annu. Rev. Biophys., 2013, 42, 469–491.
- A. Sanchez and I. Golding, Science, 2013, 342, 1188–1193.
- G. Neuert, B. Munsky, R. Z. Tan, L. Teytelman, M. Khammash and A. van Oudenaarden, Science, 2013, 339, 584–587.
- S. Pelet, F. Rudolf, M. Nadal-Ribelles, E. de Nadal, F. Posas and M. Peter, Science, 2011, 332, 732–735.
- W. P. Yu, Y. K. Chung and J. Soh, Int. Conf. Pattern Recogn., 2004, 666–669.
- K. He, P.-F. Tang and R. Liang, in Fifth International Conference on Natural Computation (ICNC '09), IEEE, Tianjin, 2009, vol. 5, pp. 158–161.
- F. Piccinini, E. Lucarelli, A. Gherardi and A. Bevilacqua, J. Microsc., 2012, 248, 6–22.
- A. Shariff, J. Kangas, L. P. Coelho, S. Quinn and R. F. Murphy, J. Biomol. Screening, 2010, 15, 726–734.
- T. R. Jones, A. E. Carpenter, D. M. Sabatini and P. Golland, Proceedings of the Workshop on Microscopic Image Analysis with Applications in Biology, 2006.
- A. E. Carpenter, T. R. Jones, M. R. Lamprecht, C. Clarke, I. H. Kang, O. Friman, D. A. Guertin, J. H. Chang, R. A. Lindquist, J. Moffat, P. Golland and D. M. Sabatini, Genome Biol., 2006, 7, R100.
- M. S. Vokes and A. E. Carpenter, in Current Protocols in Molecular Biology, ed. F. M. Ausubel, et al., 2008, ch. 14, unit 14.17.
- R. Dechant, M. Binda, S. S. Lee, S. Pelet, J. Winderickx and M. Peter, EMBO J., 2010, 29, 2515–2526.
- S. Pelet, R. Dechant, S. S. Lee, F. van Drogen and M. Peter, Integr. Biol., 2012, 4, 1274–1282.
- E. A. Jares-Erijman and T. M. Jovin, Nat. Biotechnol., 2003, 21, 1387–1395.
- H. Saito and F. Posas, Genetics, 2012, 192, 289–318.
- C. Zechner, J. Ruess, P. Krenn, S. Pelet, M. Peter, J. Lygeros and H. Koeppl, Proc. Natl. Acad. Sci. U. S. A., 2012, 109, 8340–8345.
- S. Pelet and M. Peter, Commun. Integr. Biol., 2011, 4, 699–702.
- A. Edelstein, N. Amodaj, K. Hoover, R. Vale and N. Stuurman, in Current Protocols in Molecular Biology, ed. F. M. Ausubel, et al., 2010, ch. 14, unit 14.20.
Footnote
† Electronic supplementary information (ESI) available. See DOI: 10.1039/c4ra08110b
This journal is © The Royal Society of Chemistry 2014 |