Open Access Article
This Open Access Article is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported Licence

DetectNano: deep learning detection in TEM images for high-throughput nanostructure characterization

Khalid Ferji
Université de Lorraine, CNRS, LCPM, F-54000 Nancy, France. E-mail: khalid.ferji@univ-lorraine.fr

Received 8th June 2025, Accepted 13th July 2025

First published on 16th July 2025


Abstract

The rapid and unbiased characterization of self-assembled polymeric vesicles in transmission electron microscopy (TEM) images remains a challenge in polymer science. Here, we present a deep learning-powered detection framework based on YOLOv8, enhanced with Weighted Box Fusion, to automate the identification and size estimation of polymer nanostructures. By incorporating multiple morphologies in the training dataset, we achieve robust detection across unseen TEM images. Our results demonstrate that the model provides accurate vesicle detection within 2 seconds—an efficiency unattainable with traditional image analysis software. The proposed framework enables reproducible and scalable nano-object characterization, paving the way for general AI-driven automation in polymer self-assembly research.


1. Introduction

Self-assembled nanostructures derived from amphiphilic block copolymers are fundamental in polymer science, offering precise control over morphology, size, and functionality.1,2 Among these, polymersomes—hollow vesicular nanostructures—are widely studied for their potential in drug delivery, encapsulation, and synthetic biology.3–6 Their stability, permeability, and tunability make them attractive candidates for biomedical and nanotechnological applications.7–11 Traditionally, dynamic light scattering (DLS), small-angle X-ray scattering (SAXS), and transmission electron microscopy (TEM) are used for vesicle characterization, but these techniques have their limitations.12,13 DLS provides only an ensemble-averaged size distribution and lacks morphological resolution, while SAXS requires complex data fitting.14–16 TEM remains the gold standard for direct visualization, but its conventional analysis relies heavily on manual segmentation, which is time-consuming, operator-dependent, and prone to bias. To address these challenges, we propose a deep-learning-based detection method that automates the characterization of self-assembled nanostructures in TEM images, ensuring high reproducibility and efficiency. This enables faster innovation, optimizes the analytical workflow, and frees researchers to focus on more critical scientific tasks.

In recent years, artificial intelligence (AI) has emerged as a powerful tool for accelerating discovery in polymer science.17–21 Convolutional neural networks (CNNs),22 particularly object detection models, have demonstrated remarkable success in recognizing and classifying nanoparticles in TEM images.23,24 Several research efforts have already explored the use of deep learning for the detection and characterization of nanoparticles and nanostructures, highlighting the potential of AI in improving analysis speed and precision.25–27 For example, Kamble et al.28 developed a deep learning model for microstructure recognition in polymer nanocomposites, achieving high accuracy. Similarly, Saaim et al.29 utilized machine learning algorithms to automatically segment and classify nanoparticles in high-resolution TEM images, significantly reducing the workload associated with manual annotation. Another relevant study by Lu et al.30 demonstrated the feasibility of using semi-supervised learning approaches for identifying and differentiating the morphologies of nanostructures, enabling automated classification without extensive manual labelling.

Recent efforts in the field of bioimage analysis have demonstrated the power of open-source tools in democratizing the use of deep learning for microscopy applications. For instance, ilastik—developed by Kreshuk and collaborators—has enabled non-expert users to perform machine learning-based segmentation and classification tasks in a highly interactive environment, significantly reducing the technical barrier for researchers handling complex microscopy data.31 Similarly, Henriques and co-workers contributed to the development of ZeroCostDL4Mic, a platform that simplifies the use of deep learning models in microscopy by leveraging free cloud resources and user-friendly interfaces, thus accelerating the adoption of AI in image-based research workflows.32 These initiatives illustrate how thoughtfully designed tools can facilitate the integration of AI into everyday scientific practice—especially for researchers outside the machine learning community.

While AI-based approaches have been successfully implemented in material characterization,33–35 their application to self-assembled polymers remains rare.36,37 The development of a deep-learning-based approach tailored specifically for polymersome detection could offer a significant breakthrough in polymer and materials sciences. This work aims to provide a proof of concept demonstrating that AI can successfully detect vesicles across diverse TEM datasets and to offer users an open-source tool38—DetectNano—to assist them in detecting nanostructures and evaluating size distributions in a straightforward manner (Fig. 1). Importantly, our goal is also to deliver a concrete training example that can serve as a starting point for polymer scientists—particularly non-specialists in AI—seeking to integrate machine learning into their daily research workflows.


Fig. 1 (A) Class distribution and schematic 3D representations of polymer nanostructures in the dataset, accompanied by representative transmission electron microscopy (TEM) images illustrating each morphological class: large compound nano-objects (LCN), multicompartment vesicles (MCV), thick membrane multicompartment vesicles (TMCV), and unilamellar vesicles (V), together with the annotated scale bar class. (B) Workflow for automated detection, dataset annotation, and morphological analysis of nanostructures from TEM images using multi-model fusion via Weighted Box Fusion (WBF) of three YOLOv8 models (YOLOv8n, YOLOv8s, and YOLOv8m). TEM images were adapted with permission from ref. 3. Copyright 2022, American Chemical Society.

2. Building a deep learning model for polymer nanostructure detection

Developing an AI model capable of detecting and classifying nanoscale objects in TEM images requires several key components. These include: (i) a well-annotated dataset – essential for teaching the model what to recognize, (ii) a deep learning model – the core algorithm that learns to detect and classify nanostructures, (iii) computational infrastructure – the hardware and software environments used to train the model, and (iv) evaluation metrics – the criteria used to measure the model's accuracy and reliability. By carefully assembling and optimizing these components, we have developed an automated method to detect vesicles and related self-assembled nanostructures in TEM images (Fig. 1). Further details on dataset composition, training configurations, and implementation can be accessed in our public repository on Zenodo.39

Dataset construction

The first step in training a deep learning model is constructing a high-quality dataset. Since our primary objective is to detect polymersomes, we needed a dataset containing clear examples of these nanostructures. Polymersomes are vesicular nanostructures (V) formed by the self-assembly of amphiphilic block copolymers.9,40 They resemble spherical shells composed of a polymeric membrane enclosing an aqueous cavity.

However, to improve the generalization ability of the AI model and enhance its detection accuracy, we also included additional nanostructures commonly observed in polymer self-assembly. These different morphologies help the model learn to distinguish between various forms and prevent it from overfitting to a single vesicle shape. The three selected additional nanostructures are summarized in Fig. 1A: (i) multicompartment vesicles (MCV): unlike simple vesicles, these structures contain multiple hydrophilic cores within a single polymer membrane. (ii) Thick membrane multicompartment vesicles (TMCV): these vesicles represent an intermediate state between MCV and larger structures. They have a thicker polymer membrane, which makes them more stable before merging into larger aggregates. (iii) Large compound nano-objects (LCN): these structures are formed when multiple vesicles fuse together, leading to non-spherical morphologies. Their irregular shape differentiates them from traditional vesicles and provides additional complexity for the AI model to learn. These four nanostructures (V, MCV, TMCV and LCN), along with annotated scale bars, constitute the five object classes used to train the model.

Including these different morphologies improves the model's ability to distinguish between subtle structural variations and ensures better performance in real-world datasets. The dataset was built using 65 high-resolution TEM images.3,7,41–44

Data annotation and preparation

Once the dataset was assembled, it was essential to provide clear annotations so the AI model could learn from labelled examples. Annotations were performed manually using the Roboflow platform,45 where bounding boxes were drawn around each nanostructure to classify them into five predefined classes: V, MCV, TMCV, LCN and scale bar. Annotated scale bars within TEM images served as internal references to convert pixel-based measurements into nanometers, enabling accurate size estimation. A summary of the dataset composition is illustrated in Fig. 1A, showing the frequency of each morphology. The dataset maintains a balanced representation of different nanostructures, ensuring that the model is exposed to a variety of shapes and improving its robustness.
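
To make this calibration step concrete, the snippet below sketches how a detected scale bar can be turned into a pixel-to-nanometer conversion factor. It is a minimal illustration rather than the repository code: the helper name, the example coordinates, and the 200 nm bar length are assumptions, and the bar's physical length must still be supplied by the user (e.g., read from the image legend).

```python
# Minimal sketch: calibrate sizes from a detected scale-bar bounding box.
# The function name, coordinates, and 200 nm bar length are illustrative.

def nm_per_pixel(scale_bar_box, scale_bar_length_nm):
    """The width of the detected scale-bar box (in pixels) corresponds to
    the known physical length printed on the TEM image."""
    x1, y1, x2, y2 = scale_bar_box
    return scale_bar_length_nm / (x2 - x1)

# Example: a 200 nm scale bar detected as a 125-pixel-wide box -> 1.6 nm per pixel
factor = nm_per_pixel((40, 900, 165, 910), scale_bar_length_nm=200)
```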

Deep learning model

To detect vesicles and related nanostructures, we implemented a YOLOv8-based object detection framework.46 YOLO (You Only Look Once) is a state-of-the-art deep learning architecture designed for fast and accurate object detection. However, given the variability of TEM images in contrast, resolution, and noise levels, a single model configuration was insufficient to guarantee reliable detection. To overcome this, we employed three versions of YOLOv8, each offering a different trade-off between computational efficiency and detection precision: YOLOv8n (nano), YOLOv8s (small), and YOLOv8m (medium). Each model was trained independently and evaluated separately using the same predefined dataset split, ensuring consistency and comparability across results. To further improve detection accuracy and reduce false positives, we implemented a Weighted Box Fusion (WBF) technique, which merges the predictions of all three models into a single refined output.

Training was conducted on PyTorch 2.0 with Ultralytics YOLOv8 using an Intel Core i7-1068NG7 processor under Ubuntu 20.04. The dataset was randomly split into training (72%), validation (17%), and test (11%) sets, ensuring a representative sample for model generalization. Class distributions were preserved across subsets using a fixed random seed to maintain consistency and reproducibility. The training set was used to optimize model parameters, the validation set helped fine-tune hyperparameters and prevent overfitting, while the test set provided an independent evaluation of model performance on unseen data. Key training settings included the use of the Adam optimizer with an initial learning rate of 0.001, an image size of 640 pixels, and a batch size of 1 due to CPU constraints. A total of 85 epochs were used, with data augmentation and image caching enabled to improve convergence.
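
For readers wishing to reproduce this setup, the following sketch shows how such a run can be launched with the Ultralytics Python API using the hyperparameters reported above; the dataset configuration filename (detectnano.yaml) is a placeholder for the user's own annotation export, not a file shipped with the library.

```python
# Minimal training sketch with the hyperparameters reported above; the
# dataset YAML path is a placeholder, and the same call is repeated for
# the "s" and "m" variants by swapping the starting weights.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # pretrained nano weights; also yolov8s.pt, yolov8m.pt
model.train(
    data="detectnano.yaml",  # placeholder: paths and class names of the dataset
    epochs=85,
    imgsz=640,
    batch=1,                 # CPU constraint reported above
    optimizer="Adam",
    lr0=0.001,
    cache=True,              # cache images to speed up epochs
    seed=0,                  # fixed seed for a reproducible run
)
```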

In object detection tasks, models must learn to simultaneously identify the correct class of each object and accurately localize it within the image using bounding boxes. The performance of such models is therefore assessed using a combination of classification and localization metrics.47 The following evaluation metrics were used to monitor and compare model performance throughout training and validation. A brief description of each metric is provided in Table 1 to guide the interpretation of the results presented in Fig. 2 and 3. The detailed computation of these metrics is handled automatically by the Ultralytics YOLOv8 framework.46


Fig. 2 Training (Train) and validation (Val) performance of YOLOv8n, YOLOv8s, and YOLOv8m models. The graphs show how each model learns to localize and classify nanostructures using loss functions (box, class, and DFL: distribution focal loss) during training, and how performance is evaluated on the validation set using standard detection metrics: precision, recall, mAP50, and mAP50-95.

Fig. 3 Class-wise evaluation of YOLOv8n, YOLOv8s, and YOLOv8m models based on precision, recall, mAP50, and mAP50-95. Each panel shows the performance metrics for a specific nanostructure class in the validation dataset: all classes combined, large compound nano-objects (LCN), multicompartment vesicles (MCV), thick-membrane multicompartment vesicles (TMCV), vesicles (V), and annotated scale bars. These metrics reflect how accurately each model identifies and localizes each class, highlighting differences in detection sensitivity and reliability depending on structural complexity.
Table 1 Description of evaluation metrics used to assess object detection performance in this study

| Metric | Definition | Purpose and expected trend |
| --- | --- | --- |
| Box loss | Error in predicting object location (bounding box) | Evaluates localization accuracy; should decrease toward 0 |
| Class loss | Error in classifying detected objects | Assesses how well the model assigns labels; should decrease toward 0 |
| Distribution focal loss (DFL) | Refines bounding-box regression by focusing on high-confidence areas | Improves regression accuracy; should decrease toward 0 |
| Precision | Ratio of correct detections to all detections made by the model | Indicates reliability of predictions; should increase toward 1 |
| Recall | Ratio of correctly detected objects to all ground-truth objects | Measures the model's ability to find every object; should increase toward 1 |
| mAP50 | Mean average precision at an intersection-over-union (IoU) threshold of 0.5 | Measures detection accuracy under a moderate localization requirement; should increase toward 1 |
| mAP50-95 | Mean average precision averaged over IoU thresholds from 0.5 to 0.95 | Assesses performance across increasingly strict localization requirements; should increase toward 1 |


3. Training and performance evaluation of YOLOv8 models

To evaluate the performance of our model, we monitor several standard metrics throughout both training and validation, as summarized in Table 1. During training, these metrics are computed after each epoch to assess how well the model is learning from the labeled data. In parallel, validation metrics are calculated on a separate subset of data not seen during training, providing an estimate of the model's ability to generalize to new, unseen images.

The training process aims to minimize the loss functions (e.g., box loss, class loss, and DFL), which reflect errors in object localization and classification. Simultaneously, the objective is to maximize evaluation metrics such as precision, recall, and mean average precision (mAP), which indicate how accurately and comprehensively the model detects nanostructures. These indicators also help highlight trade-offs, such as under-detection (low recall) versus over-detection (low precision), and can signal overfitting if validation performance deteriorates while training accuracy continues to improve.
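
For reference, the classification metrics follow the standard definitions, where TP, FP, and FN denote true positives, false positives, and false negatives, and AP_c is the average precision for class c among the N classes (evaluated at an IoU threshold of 0.5 for mAP50, and averaged over thresholds from 0.5 to 0.95 for mAP50-95):

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
\mathrm{mAP} = \frac{1}{N} \sum_{c=1}^{N} \mathrm{AP}_c
```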

The training and evaluation of the YOLOv8 models for polymer nanostructure detection reveal distinct strengths and trade-offs between speed, accuracy, and generalization. Each model exhibits unique characteristics that make it more suitable for specific tasks, yet they remain complementary in their contributions to robust detection performance.

During training, all three models demonstrated a gradual reduction in loss values, with YOLOv8n stabilizing the fastest (Fig. 2). This model, being the smallest in terms of parameters, converged more quickly and maintained a relatively low training loss, indicating efficient learning with minimal overfitting. In contrast, YOLOv8m, with its significantly larger number of parameters, exhibited greater fluctuations in loss, suggesting a more complex optimization process. The longer training time of YOLOv8m (226 minutes) compared to those of YOLOv8n (48 minutes) and YOLOv8s (117 minutes) reflects the computational intensity required for more refined feature extraction. Despite its slower convergence, YOLOv8m's higher recall suggests that it is better at detecting a broader range of nanostructures, albeit at the cost of increased false positives. YOLOv8s, as an intermediary model, balanced both precision and recall, exhibiting moderate convergence speed and a stable reduction in loss values.

The performance metrics provide further insight into the models’ strengths. YOLOv8n excels in precision, achieving the highest score across most nanostructure classes, meaning that its predictions are highly reliable with fewer false positives. However, its recall is lower, indicating that while it detects structures with high confidence, it may miss some instances. On the other hand, YOLOv8m demonstrates superior recall, making it advantageous for detecting more instances of nanostructures, even if some false positives are introduced. YOLOv8s, positioned between these two extremes, achieves a well-balanced trade-off, making it a versatile option when both precision and recall are equally important.

The model-specific performance across different nanostructure classes further supports this complementarity (Fig. 3). YOLOv8m tends to perform better in detecting LCN and MCV, where structural complexity can challenge smaller models. Its ability to capture fine details makes it particularly useful for these intricate structures. Meanwhile, YOLOv8n performs exceptionally well in detecting V and scale bars, where distinct and well-defined edges allow for higher confidence in detection. YOLOv8s, once again, serves as a middle ground, performing consistently across all classes without being heavily biased toward either precision or recall.

Given these observations, it becomes evident that each model has distinct advantages depending on the detection criteria and computational constraints. Rather than favouring a single model, a more effective strategy is to leverage their complementary strengths. By combining the precision of YOLOv8n, the balanced performance of YOLOv8s, and the high recall of YOLOv8m, an optimized detection framework can be achieved. To this end, implementing Weighted Box Fusion (WBF) provides a means to integrate the predictions of all three models, capitalizing on their respective advantages while mitigating their individual weaknesses. This ensemble approach is expected to enhance both detection robustness and reliability, ensuring a more accurate and generalizable characterization of polymer nanostructures in TEM images. Details of the WBF implementation, including the fusion logic and parameters, are provided in our public code repository on Zenodo.39
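
As an illustration of this fusion step, the sketch below combines the predictions of the three trained models with the weighted_boxes_fusion function from the open-source ensemble-boxes package. The weight file names and the fusion parameters (equal model weights, iou_thr, skip_box_thr) are assumptions for illustration, not necessarily the values used in our repository.

```python
# Minimal multi-model fusion sketch using the ensemble-boxes package
# (pip install ensemble-boxes ultralytics). Weight paths and fusion
# parameters are illustrative assumptions.
from ultralytics import YOLO
from ensemble_boxes import weighted_boxes_fusion

MODELS = ["yolov8n_best.pt", "yolov8s_best.pt", "yolov8m_best.pt"]  # hypothetical paths

def detect_with_wbf(image_path):
    boxes_list, scores_list, labels_list = [], [], []
    for weights in MODELS:
        res = YOLO(weights).predict(image_path, imgsz=640, verbose=False)[0]
        h, w = res.orig_shape
        xyxy = res.boxes.xyxy.cpu().numpy()
        xyxy[:, [0, 2]] /= w   # WBF expects coordinates normalized to [0, 1]
        xyxy[:, [1, 3]] /= h
        boxes_list.append(xyxy.tolist())
        scores_list.append(res.boxes.conf.cpu().numpy().tolist())
        labels_list.append(res.boxes.cls.cpu().numpy().tolist())
    # Overlapping predictions are merged by confidence-weighted averaging
    return weighted_boxes_fusion(boxes_list, scores_list, labels_list,
                                 weights=[1, 1, 1], iou_thr=0.55, skip_box_thr=0.3)
```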

The impact of WBF on nanostructure detection is illustrated in Fig. 4, where detection outputs from YOLOv8n, YOLOv8s, and YOLOv8m, and their fusion via WBF, are shown on the same unseen TEM image. Detected nanostructures are highlighted by bounding boxes, enabling a direct visual comparison of detection behavior across models. YOLOv8m tends to produce more detections but often assigns incorrect classes (misclassifications), while YOLOv8n is more conservative. WBF merges overlapping predictions by confidence-weighted averaging of their coordinates (rather than discarding or down-weighting them, as non-maximum suppression and Soft-NMS do), balancing these extremes and yielding a cleaner output with improved spatial localization. Examples of misclassified detections are highlighted with white arrows. These improvements are further reflected in the detection counts and confidence scores per class, as shown in Fig. 4E and F.


Fig. 4 Evaluation of nanostructure detection performance in an unseen composite TEM image using three YOLOv8 models (n, s, and m) and their fusion via Weighted Box Fusion (WBF). (A–D) Detection outputs from YOLOv8n, YOLOv8s, YOLOv8m, and WBF, respectively. The bounding boxes indicate detected nanostructures, color-coded by class: blue for LCN, green for MCV, yellow for TMCV, and red for unilamellar vesicles (V). The white arrows highlight examples of misclassifications, where the model assigns the wrong morphological class to a detected object. (E) Number of detections per nanostructure class for each model and the WBF output. (F) Average confidence score per class, with error bars representing the standard deviation.

One of the key advantages of using YOLOv8 for nanostructure detection is its ability to provide automated size estimation. Unlike traditional manual methods (using ImageJ for instance), where individual objects must be segmented and measured—often requiring extensive time and user input—YOLOv8 enables rapid and systematic size quantification with minimal effort. By leveraging the bounding box dimensions, vesicle diameters can be efficiently estimated in real time, making this approach particularly well-suited for high-throughput nanostructure characterization.
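
In practice, this estimation reduces to simple arithmetic on the fused boxes, as in the sketch below; the function name and example values are illustrative, and the calibration factor comes from the scale-bar conversion described earlier.

```python
# Minimal sketch: per-class diameter statistics from fused bounding boxes.
import numpy as np

def diameters_nm(boxes_xyxy, nm_per_px):
    """Approximate each object's diameter as the mean of its bounding-box
    width and height, converted from pixels to nanometers."""
    b = np.asarray(boxes_xyxy, dtype=float)
    return 0.5 * ((b[:, 2] - b[:, 0]) + (b[:, 3] - b[:, 1])) * nm_per_px

d = diameters_nm([(10, 12, 60, 64), (80, 20, 140, 75)], nm_per_px=1.6)
print(f"{d.mean():.1f} ± {d.std():.1f} nm")
```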

As shown in Table 2, individual YOLO models exhibit variations in size estimations, particularly for LCN, where YOLOv8m tends to overestimate sizes. WBF refines these measurements by merging predictions across models, reducing variability and ensuring more consistent and reliable size estimates. Compared to manual ImageJ analysis, WBF values are in good agreement, especially for MCV and vesicles, while slight differences for LCN reflect known limitations of bounding-box-based estimation for irregular or compound structures. Notably, generating all class-specific size statistics with YOLOv8 and WBF required less than two seconds, whereas manual measurement of the same image in ImageJ took over 30 minutes, clearly demonstrating the efficiency advantage of our automated approach.

Table 2 Comparison of nanostructure size estimations in the TEM image of Fig. 4, obtained using YOLOv8 models, WBF fusion, and manual analysis with ImageJ

| Model | LCN (nm) | TMCV (nm) | MCV (nm) | V (nm) |
| --- | --- | --- | --- | --- |
| YOLOv8n | 357.6 ± 107 | 122.8 ± 26 | 202.2 ± 32.8 | 80.6 ± 18 |
| YOLOv8s | 334.2 ± 143 | 129 ± 32 | 209.8 ± 36 | 83.4 ± 18.8 |
| YOLOv8m | 374.8 ± 136.4 | 128.4 ± 26.4 | 208.8 ± 37.6 | 87 ± 19.2 |
| WBF | 308 ± 104 | 122.4 ± 26 | 203.8 ± 32.6 | 84 ± 18.2 |
| ImageJ | 301 ± 63 | 132.1 ± 25.9 | 202.1 ± 41.6 | 78.2 ± 23.2 |


Several recent studies have explored deep learning-based approaches for nanoparticle or nanostructure detection in TEM images.23,28–30 These works mainly target rigid inorganic materials and rely on segmentation or classification strategies rather than real-time object detection. In contrast, this work focuses on soft polymeric morphologies such as vesicles and multicompartment structures, and leverages YOLOv8 combined with WBF to enhance detection robustness across morphologies. To our knowledge, no existing framework addresses this specific application space while enabling automated size estimation using embedded scale bars. This highlights the complementary nature and originality of the present approach.

4. Generalization on unseen TEM images of vesicles from the literature

To assess the robustness of our detection pipeline, we evaluated the model's generalization ability on unseen TEM images of vesicles extracted from the literature, featuring varied contrast levels and different chemical compositions. Unlike the training dataset, which maintained a controlled imaging environment, these external images introduced diverse acquisition conditions, testing the adaptability of the model.

As illustrated in Fig. 5, our model successfully detects vesicles across different datasets, demonstrating high confidence and accurate size estimation despite variations in contrast and imaging artifacts. The confidence score distributions remain consistent with the results obtained on our test dataset, reinforcing the model's reliability in identifying self-assembled nanostructures beyond the initial training conditions. Notably, the size distribution remains coherent with the expected vesicle dimensions, further validating the robustness of the detection approach.


Fig. 5 Detection, size distribution, and confidence analysis of vesicles using the WBF-enhanced YOLOv8 model in unseen TEM images extracted from the literature. The left panels show the detection outputs overlaid on the TEM images, the middle panels show the corresponding size distributions (in nm), and the right panels present the distribution of detection confidence scores, along with the average ± standard deviation. Example 1 was adapted with permission from ref. 48. Copyright 2017, American Chemical Society. Example 2 was adapted with permission from ref. 6. Copyright 2011, Royal Society of Chemistry. Example 3 was adapted with permission from ref. 49. Copyright 2021, Wiley-VCH Verlag GmbH & Co. KGaA.

These results highlight the generalization capability of our framework, emphasizing its applicability to a broad range of TEM datasets. This adaptability is particularly crucial for polymer self-assembly studies, where consistent nanostructure characterization across datasets is essential. Nevertheless, vesicles with distinct or complex morphologies—such as non-spherical aggregates, onion-like vesicles, or structures exhibiting extreme contrast variations—may not be reliably detected by our current model. In addition to morphological variability, image quality factors—such as resolution, signal-to-noise ratio, or contrast inconsistencies—can significantly affect detection confidence and size estimation, particularly for poorly resolved structures. Overcoming these limitations would benefit from targeted strategies, including advanced data augmentation techniques (e.g., synthetic contrast variation, controlled noise addition, and rotational or spatial transformations), transfer learning from larger microscopy datasets, and increasing dataset diversity by integrating publicly available, community-shared annotated TEM images.
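
As an example of the augmentation strategies listed above, the sketch below builds such a pipeline with the open-source Albumentations library; the specific transforms and parameter ranges are illustrative assumptions rather than the pipeline used in this work.

```python
# Illustrative augmentation pipeline (pip install albumentations); the
# transforms mirror the strategies listed above, with assumed parameters.
import albumentations as A
import numpy as np

augment = A.Compose(
    [
        A.RandomBrightnessContrast(brightness_limit=0.3, contrast_limit=0.3, p=0.7),  # synthetic contrast variation
        A.GaussNoise(p=0.5),         # controlled noise addition
        A.Rotate(limit=90, p=0.5),   # rotational transformation
        A.HorizontalFlip(p=0.5),     # spatial transformation
    ],
    # Keep YOLO-format bounding boxes consistent with the transformed image
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# Dummy TEM-like image and one YOLO box (cx, cy, w, h, normalized) for demo
image = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)
out = augment(image=image, bboxes=[(0.5, 0.5, 0.2, 0.2)], class_labels=["V"])
```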

To facilitate such improvements, our framework has been designed to be easily fine-tuned by future users, allowing them to adapt the model to their specific vesicle types by retraining on additional annotated datasets. This flexibility ensures that the model can be continually refined to meet the evolving needs of the polymer and soft matter research community, further extending its applicability to new and emerging nanostructures.
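
A minimal fine-tuning call, assuming the released weights have been downloaded (here under a hypothetical filename) and that the user's new annotations are described in their own dataset YAML, could look as follows:

```python
# Fine-tuning sketch: resume from the released DetectNano weights
# (hypothetical filename) on a user's own annotated vesicle dataset.
from ultralytics import YOLO

model = YOLO("detectnano_yolov8s.pt")                        # released weights (placeholder name)
model.train(data="my_vesicles.yaml", epochs=30, imgsz=640)   # retrain on new annotations
```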

5. Conclusions

In this study, we developed a YOLOv8-based deep learning model to detect vesicles and other self-assembled nanostructures, enhancing generalization across diverse TEM datasets. Tested on independent TEM images from the literature, the model demonstrated high accuracy in recognizing vesicular structures, reinforcing its potential for automated, scalable, and unbiased nanostructure analysis.

As a proof of concept, DetectNano demonstrates that deep learning models can be effectively trained to analyze soft polymer nanostructures in TEM images. Its ability to provide accurate and reproducible vesicle morphology and size estimation makes it particularly relevant for applications such as drug delivery, where vesicle size influences the circulation time and targeting efficiency,50,51 or synthetic biology, where vesicle-based systems serve as protocells and compartments.52,53 Even in its current form, the framework can support high-throughput screening and quality control tasks in experimental workflows involving vesicular nanocarriers. Furthermore, as an open-source and modular platform, DetectNano is designed to evolve. By providing annotated datasets, pretrained models, and full source code, this framework offers a concrete and accessible entry point for non-specialists in AI to explore deep learning applications in nanoscience. With community-driven contributions, DetectNano could be extended toward more advanced implementations, including real-time analysis pipelines or in situ/flow-based TEM monitoring for continuous nanostructure detection.

Although the current dataset is sufficient to demonstrate proof-of-concept performance, its limited size and source diversity may restrict full generalization to highly heterogeneous TEM conditions. Most images originate from our previous work, potentially introducing bias in contrast and morphology representation. We acknowledge these limitations and recognize the importance of expanding the dataset through public repositories and broader community contributions. This step is essential to move toward robust, generalizable models applicable across varied polymer and nanomaterial systems.

Looking ahead, the next step is to develop a universal model capable of detecting a wide range of polymeric and inorganic nanostructures. Achieving this goal requires a collective effort from the scientific community, emphasizing the need for open-access datasets and collaborative model training. By sharing annotated TEM datasets and uniting efforts across disciplines, the community can accelerate the development of a robust, generalizable AI tool for nanomaterial characterization. Beyond a technical contribution, this study calls for collaborative efforts to harness AI in nanoscience.

Conflicts of interest

The author declares no competing financial interest.

Data availability

All data supporting the findings of this study—including annotated TEM images, trained models, and source code—are openly available on Zenodo at the following link: https://doi.org/10.5281/zenodo.14995364.

References

1. F. S. Bates and G. H. Fredrickson, Phys. Today, 1999, 52, 32–38.
2. J.-L. Six and K. Ferji, Polym. Chem., 2019, 10, 45–53.
3. D. Ikkene, A. A. Arteni, C. Boulogne, J.-L. Six and K. Ferji, Macromolecules, 2022, 55, 4268–4275.
4. D. E. Discher and A. Eisenberg, Science, 2002, 297, 967–973.
5. R. P. Brinkhuis, F. P. J. T. Rutjes and J. C. M. Van Hest, Polym. Chem., 2011, 2, 1449–1462.
6. A. Blanazs, J. Madsen, G. Battaglia, A. J. Ryan and S. P. Armes, J. Am. Chem. Soc., 2011, 133, 16581–16587.
7. D. Ikkene, A. A. Arteni, M. Ouldali, G. Francius, A. Brûlet, J.-L. Six and K. Ferji, Biomacromolecules, 2021, 22, 3128–3137.
8. S. Varlas, J. C. Foster, P. G. Georgiou, R. Keogh, J. T. Husband, D. S. Williams and R. K. O'Reilly, Nanoscale, 2019, 11, 12643–12654.
9. E. Rideau, R. Dimova, P. Schwille, F. R. Wurm and K. Landfester, Chem. Soc. Rev., 2018, 47, 8572–8610.
10. J. He, J. Cao, Y. Chen, L. Zhang and J. Tan, ACS Macro Lett., 2020, 9, 533–539.
11. J. Yeow, J. T. Xu and C. Boyer, ACS Macro Lett., 2015, 4, 984–990.
12. T. P. T. Dao, A. Brulet, F. Fernandes, M. Er-Rafik, K. Ferji, R. Schweins, J. P. Chapel, F. M. Schmutz, M. Prieto, O. Sandre and J. F. Le Meins, Langmuir, 2017, 33, 1705–1715.
13. A. Czajka and S. P. Armes, Chem. Sci., 2020, 11, 11443–11454.
14. C. M. Maguire, R. Matthias, W. Peter and A. Prina-Mello, Sci. Technol. Adv. Mater., 2018, 19, 732–745.
15. H. Hinterwirth, S. K. Wiedmer, M. Moilanen, A. Lehner, G. Allmaier, T. Waitz, W. Lindner and M. Lämmerhofer, J. Sep. Sci., 2013, 36, 2952–2961.
16. A. S. Byer, X. Pei, M. G. Patterson and N. Ando, Curr. Opin. Chem. Biol., 2023, 72, 102232.
17. E. P. Fonseca Parra, J. Oumerri, A. A. Arteni, J.-L. Six, S. P. Armes and K. Ferji, Macromolecules, 2025, 58, 61–73.
18. K. Ferji, Polym. Chem., 2025, 16, 2457–2470.
19. W. Ge, R. De Silva, Y. Fan, S. A. Sisson and M. H. Stenzel, Adv. Mater., 2025, 37, 2413695.
20. C. Kuenneth, W. Schertzer and R. Ramprasad, Macromolecules, 2021, 54, 5957–5961.
21. L. Chen, G. Pilania, R. Batra, T. D. Huan, C. Kim, C. Kuenneth and R. Ramprasad, Mater. Sci. Eng., R, 2021, 144, 100595.
22. Y. LeCun, Y. Bengio and G. Hinton, Nature, 2015, 521, 436–444.
23. Z. Sun, J. Shi, J. Wang, M. Jiang, Z. Wang, X. Bai and X. Wang, Nanoscale, 2022, 14, 10761–10772.
24. C. Zelenka, M. Kamp, K. Strohm, A. Kadoura, J. Johny, R. Koch and L. Kienle, Ultramicroscopy, 2023, 246, 113685.
25. G. Güven and A. B. Oktay, Nanoparticle detection from TEM images with deep learning, in 26th Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey, 2018, pp. 1–4.
26. J. Madsen, P. Liu, J. Kling, J. B. Wagner, T. W. Hansen, O. Winther and J. Schiøtz, Adv. Theory Simul., 2018, 1, 1800037.
27. Y. Wu, A. Ray, Q. Wei, A. Feizi, X. Tong, E. Chen, Y. Luo and A. Ozcan, ACS Photonics, 2019, 6, 294–301.
28. A. Kamble, S. He, J. R. Howse, C. Ward and I. Hamerton, Comput. Mater. Sci., 2023, 229, 112374.
29. K. M. Saaim, S. K. Afridi, M. Nisar and S. Islam, Ultramicroscopy, 2022, 233, 113437.
30. S. Lu, B. Montz, T. Emrick and A. Jayaraman, Digital Discovery, 2022, 1, 816–833.
31. S. Berg, D. Kutra, T. Kroeger, C. N. Straehle, B. X. Kausler, C. Haubold, M. Schiegg, J. Ales, T. Beier, M. Rudy, K. Eren, J. I. Cervantes, B. Xu, F. Beuttenmueller, A. Wolny, C. Zhang, U. Koethe, F. A. Hamprecht and A. Kreshuk, Nat. Methods, 2019, 16, 1226–1232.
32. L. von Chamier, R. F. Laine, J. Jukkala, C. Spahn, D. Krentzel, E. Nehme, M. Lerche, S. Hernández-Pérez, P. K. Mattila, E. Karinou, S. Holden, A. C. Solak, A. Krull, T.-O. Buchholz, M. L. Jones, L. A. Royer, C. Leterrier, Y. Shechtman, F. Jug, M. Heilemann, G. Jacquemet and R. Henriques, Nat. Commun., 2021, 12, 2276.
33. K. Choudhary, B. DeCost, C. Chen, A. Jain, F. Tavazza, R. Cohn, C. W. Park, A. Choudhary, A. Agrawal, S. J. L. Billinge, E. Holm, S. P. Ong and C. Wolverton, npj Comput. Mater., 2022, 8, 59.
34. A. Stoll and P. Benner, GAMM-Mitt., 2021, 44, e202100003.
35. X. Zhong, B. Gallagher, S. Liu, B. Kailkhura, A. Hiszpanski and T. Y.-J. Han, npj Comput. Mater., 2022, 8, 204.
36. K. Hagita, T. Aoyagi, Y. Abe, S. Genda and T. Honda, Sci. Rep., 2021, 11, 12322.
37. E. Z. Qu, A. M. Jimenez, S. K. Kumar and K. Zhang, Macromolecules, 2021, 54, 3034–3040.
38. K. Ferji, DetectNano, https://github.com/ChemDoc/DetectNano.
39. K. Ferji, Zenodo, 2025, DOI: 10.5281/zenodo.14995364.
40. D. E. Discher and F. Ahmed, Annu. Rev. Biomed. Eng., 2006, 8, 323–341.
41. V. L. Romero Castro, B. Nomeir, A. A. Arteni, M. Ouldali, J.-L. Six and K. Ferji, Polymers, 2021, 13, 4064.
42. D. Ikkene, A. A. Arteni, H. Song, H. Laroui, J.-L. Six and K. Ferji, Carbohydr. Polym., 2020, 234, 115943.
43. K. Ferji, P. Venturini, F. Cleymand, C. Chassenieux and J.-L. Six, Polym. Chem., 2018, 9, 2868–2872.
44. K. Ferji, C. Nouvel, J. Babin, M.-H. Li, C. Gaillard, E. Nicol, C. Chassenieux and J.-L. Six, ACS Macro Lett., 2015, 4, 1119–1122.
45. Roboflow, https://roboflow.com.
46. YOLOv8, https://yolov8.com.
47. T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár and C. L. Zitnick, in Computer Vision – ECCV 2014, 2014, pp. 740–755.
48. J. Tan, D. Liu, Y. Bai, C. Huang, X. Li, J. He, Q. Xu, X. Zhang and L. Zhang, Polym. Chem., 2017, 8, 1315–1327.
49. V. Ibrahimova, H. Zhao, E. Ibarboure, E. Garanger and S. Lecommandoux, Angew. Chem., Int. Ed., 2021, 60, 15036–15040.
50. S. Wilhelm, A. J. Tavares, Q. Dai, S. Ohta, J. Audet, H. F. Dvorak and W. C. W. Chan, Nat. Rev. Mater., 2016, 1, 16014.
51. Y. Barenholz, J. Controlled Release, 2012, 160, 117–134.
52. S. Kretschmer, K. A. Ganzinger, H. G. Franquelim and P. Schwille, BMC Biol., 2019, 17, 43.
53. J. C. M. Lee, H. Bermudez, B. M. Discher, M. A. Sheehan, Y. Y. Won, F. S. Bates and D. E. Discher, Biotechnol. Bioeng., 2001, 73, 135–145.

This journal is © The Royal Society of Chemistry 2025