Dilpreet Singh Brar,*a Birmohan Singhb and Vikas Nandaa
aDepartment of Food Engineering and Technology, Sant Longowal Institute of Engineering and Technology, Longowal 148106, Punjab, India. E-mail: singhdilpreetbrar98@gmail.com
bDepartment of Computer Science and Engineering, Sant Longowal Institute of Engineering and Technology, Longowal 148106, Punjab, India
First published on 14th May 2025
AI is transforming the food sector by improving production, supply chains, quality assurance, and consumer safety. This work addresses the alarming issue of red chilli powder (RcP) adulteration by introducing an AI-driven framework for RcP adulteration detection, based on an empirical evaluation of DenseNet-121 and DenseNet-169. To optimize convergence and enhance performance, the AdamClr optimizer was incorporated, with a learning rate range between 0.00005 and 0.01. Two datasets (DS I and DS II) were developed for evaluation of the DenseNet models. DS I consists of two classes: Class 1 (Label = C1_PWH), representing pure RcP (variety = Wonder Hot (WH)), and Class 2 (Label = C2_AWH), containing samples adulterated with five natural adulterants (wheat bran (WB), rice hull (RB), wood sawdust (WS), and two low-grade RcPs), whereas DS II comprises 16 classes: one class of pure RcP and 15 classes representing RcP adulterated with the five adulterants at varying concentrations (5%, 10%, and 15% each). For binary classification (DS I), DenseNet-169 at batch size (BS) 16 delivered an accuracy of 99.99%, while in multiclass classification (DS II), used to determine the percentage of adulterant, DenseNet-169 at BS 64 produced the highest accuracy of 95.16%. Furthermore, Grad-CAM explains the DenseNet-169 predictions, and the obtained heatmaps highlight the critical regions influencing classification decisions. The proposed framework demonstrated high efficacy in detecting RcP adulteration in both binary and multiclass classification. Overall, DenseNet-169 combined with XAI presents a transformative approach for enhancing quality control and assurance in the spice industry.
Sustainability spotlight
Ensuring food safety and sustainability is essential in the global spice industry. This study presents an AI-driven framework to detect red chilli powder (RcP) adulteration, enhancing food quality assessment using DenseNet-121 and DenseNet-169 with the AdamClr optimizer. The model accurately identifies contaminants such as wheat bran, rice hulls, and wood sawdust, improving consumer safety and regulatory compliance. This approach promotes ethical food production, minimizes economic losses, and reduces the environmental impact of food fraud. Explainable AI (XAI) through Grad-CAM ensures transparency, fostering stakeholder trust. Additionally, the model's efficiency optimizes testing and quality control, supporting sustainable supply chains. By detecting adulteration and ensuring RcP purity, this study advances sustainable food security. The AI-powered method revolutionizes quality assurance in the spice industry and establishes a foundation for future AI applications in food fraud detection, reinforcing global efforts for a safer and more sustainable food system.
Despite India's legacy as the global leader in spice production and export, the spice market remains highly vulnerable to adulteration, driven by the pursuit of higher profits at the expense of consumer health and safety. Among the diverse spices produced in India, red chilli powder (RcP) stands out as one of the most economically motivated products for adulteration due to its high commercial and industrial demand, vibrant colour, pungency, and significant market value.5 The substances used as adulterants to increase the bulk and the colour intensity of RcP are broadly categorised into two major classes. Class I adulterants are natural materials primarily added to increase bulk, including low-grade or diseased red chilli peppers, wheat bran, rice hulls, wood sawdust, chalk powder, and brick powder. In contrast, Class II adulterants consist of synthetic colouring agents such as Sudan dyes and rhodamine B, which are added to enhance the visual appeal of the product.6 Regular consumption of these adulterants, even in small quantities, can lead to severe health issues ranging from gastrointestinal disorders to carcinogenic effects.7 Therefore, it is imperative to detect and prevent the illegitimate adulteration of RcP to safeguard consumer health and ensure food safety. These growing concerns underscore the urgent need for more effective enforcement strategies, the development of advanced rapid detection technologies, and increased public awareness to mitigate the risks associated with adulterated RcP.8 Traditional methods such as chromatography, DNA fingerprinting, and spectroscopy are operationally sophisticated, require trained professionals, are time-consuming, and are expensive.9 These limitations can readily be overcome using AI-driven approaches, such as Deep Learning (DL), which deliver rapid and non-destructive solutions for detecting RcP adulteration.
DL, a specialised subset of Machine Learning (ML), transforms various industries by enabling machines to mimic human cognitive abilities. This advancement is largely driven by Two-Dimensional Convolutional Neural Networks (2D-CNNs), which excel in image analysis tasks.10 One of the core strengths of DL lies in its ability to autonomously identify and learn relevant patterns from extensive datasets, thus eliminating the need for manual feature extraction.11 This makes DL particularly effective in domains like food quality assessment, especially for adulteration detection.5 However, the dataset is crucial when developing DL-based techniques for detecting adulterated test products, as carefully prepared and labelled data form the foundation for the wider and more accurate application of AI models. Currently, several pre-trained 2D-CNN models are utilised for food classification, including ResNet,12 EfficientNet,13 DenseNet,14 and Visual Geometry Group (VGG) networks.15
Despite their impressive performance, DL models often face criticism due to their lack of transparency, functioning as “black boxes” with limited interpretability.16 This opacity raises concerns in critical sectors like food safety, where trust and accountability are paramount.17 To mitigate this challenge, explainable artificial intelligence (XAI) emerges as a promising field to make DL models more understandable.18 Techniques such as Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) help identify key features that influence model predictions, thereby improving the clarity and transparency of AI systems.19 Additionally, visualisation tools like Gradient-weighted Class Activation Mapping (Grad-CAM) and saliency maps highlight the crucial regions within an image that guide the model's decision-making, offering a more intuitive grasp of AI outputs.20 Collectively, these methods enhance AI's reliability, trustworthiness, and user acceptance in detecting natural adulterants in RcP.
The objective of this study is to develop an AI-driven model capable of detecting adulteration in RcP. We designed the model to detect the illegal blending of RcP with natural adulterants through empirical evaluation and fine-tuning of DenseNet (121 and 169) architectures in combination with the AdamClr optimizer. To facilitate this, a dataset comprising images of both pure and adulterated RcP from the Wonder Hot (also known as Wonder Heart) variety was created under controlled laboratory conditions. This dataset is used to evaluate 2D-CNN models with specified hyperparameters to classify pure and adulterated RcP at various concentrations. To further enhance interpretability, the explainable AI (XAI) method Grad-CAM is applied to visualise and justify the predictions made by the most effective model. The results of this research underscore the potential of AI technologies in ensuring food quality assessment. Moreover, the developed model can serve as a valuable tool for food safety authorities in identifying the illegal addition of low-cost adulterants to RcP and similar food products, thereby supporting global initiatives to mitigate food fraud.
Prior to grinding, all raw materials are washed to eliminate surface contaminants and subsequently sun-dried. Once the moisture content of the chilli pods and adulterants is reduced to 7% or lower, the samples are ground using a rotating hammer mill (Make: Natraj, Ahmedabad, Gujarat) equipped with a mesh No. 3 sieve (650 microns). We blended the pure RcP and adulterants to prepare adulterated samples. Each adulterant is incorporated into the pure RcP (WH) at three concentration levels—5%, 10%, and 15%—yielding 15 distinct adulterated classes and a pure class, resulting in a total of 16 classes. To ensure homogeneity, each mixture is thoroughly blended in a planetary mixer for 10 minutes and then passed through a sieve (British Standard Sieve (BSS) No. 30). The final samples are stored in glass containers under refrigeration conditions (4 °C) until further analysis.
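The blending arithmetic behind the three concentration levels can be illustrated with a short sketch; the 200 g batch size used in the example is an assumption for illustration, not a value from the study:

```python
def mix_masses(total_g, adulterant_pct):
    """Return (pure RcP mass, adulterant mass) in grams for a
    w/w adulteration level, e.g. 5%, 10%, or 15%."""
    adulterant_g = total_g * adulterant_pct / 100.0
    return total_g - adulterant_g, adulterant_g

# Example: a 200 g batch at each concentration level used in the study
for pct in (5, 10, 15):
    print(pct, mix_masses(200, pct))
```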
All images are taken under controlled conditions within a custom-designed, white-walled wooden image box (IB). The IB measures 1.5 × 1.5 × 1.5 feet (length × width × height) and includes an adjustable sample stage, a voltage-regulated light source, and a fixed eyepiece (camera mount) positioned perpendicular to the sample holder. The distance between the camera and sample stage is adjustable between 10 cm and 25 cm; for this study, it is fixed at 10 cm.
To ensure uniform sample presentation, powders are evenly spread on 250 ml glass dishes using a 25 BSS sieve to avoid clumping or empty spaces. The prepared dish is then placed on the image box stage. For each of the 16 sample classes, four high-resolution images (1728 × 2592 pixels) are taken by rotating the camera to capture varied perspectives. All images are stored and labelled for further analysis.
Two datasets were developed for model training and evaluation. Dataset I (DS I) targeted binary classification, comprising pure WH samples (C1_PWH) and adulterated WH samples (C2_AWH), with a total of 1638 images. To ensure class balance in DS I, 62 images were randomly selected for each adulterant group, as detailed in ESI Table 1.† Furthermore, Dataset II (DS II) supported multi-class classification and included 16 classes: one for pure WH RcP (WH00) and 15 for adulterated samples, representing five natural adulterants at three concentration levels (5%, 10%, and 15%); the total number of images in DS II is 5852 (DOI: 10.17632/mszn5hk9nv.1). A complete class distribution is provided in ESI Table 1.†
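The 16-class layout of DS II can be sketched programmatically. Only the pure-class label WH00 comes from the text; the codes for the two low-grade RcP adulterants (LG1, LG2) and the adulterated-class naming pattern (adulterant code plus concentration) are assumptions for illustration:

```python
# Five natural adulterants (LG1/LG2 are hypothetical codes for the
# two low-grade RcP adulterants)
adulterants = ["WS", "WB", "RB", "LG1", "LG2"]
concentrations = [5, 10, 15]  # % w/w levels used in the study

# One pure class plus 5 adulterants x 3 concentrations = 16 classes
classes = ["WH00"] + [f"{a}{c:02d}" for a in adulterants for c in concentrations]
print(len(classes))  # 16
```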
Component | DenseNet-121 | DenseNet-169
---|---|---
Initial convolution | 7 × 7 conv., stride 2 | 7 × 7 conv., stride 2 |
Initial pooling | 3 × 3 max pooling, stride 2 | 3 × 3 max pooling, stride 2 |
Dense block 1 | 6 × (1 × 1 conv. + 3 × 3 conv.) | 6 × (1 × 1 conv. + 3 × 3 conv.) |
Transition layer 1 | 1 × 1 conv. + 2 × 2 avg. pooling | 1 × 1 conv. + 2 × 2 avg. pooling |
Dense block 2 | 12 × (1 × 1 conv. + 3 × 3 conv.) | 12 × (1 × 1 conv. + 3 × 3 conv.) |
Transition layer 2 | 1 × 1 conv. + 2 × 2 avg. pooling | 1 × 1 conv. + 2 × 2 avg. pooling |
Dense block 3 | 24 × (1 × 1 conv. + 3 × 3 conv.) | 32 × (1 × 1 conv. + 3 × 3 conv.) |
Transition layer 3 | 1 × 1 conv. + 2 × 2 avg. pooling | 1 × 1 conv. + 2 × 2 avg. pooling |
Dense block 4 | 16 × (1 × 1 conv. + 3 × 3 conv.) | 32 × (1 × 1 conv. + 3 × 3 conv.) |
Classification layer | Global avg. pooling + fully connected softmax layer | Global avg. pooling + fully connected softmax layer |
Total parameters | ∼7.57 million | ∼13.51 million |
Growth rate | 32 | 32 |
Total depth | 242 layers | 338 layers |
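The growth-rate arithmetic behind the table can be traced with a short sketch: each dense-block layer appends 32 feature maps, and each transition layer compresses the channel count. The 64-channel stem and 0.5 compression factor are the defaults from the original DenseNet paper, assumed here rather than stated in the table:

```python
def densenet_feature_widths(block_sizes, growth_rate=32,
                            stem_channels=64, compression=0.5):
    """Trace channel counts through DenseNet: each dense-block layer adds
    growth_rate channels; each transition layer halves the count."""
    channels = stem_channels
    widths = []
    for i, layers in enumerate(block_sizes):
        channels += layers * growth_rate        # dense connectivity
        if i < len(block_sizes) - 1:            # transition after all but last block
            channels = int(channels * compression)
        widths.append(channels)
    return widths

# DenseNet-121 uses blocks (6, 12, 24, 16); DenseNet-169 uses (6, 12, 32, 32)
print(densenet_feature_widths((6, 12, 24, 16)))   # final width 1024
print(densenet_feature_widths((6, 12, 32, 32)))   # final width 1664
```

The final widths (1024 and 1664 channels) are what the global average pooling layer receives before the softmax classifier.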
Several optimizers are commonly utilized in training 2D-Convolutional Neural Networks (2D-CNNs), each providing distinct advantages. Stochastic Gradient Descent (SGD) is a fundamental technique that updates model parameters using calculated gradients, while momentum-based SGD accelerates convergence by incorporating a fraction of the previous gradient into current updates. Root mean square propagation (RMSprop) dynamically adjusts learning rates by normalizing gradients, making it well-suited for tasks with non-stationary objectives.21 Adaptive moment estimation (Adam) merges the strengths of RMSprop and momentum by applying adaptive learning rates, and its variant, AdamW, further enhances generalization by decoupling weight decay from gradient updates.22 Optimizers like AdaGrad and AdaDelta also adjust learning rates based on the accumulation of past gradients, which is particularly beneficial for handling sparse datasets, although they may lead to diminishing learning rates over time.
For this research, AdamClr (Adam with a cyclical learning rate) is selected for its capacity to dynamically modulate the learning rate, which helps avoid entrapment in local minima and accelerates convergence. The cyclical learning rate (Clr) strategy employed here allows the learning rate to vary between a minimum of 0.00005 and a maximum of 0.001, promoting efficient exploration of the loss landscape and mitigating premature convergence.23 The learning rate was adjusted at a frequency determined by the step size, calculated as 25 × (training size/batch size), enabling periodic updates across training epochs. Additionally, a scaling function is applied to gradually decrease the amplitude of the learning rate oscillations over time, further refining the training process. The Adam optimizer, enhanced with Clr, effectively leveraged both momentum and adaptive learning rates to improve the overall model performance. The model's training objective was guided by the categorical cross-entropy loss function, expressed as:
L = −∑i=1N yi log(ŷi)

where N represents the number of classes, yi denotes the true label for class i (in one-hot encoded form), and ŷi signifies the predicted probability for class i, obtained through the softmax activation function in the output layer of the network. This loss function quantifies how closely the predicted probability distribution ŷi aligns with the actual class distribution yi, ensuring that the model learns to assign higher probabilities to correct classifications. Accuracy was used as the primary evaluation metric for model performance.
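The cyclical schedule described above can be sketched in plain Python. The triangular oscillation with decaying amplitude follows Smith's Clr formulation; the decay factor gamma and the step size of 1000 iterations below are placeholder assumptions (the study computes step size as 25 × (training size/batch size) and does not report a decay factor):

```python
import math

def adam_clr_lr(iteration, base_lr=5e-5, max_lr=1e-3,
                step_size=1000, gamma=0.99994):
    """Cyclical learning rate: oscillate between base_lr and max_lr with a
    triangular wave whose amplitude decays by gamma**iteration."""
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    scale = gamma ** iteration  # gradually shrinks the oscillations over time
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x) * scale

print(adam_clr_lr(0))  # starts at the minimum learning rate, 5e-05
```

Each call returns the learning rate to hand to the Adam optimizer at that training iteration; the rate climbs toward max_lr mid-cycle and returns to base_lr at each cycle boundary.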
Metric | Formulaa
---|---
Accuracy | (TP + TN)/(TP + TN + FP + FN)
Precision | TP/(TP + FP)
Recall (sensitivity) | TP/(TP + FN)
F1-score | 2 × (Precision × Recall)/(Precision + Recall)

a TP: True Positive; TN: True Negative; FP: False Positive; FN: False Negative.
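The four formulae can be checked with a small helper; the confusion-matrix counts in the example are invented for illustration:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a binary classifier
acc, prec, rec, f1 = classification_metrics(tp=90, tn=85, fp=10, fn=15)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```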
The Grad-CAM process started by selecting the final convolutional layer of the network, as this layer retains essential spatial features while encapsulating high-level patterns critical for classification tasks. The selection of this layer is key because it maintains the spatial integrity of features relevant to the target class. Following this, automatic differentiation was used to compute the gradients of the output class score with respect to the selected layer's feature maps. These gradients reflect how variations in the activation maps affect the model's confidence in its prediction.
Subsequently, a global average pooling operation was applied to these gradients, yielding importance weights for each feature map. These weights quantify the influence of individual feature maps on the final output. To create the heatmap, a weighted combination of the activation maps is calculated, spotlighting regions with the highest contribution to the classification outcome. The ReLU function is employed at this stage to remove negative values, ensuring that only features with positive influence are visualised. The resultant heatmap is normalised to a 0–1 scale for consistency in intensity representation.
For better visualisation, the heatmap is colour-coded using OpenCV, where regions of high model attention are highlighted in red or yellow, indicating strong activation. This coloured heatmap is then overlaid onto the original image to provide a clear depiction of the areas most responsible for the model's decision.
This visualisation pipeline is applied across all test samples in the dataset, and the generated heatmaps are stored for comprehensive analysis. This approach enabled a detailed examination of the model's reasoning, providing insights into feature importance and enhancing model interpretability.
The importance weights αkc for each feature map Ak are computed as eqn (1):

αkc = (1/Z) ∑i ∑j ∂yc/∂Aijk | (1)

where Z is the number of spatial locations in the feature map and yc is the score for the target class c. The class-discriminative localisation map is then obtained as a ReLU-gated weighted combination of the feature maps, as eqn (2):

LcGrad-CAM = ReLU(∑k αkc Ak) | (2)
The ReLU activation ensures that only positively contributing regions are highlighted. Grad-CAM thus helps identify the specific parts of an image that most influenced the model's prediction, making it a valuable tool for model debugging, transparency, and increasing confidence in AI-driven decisions.
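The steps in eqns (1) and (2) can be sketched with NumPy. In practice the gradients come from the framework's automatic differentiation, so the arrays below stand in for the activations Ak and gradients ∂yc/∂Ak of the final convolutional layer; this is a minimal sketch of the pooling, weighting, ReLU, and normalisation steps, not the full visualisation pipeline:

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: arrays of shape (H, W, K) for one image."""
    # eqn (1): global-average-pool the gradients -> one weight per feature map
    alpha = gradients.mean(axis=(0, 1))                    # shape (K,)
    # eqn (2): weighted sum of activation maps, then ReLU
    cam = np.maximum((activations * alpha).sum(axis=-1), 0.0)
    # normalise to [0, 1] so the heatmap can be colour-mapped and overlaid
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: only channel 0 receives a positive gradient
acts = np.ones((2, 2, 3))
grads = np.zeros((2, 2, 3))
grads[..., 0] = 1.0
print(grad_cam(acts, grads))
```

The normalised map is then colour-coded (e.g. with OpenCV's applyColorMap) and alpha-blended onto the original image, as described above.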
Both models are trained using two datasets (DS I and DS II). DS I is utilized for binary classification to determine whether a sample is pure or adulterated. In contrast, DS II is employed to train the same models for multi-class classification, enabling them to accurately classify test samples based on the concentration of specific natural adulterants (WS, WB, RB, BM, and GM) in pure RcP (WH). The classification performance of the trained models is evaluated to detect and quantify adulteration in RcP. Furthermore, the decision-making process of the best-performing 2D-CNN model is analyzed using the explainable AI (XAI) technique Grad-CAM.
Model | Optimizer | Batch size | Accuracy | Precision | Recall | F1 score
---|---|---|---|---|---|---
DenseNet-121 | AdamClr | 16 | 99.99 | 99.99 | 99.99 | 99.99
DenseNet-121 | AdamClr | 32 | 99.99 | 99.99 | 99.99 | 99.99
DenseNet-121 | AdamClr | 64 | 99.99 | 99.99 | 99.99 | 99.99
DenseNet-169 | AdamClr | 16 | 99.99 | 99.99 | 99.99 | 99.99
DenseNet-169 | AdamClr | 32 | 99.99 | 99.99 | 99.99 | 99.99
DenseNet-169 | AdamClr | 64 | 99.99 | 99.99 | 99.99 | 99.99
Fig. 2 and 3 illustrate the confusion matrices alongside the model accuracy versus epoch plots and ROC-AUC curves, collectively clarifying the models' performance and facilitating a comprehensive understanding of their predictive capabilities. DenseNet-121 achieves perfect classification across all BSs, as demonstrated by the confusion matrices (Fig. 2(a, d and g)), where no misclassifications are observed. However, the training and validation accuracy plots (Fig. 2(b, e and h)) indicate that smaller BSs produced more stable learning trends, whereas the largest BS (BS 64) exhibited more pronounced fluctuations, consistent with the batch-size effects on convergence reported by Masters & Luschi.25 Nevertheless, the ROC-AUC curves (Fig. 2(c, f and i)) consistently yield an AUC of 1.00 across all configurations, confirming the model's excellent discriminatory power.
Similar to DenseNet-121, DenseNet-169 demonstrates excellent classification accuracy, reflected in the confusion matrices (Fig. 3(a, d and g)), with zero misclassifications observed. The training and validation accuracy plots (Fig. 3(b, e and h)) also reveal high accuracy, and the model displays smoother learning curves with smaller BSs, indicative of more stable gradient updates. The ROC-AUC curves (Fig. 3(c, f and i)) achieve consistently high AUC scores of 1.00, indicating excellent class discrimination.
This robust performance of the evaluated models aligns with the dense connectivity principles of DenseNets, which help to alleviate the vanishing gradient problem and enhance feature reuse throughout the network. Both DenseNet-121 and DenseNet-169 exhibit strong performance in binary classification, attributable to their inherent architectural advantages in feature propagation and gradient flow. These findings are in accordance with those of Huang et al.14 However, DenseNet-169 may exhibit greater stability and convergence due to its deeper architecture, which facilitates more complex feature learning and gradient propagation. The fluctuations observed with larger BSs, particularly in DenseNet-121, are consistent with the literature on large-batch training, where batch size can markedly affect convergence behaviour and generalisation.26
Besides, this research work extends the application of DenseNets to multiclass classification to determine the percentage of adulteration in RcP. The dataset size (DS II) is increased further to evaluate the DenseNets' efficiency in RcP adulteration detection.
Model | Optimizer | Batch size | Accuracy | Precision | Recall | F1 score
---|---|---|---|---|---|---
DenseNet-121 | AdamClr | 16 | 91.93 | 92.26 | 91.93 | 91.80
DenseNet-121 | AdamClr | 32 | 92.03 | 92.46 | 92.03 | 92.02
DenseNet-121 | AdamClr | 64 | 94.05 | 94.35 | 94.05 | 93.96
DenseNet-169 | AdamClr | 16 | 89.98 | 90.50 | 89.98 | 89.79
DenseNet-169 | AdamClr | 32 | 91.93 | 92.32 | 91.93 | 91.81
DenseNet-169 | AdamClr | 64 | 95.16 | 95.16 | 95.16 | 95.10
In contrast to binary classification, the multiclass problem requires the model to learn more complex feature representations. Larger BSs provide more stable gradients, enhancing convergence and class discrimination.27 Additionally, batch normalization is more effective with larger batches, further improving training stability in multiclass settings and supporting high classification accuracy.28
Furthermore, both models demonstrate strong classification performance, as reflected in the confusion matrices and ROC-AUC curves (Fig. 4 and 5). However, training and validation accuracy plots from DenseNet-169 reveal key differences (Fig. 5(d, e and f)). Smaller BSs in multiclass settings often introduce greater variations in validation accuracy, suggesting higher sensitivity to individual training examples, while the larger BSs result in smoother learning curves but slower initial convergence.
The trade-off in gradient noise is also evident: larger BSs provide more stable but potentially less precise gradient estimates, leading to consistent model updates but possibly overlooking finer data details.25 In contrast, smaller BSs, though noisier, can escape local minima but exhibit more unstable training dynamics in multiclass settings.26 The consistently high AUC scores (close to 1.0) across all classes and BSs suggest that both models are highly effective in class distinction, likely due to the inherent separability of the extracted features.29 BS selection therefore plays a crucial role in multiclass classification. The results of this research work reveal that multiclass classification using DenseNet-169 with BS 64 exhibits a gradual and stable increase in both training and validation accuracy (95.16%), ensuring smooth convergence and improved generalisation, while minor fluctuations indicate better handling of overfitting in complex feature hierarchies (Fig. 5(f)).
The consistently high AUC scores suggest that the models are effective at discriminating between classes across different BSs, highlighting their potential for real-world applications. These findings indicate that DenseNet-169, trained with BS 64, outperforms comparable models in multiclass classification for detecting natural adulterants in pure RcP (WH). By leveraging dense connectivity and feature reuse to optimise parameter efficiency, DenseNet-169 demonstrated superior adaptability to the image dataset, efficiently differentiating adulterated samples from authentic RcP. The synergy between its architectural advantages and BS optimisation (BS 64) established DenseNet-169 as the most effective model for the classification task, achieving an impressive accuracy of 99.99% for binary classification and 95.16% for multiclass classification in RcP adulteration detection.
The overlaid heatmaps generated by Grad-CAM are presented in Fig. 6, illustrating DenseNet-169's decision-making process in detecting natural adulterants in pure RcP as a multiclass classification problem. The model detects the five natural adulterants at concentrations of 5%, 10%, and 15% (15 classes) alongside the pure RcP class. In Fig. 6, each original image is displayed alongside the corresponding Grad-CAM heatmap used to interpret the DenseNet-169 decision-making process. The model predicts each class with a confidence level greater than 0.95, demonstrating alignment of the predicted samples with the true labels and explaining the accurate classification of adulteration in RcP at different concentration levels.
Fig. 6 Visualisation of DenseNet-169 predictions for classification of pure RcP and percentage of adulteration using Grad-CAM heatmaps; (a) to (o) adulterated classes and (p) pure WH.
The colour intensity corresponds to the region of interest, with red indicating the highest relevance, highlighting features that play a crucial role in the decision-making process. These regions may correspond to natural adulterant particles in RcP, which are associated with granule size, particle distribution, colour intensity, and other physical attributes that are not easily detectable by the human eye (Fig. 6). Meanwhile, blue regions cover less critical areas, signifying minimal influence on the decision-making process.
This visualisation underscores the effectiveness of Grad-CAM in offering valuable insights into the focal regions of DenseNet-169, ensuring that multiclass classification decisions are based on scientifically relevant features rather than extraneous artefacts. Fig. 6(a, d, g, j and m) present heatmaps corresponding to classes containing 5% natural adulterants in RcP, where the red regions indicate the presence of adulterant particles. Furthermore, as the adulterant concentration in samples increases up to 15%, the intensity of the red regions in the heatmaps also increases, which is particularly evident in the representations of classes with 10% and 15% adulteration (Fig. 6). By highlighting the specific regions influencing the model's predictions, Grad-CAM enhances the interpretability of the DenseNet-169 model and aids in validating its reliability for detecting adulteration in RcP.31,32
Expanding the scope of AI applications for detecting unethical adulteration in RcP, the presented research work develops a comprehensive labelled dataset incorporating five types of natural adulterants across 16 classes. DenseNet-169 with the AdamClr optimizer is trained to identify adulteration, achieving an accuracy of 99.99% in binary classification and 95.16% in multiclass classification for distinguishing pure and adulterated RcP samples at various concentrations. Furthermore, the interpretability of the trained DenseNet-169 model is enhanced using an XAI technique, which underscores the novelty of the proposed study.
Various studies have applied AI methods for the quality evaluation of food products. In one study, Fatima et al.32 implemented a Siamese network to detect papaya seed adulteration in black pepper, achieving an accuracy of 92%. Similarly, Rady et al.33 developed an adulteration detection method for minced meat by integrating colour space and texture features to train an ensemble linear discriminant classifier, which attained 98% accuracy in differentiating pure and adulterated samples. In another study, Brar et al.8 utilized a 2D-CNN model to identify corn syrup adulteration in honey by analysing images extracted from test sample videos, achieving 99% classification accuracy. Additionally, Sehirli et al.34 investigated butter adulteration with vegetable fat, where both artificial neural networks (ANNs) and support vector machines (SVMs) achieved an accuracy of 99%.
The integration of explainable AI (XAI) with 2D-CNNs has also been explored for determining seabream freshness, where the DenseNet-121 model achieved 100% classification accuracy and was further analysed using Grad-CAM and LIME to enhance interpretability.30 Similarly, InceptionV3 was employed alongside LIME to enhance transparency and accuracy in sorting chicken meat into fresh and rotten categories, attaining a sorting accuracy of 96.3%; a 2D-CNN-LIME-based system further guided a robotic arm in processing 1000 fresh and 300 rotten chicken meat samples, achieving precision rates of 94.19% for fresh meat and 97.24% for rotten meat.16 Besides, Benjamin35 employed the YOLOv5 model for the recognition and classification of bread quality attributes. The model was trained on a comprehensive dataset comprising images of various bread types, each annotated with the corresponding quality labels. By leveraging the advanced object detection capabilities of YOLOv5, the system effectively identifies and categorizes different quality attributes with an accuracy of 92.00%, ensuring a robust and efficient evaluation of bread quality.
All in all, this research work demonstrates that the proposed model serves as a reliable and effective approach for detecting adulteration in RcP. Furthermore, the explainable AI technique Grad-CAM is utilized to interpret the decision-making process of the highest-performing 2D-CNN model, providing visual insights by identifying the key image regions that influenced classification. The combined analysis validates the model's reliability and confirms its ability to focus on adulteration-specific features.
Therefore, future research should focus on expanding the dataset to include a broader range of food products and adulterants under diverse environmental conditions to enhance model adaptability. Furthermore, investigating the performance of alternative AI models, such as vision transformers, lightweight CNN architectures, or hybrid deep learning approaches, could further optimize detection accuracy and computational efficiency. Incorporating real-time data acquisition systems and transfer learning strategies could also facilitate the practical deployment of AI-based adulteration detection tools in industrial and regulatory settings.
Footnote
† Electronic supplementary information (ESI) available. See DOI: https://doi.org/10.1039/d5fb00118h |
This journal is © The Royal Society of Chemistry 2025 |