Open Access Article
This Open Access Article is licensed under a
Creative Commons Attribution 3.0 Unported Licence

A comparative analysis of MCDA methods for safe and sustainable by design assessment

Takshak Shende* and Viktor Popov*
Ascend Technologies Ltd, Southampton SO15 2BG, Hampshire, UK. E-mail: takshak@ascendtechnologies.co.uk; viktor@ascendtechnologies.co.uk

Received 15th January 2026, Accepted 17th March 2026

First published on 19th March 2026


Abstract

The Safe and Sustainable by Design (SSbD) framework is central to the European Union's cleaner and safer production and chemical sustainability goals, necessitating robust tools for implementation. This paper presents a review and comparative computational analysis of Multi-Criteria Decision Analysis (MCDA) methodologies for SSbD. Standard fully compensatory MCDA methods are shown to be fundamentally unsuitable as primary safety gates in regulatory decision contexts because they permit high sustainability performance to offset critical safety hazards, conflicting with the non-compensatory principle of chemical safety legislation like REACH. Three approaches were mathematically formulated and evaluated using a plasticizer case study: compensatory composite indicators, the regulatory-aligned multiple-criteria decision analysis for assessments of chemical alternatives (MCDA-ACA), and the Joint Research Centre (JRC) quantitative SSbD framework. The analysis demonstrates that compensatory methods fail to reliably implement safety-first logic at the gate, while non-compensatory or hybrid frameworks such as MCDA-ACA, the JRC method, and the CI-SSbDC composite indicator successfully implement safety-first logic through discrete value functions, minimum aggregation, and explicit cut-off criteria. We recommend a robust, two-stage hybrid approach for practitioners: (1) apply a validated non-compensatory safety gate to eliminate hazardous alternatives, and (2) subsequently rank the remaining safe options using comprehensive sustainability and performance criteria. This work contributes to operationalizing SSbD by providing a clear and validated methodological pathway for informed decision-making in cleaner chemical innovation.



Sustainability spotlight

Chemical innovation often risks “regrettable substitutions,” where hazardous substances are replaced by alternatives with unforeseen safety or environmental flaws. This work addresses the critical need for robust decision-making tools by evaluating Multi-Criteria Decision Analysis (MCDA) methodologies within the European Union’s Safe and Sustainable by Design (SSbD) framework. We demonstrate that traditional compensatory methods are unsuitable for chemical safety, as they allow high sustainability to mask critical hazards. By validating a “safety gate” approach that prioritizes non-compensatory logic, this review provides a methodological pathway for transitioning to a toxic-free circular economy. This work directly aligns with UN SDG 3 (Good Health and Well-being) and SDG 12 (Responsible Consumption and Production) by operationalizing safer chemical design and reducing exposure to hazardous substances.

1 Introduction

The transition towards a sustainable and circular economy requires fundamental transformations in how chemicals and materials are designed, produced, and managed throughout their life cycles. The European Union's policy framework, anchored in the European Green Deal (2019)1 and the Chemicals Strategy for Sustainability (CSS) (2020),2 establishes a comprehensive approach to achieving zero pollution while maintaining industrial competitiveness. At the heart of this transformation lies the Safe and Sustainable by Design (SSbD) concept, formally established in 2022,3 which integrates safety and sustainability considerations from the earliest stages of chemical innovation.

The SSbD framework emerged as a response to regrettable substitutions, where hazardous chemicals were replaced by alternatives that subsequently proved equally or more problematic.4,5 Traditional chemical development addressed safety and environmental concerns reactively, after market introduction. SSbD represents a paradigm shift towards proactive assessment and design.6,7 The European Commission's 2022 SSbD Recommendation marked a milestone in operationalizing this concept.3 The Joint Research Centre (JRC) developed a structured framework that combines risk assessment methodologies with Life Cycle Assessment (LCA).8,9 Building on its testing in a range of case studies and stakeholder settings, the framework was substantially revised in 2025 (ref. 10) to integrate intrinsic and exposure-based safety into a single holistic safety assessment and to expand the role of evaluation and multi-criteria aggregation in decision-making. This revised framework was formally adopted by the European Commission in March 2026,11 cementing it as the standard for future chemical design.

Multi-Criteria Decision Analysis (MCDA) offers a promising approach for implementing SSbD, providing transparent methods for evaluating alternatives across multiple dimensions.12,13 MCDA can systematically integrate hazard assessment, exposure characterization, environmental sustainability, and socioeconomic considerations.14,15 However, significant gaps remain in translating these concepts into practical frameworks.16,17 Current SSbD applications vary considerably in assessment boundaries, indicator selection, aggregation approaches, and decision rules. The lack of standardized scoring systems hinders comparability across projects and limits progress tracking.18,19

This review addresses these gaps through four primary objectives: (1) systematically review existing SSbD assessment methodologies and their JRC alignment; (2) analyse MCDA techniques for aggregating diverse safety and sustainability indicators with particular attention to the non-compensatory requirements of regulatory safety gates; (3) develop a transparent SSbD scoring system maintaining safety priority and illustrate it on the JRC plasticizer case study; and (4) identify implementation challenges and future research directions.

The remainder of this paper is organized as follows. Section 2 describes the review methodology. Section 3 presents the conceptual foundations and regulatory context of the SSbD framework, highlighting the 2025 revision. Section 4 provides mathematical formulations of three MCDA methods. Section 5 presents the implementation, case study results, and selected sensitivity analyses. Section 6 presents a consolidated discussion of methodological comparisons, challenges, and recommendations, including operational guidance for practitioners. Section 7 concludes the paper.

2 Methods: literature review and method selection

We conducted a focused, narrative review with a structured and reproducible search strategy to identify SSbD-relevant decision-support frameworks and MCDA approaches. Databases searched include Scopus, Web of Science Core Collection, and Google Scholar, complemented by targeted searches of institutional repositories (European Commission, OECD, national research organisations), covering a primary time window of January 2010–December 2025. The initial database search yielded 215 unique records after deduplication; 67 full-text documents were assessed for eligibility, of which 36 were retained as primary SSbD-related decision-support methods or MCDA applications. A further 18 documents were identified as contextual supporting literature (e.g., policy background, methodological guidance on LCA, uncertainty, or FAIR data) through reference list screening and institutional repository searches; these did not pass through the database screening pipeline. From the broader set, three representative methods were selected for detailed comparative analysis: MCDA-ACA,14 the JRC SSbD framework quantitative scoring methodology,8,10,20 and the CI-SSbDC composite indicator.19 Full details of the review scope and research questions, search strings, inclusion and exclusion criteria, the PRISMA-style flow diagram of study selection, and the inventory of all included methods are provided in the SI (Section S1, Fig. S1, and Table S1).

The present review builds on two recent foundational reviews. Lantto21 published a narrative review of MCDA applications in chemical alternatives assessment (CAA), analysing 520 studies and including 21 for detailed analysis, identifying Multi-Attribute Utility Theory (MAUT)/Multi-Attribute Value Theory (MAVT) as the dominant approach and highlighting persistent gaps in stakeholder engagement, external normalisation to regulatory thresholds, and non-compensatory aggregation. Dias et al.22 reviewed MCDA methods applied in the SSbD context, defined requisites for MCDA within the JRC framework, and proposed options for multiattribute aggregation and the use of dashboards to complement aggregate scores. To avoid repetition and to address the rapid development of SSbD-specific frameworks since the European Commission's 2022 Recommendation,3 the present review concentrates on decision-support methods and MCDA approaches explicitly framed for or applied within SSbD and Safe-by-Design contexts,10,23,24 and the integration of non-compensatory safety gates with sustainability assessment.

3 Safe and sustainable by design framework: conceptual foundations and regulatory context

3.1 Policy context and development of SSbD

The SSbD framework represents the culmination of evolving European environmental and chemicals policy, integrating decades of experience in risk assessment (RA), pollution prevention, and sustainable development. Its conceptual foundations trace to green chemistry principles,6 safe-by-design approaches in nanotechnology,25,26 and life cycle thinking. These streams converged within the context of the European Green Deal, which established ambitious targets for climate neutrality, a circular economy, and zero pollution by 2050. The immediate policy driver is the chemicals strategy for sustainability, adopted in October 2020.2 While recognizing the economic importance of the chemicals industry, the CSS mandates a fundamental transformation of innovation practices and specifically promotes SSbD in research and innovation.27

The framework's initial development by the JRC involved extensive stakeholder consultation and a review of over 30 existing frameworks related to safe design and sustainable chemistry.8,28,29 This analysis revealed that while many frameworks addressed safety or sustainability individually, few provided integrated approaches with clear assessment procedures. Building upon Safe-by-Design lessons from the nanomaterials sector, the European Commission formally codified the SSbD concept in a December 2022 recommendation.3 The voluntary recommendation establishes a common language and a structured methodology for assessing chemicals and materials throughout research and innovation.27 The SSbD development coincides with major revisions to European chemicals legislation, and its integration across sectors requires new technical tools, capacity building, and education.30–32

Testing of the 2022 framework in multiple case studies and stakeholder workshops provided the evidence base for a revised 2025 framework.10 The revision maintains the core concept but (i) makes the SSbD principles explicit as the backbone of the framework, (ii) strengthens and structures the scoping analysis and scenario definition, (iii) integrates intrinsic and exposure-based safety into a holistic safety assessment, (iv) incorporates socio-economic assessment into the broader sustainability evaluation, and (v) formalises an evaluation stage that emphasises trade-offs, uncertainty, and the use of MCDA for aggregation and communication.10,11 These changes have direct implications for how MCDA can and should be used to support SSbD decision-making.

3.2 Core principles and structure of the SSbD framework

The SSbD framework integrates two complementary dimensions: guiding design principles and a systematic assessment methodology. This dual structure guides innovation toward safer, more sustainable outcomes, applying to both new and existing chemicals across their full life cycle.8,10 Fig. 1 illustrates the overall assessment flow.
Fig. 1 SSbD Framework: overall structure in the 2025 revision10 (own elaboration). The diagram shows the scoping analysis and SSbD scenario definition feeding into three assessment parts: holistic safety (intrinsic properties and exposure across life cycle), environmental sustainability (LCA-based), and socio-economic sustainability. Safety combines what were previously steps 1–3 into a single safety assessment, maintaining a non-compensatory safety-first logic. Assessment results are synthesised in an evaluation and documentation stage that can be supported by MCDA and visual dashboards. [PBT = Persistent, Bioaccumulative, and Toxic; PMT = Persistent, Mobile, and Toxic; SVHC = Substance of Very High Concern; CMR = Carcinogenic, Mutagenic, or Reprotoxic; ED = Endocrine Disrupting].
3.2.1 Design principles. The revised JRC SSbD framework establishes a set of core design principles to guide chemical and material innovation,10 building on and refining earlier formulations.8 These principles include, among others: (1) optimise material efficiency to reduce resource use and waste generation; (2) minimise the use of hazardous chemicals/materials throughout the life cycle; (3) design for energy efficiency; (4) use renewable sources; (5) prevent and avoid hazardous emissions; (6) reduce exposure to hazardous substances; (7) design for end-of-life; and (8) consider the whole life cycle. Additional sector-specific principles may apply depending on the particular application. These principles represent complementary considerations rather than hierarchical requirements, recognising inherent trade-offs that require case-by-case evaluation and stakeholder engagement.
3.2.2 Scoping analysis and SSbD scenario definition. Successful implementation of an assessment framework starts with a crucial scoping analysis that defines the assessment boundaries and involves three main elements: system definition, design objectives, and stakeholder identification.8–10 System definition identifies the chemical or material, its production processes, and applications to capture necessary technological detail. Design objectives clarify the specific safety and sustainability goals of the innovation to properly prioritise assessment efforts. Stakeholder identification engages key actors across the value chain, such as suppliers, manufacturers, users, and recyclers, to ensure a comprehensive view, despite challenges like managing confidential information.9 The 2025 revision formalises this into an SSbD scenario definition step, which tailors the subsequent assessment parts to the specific innovation, maturity level, and available data.10 This initial scoping phase is also where practical constraints like resource limitations and data accessibility are addressed, operating with the understanding that the framework is iterative and will be refined as the innovation matures and more data become available.

3.3 Safety, environmental sustainability, socio-economic assessment, and evaluation

In the 2022 framework,8,20 the assessment phase comprised four sequential yet iterative steps, with an optional fifth addressing socio-economic dimensions. These were: step 1 hazard assessment; step 2 occupational and environmental safety during production; step 3 consumer and professional user safety; step 4 environmental sustainability (LCA); and step 5 socio-economic assessment. The 2025 revision10 reorganises this structure while preserving the underlying content and safety-first logic.

The safety part now explicitly combines intrinsic hazard properties and exposure-based risk assessment across relevant life cycle stages into a single holistic safety assessment. This includes indicators and criteria related to intrinsic properties, such as Persistent, Bioaccumulative, and Toxic (PBT); very Persistent and very Bioaccumulative (vPvB); Persistent, Mobile, and Toxic (PMT); very Persistent and very Mobile (vPvM); Carcinogenic, Mutagenic, or toxic for Reproduction (CMR); and Endocrine Disrupting (ED) properties, alongside process-related occupational exposures, consumer and professional exposures, and environmental releases.10 The environmental sustainability part continues to employ LCA, aligned with the Product Environmental Footprint (PEF) method,33,34 but introduces screening-level options and benchmarks to support low-maturity innovations. The socio-economic part is significantly expanded to cover social fairness (e.g., working conditions and human rights), competitiveness and financial resilience, supply chain vulnerabilities, and life cycle costs.10 Together, these three parts feed into an explicit evaluation stage, where trade-offs and uncertainties are analysed, results are visualised in dashboards, and multi-criteria aggregation (including non-compensatory and hybrid MCDA approaches) can be used in a transparent, documented way.10,22

3.4 Fundamental challenges in MCDA for SSbD

MCDA for SSbD represents a critical methodological frontier in sustainable chemistry. While the JRC SSbD framework (both the 2022 and 2025 versions) provides structured guidance, practical aggregation of diverse criteria still requires sophisticated decision-support methodologies. Conventional fully compensatory MCDA approaches can be problematic for SSbD assessment21,35 because: (1) European regulations employ non-compensatory logic where specific hazard combinations trigger action regardless of favourable properties elsewhere; (2) regulatory thresholds create discrete categories that are poorly captured by purely continuous linear value functions; and (3) the hierarchical structure of SSbD criteria requires careful multi-level aggregation to avoid hidden compensation or instability.

Recent developments address these challenges through innovations in value-function design, aggregation rules, and explicit integration of regulatory thresholds. The MCDA-ACA method demonstrates how discrete value functions and minimum aggregation can align MCDA with REACH Article 57 criteria.14 Lantto's review21 of 21 MCDA applications in chemical alternatives assessment reveals rapid growth, with Multi-Attribute Utility/Value Theory (MAUT/MAVT) the most frequent. Persistent gaps include limited stakeholder engagement, minimal external normalisation to regulatory thresholds, and insufficient attention to non-compensatory aggregation and uncertainty.

Building on this literature and the 2025 framework, we emphasise five fundamental challenges for SSbD-focused MCDA: (1) safety priority: poor safety performance must preclude positive SSbD assessment, which conflicts with naive compensatory additive aggregation; (2) regulatory thresholds: explicit cut-offs and discrete categories (e.g., Substances of Very High Concern (SVHC), PMT/vPvM status) must be respected in the aggregation design; (3) hierarchical structure: multiple lower-level properties combine to determine higher-level classifications (e.g., PBT status), requiring transparent multi-level aggregation; (4) mixed data types: continuous, categorical, and qualitative evidence must be accommodated within a coherent decision model; and (5) uncertainty: substantial data limitations and model uncertainty must be characterised and, where possible, propagated through the decision-support tools. These challenges motivate the hybrid, safety-gate-plus-evaluation architectures and the sensitivity analyses discussed in Sections 4, 5, and 6.

4 Mathematical formulations of MCDA methods

4.1 MCDA-ACA method (London et al. 2024)

The MCDA-ACA method14,35 provides a framework for assessing chemical alternatives that aligns with regulatory principles like REACH Article 57. It uses discrete value functions and a hierarchical aggregation structure with a final minimum aggregation step to prevent the compensation of high-hazard properties by low-hazard ones.
4.1.1 Objective hierarchy. The method uses a two-level objective hierarchy. Lower-level objectives represent individual hazard properties (e.g., persistence (P), bioaccumulation (B), mobility (M), human toxicity (Thu), ecotoxicity (Teco)). These are aggregated into higher-level objectives that represent specific hazard combinations of regulatory concern (e.g., PBT (persistent, bioaccumulative, toxic), vPvB (very persistent, very bioaccumulative)). The final classification of an alternative as “regrettable” or “not regrettable” is determined by the worst-performing higher-level objective.
4.1.2 Discrete value functions. Discrete value functions convert the categorised hazard level of an attribute into a quantitative, non-linear score, creating a convex function.14 The scores are defined as follows:
 
v(x) = {1.0 if the attribute is of low concern; 0.6 if of moderate concern; 0.25 if of high concern; 0.1 if of very high concern} (1)
where v(x) is the value score derived for a specific attribute x, with concern categories defined by regulatory thresholds (e.g., REACH Annex XIII). Fig. 2 illustrates this concept for persistence.

Fig. 2 MCDA-ACA discrete value function illustration. (A) Discrete value function for persistence showing four concern categories (low, moderate, high, very high) with corresponding value scores (1.0, 0.6, 0.25, 0.1) aligned with REACH Annex XIII thresholds. (B) Comparison between discrete function (solid blue line) and a hypothetical continuous linear function (dashed red line), demonstrating how discrete functions prevent inappropriate compensation in the vP (very persistent) region.
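As a minimal illustration, the discrete value function of eqn (1) can be expressed as a lookup table. The category labels and scores below are taken from Fig. 2A; the function name and dictionary representation are our own illustrative choices.

```python
# Discrete value function of the MCDA-ACA method (eqn (1)), sketched as a
# lookup table. Category labels and scores follow Fig. 2A; the Python
# representation itself is illustrative.
DISCRETE_VALUES = {
    "low": 1.0,        # below all regulatory concern thresholds
    "moderate": 0.6,
    "high": 0.25,      # e.g., persistent (P) under REACH Annex XIII
    "very_high": 0.1,  # e.g., very persistent (vP)
}

def value_score(concern_category: str) -> float:
    """Convert a categorised hazard level into the value score v(x)."""
    return DISCRETE_VALUES[concern_category]
```

Because the scores drop sharply between categories (0.6 → 0.25 → 0.1) rather than declining linearly, a single very-high-concern attribute pulls the downstream aggregates towards the 0.17 cut-off, which is the intended non-compensatory behaviour.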
4.1.3 Hierarchical aggregation. Lower-level objectives are combined into higher-level objectives using an arithmetic mean, with scaling factors applied to certain combinations to ensure regulatory alignment. If multiple attributes exist for one objective (e.g., persistence in water, soil, sediment), the value for that objective is determined by the worst-performing attribute (minimum value score):14
 
v(P) = min(v(Pwater), v(Psediment), …) (2)

SPBTeco = (2/3) × [v(P) + v(B) + v(Teco)]/3, SPB = (2/3) × [v(P) + v(B)]/2 (3)
Here, SPBTeco and SPB represent the aggregated scores for the higher-level objectives of PBT (Persistence, Bioaccumulation, Toxicity) and vPvB (very Persistent, very Bioaccumulative), respectively, with the 2/3 scaling factor applied for regulatory alignment. The terms v(P), v(B), and v(Teco) denote the individual value scores for the lower-level objectives of Persistence, Bioaccumulation, and Ecotoxicity, derived using eqn (1). v(Pwater), v(Psediment), etc., represent the attribute-specific value scores for persistence in specific environmental compartments (e.g., half-life in fresh water, half-life in marine sediment); ideally, data for all relevant compartments should be assessed. The min function in eqn (2) indicates that the Persistence score v(P) is defined by the worst-performing attribute (lowest score) among the available data points (e.g., if Pwater is high (0.25) and Psoil is very high (0.1), then v(P) = 0.1).

The scaling factor of 2/3 is explicitly applied to specific higher-level objectives to align the output with REACH Article 57 criteria. This factor ensures that hazard combinations classified as “High” in all categories (average score 0.25) are reduced to a score of 0.167, placing them below the classification threshold of 0.17 and correctly identifying them as “regrettable” substitutes.14
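Under our reading of eqns (2) and (3), the compartment-level minimum rule and the 2/3-scaled arithmetic mean can be sketched as follows; the function names are illustrative, and the scaling default applies only to objectives requiring regulatory alignment.

```python
def persistence_score(compartment_scores):
    """v(P), eqn (2): worst-performing compartment (minimum value score)."""
    return min(compartment_scores)

def higher_level_score(value_scores, scaling=2.0 / 3.0):
    """Scaled arithmetic mean of lower-level value scores, as in eqn (3)
    for objectives to which the 2/3 regulatory-alignment factor applies."""
    return scaling * sum(value_scores) / len(value_scores)

# Worked check from the text: "High" (0.25) in all three PBT categories
# averages to 0.25; the 2/3 factor reduces this to ~0.167, just below the
# 0.17 classification threshold, flagging the combination as regrettable.
v_p = persistence_score([0.25, 0.1])            # water high, soil very high
s_pbt = higher_level_score([0.25, 0.25, 0.25])  # ~0.167
```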

4.1.4 Overall classification. The final score for an alternative (f) is determined by a minimum aggregation of all higher-level objective scores, ensuring that poor performance in any single critical area cannot be compensated:
 
fMCDA-ACA = min(v(GWP), v(ODP), v(Thu), SPB, SPMTeco, …) (4)

classification: “not regrettable” if fMCDA-ACA ≥ 0.17; “regrettable” if fMCDA-ACA < 0.17 (5)
Here, fMCDA-ACA is the final decision score for the chemical alternative (0.1 ≤ f ≤ 1.0). v(GWP), v(ODP), and v(Thu) are the value scores for Global Warming Potential (GWP), Ozone Depletion Potential (ODP), and Human Toxicity, respectively. Sx represents the aggregated scores for composite higher-level objectives (e.g., SPB, SPMTeco) calculated in the previous step. The min operator selects the lowest score among all objectives, representing a conservative “weakest link” approach. The threshold of 0.17 is the cut-off value distinguishing acceptable alternatives from those with Substance of Very High Concern (SVHC) properties.14
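The final minimum aggregation and cut-off of eqns (4) and (5) amount to a few lines of code; this is a sketch with illustrative function names.

```python
def mcda_aca_score(objective_scores):
    """Final score f, eqn (4): minimum over all higher-level objective
    scores (conservative 'weakest link' aggregation)."""
    return min(objective_scores)

def classify(f, threshold=0.17):
    """Eqn (5): alternatives scoring below the cut-off are flagged as
    regrettable (SVHC-like) substitutes."""
    return "regrettable" if f < threshold else "not regrettable"

# A strong climate profile (v(GWP) = 1.0) cannot compensate for a
# PBT-type score of 0.167: the minimum operator propagates the weakest
# objective straight to the final score.
f = mcda_aca_score([1.0, 0.9, 0.167])
```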

4.2 Composite indicator approach (Arias et al. 2024)19

The Composite Indicator—Safe and Sustainable by Design and Circularity (CI-SSbDC)19 integrates safety, sustainability, circularity, and economic feasibility into a unified score that ranges from 0.01 to 1. It is structurally composed of five dimensions: Hazard (HD), Health (HeD), Environmental (ED), Circular (CD), and Economic (EcD). A critical feature is the HD cut-off criterion: if any input chemical is classified as ‘most harmful’ (e.g., CMR 1A/1B), the assessment is stopped and the alternative fails. For alternatives that pass this gate, scores for each dimension are calculated using specific value functions and then aggregated using three possible methods:

(1) Additive mean (CIadd) (full compensation):

CIadd = ∑i wi·Scorei (6)

(2) Geometric mean (CIgeo) (partial compensation):

CIgeo = ∏i Scorei^wi (7)

(3) Harmonic mean (CIhar) (emphasises weak performance):

CIhar = (∑i wi/Scorei)−1 (8)
where n = 5 corresponds to the HD, HeD, ED, CD, and EcD dimensions; wi are user-defined weights (∑iwi = 1); and Scorei are dimension scores in [0.01, 1.0]. This structure implements a hybrid safety approach: a strict cut-off (non-compensatory) followed by a compensatory aggregation for passing alternatives.
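The three aggregation rules of eqns (6)–(8) differ in how far a weak dimension can be offset by strong ones. A small sketch makes the contrast concrete; the weights and scores are made-up illustrative values.

```python
import math

def ci_additive(weights, scores):
    """Eqn (6): weighted arithmetic mean (full compensation)."""
    return sum(w * s for w, s in zip(weights, scores))

def ci_geometric(weights, scores):
    """Eqn (7): weighted geometric mean (partial compensation)."""
    return math.prod(s ** w for w, s in zip(weights, scores))

def ci_harmonic(weights, scores):
    """Eqn (8): weighted harmonic mean (emphasises weak performance)."""
    return 1.0 / sum(w / s for w, s in zip(weights, scores))

# Equal weights over the five dimensions (HD, HeD, ED, CD, EcD); a single
# weak dimension (0.1) is penalised progressively more strongly moving
# from the additive to the geometric to the harmonic mean.
w = [0.2] * 5
s = [0.9, 0.9, 0.9, 0.9, 0.1]
```

With these values the additive mean is 0.74, while the geometric and harmonic means fall substantially lower, illustrating why the choice of aggregation rule matters for how much a safety or hazard weakness can be masked.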

4.3 JRC SSbD framework method and its role in the 2025 revision

Fig. 3 summarises the JRC SSbD framework flow with its quantitative scoring methodology as applied in the 2022–2023 case studies.20 We use this scheme as an illustrative instantiation of the revised framework's holistic safety and environmental sustainability assessment parts. In the 2025 revision, the hazard, production, and use components are explicitly integrated into a single holistic safety assessment and evaluated jointly with environmental and socio-economic dimensions in the evaluation stage.10
Fig. 3 JRC SSbD framework flowchart with quantitative scoring methodology. In the 2022/2023 case studies,20 a four-step assessment (steps 1–4) assigns scores 0–3 for hazard, production, use, and environmental sustainability. An overall safety score is determined by taking the minimum of steps 1–3 and combined with the sustainability level (step 4) using a non-compensatory priority matrix (Fig. 4). In the 2025 revision,10 these safety components are integrated into a holistic safety assessment, which together with environmental and socio-economic assessments feeds into an evaluation and MCDA-supported aggregation stage (own elaboration).
4.3.1 Quantitative safety and environmental scoring in the 2022/2023 case studies. The JRC case studies20 demonstrate a 0–3 scoring system for each assessment “step”, where a minimum score of 2 generally indicates SSbD performance for that aspect. In the revised framework these scores can be understood as components of the holistic safety assessment (hazard plus exposure-based safety across life cycle) and the environmental sustainability assessment, which are then combined (together with socio-economic information where available) in the evaluation stage.10

Step (1) (Hazard Assessment): a score is assigned based on the first criterion that is not passed (H1 = SVHC, H2 = concern, H3 = other), as defined in:8,9,20,28

 
Sstep1 = {0 if the H1 criteria are not passed; 1 if H2 is not passed; 2 if H3 is not passed; 3 if all criteria are passed} (9)
where Sstep1 is the quantitative score for step 1 (Hazard Assessment, 0–3) and H1, H2, H3 are the hazard criteria levels (H1: most harmful/SVHC properties; H2: concern; H3: other/minimal concern). The score is based on the highest level passed; critical hazard classifications such as SVHC, CMR, and ED trigger a score of 0.
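Under our reading of eqn (9), the score is fixed by the most severe criterion level that is not passed; a minimal sketch:

```python
def step1_hazard_score(fails_h1: bool, fails_h2: bool, fails_h3: bool) -> int:
    """Step-1 hazard score (0-3) under our reading of eqn (9): H1 covers
    most-harmful/SVHC-type properties (CMR, ED, ...), H2 'concern', H3
    other/minimal concern; the first failed level fixes the score."""
    if fails_h1:
        return 0   # critical classification: fails the safety gate outright
    if fails_h2:
        return 1
    if fails_h3:
        return 2
    return 3       # all hazard criteria passed
```

Note that the first branch encodes the non-compensatory character of the gate: an SVHC-type classification yields a score of 0 irrespective of performance anywhere else.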

Step (2) (Production & Processing Safety): Risk Characterization Ratio (RCR)-based scoring is applied to each Contributing Scenario (CS) within different life cycle stages (e.g., manufacturing, formulation). The score for a CS is determined as follows:

 
SCS = {0 if RCRtotal > 1; 1, 2, or 3 for progressively lower RCRtotal, with band thresholds as defined in the JRC case study} (10)
Here, SCS is the score (0–3) for a specific CS during the production and processing phases. The RCR is the ratio of the estimated exposure to the corresponding no-effect level (DNEL for human health; PNEC for the environment). The total RCR (RCRtotal) is the sum of the individual RCRs across all exposure pathways (e.g., dermal, inhalation) within a single CS. Levelstep2 is the overall aggregated score for step 2, derived by averaging the SCS scores across all relevant life cycle stages; it is then used in the overall safety score calculation.

Step (3) (Use Phase Safety): scoring based on consumer exposure RCR:

 
Sstep3 = {0 if the consumer-exposure RCR > 1; 1, 2, or 3 for progressively lower RCR values} (11)
where Sstep3 is the quantitative score for step 3 (Use Phase Safety, 0–3) and the RCR is calculated specifically for consumer exposure during the use phase.

Step (4) (Environmental Sustainability): LCA-based scoring for 16 impact categories, grouped into four Environmental Sub-dimensions (S1–S4). An average score is calculated for each group (SES,i). An overall step 4 Level (Levelstep4) is determined using a conditional aggregation rule, which checks if all group averages meet a minimum threshold (e.g., ≥0.6) before rounding the overall average, as shown in the JRC case study:20

 
Levelstep4 = round((1/4)∑i SES,i) if SES,i ≥ 0.6 for all i; otherwise min(round((1/4)∑i SES,i), 2) (12)
Here, Levelstep4 is the overall score/level (0–3) for step 4 (Environmental Sustainability), i = 1 to 4 indexes the Environmental Sub-dimension groups (S1 to S4), and SES,i is the average score for group i. The round(x) function rounds the calculated average to the nearest integer level (0, 1, 2, or 3), and the min(x, 2) cap ensures that Levelstep4 cannot exceed 2 when the conditional aggregation rule (all SES,i ≥ 0.6) is not met.
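The conditional aggregation of eqn (12) can be sketched as follows; treating the sub-dimension averages SES,i as lying on the same 0–3 scale as the final level is our assumption.

```python
def level_step4(sub_scores):
    """Overall step-4 level (0-3) per eqn (12). sub_scores holds the four
    sub-dimension averages S_ES,i; if any falls below the 0.6 threshold,
    the rounded mean is capped at 2, so the top level requires balanced
    performance across all groups. The 0-3 scale for sub_scores is an
    assumption on our part."""
    mean = sum(sub_scores) / len(sub_scores)
    level = round(mean)
    if not all(s >= 0.6 for s in sub_scores):
        level = min(level, 2)   # conditional cap from eqn (12)
    return max(0, min(level, 3))
```

The cap is what prevents three excellent sub-dimensions from lifting the overall level to 3 while a fourth sub-dimension remains poor.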

Overall SSbD aggregation: a two-stage non-compensatory process determines the final score:

 
Ssafety = min(Sstep1, Levelstep2, Sstep3) (13)

Then, the overall SSbD score (SSSbD) is determined by combining Ssafety and Levelstep4 using the JRC SSbD Priority Matrix (Fig. 4). This matrix enforces the safety-first principle: if Ssafety = 0, then SSSbD = 0, regardless of the sustainability score. Ssafety is the overall aggregated Safety Score (0–3) for the alternative; Sstep1, Levelstep2, and Sstep3 are the scores derived from steps 1, 2, and 3, respectively. The min function in eqn (13) enforces a non-compensatory logic in which the overall Safety Score is determined by the lowest of the three safety-step scores.


Fig. 4 JRC SSbD priority matrix heatmap.20 The matrix determines the overall SSbD level (0–3) based on the aggregated safety score (vertical axis, calculated as min(Sstep1, Levelstep2, Sstep3)) and the aggregated environmental sustainability level (horizontal axis, Levelstep4). The matrix enforces a safety-first logic: if Ssafety = 0, the SSbD level is 0 regardless of sustainability performance (own elaboration).
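A minimal sketch of the two-stage aggregation (eqn (13) followed by the priority matrix of Fig. 4). Only the safety-first rule (safety 0 → SSbD 0) is stated explicitly in the text; the capped-minimum fallback below is a hypothetical stand-in for the remaining matrix cells, not the published matrix.

```python
def safety_score(s_step1, level_step2, s_step3):
    """Eqn (13): non-compensatory minimum over the three safety steps."""
    return min(s_step1, level_step2, s_step3)

def ssbd_level(safety, sustainability):
    """Combine safety and sustainability levels (0-3). The safety-first
    rule is from Fig. 4; treating the other cells as a capped minimum is
    our illustrative assumption."""
    if safety == 0:
        return 0  # high sustainability cannot rescue an unsafe option
    return min(safety, sustainability)

# An alternative failing any single safety step scores 0 overall, even
# with the best possible sustainability level:
overall = ssbd_level(safety_score(0, 3, 3), 3)
```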
4.3.2 Relation to the 2025 SSbD framework. In the 2025 revision,10 the individual step scores used in the plasticizer case study are conceptually integrated into a holistic safety assessment. Rather than treating hazard, production, and use as separate steps followed by a fixed priority matrix, the revised framework emphasises: (i) explicit documentation of indicators, reference scales, and uncertainties for safety, environmental, and socio-economic dimensions; (ii) the use of dashboards to visualise trade-offs across these dimensions; and (iii) the possible application of MCDA methods (including non-compensatory, hybrid, and outranking approaches) to support aggregation and decision-making.10,22 Our analysis therefore interprets the JRC case-study scoring both as an example of a safety-first, non-compensatory aggregation and as a specific instantiation of a broader, more flexible evaluation stage in the revised framework.

4.4 Integrated risk assessment–LCA methods

Other hybrid approaches explicitly link risk assessment (RA) and LCA within a stage-gate innovation model.36–38 These methods often employ a sequential, non-compensatory logic: an overall assessment proceeds only if risk criteria (e.g., risk characterisation ratio, RCR ≤1) are met. If this safety gate is passed, sustainability (LCA) and socio-economic (SEA) criteria are then evaluated.36 This structure is conceptually similar to the JRC framework and remains consistent with the 2025 SSbD revision, which treats a robust, holistic safety assessment as a prerequisite for subsequent sustainability and socio-economic evaluation.

4.5 Decision support system (DSS) architectures

DSS platforms, such as the HARMLESS SSbD-DSS,17 SUNSHINE,38 and SAbyNA,25 operationalise these hybrid approaches for practitioners. The HARMLESS SSbD-DSS17 platform follows a flexible stage-gate model (e.g., Ideation, Lab, Pilot) and implements parallel assessment of SSbD dimensions (intrinsic safety, human/environmental safety, sustainability, performance). Early stages use qualitative tools (e.g., Advanced Material Earliest Assessment (AMEA), warning flags, design advice, screening priorities (WASP)) to provide early warnings and design advice. Later stages (e.g., Lab phase) use quantitative tools (e.g., Alternative SSbD Design Inspector (ASDI)) that present data in a visual, non-aggregated “heatmap” matrix. This approach intentionally avoids a single aggregated score, leaving the balancing of trade-offs (e.g., between safety and performance) to the expert user.17 In the context of the revised SSbD framework, such DSS architectures provide a practical route for implementing holistic safety assessment and the evaluation/documentation stage across different innovation phases. Specifically, the HARMLESS platform aligns directly with the revised SSbD framework by dynamically adapting its data requirements and integrating New Approach Methodologies (NAMs) tailored to each innovation stage, ensuring a transparent, multi-dimensional evaluation of safety and sustainability trade-offs.

The SUNSHINE e-infrastructure platform operationalises the EC-JRC SSbD framework for advanced (nano)materials.38 Methodologically, it requires a rigorous system definition (specifying the material, targeted functionality, and benchmark) and clear system boundaries to track life-cycle flows. Evaluation across Technology Readiness Levels (TRLs) follows a two-tier process that balances safety, sustainability, and technical performance. Crucially, it deliberately avoids calculating a single, aggregated final score. Instead, Tier 1 uses a screening-level scoring system to calculate individual index scores for hazard, exposure, functionality, and environmental impacts, flagging specific “hotspots of concern.” Identified hotspots then trigger Tier 2, requiring deep, quantitative evaluation via advanced risk assessment models alongside Life Cycle Assessment (LCA), Life Cycle Costing (LCC), and Social LCA (S-LCA). By presenting these metrics as a multi-dimensional profile, SUNSHINE empowers experts to transparently resolve complex trade-offs between a material's safety, environmental footprint, and technical viability throughout product development. This tiered, multi-criteria approach directly aligns with the 2025 revised JRC framework by operationalising its newly formalised evaluation stage, ensuring that trade-offs across holistic safety, environmental, and socio-economic impacts are transparently managed rather than obscured by a single aggregated score.

Similarly, the SAbyNA guidance platform offers an integrative, web-based tool specifically tailored to support industry in applying SSbD concepts to nanomaterials, nano-enabled products, and related processes.25 Operating from the early stages of product development, SAbyNA combines informative modules, which guide users of varying expertise in selecting appropriate hazard and exposure assessment tools, with active assessment modules that evaluate safety, environmental sustainability, and costs across the product's life cycle. Rather than generating a single aggregated score, the platform screens initial life-cycle inputs to flag potential risks and provides targeted Safe-by-Design (SbD) recommendations, such as modifying material dimensions or adjusting process parameters to reduce emissions. By explicitly quantifying and visualising the interplay between safety improvements, cost, and functionality, the SAbyNA platform aligns seamlessly with the revised SSbD framework's mandate to proactively manage multi-dimensional trade-offs throughout the innovation process.

5 Computational results: plasticizer case study

5.1 Case-study description and input data

A complete assessment was conducted for the JRC plasticizer case study,20 evaluating alternatives to di(2-ethylhexyl) phthalate (DEHP). DEHP is a restricted substance (REACH Annex XIV) due to its classification as Reproductive Toxicity Category 1B and its endocrine-disrupting properties. The assessment compared DEHP with five alternatives: acetyl tributyl citrate (ATBC), di(2-ethylhexyl) adipate (DEHA), di(2-ethylhexyl) terephthalate (DEHT), diisononyl cyclohexane-1,2-dicarboxylate (DINCH), and epoxidised soybean oil (ESBO). Input data for hazard properties are summarised in Table 1.
Table 1 Input data for plasticizer assessment.20
Property DEHP ATBC DEHA DEHT DINCH ESBO
Regulatory hazard properties
CMR classification Rep. Tox 1B None None None None None
Endocrine disruptor Yes (HH + ENV) No No No No No
Molecular weight (g mol−1) 390.57 402.48 370.57 390.56 424.67 975.41
Representative hazard properties (for MCDA-ACA calculation)
Persistence (days, water) 45 15 25 35 22 10
Bioconcentration factor (L kg−1) 2700 250 380 650 890 50
NOEC (mg L−1, chronic) 0.025 0.35 0.18 0.12 0.12 0.45


5.2 Method 1: MCDA-ACA results for plasticizers

The MCDA-ACA method correctly classifies DEHP as ‘Regrettable’ (Table 2 and Fig. 5A). Its final score of 0.067 is driven by its ‘very high’ human toxicity (Rep. Tox 1B), which yields a scaled Thu score of 2/3 × 0.1 = 0.067. This score is the minimum among all higher-level objectives (e.g., SPBTeco = 0.167) and falls below the 0.17 threshold.
Table 2 MCDA-ACA assessment results for plasticizers
Substance P score B score Teco score Thu score Overall (f) Classification
v(P) v(B) v(Teco) v(Thu) min of higher
DEHP 0.25 0.25 0.25 0.10 0.067 Regrettable
ATBC 1.00 1.00 1.00 1.00 0.667 Not regrettable
DEHA 0.60 1.00 1.00 1.00 0.578 Not regrettable
DEHT 0.60 0.60 1.00 1.00 0.489 Not regrettable
DINCH 0.60 0.60 1.00 1.00 0.489 Not regrettable
ESBO 1.00 1.00 1.00 1.00 0.667 Not regrettable



Fig. 5 Comparative assessment results for six plasticizers. (A) MCDA-ACA scores show DEHP as ‘Regrettable’ (0.067), with alternatives ‘Not Regrettable’ (0.489–0.667). (B) JRC framework step scores/levels based on the JRC 131878 case study.20 (C) JRC overall SSbD levels, highlighting DINCH as Level 3. S & S: safe & sustainable.

All five alternatives are classified as ‘Not Regrettable’. ATBC and ESBO achieve the highest scores (0.667), reflecting their ‘low’ persistence, bioaccumulation, and ecotoxicity scores (1.0 each), with the overall score limited by the scaled Thu score (2/3 × 1.0 = 0.667). DEHA forms an intermediate tier with a score of 0.578: its ‘moderate’ persistence (v(P) = 0.6) combined with ‘low’ bioaccumulation (BCF = 380 L kg−1 < 500 L kg−1 threshold, v(B) = 1.0) gives SPBTeco = 2/3 × (0.6 + 1.0 + 1.0)/3 = 0.578, which is the binding minimum. DEHT and DINCH attain the lowest scores among the passing alternatives (0.489), because both their persistence and bioaccumulation are ‘moderate’ (BCF of 650 and 890 L kg−1 respectively, each exceeding the 500 L kg−1 threshold, v(B) = 0.6), giving SPBTeco = 2/3 × (0.6 + 0.6 + 1.0)/3 = 0.489. The method successfully acts as a safety gate, classifying DEHP as unacceptable, while providing a three-tier hazard-based ranking for the passing alternatives (ATBC/ESBO > DEHA > DEHT/DINCH).
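The scores in Table 2 follow directly from the discrete value-function scores and the minimum aggregation. The sketch below reproduces them; the explicit 2/3 scaling factor and the 0.17 threshold follow the worked numbers above, while the function name and structure are our own formulation, not an official implementation:

```python
F_CRIT = 0.17  # classification threshold from the text

def mcda_aca_score(v_p, v_b, v_teco, v_thu):
    """Overall score f = minimum over the higher-level objectives:
    the scaled eco branch S_PBTeco = 2/3 * mean(v(P), v(B), v(Teco))
    and the scaled human-toxicity score S_Thu = 2/3 * v(Thu)."""
    s_pbt_eco = (2 / 3) * (v_p + v_b + v_teco) / 3
    s_thu = (2 / 3) * v_thu
    return min(s_pbt_eco, s_thu)

# (v(P), v(B), v(Teco), v(Thu)) as given in Table 2
substances = {
    "DEHP": (0.25, 0.25, 0.25, 0.10),
    "ATBC": (1.00, 1.00, 1.00, 1.00),
    "DEHA": (0.60, 1.00, 1.00, 1.00),
    "DEHT": (0.60, 0.60, 1.00, 1.00),
}

for name, scores in substances.items():
    f = mcda_aca_score(*scores)
    label = "Regrettable" if f < F_CRIT else "Not regrettable"
    print(f"{name}: f = {f:.3f} -> {label}")
```

Running this reproduces the DEHP score of 0.067 (Regrettable) and the 0.667/0.578/0.489 tiers for the passing alternatives.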

5.3 Method 2: JRC framework quantitative results for plasticizers

The JRC framework's quantitative assessment (Fig. 5B and C) clearly differentiates the plasticizers. DEHP achieves SSbD Level 0 (not safe and sustainable). Its hazard score is 0 due to failing H1 criteria (Rep. Tox 1B, ED), which in the original case-study implementation immediately drives the safety score to 0 through the minimum aggregation and priority matrix. This is consistent with the non-compensatory safety-gate logic explicitly endorsed in the revised framework.10

ATBC and DEHA attain SSbD Level 1 (conditional acceptance). They show good average safety scores (2.90 and 3.00 respectively) but are limited by low average environmental sustainability scores (0.38 and 0.48), indicating poor overall performance in the LCA step relative to the reference. DEHT and ESBO reach SSbD Level 2 (safe and sustainable). Both have excellent average safety scores (3.00) and moderate environmental sustainability scores (1.06 and 0.79). The report notes significant trade-offs for ESBO.20 DINCH achieves the highest SSbD Level 3 (highly safe and sustainable), with a perfect average safety score (3.00) and the best average environmental sustainability score (1.50). This demonstrates the framework's hybrid approach: it first filters out DEHP on safety, then differentiates the remaining safe alternatives based on their integrated sustainability performance, identifying DINCH as the optimal choice.

5.4 Method 3: composite indicator framework for plasticizers

The CI-SSbDC method includes a mandatory hazard cut-off based on HD classification.19 DEHP, classified as Reproductive Toxicity Category 1B, falls under the ‘most harmful substance’ category. Therefore, DEHP fails the initial safety screen and cannot be assigned an overall CI-SSbDC score or ranked against the alternatives. All five alternatives (ATBC, DEHA, DEHT, DINCH, ESBO) pass the HD cut-off. However, calculating definitive CI-SSbDC scores requires complete input data across all five dimensions (HD, HeD, ED, CD, EcD), which are not fully available for the JRC plasticizer case (particularly circularity and economic inputs).

To avoid over-interpreting incomplete data and to respond to concerns about uneven comparison standards, we do not use CI-SSbDC to generate quantitative rankings in the head-to-head comparison with MCDA-ACA and the JRC framework. Instead, we treat CI-SSbDC as a conceptual example of a hybrid safety-gate-plus-composite-indicator approach.

5.5 Cross-method synthesis: plasticizer assessment

All three methods correctly identify DEHP as unacceptable, but through different (though related) non-compensatory mechanisms: MCDA-ACA assigns a ‘Regrettable’ classification (score 0.067) based on Thu; the JRC method assigns SSbD Level 0 based on failing hazard criteria in the safety gate; CI-SSbDC disqualifies it via the mandatory ‘most harmful substance’ cut-off. Among the alternatives, rankings diverge based on methodological focus. MCDA-ACA ranks ATBC and ESBO highest based purely on hazard profiles. The JRC framework identifies DINCH as the optimal choice (Level 3) due to its combination of high safety and superior, balanced environmental sustainability, differentiating it from other safe-but-less-sustainable alternatives (Levels 1 and 2).

CI-SSbDC, if fully parameterised, would further differentiate based on circularity and economic dimensions, but its post-gate compensatory aggregation could allow alternatives with strong circularity or economic performance but middling environmental scores to outrank more environmentally balanced options, highlighting the importance of clearly defining the decision context for compensation.

5.6 Sensitivity and uncertainty analysis

To move beyond purely deterministic results and address the recognised importance of robustness analysis,21,22,39 we implemented a comprehensive sensitivity and uncertainty analysis for both the MCDA-ACA method and the JRC framework. Results are summarised in Fig. 6 and full details are provided in ESI S2 (Tables S2–S3 and Fig. S2–S4).
Fig. 6 Sensitivity and uncertainty analysis for all six substances. (A) MCDA-ACA: P(preferred) = probability each substance is the jointly best-ranked alternative under lognormal QSAR uncertainty (GSD = 1.5; 3000 Monte Carlo simulations); DEHP has P(regrettable) = 100%. (B) JRC SSbD framework: achievable final SSbD level range per step under ±50% input-parameter perturbation; bars span [min, max] achievable SSbD level; dots mark baseline; DEHP is locked at Level 0 by step 1 hazard classification regardless of other inputs; step 4 (environmental sustainability) is the only step that can shift the final SSbD level for all alternatives; step 3 additionally matters for DINCH. Full details in SI S2.
5.6.1 MCDA-ACA. Varying the classification threshold fcrit between 0.15 and 0.20, and switching between the safety-first (minimum) and equal-weight (mean) aggregation, left the rank order fully invariant: DEHP remained Regrettable and all five alternatives remained Not Regrettable with identical ranks at every threshold and under both aggregation schemes. Propagating QSAR uncertainty in BCF, NOEC, and persistence by sampling from lognormal distributions (GSD = 1.5) in 3000 Monte Carlo simulations further confirmed robustness: DEHP had P(Regrettable) = 100% and P(preferred) = 0% in all runs. Among the alternatives (Fig. 6A), ESBO had the highest P(preferred) = 92% (BCF of 50 L kg−1 and persistence of 10 days, both far from any threshold), followed by ATBC at 60%, DEHA at 14%, DINCH at 2%, and DEHT at 1%. MCDA-ACA classifications are therefore robust to QSAR parameter uncertainty.
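The lognormal sampling scheme used here can be illustrated with a minimal sketch: each input (BCF is used as the example) is drawn around its nominal value with geometric standard deviation GSD = 1.5, and the fraction of draws crossing a classification threshold is counted. The helper names are ours; the 500 L kg−1 value is the v(B) cut-off used in the text:

```python
import math
import random

def sample_lognormal(median, gsd, rng):
    """Draw from a lognormal distribution parameterised by its median
    and geometric standard deviation (GSD)."""
    return median * math.exp(rng.gauss(0.0, math.log(gsd)))

def p_crosses_threshold(median, threshold, gsd=1.5, n=3000, seed=42):
    """Monte Carlo estimate of the probability that a sampled value
    exceeds a fixed classification threshold."""
    rng = random.Random(seed)
    crossings = sum(
        sample_lognormal(median, gsd, rng) > threshold for _ in range(n)
    )
    return crossings / n

# ESBO's BCF (50 L/kg) is far below the 500 L/kg threshold, so its
# classification is essentially never perturbed; DEHT's (650 L/kg)
# sits just above the threshold and drops below it in many draws.
print(p_crosses_threshold(50, 500))   # ~0 for ESBO
print(p_crosses_threshold(650, 500))  # roughly 0.74 for DEHT
```

This makes concrete why substances whose inputs sit far from any threshold (ESBO, ATBC) are robust, while near-threshold substances (DEHT) show classification sensitivity.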
5.6.2 JRC SSbD framework. Perturbing each step score by ±1 unit yielded 81 scenarios per substance; most alternatives showed possible level shifts, motivating continuous input sensitivity analysis. Varying the use-phase RCR shows that most alternatives are stable within ±50%, but the choice of food-contact scenario is critical: under worst-case measured migration (mussels-in-oil, JRC131878 (ref. 20)), ATBC (RCR = 2.25) and ESBO (RCR = 24.4) drop to Level 0 while DEHT and DINCH remain stable. Propagating individual step-score variation through the JRC priority matrix (Fig. 6B and SI S2.2) reveals that DEHP is irreversibly locked at Level 0 by its hazard classification: because the step 1 score is 0, the safety score min(S1, S2, S3) = 0 regardless of any other input.

For all five alternatives, step 4 (environmental sustainability, LCA) is the only step that can change the final SSbD level within realistic ±50% input perturbation, upward for DEHT, ATBC, and DEHA, and downward for ESBO, while step 3 additionally matters for DINCH. Steps 1 and 2 are fully robust for all alternatives under input perturbation.
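The ±1 step-score scenario enumeration described above can be sketched as follows. The final-level function here is a simplified stand-in for the JRC priority-matrix lookup (minimum-based safety gate capped by the sustainability level), used only to illustrate the 3^4 = 81 scenario structure; the baseline scores in the example are hypothetical:

```python
from itertools import product

def final_level(s1, l2, s3, l4):
    # Stand-in aggregation: non-compensatory safety gate via min,
    # then capped by the sustainability level (illustrative only,
    # not the official JRC matrix).
    safety = min(s1, l2, s3)
    return 0 if safety == 0 else min(safety, l4)

def achievable_levels(baseline):
    """Perturb each of the four step scores by -1, 0, or +1 (clamped
    to the 0-3 scale), giving 3**4 = 81 scenarios, and return the
    [min, max] range of achievable final levels."""
    clamp = lambda x: max(0, min(3, x))
    levels = set()
    for deltas in product((-1, 0, 1), repeat=4):
        s1, l2, s3, l4 = (clamp(b + d) for b, d in zip(baseline, deltas))
        levels.add(final_level(s1, l2, s3, l4))
    return min(levels), max(levels)

# A hypothetical alternative with baseline step scores (3, 3, 3, 2):
print(achievable_levels((3, 3, 3, 2)))  # -> (1, 3)
```

The resulting [min, max] range per substance is the quantity plotted as bars in Fig. 6B.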

Propagating QSAR uncertainty (BCF, NOEC, T1/2; GSD = 1.5) into JRC step 1 hazard criteria and step 3 via NOEC → PNEC → RCR rescaling confirms that the H1 criterion is unconditionally robust for all substances (P = 0%), while H2 shows modest sensitivity for DEHT (P(H2 changes) = 9%, BCF and NOEC both near combined CLP thresholds), which under joint QSAR + LCA uncertainty shifts P(preferred) from DEHT (58%) to DINCH (97.5%; Fig. S4), reinforcing DINCH as the most robust JRC performer. Physico-chemical QSAR uncertainty (vapor pressure, log Kow, DNEL) leaves final SSbD level assignments unchanged: vapor pressure uncertainty is negligible for step 2 (inhalation RCR <0.01% of threshold), and DNEL uncertainty is the dominant physico-chemical pathway but does not alter SSbD level distributions relative to direct RCR perturbation.

We did not conduct full probabilistic uncertainty propagation or global sensitivity analysis, which would require more detailed input distributions and a larger computational effort.39 Nevertheless, the simple scenario analyses presented here demonstrate how MCDA-based SSbD assessments can be stress-tested and reported transparently, responding to calls for more decision-grade robustness analysis in this field.21,22

6 Discussion and recommendations

6.1 Methodological comparison and selection guidelines

Table 3 summarises the comparative evaluation of the methods. MCDA-ACA's primary strength is its proven regulatory alignment. It was specifically designed to address the failure of traditional fully compensatory MCDA methods (which showed poor alignment with REACH criteria35) and achieved high agreement (97–100%) with REACH Article 57 on test sets.14 Its use of discrete value functions and a final minimum aggregation step rigorously prevent compensation. Its relatively low data requirement makes it suitable for rapid regulatory hazard screening.21 However, its focus is narrow (hazard only), and its binary classification, while excellent for screening, provides less differentiation among “not regrettable” alternatives.
Table 3 Comparative evaluation of SSbD assessment methods
Criterion MCDA-ACA JRC framework (quantitative case-study implementation) Composite indicator (CI-SSbDC)
Regulatory alignment High (explicitly calibrated to REACH Article 57; high accuracy on test sets) Very good (uses REACH/CLP criteria and exposure-based RCRs; embedded in official SSbD framework10) Moderate (incorporates hazard cut-off; post-gate aggregation is more generic and value-dependent)
Non-compensation/safety priority Strong (minimum aggregation; no trade-offs across critical hazard combinations) Strong (safety gate and priority matrix; in revised framework, safety remains a precondition for overall SSbD performance) Hybrid (strict hazard cut-off, but full/partial compensation across dimensions after the gate)
Sustainability integration Low (hazard-focused) High (LCA-based environmental sustainability fully integrated; socio-economic assessment added in 2025 revision) High (environmental, circularity, and economic pillars integrated)
Differentiation among safe options Moderate (single score f for non-regrettable alternatives) High (multi-step scores and final SSbD levels with clear interpretation) High (0.01–1.0 composite score; fine-grained ranking possible)
Transparency Excellent (explicit value functions and aggregation) Good (transparent rules; some complexity in conditional aggregation and matrix lookup) Good (value functions and aggregation documented; choice of function shapes involves expert judgement)
Data requirements Low (hazard properties) High (hazard, exposure, LCA, and in revised framework also socio-economic indicators) High (hazard, LCA, circularity, and economic data)
Computational complexity Low (spreadsheet implementable) Moderate (multi-step calculations with conditional rules) Moderate (requires implementing multiple value functions and aggregation options)
Validation Tested on hypothetical and real substance sets Applied to multiple JRC case studies and refined in 2025 revision Applied to a bio-based case study; broader validation still emerging
Sector adaptability Good (thresholds adjustable to sector/regulatory context) Good (framework structure adaptable; indicators and benchmarks can be tailored) Good (flexible indicator sets; value functions adjustable)
Stakeholder acceptance Potentially high (regulatory alignment) High (official EC framework with extensive stakeholder input) Emerging (appealing for communication; concerns about compensation in some contexts)


The JRC framework offers the most comprehensive, policy-aligned approach. Its hybrid structure, combining a non-compensatory safety gate (hazard and exposure-based safety) with detailed environmental sustainability assessment and, in the 2025 revision, an expanded socio-economic dimension and evaluation stage, balances regulatory rigour with sustainability differentiation. The plasticizer case study demonstrates this strength: DEHP is eliminated, while the remaining safe alternatives are clearly differentiated based on their sustainability profiles. The primary limitation is the high data and resource requirement.17,21

Composite indicator approaches like CI-SSbDC are advantageous for integrating multiple dimensions (including circularity and economics) into a single score for communication. The inclusion of a mandatory hazard cut-off is a crucial hybrid feature. However, the post-gate compensatory aggregation (additive, geometric, harmonic means) can, in some decision contexts, lead to rankings that contradict a strict safety-first perspective, particularly if weights heavily favour circularity or economic dimensions once the hazard cut-off is passed.19 This does not make CI-SSbDC “wrong” but underlines the importance of clarifying the decision context and acceptable trade-offs: fully compensatory methods are inappropriate as primary safety gates in regulatory contexts, but they can be legitimate in post-gate optimisation or innovation portfolio analysis where all candidates have passed stringent safety criteria.

6.2 Aggregation strategies: scope conditions for compensation

The fundamental tension in SSbD-MCDA lies in the aggregation method.22 Fully compensatory approaches, common in traditional MCDA (e.g., MAUT/MAVT, which dominate many CAA applications21) and in composite indicators post-gate,19 allow trade-offs in which high sustainability or circularity scores can compensate for safety deficits. This risks “regrettable substitutions” and conflicts with the non-compensatory principle embedded in chemical safety legislation such as REACH.35 Conversely, strict non-compensatory aggregation8,14 aligns closely with regulatory logic by treating safety as a non-negotiable threshold, but on its own may fail to differentiate between alternatives that all pass minimum safety requirements.22

Hybrid approaches17,19,20 therefore offer the most defensible solution by integrating both philosophies. These models apply strict non-compensation as a first-tier “safety gate”, eliminating alternatives that fail critical safety thresholds (e.g., JRC Step 1 hazard criteria, HARMLESS early-warning flags,17 CI-SSbDC “most harmful” cut-off19). Alternatives passing this gate are then evaluated using compensatory or priority-based rules, allowing optimisation within a set of comparably safe options. The JRC framework (hazard cut-off plus safety-priority matrix) exemplifies this hybrid logic, and the revised SSbD framework explicitly embeds such safety-gate-plus-evaluation structures.10
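The contrast between these aggregation philosophies can be made concrete with a toy example (all substances, scores, and weights are invented for illustration; scores are on a 0–1 scale with 1 = best):

```python
def weighted_sum(scores, weights):
    """Fully compensatory aggregation: any deficit can be offset."""
    return sum(s * w for s, w in zip(scores, weights))

def safety_gated(scores, weights, safety_idx=0, gate=0.2):
    """Hybrid aggregation: a non-compensatory safety gate first,
    compensation only among alternatives that pass it."""
    if scores[safety_idx] < gate:
        return 0.0
    return weighted_sum(scores, weights)

#               safety  sustainability  cost
hazardous = (0.10, 1.00, 1.00)   # fails safety, excels elsewhere
benign    = (0.90, 0.55, 0.55)   # safe, middling elsewhere
w = (1 / 3, 1 / 3, 1 / 3)

# The compensatory ranking rewards the hazardous option (a
# "regrettable substitution"):
print(weighted_sum(hazardous, w) > weighted_sum(benign, w))  # True
# The safety gate removes it before any trade-off is considered:
print(safety_gated(hazardous, w), safety_gated(benign, w))
```

The gate threshold (0.2 here) is arbitrary in this sketch; in practice it must be calibrated to regulatory criteria, as discussed below.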

From this analysis, four fundamental principles for aggregation design emerge:

(1) Hierarchical structure: hybrid models should combine a non-compensatory safety gate with a subsequent, potentially compensatory evaluation stage;14,22

(2) Threshold calibration: safety thresholds must be validated against authoritative regulatory sources (e.g., SVHC Candidate Lists), and external, regulatory-based normalisation is critical for stability and interpretability;14,21

(3) Transparency: value functions, aggregation rules, and weights must be clearly documented and, where possible, implemented in open-source tools;17,21 and

(4) Practical applicability: tiered approaches aligned with innovation stages and accessible tools are essential to reduce adoption barriers for small and medium-sized enterprises (SMEs).17

Our results therefore support that the fully compensatory MCDA and composite indicator methods are fundamentally unsuitable as primary safety gates in SSbD and regulatory decision contexts, but they can play a constructive role in post-gate optimisation and communication among alternatives that have already satisfied strict safety criteria. Hybrid architectures, such as MCDA-ACA plus the JRC framework or MCDA-ACA plus CI-SSbDC, naturally implement this logic and are fully consistent with the 2025 SSbD framework's emphasis on staged evaluation and documentation.10

6.3 Decision-support platforms and method selection guidelines

Decision Support System (DSS) platforms17,25,38 operationalise these concepts for innovators. Their strength lies in tiered workflows aligned with innovation stages (Ideation, Lab, Pilot), managing data needs by starting with qualitative warning tools or screening-level indices (e.g., the HARMLESS “WASP” tool17 and SUNSHINE's Tier 1 “hotspots of concern”38) and moving to quantitative evaluations (e.g., SAbyNA's detailed assessment matrices25 and SUNSHINE's deep-dive LCA/LCC integration38). By intentionally avoiding a single aggregated score, these platforms force users to transparently evaluate trade-offs rather than hiding them in a single index, which is consistent with the 2025 framework's emphasis on dashboards and explicit evaluation.

Based on our comparative analysis and the JRC case studies, method selection guidelines are therefore clear:

(1). For regulatory hazard screening (e.g., REACH compliance), MCDA-ACA14 is optimal due to its validated accuracy and strictly non-compensatory logic.

(2). For comprehensive innovation evaluation, hybrid approaches are most effective. The JRC quantitative framework10,20 is the official and most complete model; DSS platforms such as HARMLESS,17 SUNSHINE,38 or SAbyNA25 provide a practical, tiered workflow for R&D.

(3). For early-stage screening, tiered approaches17,25,38 with simplified data (e.g., QSAR, screening LCA)40 are most practical.

(4). A robust hybrid two-stage approach is recommended: first, apply a non-compensatory safety gate (MCDA-ACA or a JRC-based holistic safety assessment) to eliminate unacceptable alternatives (such as DEHP). Second, apply a comprehensive framework (e.g., the full JRC assessment or equivalent DSS workflow) to differentiate and rank the remaining viable options (e.g., distinguishing DINCH (high SSbD level) from ATBC/DEHA (lower levels) and DEHT/ESBO (intermediate levels)). This differentiation includes visible trade-offs, such as the minor climate-impact increases for DINCH (+3.7%) and ATBC (+7.7%) relative to DEHP, based on JRC data.20

6.4 Practical guidance for practitioners: stage-method mapping

To provide more operational guidance, Table 4 maps innovation stages to minimum data requirements, recommended methods, and indicative stop/go logic. The table synthesises insights from the reviewed methods, the JRC frameworks,8,10,20 and recent DSS platform implementation studies.17,25,38,41
Table 4 Illustrative guidance for selecting SSbD assessment methods by innovation stage
Innovation stage Minimum data requirements Recommended methods/tools Indicative stop/go logic
Ideation/early R&D (low TRL) Basic structural information; qualitative hazard flags; indicative use scenarios; screening LCA data or proxies Qualitative SSbD scoping and scenario analysis;10 early-warning and screening tools (e.g., WASP, SUNSHINE tier 1 hotspots);17,38 MCDA-ACA with QSAR-based hazard estimates14,40 Stop: if MCDA-ACA or equivalent safety gate indicates ‘regrettable’/SVHC-like profile. Go with caution: if hazard borderline and data gaps large; prioritise data generation. Go: if clearly ‘not regrettable’ and early sustainability signals positive
Lab/pilot (mid TRL) Experimentally supported hazard data for key endpoints; preliminary exposure scenarios; screening or cradle-to-gate LCA; preliminary socio-economic context MCDA-ACA or equivalent for refined hazard screening; partial implementation of JRC safety and environmental sustainability assessments; DSS platforms (e.g., HARMLESS, SUNSHINE, SAbyNA) with hotspot flagging and heatmap visualisation;17,25,38 exploratory CI-SSbDC or similar composite indicators (post-gate) Stop: if refined safety gate still fails or new endpoints reveal SVHC/PMT/vPvM concerns. Re-design: if trade-offs show severe sustainability burdens needing design changes. Go: if holistic safety acceptable and environmental performance comparable or better than reference
Demonstration/deployment (high TRL) Comprehensive hazard and exposure datasets; full LCA model (cradle-to-grave); socio-economic indicators (costs, criticality, social risk) Full JRC SSbD framework implementation;10 advanced tier 2 assessments (full LCA, LCC, S-LCA);38 MCDA-supported evaluation dashboards;25 composite indicators for communication with non-expert stakeholders where appropriate; formal sensitivity and uncertainty analyses22,39 Stop or conditional approval: if overall SSbD evaluation shows persistent safety concerns or unacceptable trade-offs (e.g., severe climate or toxicity burdens). Go: if safety is robust and sustainability and socio-economic performance are acceptable or superior relative to benchmarks


This stage-method mapping is intentionally high-level and should be adapted to sector-specific and organisational contexts. However, it operationalises the general recommendation emerging from our review and the revised SSbD framework: use simple, non-compensatory safety gates and qualitative screening tools early in innovation, then progressively add detail, quantitative life-cycle metrics, and MCDA-supported evaluation as data and maturity increase.

6.5 Challenges: data quality, uncertainty, and stakeholder acceptance

All SSbD methods confront fundamental challenges from data limitations and pervasive uncertainties, particularly for novel substances.21,22 A key area is QSAR uncertainty: reliance on in silico predictions is common for filling data gaps,21,40 but reliability varies and assessing model applicability domains is crucial.21,40 Practical tiered assessment strategies, moving from qualitative flags to quantitative models as data quality improves, are therefore essential.17,21

Normalisation methods represent another critical design choice. Lantto's review21 revealed limited attention to this issue, with internal normalisation (scaling indicators relative to the current alternative set) dominating practice. Internal normalisation can create problematic features such as rank instability when the set of alternatives changes.22 External normalisation (anchoring scores to fixed, regulatory or benchmark thresholds) helps ensure stability, interpretability, and regulatory alignment.14,21,22
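The rank-instability problem can be demonstrated with a toy example (all numbers invented): under internal min-max normalisation, adding an unrelated alternative changes the normalised scores of the existing options and can flip their ranking, whereas externally anchored scores do not depend on which other alternatives are assessed:

```python
def minmax_normalise(values):
    """Internal normalisation: scale relative to the current set."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def external_normalise(values, lo, hi):
    """External normalisation: anchor to fixed regulatory/benchmark
    bounds, so each score is independent of the alternative set."""
    return [max(0.0, min(1.0, (v - lo) / (hi - lo))) for v in values]

def rank_scores(alternatives):
    """Equal-weight mean of internally normalised criteria."""
    names = list(alternatives)
    cols = list(zip(*alternatives.values()))
    normed = [minmax_normalise(col) for col in cols]
    return {n: sum(c[i] for c in normed) / len(cols)
            for i, n in enumerate(names)}

# Two criteria (higher = better), three alternatives:
base = {"A": (10, 0), "B": (8, 5), "C": (0, 6)}
scores = rank_scores(base)
print(scores["B"] > scores["A"])   # True: B outranks A

base["D"] = (5, 50)                # add an unrelated extreme alternative
scores = rank_scores(base)
print(scores["B"] > scores["A"])   # False: the A/B ranking has flipped
```

Nothing about A or B changed between the two rankings; only the normalisation context did, which is precisely the instability that external, threshold-anchored normalisation avoids.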

A related and equally critical challenge is the weighting of criteria. While strict non-compensatory stages such as the MCDA-ACA final score avoid explicit weights, compensatory stages (e.g., the JRC framework's environmental aggregation, the CI-SSbDC final score) are highly sensitive to weighting choices.19,22 Default “equal weighting” is itself a strong value judgement (e.g., implicitly giving the same importance to economic indicators as to hazard post-gate, or to single-indicator and multi-indicator groups in the JRC assessment).21 Yet Lantto's review found that only 4 of 21 CAA studies involved stakeholders in defining weights, with most relying on author-defined schemes.21

Further complicating sustainability assessment is the identified trade-off between circularity and LCA impacts. As demonstrated by Blanco,16 improving circularity indicators (e.g., recyclability) can sometimes worsen environmental performance in specific LCA categories (e.g., higher energy use). This confirms that circularity is not a simple proxy for environmental sustainability and should be assessed in parallel, potentially using outranking methods (e.g., ELECTRE-type approaches) that can handle conflicting indicators without full compensation.22

Missing data remain a major obstacle, especially for circularity and socio-economic indicators, and uncertainty-handling strategies (e.g., neutrality assumptions, explicit downgrading, or scenario analysis) must be tailored to the assessment context (internal innovation vs. external certification).21 Finally, divergent stakeholder perspectives are unavoidable. Innovators and SMEs seek flexibility and clear, resource-efficient guidance,17,21 while regulators prioritise legislative alignment and non-compensation for safety.35 Achieving legitimacy therefore requires inclusive development processes (as exemplified by the JRC framework consultations10,28 and user-centred DSS design17) combined with FAIR data practices and transparent documentation of assumptions.

The 2025 SSbD framework explicitly calls for better documentation of assumptions, explicit treatment of uncertainty, and transparent MCDA-supported evaluation,10 which aligns with these recommendations and highlights the need for standardised sensitivity analysis protocols and clearer guidance on normalisation and weighting in compensatory stages.

6.6 Future research directions

This review identifies a clear need for further research to mature SSbD assessment from a set of individual methodologies into a cohesive, practical, and globally accepted framework.10,21,22

Priorities include: (1) methodological integration, to robustly connect early-stage hazard prediction and holistic safety assessment to environmental and socio-economic evaluations within the revised SSbD framework, and to adapt methods for sector-specific challenges such as nanomaterials and complex mixtures;10,17 (2) uncertainty, validation, and data gaps, focusing on practical protocols for handling QSAR and LCA uncertainty, moving beyond purely retrospective validation to prospective studies and usability testing of SSbD tools;21,22,40 and (3) practical implementation and harmonisation, prioritising accessible, open-source DSS platforms that integrate QSAR, LCA, and socio-economic indicators, implement FAIR data principles, and guide users through tiered workflows.17,30 Because most SSbD frameworks have so far been developed in a European regulatory context, comparative research with non-EU approaches is also needed to support global harmonisation and wider uptake.21 As Lantto's review21 concludes, MCDA in CAA is a growing field. Its evolution will depend on addressing these gaps, particularly regarding stakeholder engagement in weighting and standardising sensitivity analysis protocols to build trust and ensure robust, reproducible assessments.21,22

7 Conclusions

This analysis of MCDA methodologies for SSbD assessment reveals a field of rapid advancement that has evolved from generic applications to specialised methods tailored for European regulatory contexts, yet critical gaps remain. The central finding is that fully compensatory MCDA approaches are not appropriate as primary safety gates in SSbD decision-making. As shown in the literature, traditional compensatory MCDA methods can demonstrate poor alignment with REACH criteria35 because they allow sustainability or economic performance to offset intrinsic safety deficits. This necessitates specialised hybrid methods like MCDA-ACA,14 the JRC framework,8,10,20 and CI-SSbDC,19 which integrate non-compensatory safety gates aligned with regulatory thresholds (e.g., REACH Article 57) before allowing differentiation on sustainability and other criteria.

The current methodological landscape is characterised by diversity reflecting different application contexts, such as regulatory screening versus innovation guidance.17,21 Despite methodological progress, two substantial gaps hinder wide-scale implementation. First, many frameworks remain research prototypes lacking accessible, integrated software and broad validation.17 Second, uncertainty handling and sensitivity analysis remain underdeveloped given pervasive data limitations in chemical assessment.21,22 Our simple scenario analyses illustrate how robustness checks can be incorporated even in relatively small case studies, and we advocate more systematic application of such analyses.

For practical application, we recommend a clear context-based distinction. For regulatory alternatives assessment, practitioners should employ non-compensatory safety screening (e.g., MCDA-ACA or JRC-based holistic safety assessment) to eliminate hazardous options. For innovation evaluation and portfolio management, a hybrid approach is superior: apply the safety gate first, then use a comprehensive framework (such as the full JRC assessment10 or a DSS17,25,38) to rank the remaining safe alternatives based on sustainability and performance. The practical guidance table (Table 4) offers a starting point for mapping methods to innovation stages. Future method development should build on validated hybrid foundations, prioritise safety, validate against regulatory benchmarks, and focus on creating open, accessible tools while actively engaging stakeholders in defining weights, trade-offs, and acceptable levels of uncertainty.21,22
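The recommended two-stage workflow can be sketched in a few lines of Python. The hazard classes, scores, and candidates below are hypothetical placeholders, not the paper's validated implementation; the sketch only shows the control flow of a non-compensatory gate followed by compensatory ranking of the survivors:

```python
# Sketch of the two-stage hybrid workflow (hypothetical data): a
# non-compensatory safety gate, then weighted-sum ranking of survivors.

BANNED = {"CMR", "PBT", "vPvB"}   # e.g., REACH Art. 57-type cut-off classes

def passes_gate(hazard_classes):
    """Stage 1: eliminate any alternative carrying a banned hazard class."""
    return not (set(hazard_classes) & BANNED)

def rank_survivors(candidates, weights):
    """Stage 2: compensatory ranking on sustainability/performance scores,
    applied only to alternatives that passed the safety gate."""
    survivors = {n: c for n, c in candidates.items() if passes_gate(c["hazards"])}
    score = lambda c: sum(s * w for s, w in zip(c["scores"], weights))
    return sorted(survivors, key=lambda n: score(survivors[n]), reverse=True)

candidates = {
    "plasticizer_X": {"hazards": ["CMR"], "scores": (0.9, 0.9)},  # gated out
    "plasticizer_Y": {"hazards": [],      "scores": (0.7, 0.6)},
    "plasticizer_Z": {"hazards": [],      "scores": (0.5, 0.7)},
}

print(rank_survivors(candidates, weights=(0.5, 0.5)))
# X never appears in the ranking, however high its sustainability scores
```

Because the gate is evaluated before any aggregation, no sustainability score can buy back entry for a gated-out alternative, which is the safety-first property the compensatory methods fail to guarantee.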

Author contributions

Takshak Shende: conceptualisation, methodology, software, formal analysis, investigation, visualisation, writing—original draft, writing—review and editing. Viktor Popov: conceptualisation, methodology, supervision, writing—review and editing.

Conflicts of interest

There are no conflicts to declare.

Data availability

The data supporting this article have been included as part of the supplementary information (SI). All numerical results are reproducible from a single self-contained Python script provided as supplementary material (SI Section S3). The script implements all three focal MCDA methods (MCDA-ACA, JRC SSbD framework, CI-SSbDC) and the full sensitivity and uncertainty analysis described in Section 5.6. A single command run from the provided directory reproduces all results. Full instructions, dependencies (Python 3.9 or later, plus the Python packages listed in SI S3), and run commands are given in SI S3. Supplementary information is available. See DOI: https://doi.org/10.1039/d6su00028b.

Acknowledgements

This work was supported by UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant reference 10114698]; and by the European Union's Horizon Europe research and innovation programme under grant agreement no. 101130039.

Notes and references

  1. European Commission, The European Green Deal, 2019.
  2. European Commission, Chemicals Strategy for Sustainability: Towards a Toxic-Free Environment, 2020.
  3. European Commission, Commission Recommendation of 8 December 2022 establishing a European assessment framework for 'safe and sustainable by design' chemicals and materials, 2022.
  4. J. Tickner, M. M. Jacobs and N. B. Mack, Sustain. Chem. Pharm., 2019, 13, 100161.
  5. M. M. Jacobs, T. F. Malloy, J. A. Tickner and S. Edwards, Environ. Health Perspect., 2015, 124, 265.
  6. P. T. Anastas and J. C. Warner, Green Chemistry: Theory and Practice, Oxford University Press, New York, 1998.
  7. D. Hristozov, A. Zabeo, L. Soeteman-Hernández, L. Pizzol and S. Stoycheva, RSC Sustain., 2023, 1, 838–846.
  8. C. Caldeira, R. Farcal, C. Moretti, L. Mancini, H. Rauscher, K. Rasmussen, J. Riego Sintes and S. Sala, Safe and Sustainable by Design chemicals and materials: Framework for the definition of criteria and evaluation procedure for chemicals and materials, Joint Research Centre, European Commission Technical Report JRC128591, Publications Office of the European Union, Luxembourg, 2022.
  9. E. Abbate, I. Garmendia Aguirre, G. Bracalente, L. Mancini, D. Tosches, K. Rasmussen, M. Bennett, H. Rauscher and S. Sala, Safe and Sustainable by Design chemicals and materials - Methodological Guidance, Joint Research Centre, European Commission Technical Report JRC138035, Publications Office of the European Union, Luxembourg, 2024.
  10. I. Garmendia Aguirre, E. Abbate, G. Bracalente, L. Mancini, G. Cappucci, D. Tosches, K. Rasmussen, B. Sokull-Klüttgen, H. Rauscher and S. Sala, Safe and Sustainable by Design chemicals and materials. Revised framework (2025), Joint Research Centre, European Commission Technical Report JRC143022, Publications Office of the European Union, Luxembourg, 2025.
  11. European Commission, Commission Recommendation (EU) 2026/510 of 6 March 2026 on revising the European assessment framework for safe and sustainable by design chemicals and materials, Official Journal of the European Union, L 2026/510, 2026, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202600510, ELI: http://data.europa.eu/eli/reco/2026/510/oj.
  12. I. Linkov, J. Steevens, G. Adlakha-Hutcheon, E. Bennett, M. Chappell, V. Colvin, J. M. Davis, T. Davis, A. Elder and S. Foss Hansen, et al., J. Nanopart. Res., 2009, 11, 513–527.
  13. I. Linkov, F. K. Satterstrom, J. Steevens, E. Ferguson and R. C. Pleus, J. Nanopart. Res., 2007, 9, 543–554.
  14. R. L. London, J. Glüge and M. Scheringer, Environ. Sci. Technol., 2024, 58, 19315–19324.
  15. A. M. Bechu, M. A. Roy, M. Jacobs and J. A. Tickner, Integrated Environ. Assess. Manag., 2024, 20, 1337–1354.
  16. C. F. Blanco, P. Behrens, M. Vijver, W. Peijnenburg, J. Quik and S. Cucurachi, J. Ind. Ecol., 2025, 29, 47–65.
  17. S. Dekkers, V. Adam, V. D. Battista, A. Haase, J. Prinz, G. Nagel, W. Fransman, M. Persson, B. Suarez-Merino and W. Wohlleben, et al., Adv. Sustain. Syst., 2025, 9, e00208.
  18. A. Karakoltzidis, C. Battistelli, C. Bossa, E. Bouman, I. Garmendia Aguirre, I. Iavicoli, M. Zare Jeddi, S. Karakitsios, V. Leso, M. Løfstedt, B. Magagna, D. Sarigiannis, E. Schultes, L. Soeteman-Hernández, V. Subramanian and P. Nymark, RSC Sustain., 2024, 2, 3464–3477.
  19. A. Arias, M. Cinelli, M. T. Moreira and S. Cucurachi, Sustain. Prod. Consum., 2024, 51, 385–403.
  20. C. Caldeira, R. Farcal, C. Moretti, K. Rasmussen, H. Rauscher, S. Sala, R. Schoonjans and M. Vieira, Safe and Sustainable by Design chemicals and materials: Case studies, Joint Research Centre, European Commission Technical Report JRC131878, Publications Office of the European Union, Luxembourg, 2023.
  21. E. Lantto, Environ. Syst. Decis., 2025, 45, 50.
  22. L. C. Dias, C. Caldeira and S. Sala, Sci. Total Environ., 2024, 916, 169599.
  23. W. Wohlleben, V. Adam, P. C. Lledó, S. Dekkers, C. Durand, A. Haase, L. G. Soeteman-Hernandez, A. Livieri, S. Martel-Martín and L. Pizzol, et al., RSC Sustain., 2025, 3, 5285–5302.
  24. P. Marx-Stoelting, G. Rivière, M. Luijten, K. Aiello-Holden, N. Bandow, K. Baken, A. Cañas, A. Castano, S. Denys and C. Fillol, et al., Arch. Toxicol., 2023, 97, 893–908.
  25. V. Cazzagon, R. Vanhauten, J. Hanlon, A. S. Jiménez, S. Harrison, M. Auffan, H. Braakhuis, M. Boyles, A. Candalija and A. Katsumiti, et al., Environ. Sci.: Nano, 2025, 12, 4008–4025.
  26. S. Gottardo, A. Mech, J. Drbohlavova, A. Małyska, S. Bøwadt, M. Simeone, H. Rauscher, B. Sokull-Klüttgen and K. Rasmussen, NanoImpact, 2021, 21, 100297.
  27. L. Reins and J. Wijns, Eur. J. Risk Regul., 2025, 16, 96–113.
  28. C. Caldeira, R. Farcal, I. Garmendia Aguirre, L. Mancini, S. Pahal, K. Rasmussen, H. Rauscher, J. Riego Sintes and S. Sala, Safe and Sustainable by Design chemicals and materials: Review of safety and sustainability dimensions, aspects, methods, indicators, and tools, Joint Research Centre, European Commission Technical Report EUR 31100 EN, Publications Office of the European Union, Luxembourg, 2022.
  29. A. Sudheshwar, C. Apel, K. Kümmerer, Z. Wang, L. Soeteman-Hernández, E. Valsami-Jones, C. Som and B. Nowack, Environ. Int., 2024, 183, 108305.
  30. L. G. Soeteman-Hernández, C. Apel, B. Nowack, A. Sudheshwar, C. Som, E. Huttunen-Saarivirta, A. Tenhunen-Lunkka, J. Scheper, A. Falk and E. Valsami-Jones, et al., Environ. Sustain., 2024, 7, 363–368.
  31. B. Bouchaut, Redesigning Chemical Innovation: Essays on Safe and Sustainable by Design, 2024, pp. 51–54.
  32. L. Soeteman-Hernández, J. Tickner, A. Dierckx, K. Kümmerer, C. Apel and E. Strömberg, RSC Sustain., 2025, 3, 2185–2191.
  33. M. Damiani, N. Ferrara and F. Ardente, 2022.
  34. N. Pelletier, K. Allacker, R. Pant and S. Manfredi, Int. J. Life Cycle Assess., 2014, 19, 387–404.
  35. R. L. London, J. Gluge and M. Scheringer, Environ. Sci. Technol., 2024, 58, 18811–18821.
  36. B. Salieri, L. Barruetabeña, I. Rodríguez-Llopis, N. R. Jacobsen, N. Manier, B. Trouiller, V. Chapon, N. Hadrup, A. S. Jiménez and C. Micheletti, et al., NanoImpact, 2021, 23, 100335.
  37. S. Stoycheva, W. Peijnenburg, B. Salieri, V. Subramanian, A. G. Oomen, L. Pizzol, M. Blosi, A. Costa, S. H. Doak and V. Stone, et al., Sustain. Circularity NOW, 2025, 2.
  38. A. Livieri, S. Devecchi, L. Pizzol, A. Zabeo, S. Stoycheva, M. J. López-Tendero, E. Badetti, E. Semenzin and D. Hristozov, NanoImpact, 2025, 100573.
  39. A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana and S. Tarantola, 2008.
  40. J. van Dijk, H. Flerlage, S. Beijer, J. C. Slootweg and A. P. van Wezel, Chemosphere, 2022, 296, 134050.
  41. J. van Dijk, A. Sharma, B. Nowack, Z. Wang and M. Scheringer, Environ. Sci. Technol., 2025, 59, 14832–14841.

This journal is © The Royal Society of Chemistry 2026