Open Access Article
This Open Access Article is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported Licence

AI-based development of a portfolio of indicators for assessing environmental, social and economic impacts and technological functionality of chemicals and materials

Lisa Pizzol*a, Gloria Fortinab, Arianna Livieria, Fabio Rosadaa, Sarah Devecchiab, Elena Semenzinb and Danail Hristozovc
aGreenDecision Srl, Cannaregio 5904, 30121 Venezia, VE, Italy. E-mail: lisa.pizzol@greendecision.eu
bDepartment of Environmental Sciences, Informatics and Statistics, Ca’ Foscari University of Venice, Via Torino 155, Mestre, VE 30172, Italy
cEast European Research and Innovation Enterprise (EMERGE), Otets Paisiy 46, Sofia, 1303, Bulgaria

Received 24th November 2025 , Accepted 25th February 2026

First published on 11th March 2026


Abstract

The European Commission's (EC) Joint Research Centre (JRC) Safe and Sustainable by Design (SSbD) framework (JRC-SSbD framework) has highlighted the importance of assessing safety and sustainability as early as possible in the innovation process and in a pragmatic way. This has created the need to operationalise the framework with cost-effective methods and tools for simplified sustainability assessment. The development of such approaches is not a straightforward task: it requires integration of diverse technical indicators for life cycle assessment of environmental, social and economic impacts. Despite intensive ongoing work in EU research and innovation projects, a comprehensive inventory of such indicators is not yet available. To address this gap, our study employed an AI (Artificial Intelligence)-driven knowledge extraction process to compile an extensive portfolio of 986 environmental, social, economic and functionality indicators grouped into 103 categories relevant for both chemicals and materials. The AI output required substantial human expert intervention, as the categorisation process proved inaccurate across several indicators. In addition, an approach for statistical data analysis was developed and applied to prioritise which indicators should be considered first in simplified sustainability assessments. We expect that this work will make an important contribution to the operationalisation of the JRC-SSbD framework and can help companies anticipate which information they need to collect to assess the sustainability and functionality of their products, already in the early stages of product development. This can reduce the overall Research and Development and Innovation (R&D&I) costs of European industries and increase their competitiveness in the transition to a greener economy.



Sustainability spotlight

The findings of this paper provide a relevant foundation for the development of tools to assess sustainability impacts within the SSbD framework evaluation (i.e., Steps 2 and 3). The SSbD tiered approach starts with qualitative/simplified models, followed, as (more) data become available, by semi-quantitative and then fully quantitative assessment. In line with this approach, this study offers a method for prioritising the sustainability indicators suitable for the qualitative/simplified SSbD assessment, which is especially important at the early innovation stages, when very little data and information are available for new chemicals and emerging materials. In this way, the responsible production and consumption of products that are both sustainable and competitive (in line with the EU Competitiveness Compass) is fostered.

1. Introduction

The Draghi report has highlighted the need for Europe to act faster for innovation to remain competitive, sustainable, and strategically autonomous. These objectives have been reinforced by the European Chemicals Industry Action Plan and the Chemicals Strategy for Sustainability (CSS), which have introduced SSbD1 as a central concept in supporting these policies. In 2022 the EC published its recommendation for establishing a European assessment framework for SSbD of chemicals and materials, which is based on a holistic approach proposed by the EC's JRC.1 The JRC-SSbD framework and Methodological Guidance2 highlight the importance of assessing safety and sustainability already during the early stages of the innovation process and in a pragmatic way. The early integration of SSbD principles in the innovation process is essential as this enables higher flexibility in modifying design choices, reduces long-term costs, and mitigates potential adverse effects on human health and the environment as well as negative socioeconomic impacts.

The revision of the EC-JRC SSbD framework, released in 2025,3 represents a step forward by requiring safety and sustainability to be addressed in an integrative manner following a holistic approach. In addition, the stepwise, life-cycle-based approach proposed by the JRC-SSbD framework should be applied through an iterative process, without a predetermined starting point.4 The iterative structure of the framework allows it to be applied at any stage of the development process. It adopts a tiered methodology, which enables flexible application across different innovation scenarios and chemicals/materials/products at different Technology Readiness Levels (TRLs). This flexibility is particularly relevant in the early phases of Research and Development (R&D) of innovations, where important SSbD decisions need to be made but data availability is limited and uncertainty is very high. As the TRL increases, the framework supports the transition from simplified assessments to more comprehensive evaluations, aligning the depth of analysis with the maturity of the technology. Despite representing a foundational approach to SSbD, the JRC-SSbD framework is still in a testing phase, with stakeholders providing feedback on its feasibility and applicability and supporting its further refinement. In particular, regarding the assessment of environmental, social and economic sustainability, which are the main foci of this paper, one of the key open issues concerns the identification of sustainability aspects that are not yet fully addressed by current Life Cycle Assessment (LCA) practices, meaning that the selection of appropriate assessment indicators must currently be tackled on a case-by-case basis. Another major gap is the lack of clearly defined criteria for assessing the social and economic dimensions of sustainability.
The identified methodological gaps have implications across the iterative, tiered SSbD approach, particularly in the application of simplified SSbD assessments during the early product development stages.

The primary objective of this work was to develop a comprehensive, as exhaustive as possible, portfolio of indicators for assessing environmental, social and economic impacts as well as the technological functionality of both chemicals and materials. The purpose of this inventory of indicators is to support the development of tools for both sustainability assessment and integrated impact assessment applicable at different stages of product development. The development of such simplified and cost-effective tools, especially for the early stages of innovation, is much needed to adequately operationalise the JRC-SSbD framework and thereby encourage its uptake and practical implementation by industries.

2. Materials and methods

To develop the portfolio of indicators for assessment of environmental, social and economic impacts and technological performance (functionality), a stepwise methodology was implemented consisting of three main steps (cf. Fig. 1):
Fig. 1 The workflow for developing the portfolio of environmental, social, economic and functionality indicators to support the operationalisation of the JRC-SSbD framework.

(1) Literature review: a state-of-the-art review was carried out to gather information on existing tools for assessment of safety, functionality and sustainability for chemicals and (advanced) materials and the related synthesis/manufacturing processes and products;

(2) Portfolio of indicators creation: the outcomes of the literature review were consolidated in a portfolio of sustainability and functionality indicators. The indicators were categorised by a newly developed AI methodology, grouping those indicators that overlap as they conceptually address the same or similar aspects but are structured differently. The purpose of this categorisation was to reduce redundancy and provide a meaningful overview of existing indicators to be considered for development of sustainability and functionality assessment tools.

(3) Statistical data analysis of indicators: a statistical data analysis of the categorised indicators was performed for prioritisation of the most relevant sustainability and functionality indicators to consider for tools development.

2.1 Literature review

The literature review of existing tools consisted of the collection and analysis of both peer-reviewed publications and grey literature (reports, project deliverables, white papers). The review began with a list of relevant sources compiled within the SUNRISE project as a result of the literature reviews performed in the project for the safety5 and sustainability dimensions, which was further complemented by a collection of publications retrieved from Scopus. To this end, the Scopus search engine was run with the following keywords: “sustainability assessment”, “environmental sustainability assessment”, “social sustainability assessment”, “economic sustainability assessment”, “assessment of sustainability impacts”, “sustainability evaluation tools”, “screening assessment for sustainability”, “sustainability assessment approach”, “risk management for emerging materials”, “Safe and Sustainable by Design”, “SSbD application”, “advanced materials”, “chemicals”, “materials”.

This led to the identification of 29 methods and tools for the assessment of sustainability, which were extracted from the literature and further analysed for relevant indicators. Out of the 29 tools, 986 indicators across different dimensions were identified. These tools and the respective indicators are detailed in Section 3.1.

In this study, the term “tools” refers to approaches, methodologies, literature documents, frameworks, guidelines and software solutions designed to conduct either combined (integrated) safety and sustainability assessment, or environmental, social or economic sustainability assessment alone. According to their structure and objectives, tools can include relevant aspects, indicators, or criteria suitable for the assessment of chemicals, materials, processes, or products. The assessment of health and environmental safety is out of the scope of this work.

The scope of the work encompasses the environmental, social, and economic sustainability dimensions. Environmental sustainability refers to the protection and maintenance of natural capital, which is composed of natural, biotic and abiotic resources (e.g., air, water, soil, geological resources and living organisms and their deriving biodiversity). These resources contribute to the production of goods and services for humans, today and in the future.6 Social sustainability refers to social capital, understood as the value created by each individual as a member of society and as a contributor to its functioning. Economic sustainability of innovative products is broadly defined as ensuring their economic viability. In addition to these traditional dimensions, this study also focuses on technological functionality, which is “the ability of a product to be useful and to achieve the goal for which it was designed”.7

2.2 Portfolio of indicators creation

Indicators suggested by the different tools were collected and organised in an Excel file. The definition of what to consider as indicators is not straightforward. Starting from general considerations, ISO 11620:2023 defines the term indicator as an “expression (which can be numeric, symbolic, or verbal) used to characterize activities (events, objects, persons) both in quantitative and qualitative terms in order to assess the value of the activities characterized, and the associated method”.8 Additionally, according to the OECD, an environmental indicator is “a parameter, or a value derived from parameters, which points to, provides information about, or describes the state of a phenomenon, with a significance extending beyond that directly associated with its value” and specifically, sustainability indicators are “quantitative or qualitative variables that provide information on the state and trends of environmental, economic, and social systems in relation to the goal of sustainable development”.9,10 Although established definitions of the term indicator are available, as well as formal distinctions between the terms “aspects”, “criteria”, “indicators”, “questions” which are commonly used in sustainability assessment, this study adopts the term “sustainability indicators” as a broader concept that encompasses all of them. This choice is driven by the fact that the analysed tools rely on a variety of elements (ranging from aspects and criteria to parameters, questions, and guidelines) to guide the assessment process.

The collected indicators went through a categorisation process that firstly allowed the embedding of the different indicators used by the tools into indicators' categories, and secondly, enabled an objective comparison between the indicators. The purpose of this categorisation was to reduce redundancy by consolidating conceptually similar indicators into broader categories (i.e., indicators' categories) representing common features within and across the examined dimensions, thereby streamlining the overview of existing indicators applied in current assessment tools.

The categorisation was achieved using AI as a starting point. Different AI models, all belonging to the family of Large Language Models (LLMs), were tested to identify the one providing the most appropriate output for this study.

Specifically, the model's suitability (i.e., the ability to process the full batch of collected indicators while maintaining contextual coherence across the categorisation task) was assessed. Additionally, suitability was evaluated based on (i) the capacity to retain the overall structure and semantic relationships among indicators throughout long inputs, (ii) the internal consistency of the resulting indicator groupings, and (iii) the stability of the categorisation logic across multiple interactions. Models that failed to preserve context over extended lists of indicators, or that produced fragmented or inconsistent categorisation schemes, were deemed unsuitable for the purposes of this study.
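As an illustration of criterion (iii), the stability of the categorisation logic across runs can be quantified by comparing the category assigned to each indicator in two independent runs. The sketch below is a minimal, hypothetical example: the indicator names, category labels, and assignments are illustrative, not actual model outputs.

```python
# Sketch: measuring run-to-run stability of an AI categorisation by the
# fraction of indicators assigned to the same category in both runs.

def category_agreement(run_a: dict, run_b: dict) -> float:
    """Fraction of shared indicators given the same category in both runs."""
    shared = set(run_a) & set(run_b)
    if not shared:
        return 0.0
    same = sum(1 for ind in shared if run_a[ind] == run_b[ind])
    return same / len(shared)

# Two illustrative runs over the same three indicators.
run_1 = {"CO2 emissions per kg": "Emissions",
         "Jobs created": "Employment",
         "Water use": "Water Consumption"}
run_2 = {"CO2 emissions per kg": "Emissions",
         "Jobs created": "Employment",
         "Water use": "Resource Efficiency"}

print(category_agreement(run_1, run_2))  # 2 of 3 assignments agree
```

A model whose agreement score stays high across repeated prompts would satisfy the stability criterion; a low or erratic score would indicate the fragmented categorisation behaviour that led to a model being deemed unsuitable.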

All the tested tools are based on LLMs, i.e., foundation models implemented as large-scale neural networks trained on extensive collections of text using self-supervised learning. These models generate and interpret language by predicting the most likely continuation of a text sequence based on the preceding context, allowing them to capture patterns and relationships in natural language. While the tested LLMs share common training principles and general capabilities, their practical behaviour in applied tasks depends strongly on factors such as model size, the amount of text that can be processed at once, deployment modality (local or cloud-based), access tier, and interaction paradigm. The AI models tested in this study were ChatGPT, DeepSeek, and GitHub Copilot.

ChatGPT is a web-based AI service accessed through an online conversational interface.11 In this study, it was used to support the categorisation of sustainability indicators by processing large sets of textual inputs and proposing groupings based on semantic similarity. An inherent characteristic of the tool is the non-deterministic nature of its outputs, meaning that identical prompts may lead to slightly different categorisation results across multiple runs. This variability required human review and consolidation of the proposed categories to ensure consistency and methodological robustness.

DeepSeek12 was tested as an open-source LLM deployed locally on the authors' machines using a reduced-size configuration.13 This setup allowed full control over the model version and execution environment, ensuring repeatability of the categorisation process and eliminating variability due to platform updates or changes in access conditions. However, the limited model size constrained the amount of text that could be processed simultaneously and reduced the model's ability to maintain contextual coherence when categorising large sets of indicators. As a result, DeepSeek's outputs required more extensive human intervention and consolidation compared to the web-based tools.

GitHub Copilot14 was used as an AI assistant embedded within a code editor environment. In this study, its integration with scripts and structured data files enabled the model to access a more stable and explicit contextual framework compared to conversational interfaces. By operating directly on code and data structures, Copilot was able to maintain contextual continuity across the categorisation workflow and to support iterative refinement of the indicator groupings. This interaction paradigm reduced context loss when processing multiple indicators and facilitated closer integration between the AI-generated categorisation and the Excel-based indicator portfolio.

The AI was instructed to categorise the collected indicators according to the type of sustainability or functionality impact they address. In this work, we employed a structured prompting approach based on established prompt patterns shown to improve large language model performance. Specifically, following the prompt pattern catalog of White and colleagues,15 we used the Persona pattern to instruct the model to act as a domain expert, provided contextual information describing the data and its origin to ground the model's understanding of the problem, and defined a clear task to guide the expected output. This combination reflects recommended best practices in prompt engineering.
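A minimal sketch of how such a structured prompt (Persona, context, task) can be assembled per dimension is given below. The function name and wording are illustrative only; the actual prompt used in this study is reported verbatim in Section 3.2.

```python
# Sketch: assembling a structured prompt from the three parts described
# above (Persona pattern, contextual grounding, explicit task).

def build_categorisation_prompt(dimension: str, task: str) -> str:
    """Compose a persona + context + task prompt for one dimension."""
    persona = f"Act as an expert in {dimension} sustainability."
    context = (f"This is a list of {dimension} sustainability indicators "
               "collected from multiple tools; some overlap in meaning "
               "but differ in phrasing.")
    return "\n".join([persona, context, task])

prompt = build_categorisation_prompt(
    "environmental",
    "Group together the indicators that address the same aspect.")
print(prompt)
```

The same template can be reused across the environmental, social, economic, and functionality dimensions by changing only the `dimension` argument, which keeps the categorisation instructions consistent between batches.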

Once each indicator was assigned to its respective category, the portfolio structure could be established. The portfolio was designed to consolidate, within a single Excel sheet, each indicator together with its associated dimension, the original tool implementing it, and the category to which it belongs.

2.3 Statistical data analysis

Statistical data analysis was performed to prioritise the most relevant sustainability and functionality indicators within the portfolio to consider for developing tools for (integrated) sustainability and functionality assessment. This was done by estimating:

• Total occurrences of each indicators' category within the portfolio;

• Number of tools considering a specific indicators' category;

• Total occurrences of each indicators' category within each dimension;

• Number of tools considering a specific indicator category within each dimension.

The calculations were based on a systematic counting of the information associated with each indicator, considering the indicator category assignments, the dimensions in which the indicators were classified, and the original tool to which they belong. The total occurrences of each indicator category were calculated using the formula =COUNTIF(range, criteria) (e.g., =COUNTIF(A2:A100, “Energy Consumption”)), where column A contains the indicator categories assigned to each indicator. This calculation was also performed by selecting only the indicators (and their corresponding indicator categories) related to a specific sustainability dimension (e.g., environmental sustainability).

The number of tools considering a specific indicator category was calculated using the formula =COUNTA(UNIQUE(B2:B100)), where column B lists the original tools in which each indicator was implemented; the formula was applied after filtering the rows to the indicator category of interest, so that each tool was counted only once per category. Similarly, this calculation was repeated by selecting only the indicators (and their corresponding indicator categories) related to a specific sustainability dimension.
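The two spreadsheet counts can equivalently be reproduced programmatically. The sketch below uses a small illustrative stand-in for the portfolio (the actual portfolio holds 986 rows) and mirrors the COUNTIF step and the per-category COUNTA/UNIQUE step in plain Python:

```python
from collections import Counter, defaultdict

# Illustrative stand-in for the portfolio: (indicator category, source tool)
# pairs, one per indicator row.
rows = [
    ("Energy Consumption", "Tool A"),
    ("Energy Consumption", "Tool A"),
    ("Emissions",          "Tool B"),
    ("Energy Consumption", "Tool C"),
    ("Emissions",          "Tool C"),
]

# Total occurrences of each category (the COUNTIF step).
occurrences = Counter(category for category, _ in rows)

# Number of distinct tools using each category (the COUNTA/UNIQUE step,
# applied per category so repeated use within one tool counts once).
tools = defaultdict(set)
for category, tool in rows:
    tools[category].add(tool)
tools_per_category = {c: len(t) for c, t in tools.items()}

print(occurrences["Energy Consumption"])         # 3 occurrences in total
print(tools_per_category["Energy Consumption"])  # across 2 distinct tools
```

The example also shows why the two rankings can diverge: a category repeated many times within one tool accumulates occurrences without gaining tool coverage.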

This approach provided an overview of the prevalence and distribution of indicators, offering a basis for identifying the most relevant ones to consider when developing methods and tools for sustainability and functionality assessment, especially screening tools for the early stages of innovation, where, for the purpose of simplification, a smaller number of overarching indicators would normally be considered.

3. Results

3.1 Literature review results

Twenty-nine different tools for the assessment of sustainability were reviewed. Their names, release years, sectors of application, main objectives, assessment structures, target users, developers and references are summarised in Table A1 of the SI.

All the tools were developed after 2007. Out of the 29 tools identified, 7 were developed specifically for advanced materials (AdMa) and 9 for nanomaterials, as indicated in the descriptions provided by the tool developers. Eight tools are sector-independent, while three were developed for chemical assessment. Of the two remaining tools, one is tailored to the food and agriculture sector, and the other is a generic tool designed to address emerging or newly arising problems.

The tools differ in terms of the assessment structures they employ, and they have been divided in the following classes: (i) questionnaire i.e., assessments requiring responses to specifically formulated questions; (ii) tools, frameworks, approaches providing a selection of aspects, indicators, criteria, or parameters to be considered in the assessment, without explicitly presenting them in the form of questions; (iii) documents or guidelines that offer general directions for conducting assessments, without detailing specific questions, indicators, criteria, or parameters.

To give some examples, SUNSHINE Tier 1 (tool 27) and Early4AdMa (tool 6) are questionnaires. The JRC-SSbD framework, which defines a set of criteria for assessment, is classified under the second class of tools. The tool named in this study “Green_Chemistry_Nanotechnology” (tool 11) is a guideline which only offers general considerations for dealing with nanomaterials.

A pie chart illustrating the percentage distribution of the 29 tools among the three classes of assessment structure is shown in Fig. 2. The predominant assessment structure was the “set of aspects or indicators or criteria or parameters”, although both the format and the number of considered aspects, indicators or criteria varied considerably. The next most common assessment structure was the questionnaire, followed by frameworks or guidelines. Among the questionnaires, the number of questions varied as reported in Table 1. The only questionnaire comprising fewer than twenty questions is “WASP” (tool 29), highlighting the scarcity of tools that require only limited information for conducting safety and sustainability assessments. Similarly, the heterogeneity in the number of aspects, indicators, criteria, or parameters considered by the class “set of aspects or indicators or criteria or parameters” is presented in Table 2. Most of the tools in this class include between 21 and 60 aspects, indicators, criteria, or parameters.


Fig. 2 Percentage distribution of the 29 tools among the three classes of assessment structure detected and analysed in this study.
Table 1 Distribution of tools by number of questions in the questionnaire
Number of questions Number of tools
<20 1/10
20–40 4/10
41–79 3/10
>80 2/10


Table 2 Distribution of tools by number of aspects or indicators or criteria or parameters included in tools belonging to the class “set of aspects or indicators or criteria or parameters”
Range for set of aspects or indicators or criteria or parameters Number of tools
<10 1/14
10–20 1/14
21–39 4/14
40–60 4/14
61–100 1/14
>100 3/14


Furthermore, the literature review identified all the dimensions considered by the tools, namely: safety, environmental sustainability, social sustainability, economic sustainability, functionality, regulation and governance.

Although safety indicators are not the focus of this study, safety as a dimension was thoroughly analysed in the literature review, as the EC-JRC SSbD framework considers safety a transversal aspect across all sustainability dimensions, i.e., safety and sustainability are strictly related.1

Table A2 in the SI shows the dimensions assessed in each tool. The analysis is qualitative as well as subjective, reflecting the understanding and perspective of the authors of this study. A colour-coding scheme is used to represent the degree of consideration given to each dimension within a tool:

• Purple: the dimension is the main focus/assessment purpose of the tool;

• Green: the dimension is well addressed;

• Light green: the dimension is addressed, but less so than the other dimensions covered by the same tool;

• Yellow: the dimension is partially addressed, meaning that more than one aspect of the dimension is considered, but not in a comprehensive or exhaustive manner;

• Light blue: only a single, specific aspect of the dimension is addressed, without considering other aspects;

• Red: the dimension is not considered.

In addition to the safety dimension, the three sustainability dimensions (i.e., environmental, social, and economic) and the dimension of functionality, two further dimensions of interest were detected by the literature review: regulation and governance. The regulatory dimension involves evaluating the existence of norms or legislation related to the subject under assessment. Governance involves assessing a company's or organization's commitment to different sustainability principles in relation to the subject being evaluated. The boundaries between the different dimensions are not strictly defined, as certain aspects may relate to more than one dimension, leading to areas of overlap. This further demonstrates how the dimensions of sustainability are interconnected.

The tools were designed for three broad categories of main users: “regulators and policy-makers”, “innovators” and “enterprises/industries of any dimensions/any kind of organization/or sustainability appliers”. The obtained results are reported in Fig. 3.


Fig. 3 Percentage of target user categories based on the tools analysed in this study.

3.2 Portfolio of indicators

A total of 986 indicators related to sustainability, functionality, regulation, and governance were identified. It is evident that considering all 986 indicators would result in a highly detailed, data- and time-intensive assessment requiring the involvement of experts from diverse areas. Furthermore, the majority of the indicators used by the different tools address similar aspects of sustainability and/or functionality. Therefore, a categorisation process was implemented to merge the indicators into broader categories related to the same or similar features across the dimensions examined in this paper. To this end, a process of AI-driven knowledge extraction was carried out. It is important to highlight that tools which do not employ aspects, indicators, criteria, requirements, principles, parameters, or questionnaires (i.e., tools 8, 9, 10, 11 and 24) were not included, as they do not provide the required elements to conduct the assessment but instead offer a general framework or guidelines to be followed. In addition, tools that focus solely on safety (tools 5 and 18) were excluded from the categorisation process and were therefore not considered in the subsequent statistical data analysis (cf. Section 3.3).

The AI tools tested for the categorisation of the collected indicators included ChatGPT, DeepSeek (locally deployed using the open-source configuration with 7B parameters), and GitHub Copilot. After an initial attempt to provide the Excel file directly to the AI, it became evident that converting the file into a CSV format would facilitate the categorisation process, as CSV files are easier for the AI to process. All three tools were initially evaluated for their ability to generate thematic categories directly from the CSV representations of the indicator dataset.
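The target CSV representation can be produced with standard tooling; the sketch below illustrates the format using the Python standard library and purely illustrative indicator entries (in practice, the export started from the Excel portfolio, and an in-memory buffer stands in here for the exported file).

```python
import csv
import io

# Illustrative indicator rows mirroring the portfolio columns:
# indicator text, sustainability dimension, and source tool.
indicators = [
    ("CO2 emissions per functional unit", "environmental", "Tool A"),
    ("Number of jobs created",            "social",        "Tool B"),
]

# Serialise to CSV, the flat format that proved easier for the AI tools
# to process than the original Excel workbook.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["indicator", "dimension", "tool"])  # header row
writer.writerows(indicators)

csv_text = buffer.getvalue()
print(csv_text.splitlines()[0])  # header line of the generated CSV
```

One row per indicator, with the dimension and source tool carried alongside, preserves exactly the context the prompt describes while avoiding workbook-specific structure.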

For the three AIs, the prompt, developed after several iterations in collaboration with an IT expert, was: “Act as an expert in (environmental/social/economic/…) sustainability. This is a list of (environmental/social/economic/…) sustainability indicators. They are a collection of indicators taken from multiple papers. Some of them have the same meaning but are written with different phrasing. I am going to attach a file; each row represents a different indicator. Can you provide a list of indicators that group together the similar indicators that I have?”

In this way the AI was instructed to easily analyse the data contained in the previously developed Excel file and generate an initial indicator categorisation, grouping multiple indicators from the various tools into single categories when they assess the same or similar sustainability features, even across different sustainability dimensions, through a labelling process that applies “common names” to each identified category.

Among them, DeepSeek produced the most coherent and relevant category structures and was therefore selected to generate the preliminary set of categories. The local instance of DeepSeek proved suitable for identifying and labelling recurring sustainability concepts, but insufficiently robust to consistently assign nearly one thousand indicators to categories without loss of contextual coherence. Therefore, once the preliminary categories had been created, GitHub Copilot in VS Code was used to assign the indicators to the most appropriate categories. This step required the simultaneous inspection of the full CSV file, the complete list of categories, and their textual definitions, a task for which the code-editor-based interaction of Copilot proved more reliable than the DeepSeek model instance, in particular because the key strength of GitHub Copilot in VS Code lies in its ability to infer context directly from the source file, without requiring the user to explicitly describe the problem in a separate interaction.

To minimise the risk of inconsistencies between category generation and category assignment, the category labels and definitions generated with DeepSeek were fixed prior to the assignment phase and reused verbatim during the Copilot-assisted categorisation. GitHub Copilot was therefore not tasked with redefining or interpreting the categories, but only with assigning indicators to an already established and human-validated category set.
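Keeping the assignment phase within the fixed category set can also be verified programmatically. The sketch below uses illustrative category names rather than the actual validated set, and flags any AI-assigned label that falls outside the approved list for human review:

```python
# Sketch: checking that every AI-assigned category belongs to the fixed,
# human-validated category set. Names below are illustrative examples,
# not the study's actual 103-category set.

approved = {"Emissions", "Employment", "Waste Production and Management"}

# Hypothetical AI output: indicator -> assigned category.
assignments = {
    "CO2 emissions per kg": "Emissions",
    "Jobs created":         "Employment",
    "Landfill diversion":   "Waste Minimisation",  # not in the fixed set
}

# Anything outside the approved set is queued for manual reassignment.
flagged = {ind: cat for ind, cat in assignments.items()
           if cat not in approved}
print(sorted(flagged))  # indicators needing human review
```

Such a check makes the "assign only, never redefine" constraint on Copilot mechanically enforceable rather than relying solely on prompt discipline.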

All AI-generated outputs were thoroughly reviewed. In cases where the indicators' category identified by the AI was deemed inappropriate, inaccurate, incorrectly labelled or insufficiently precise by the research team, manual corrections, adjustments, or label selection were performed. In cases where gaps were present, the indicators' category was carefully and directly selected by the research team. Additionally, when an indicator category was considered too specific or too broad with respect to the aims of this study, it was manually reassigned to a different indicators' category or a new indicators' category was developed by the research team. Examples of such interventions by the research team are provided in Table A3 in the SI. Given that human expert intervention was applied to all AI outputs, and only human expert knowledge could ensure the consistency of indicator categories with the objectives of the study, the AI was not asked to produce categories at different levels of granularity.

The initial 986 indicators originate from 21 tools, rather than 29, because, as already explained, tools structured as frameworks or guidelines were excluded, as were those addressing only safety (i.e., tool 5). Moreover, the Licara InnovationSCAN (tool 16) questions were not included, as they represent a simpler and more qualitative version of those in the Licara nanoSCAN (tool 17).

From the original set of 986 indicators, the categorisation process resulted in 92 broader categories, called “indicators' categories”, each labelled with a name representing a more generic sustainability indicator. As one indicators' category was labelled “Functionality”, the research team carried out an additional categorisation process without the use of AI, resulting in 11 indicators' categories specific to the functionality dimension and bringing the total to 103 categories of indicators. The majority of the indicators in the “Functionality” category were originally designed as social or economic sustainability indicators in the tools from which they originate. The indicators' categories are reported in the tables referenced in the following section.

3.3 Statistical data analysis results

To understand the relevance of the indicators' categories for the assessment of sustainability and functionality, a statistical analysis was carried out to assess how frequently the indicators appear across the tools. As a first step, the total occurrence of each indicators' category across the tools was computed. Additionally, the number of tools referencing each indicators' category was calculated, excluding repeated instances of the same category within a single tool.
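The two counts described above can be sketched in a few lines. This is a minimal illustration of the counting logic, assuming the categorised portfolio is available as (tool, category) pairs; the sample pairs below are invented for the example and are not the study's dataset.

```python
# Sketch of the two statistics: total occurrences per indicators' category,
# and number of distinct tools per category (repeats within a tool count once).
# The sample pairs are illustrative, not the actual portfolio data.
from collections import Counter

pairs = [  # (tool_id, indicators' category)
    (28, "Corporate Social Responsibility"),
    (28, "Corporate Social Responsibility"),  # repeated within one tool
    (3, "Corporate Social Responsibility"),
    (3, "Employment"),
    (7, "Employment"),
]

# Total occurrences: every indicator instance counts.
total_occurrences = Counter(cat for _, cat in pairs)

# Number of tools: deduplicate (tool, category) pairs first, so repeated
# instances of a category within a single tool are counted only once.
tools_per_category = Counter(cat for _, cat in set(pairs))

print(total_occurrences["Corporate Social Responsibility"])  # 3
print(tools_per_category["Corporate Social Responsibility"])  # 2
```

Deduplicating via `set(pairs)` before counting is what distinguishes the second metric from the first.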

Both evaluations are reported in Table A4 in the SI, where the ranking of the indicators' categories differs between the two evaluations. For example, “Corporate Social Responsibility” is the most frequently mentioned indicators' category, with 57 occurrences. However, as 32 of these occur within a single tool (tool number 28), it ranks only 20th when considering the number of tools that use it, which is six out of twenty-one. In contrast, “Waste Production and Management”, “Employment” and “Innovation and R&D” are the indicators' categories adopted by the greatest number of tools: they appear in 12 different tools, with totals of 37, 34 and 22 occurrences respectively. The nineteen indicators' categories used by the highest number of different tools are the following: “Waste production and management”, “Employment”, “Innovation and R&D”, “Emissions”, “Market dimension and Application Potential”, “Resource Efficiency”, “Energy Consumption”, “Supply chain traceability”, “Impacts on local communities”, “Circular Economy”, “Work Fairness”, “Workplace Conditions”, “Water Consumption”, “Consumer Benefits”, “Climate Change”, “Critical materials”, “Data management and Transparency”, “Social improvement” and “Wages and Salaries”.
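The divergence between the two rankings can be made concrete with a short sort. The figures below are the ones quoted in the text for three categories (57 occurrences across 6 tools, 37 across 12, 34 across 12); the dictionary layout itself is illustrative.

```python
# Why the two rankings differ: sorting the same categories by total
# occurrences versus by number of distinct tools reorders them.
# Figures are the ones quoted in the text; the layout is illustrative.
stats = {  # category: (total_occurrences, number_of_tools)
    "Corporate Social Responsibility": (57, 6),
    "Waste Production and Management": (37, 12),
    "Employment": (34, 12),
}

by_occurrences = sorted(stats, key=lambda c: -stats[c][0])
by_tools = sorted(stats, key=lambda c: -stats[c][1])

print(by_occurrences[0])  # Corporate Social Responsibility
print(by_tools[0])        # Waste Production and Management
```

A category whose occurrences are concentrated in one tool (as with tool 28 here) tops the first ranking but drops sharply in the second.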

Table A5 in the SI identifies the dimension(s) to which each category of indicators belongs and their relative occurrences. The occurrence of each indicators' category within a specific dimension depends on the explicit inclusion of the original indicators in that dimension by the tools.

However, as highlighted in the literature review, some tools are specifically designed to assess a single dimension, and this is particularly true for the social dimension. Consequently, these tools often replicate the same indicators' categories multiple times, leading to a disproportionate increase in the occurrences of indicators' categories associated with a particular dimension. As a result, the previously presented statistics do not reflect an evenly distributed relevance among the indicators used to evaluate the different dimensions. In particular, if we relied solely on these data, indicators' categories pertaining to the economic dimension would be excluded from those considered the most relevant. Therefore, the same statistical analysis (total occurrences of the indicators' category in the tools and number of tools per indicators' category) was conducted for each dimension. The results are shown in Fig. 4 for the environmental dimension, in Fig. 5 for the social dimension and in Fig. 6 for the economic dimension. For the environmental dimension, the three indicators' categories used by the highest number of tools are “Emissions”, “Waste Production and Management” and “Resource Efficiency”, each used by eleven tools. They are followed by “Energy Consumption”, mentioned in ten different tools. For the social dimension, the indicators' category used by the highest number of tools is “Employment”, mentioned in eleven different tools. This is followed by “Impacts on Local Communities”, mentioned in nine different tools, and by “Work Fairness” and “Workplace Conditions”, both mentioned in eight different tools; however, “Work Fairness” shows a total of 28 occurrences compared with the 21 recorded for “Workplace Conditions”. For the economic dimension, the indicators' categories considered in the highest number of tools are “Market Dimension and Application Potential” and “Innovation and R&D”, mentioned in eight and seven different tools respectively. They are followed by “Other Costs” and “Manufacturing Costs”, both mentioned in six different tools; however, the former shows a total of 29 occurrences, whereas the latter accounts for 19.
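The per-dimension re-analysis is the same pair of counts, computed separately within each dimension. A minimal sketch, assuming the records are available as (tool, dimension, category) triples; the sample records are invented for illustration.

```python
# Per-dimension version of the two statistics: total occurrences and
# number of distinct tools, grouped by sustainability dimension.
# Sample records are illustrative, not the study's dataset.
from collections import defaultdict, Counter

records = [  # (tool_id, dimension, indicators' category)
    (1, "environmental", "Emissions"),
    (2, "environmental", "Emissions"),
    (2, "environmental", "Emissions"),  # repeat within tool 2
    (2, "social", "Employment"),
    (3, "social", "Employment"),
]

occ = defaultdict(Counter)    # dimension -> category -> total occurrences
tools = defaultdict(Counter)  # dimension -> category -> number of tools

for _, dim, cat in records:
    occ[dim][cat] += 1
# Deduplicate triples so repeats within a single tool count once per tool.
for _, dim, cat in set(records):
    tools[dim][cat] += 1

print(occ["environmental"]["Emissions"], tools["environmental"]["Emissions"])
# 3 2
```

Ranking `tools[dim]` within each dimension reproduces the kind of ordering plotted in Fig. 4-6.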


image file: d5su00883b-f4.tif
Fig. 4 Total occurrences and number of tools per indicators' categories addressing the environmental sustainability dimension.

image file: d5su00883b-f5.tif
Fig. 5 Total occurrences and number of tools per indicators' category addressing the social sustainability dimension.

image file: d5su00883b-f6.tif
Fig. 6 Total occurrences and number of tools per indicators' category addressing the economic sustainability dimension.

Concerning the functionality dimension, the corresponding statistical data are reported in Table A6 in the SI and in Fig. 7. Among the “Functionality” indicators' categories, only two, “Durability” and “Consumer Needs”, are used by more than three different tools.


image file: d5su00883b-f7.tif
Fig. 7 Total occurrences and number of tools per “Functionality” indicators' category addressing the functionality dimension.

With respect to the regulation and governance dimensions, the statistical data analysis is presented in Fig. 8 and 9, respectively.


image file: d5su00883b-f8.tif
Fig. 8 Total occurrences and number of tools per indicators' category addressing the regulation dimension.

image file: d5su00883b-f9.tif
Fig. 9 Total occurrences and number of tools per indicators' category addressing the governance dimension.

The indicators' category “Chemicals/Materials within the Scope of Available Legislation” also appears in the environmental and social sustainability dimensions while “Data Management and Transparency” also appears in both social and economic dimensions. Similarly, in the governance dimension, the indicators' categories “Conflict Management”, “Other Costs”, and “Stakeholder Engagement” are also present in the other two sustainability dimensions.

4. Discussion of the results

The findings of the literature review suggest that different tools assess sustainability with varying emphasis on the environmental, social and economic dimensions and through diverse indicators, depending on their specific scope and structure. Concerning the three pillars of sustainability, there is more agreement on the aspects, indicators or criteria used by the tools for the environmental assessment, making this the most consolidated dimension, while strong heterogeneity characterises the economic and social dimensions, highlighting the need for global harmonisation.

From the analysis, it is evident that governance is the dimension least considered in the tools, followed by functionality and then regulation. Indeed, the governance and regulation dimensions are not mentioned in the JRC-SSbD Framework and Methodological Guidance. SSbD as a whole, however, is a form of prevention-based risk governance, and aligning SSbD approaches with regulatory requirements is essential for ensuring their uptake and application by industry. Functionality analysis is also not explicitly required in the JRC-SSbD framework, but the JRC-SSbD Methodological Guidance suggests considering it in the preliminary identification of ‘hotspots of concern’ along the life cycle of the chemical, material or product under assessment.16

Functionality is explicitly addressed in SUNSHINE Tier 1 (tool number 27), in the “AdMa_overview” (tool number 1) and it is also discussed in terms of challenges, progress, and opportunities of nanomaterials in “Green Chemistry Nanotechnology” (tool number 11). The JRC-SSbD framework (tool number 15) explicitly states that functionality needs to be considered. Consequently, these are the four tools for which Table A2 in SI does not display the colour red in the “Functionality” column. However, among these, the SUNSHINE Tier 1 tool is the only one that treats functionality as an individual dimension and has developed specific “functionality indicators” to assess it. In contrast, the other tools explicitly incorporate functionality assessment within the broader context of the three main dimensions of sustainability. In some of the remaining tools, functionality is implicitly assessed through indicators associated with the three main dimensions of sustainability.

With regard to the main users, it emerged that the majority of the tools were developed to help “enterprises/industries of any dimensions/any kind of organization/or sustainability appliers” in decision-making processes. This underscores the need for companies to develop greater awareness of the concept of sustainability and of their sustainability impacts, enhancing safe and sustainable production processes with strong competitive potential. The next most addressed target users are “innovators” and “regulators and policymakers”, respectively.

It is interesting to note that, regarding the life-cycle stages considered in the assessment by the tools, the “cradle-to-grave” perspective is the most commonly recommended, aligning with a life cycle thinking approach. However, only the following tools explicitly differentiate the assessment according to specific stages of the material's or product's life cycle: “Early4AdMa” (tool 6), “ivl_LCBROM” (tool 14), “SUNSHINE Tier 1” (tool 27), “Licara NanoSCAN” (tool 17), “Licara InnovationSCAN” (tool 16), “Screening MCDA NANORIGO” (tool 21) and “SUNRISE_WP3” (tool 26). A further analysis could involve differentiating the indicators according to life-cycle stages, clarifying the point at which each indicator should be considered, thereby facilitating their interpretation and application in tools covering the entire life cycle.

An important issue emerged during this work: the various tools analysed rely on a wide range of approaches and elements (ranging from aspects and criteria to parameters, questions, and guidelines) to guide the assessment process. However, a lack of agreement regarding the meaning and use of the terms aspects, criteria, indicators and parameters across different tools was highlighted. An example of this inconsistency is illustrated by the differing use of the term criteria in two tools: tool 13 and tool 21. Tool 13 defines environmental, economic, and safety as its three criteria. For each of these criteria, sub-criteria are identified (e.g., global warming potential, capital and flammability). Tool 21 uses the term criteria to indicate more detailed elements (e.g., greenhouse gas contribution, emissions), similar to what tool 13 calls sub-criteria. Conversely, tool 23 requires the consideration of “resource efficiency”, “resource criticality” and “dissipation and release” and calls them aspects. At the same time, tool 6, tool 16, tool 27 and tool 29 treat the aspects of tool 23 as indicators, using them to formulate the specific questions of their sustainability assessments. Despite this point of concern, all aspects, indicators, criteria, parameters, and questions used for assessment by the tools were collected during the literature review. The issue related to terminology was addressed through a categorisation process during the creation of the portfolio of indicators. However, this challenge highlights that standardisation of the terms used in the application of the JRC-SSbD framework would be beneficial for its correct operationalisation, thereby encouraging the SSbD community to move in this direction.

This study organised the identified 986 indicators in 103 categories. During the categorisation process, some challenges emerged from the use of AI. In most cases, its categorisation was inappropriate, as it failed to grasp the true meaning of the original indicators used by the tools. For instance, the AI was not always able to accurately distinguish whether an indicator referred to “Circular Economy” or to “Waste Production and Management”, a nuance that may be subtle for non-experts. Similarly, some indicators that were incorrectly categorised under the label “Climate Change” were reallocated to “Ozone Depletion”, as the two refer to distinct environmental issues, although both relate to emissions to air. Another example is that indicators related to “Persistent, Bioaccumulative, and Toxic” substances were often misclassified under “Eco-Toxicity”, despite the fact that both categories were developed and labelled by the AI itself. Additionally, the AI tended to generate overly specific categories for certain indicators, which were subsequently consolidated into broader categories by the research team. For instance, indicators such as “Impacts on Local Communities”, “Product Accessibility”, “Food Security”, “Local Water Access”, and “Local Health and Safety Improvement” were grouped together under “Impacts on Local Communities”, allowing for a reduction in the number of indicators' categories where appropriate. Conversely, other indicators were assigned to overly generic categories by the AI. For example, the class “Direct Costs” was further refined by the research team into more specific categories such as “Capital Costs”, “Maintenance Costs”, “Manufacturing Costs”, “Material Costs”, “Personnel Costs”, “Revenues”, “Transportation Costs”, “Use Costs”, “Waste Management Costs”, and “Other Costs”. In this case, the broad category “Direct Costs” was deemed too generic and therefore inadequate for a comprehensive sustainability assessment.
Furthermore, some category names were deemed unclear and were therefore revised by the experts to reflect their content more accurately. For example, “Corporate Governance” was renamed “Corporate Social Responsibility” to also encompass elements that the AI classified as “Ethical Practices”. Similarly, “Regulatory Scope” was redefined as “Chemicals/Materials Within the Scope of Available Legislation” to provide a clearer description. Moreover, in a few rare cases, the AI left gaps in the categorisation process. These gaps were subsequently filled by the research team, who reviewed the original indicators and assigned them to the most appropriate categories developed during the process. Given the in-depth expert-based assessment of each indicators' category, the reported statistical results should not be affected by the specificity of the indicators' categories (e.g., the risk that specific categories exhibit lower relevance values solely because of their specificity). Indeed, if such categories were developed, it means they relate to features that could not be appropriately grouped within other indicators' categories (e.g., water acidification is not simply water pollution). Consequently, if fewer tools include them, this consistently reflects their lower relevance according to the methodological approach adopted in this study.

Indeed, while the AI proved useful in the initial stages, relying solely on its output would have resulted in numerous inaccuracies. The vast majority of the work required the intervention of the research team, demonstrating the essential role of human expertise in ensuring the accurate and meaningful categorisation necessary for this task.

Findings from the statistical data analysis reveal that, among the resulting 103 indicators' categories, some do not fall within a single, well-defined dimension, as they span multiple dimensions. For instance, “Animal Welfare” is predominantly associated with the social sustainability dimension, although it also appears within the environmental sustainability dimension. Conversely, “Waste Production and Management” and “Water Consumption” are primarily addressed under the environmental sustainability dimension but are also referenced in the social sustainability dimension. It is worth noting that three indicators' categories fall under the regulation dimension: “Chemical/material within the scope of available legislations”, “Data management and Transparency”, and “Recognised techniques for characterisation and exposure estimation”. Among these, the first two are also considered in other sustainability dimensions. Specifically, “Chemical/material within the scope of available legislations” is also considered within the environmental and social sustainability dimensions, while “Data management and Transparency” is considered within both the social and economic sustainability dimensions. Similarly, four indicators' categories fall under the governance dimension: “Conflict Management”, “Other Costs”, “Corporate Social Responsibility”, and “Stakeholder Engagement”. With the exception of “Corporate Social Responsibility”, all of these indicators' categories are primarily associated with other sustainability dimensions, specifically the social or economic ones. Therefore, nearly all indicators' categories within the regulation and governance dimensions are also addressed within the main sustainability dimensions.
Since the majority of the indicators' categories in these two dimensions are already represented within the core sustainability dimensions, they may not be considered relevant for a preliminary qualitative assessment in the early stages of product development.

5. Conclusions

This study contributes to the advancement of SSbD assessment by developing a portfolio of sustainability and functionality indicators currently used in existing impact assessment tools and harmonizing them through a classification process into categories. This process initially leveraged AI to generate a preliminary categorisation, which was then refined through a rigorous sustainability-focused review and correction by the research team to ensure alignment with the original meaning of the indicators. The limitations identified in the AI-generated output highlighted the essential contribution of human expertise to achieve accurate and contextually appropriate results.

The statistical data analysis generated a ranking of the relevance of each indicator based on a numerical score, facilitating the selection of the most relevant indicators for development of new methods. This is particularly important for simplified tools for the early stages of innovation which need to consider fewer higher-level indicators. However, since some tools are based on or reference other existing tools, the results obtained in this study may be influenced by redundancies that could introduce systemic bias.

The proposed portfolio of sustainability and functionality indicators for SSbD serves as a valuable inventory for developing new simplified and cost-effective assessment methods and tools, which is much needed, especially for the early stages of product development. The portfolio can help users to effectively understand the sustainability and functionality indicators used in impact assessment, and to facilitate the selection of relevant indicators depending on the specific objectives of their assessment. This is further supported by the results from the statistical data analysis of indicators' categories within the portfolio, which can help to prioritise indicators for developing assessment tools with different levels of data-requirements and tools that also incorporate the regulation and governance dimensions.

Furthermore, the portfolio makes an important contribution to supporting the operationalisation of the JRC-SSbD framework, particularly by guiding its application across different assessment tiers and by indicating which methodological gaps need to be addressed. Such gaps include, in particular, the lack of clearly defined indicators for assessing social and economic sustainability, and the practical limitations of conducting comprehensive assessments such as LCA, Social Life Cycle Assessment (S-LCA), and Life Cycle Costing (LCC) when the available datasets are limited and/or fragmented. Finally, the portfolio can substantially help companies, especially SMEs, to determine which information they need to collect for the sustainability and functionality assessment of their products, already in the early stages of product development, which can reduce their R&D&I costs and increase their competitiveness in the transition towards a greener economy.

Conflicts of interest

The authors declare that there are no conflicts of interest.

Data availability

The data supporting this article have been included as part of the supplementary information (SI) (as Excel file).17–43 Supplementary information: two files – a Word file containing 5 additional tables (Tables A1–A5) and an Excel file (containing the indicators database). See DOI: https://doi.org/10.1039/d5su00883b.

Acknowledgements

Funding for this research was received from European SUNRISE HEU (grant agreement 101137324).

References

  1. EC, Safe and Sustainable by Design Chemicals and Materials: Framework for the Definition of Criteria and Evaluation Procedure for Chemicals and Materials, Publications Office, 2022,  DOI:10.2760/487955.
  2. E. Abbate, A. M. J. Ragas, C. Caldeira, L. Posthuma, I. Garmendia Aguirre and A. C. Devic, et al., Operationalization of the safe and sustainable by design framework for chemicals and materials: challenges and proposed actions, Integrated Environ. Assess. Manag., 2025, 21(2), 245–262,  DOI:10.1093/inteam/vjae031.
  3. I. Garmendia Aguirre, E. Abbate, G. Bracalente, L. Mancini, G. M. Cappucci, D. Tosches, et al., Safe and sustainable by design chemicals and materials. Revised Framework, Publications Office of the European Union, 2025,  DOI:10.2760/5103785.
  4. I. Garmendia Aguirre, K. Rasmussen and H. Rauscher, Safe and Sustainable by Design: Driving Innovation Toward Safer and More Sustainable Chemicals, Materials, Processes and Products, Sustain. Circ. NOW, 2025, 2636–1704,  DOI:10.1055/a-2636-1704.
  5. V. Pomar-Portillo, B. Suarez-Merino, S. Aparicio, E. Badetti, M. Boyles and A. Brunelli, et al., Methods and tools for the safety assessment part of the European Commission's safe and sustainable by design framework when applied to advanced materials, Environ. Int., 2025, 205, 109904,  DOI:10.1016/j.envint.2025.109904.
  6. R. S. de Groot, Functions of Nature: Evaluation of Nature in Environmental Planning, Management and Decision Making, Wolters-Noordhoff, Groningen, 1992, p. 315.
  7. L. Pizzol, A. Livieri, B. Salieri, L. Farcal, L. G. Soeteman-Hernández and H. Rauscher, et al., Screening level approach to support companies in making safe and sustainable by design decisions at the early stages of innovation, Clean. Environ. Syst., 2023, 10, 100132,  DOI:10.1016/j.cesys.2023.100132.
  8. ISO. ISO 11620:2023, Information and documentation—Library performance indicators, https://www.iso.org/obp/ui/en/#iso:std:iso:11620:ed-4:v1:en.
  9. OECD, OECD Environmental Indicators: Towards Sustainable Development 2001, 2001, https://www.oecd.org/en/publications/oecd-environmental-indicators_9789264193499-en.html,  DOI:10.1787/9789264193499-en.
  10. OECD, OECD Handbook for Internationally Comparative Education Statistics: Concepts, Standards, Definitions and Classifications, 2004, https://www.oecd.org/en/publications/oecd-handbook-for-internationally-comparative-education-statistics_9789264104112-en.html,  DOI:10.1787/9789264104112-en.
  11. OpenAI, OpenAI Help Center, How ChatGPT and our foundation models are developed, 2025, https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-foundation-models-are-developed.
  12. DeepSeek, Model Mechanism and Training Methods of DeepSeek, 2025, https://cdn.deepseek.com/policies/en-US/model-algorithm-disclosure.html.
  13. Ollama, deepseek-r1, 2025, https://ollama.com/deepseek-r1.
  14. GitHub Copilot. GitHub, GitHub Copilot Your AI pair programmer, 2025, https://github.com/features/copilot/plans.
  15. J. White, Q. Fu, S. Hays, M. Sandborn, C. Olea, H. Gilbert, et al., A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT, arXiv, 2023, preprint, arXiv.2302.11382,  DOI:10.48550/arXiv.2302.11382, https://arxiv.org/abs/2302.11382.
  16. EC & JRC, Safe and Sustainable by Design Chemicals and Materials: Methodological Guidance, Publications Office, 2024,  DOI:10.2760/28450.
  17. X. Jia, Z. Li, F. Wang and Y. Qian, Integrated sustainability assessment for chemical processes, Clean Technol. Environ. Policy, 2016, 18(5), 1295–1306,  DOI:10.1007/s10098-015-1075-x.
  18. B. Giese, M. Drapalik, L. Zajicek, D. Jepsen, A. Reihlen and T. Zimmermann, Advanced materials: Overview of the field and screening criteria for relevance assessment, Texte, 2020, 132.
  19. A. Medina, S. F. Hansen, F. J. Rodriguez Macias and A. Baun, A design-phase environmental safe-and-sustainable-by-design categorization tool for the development and innovation of nano-enabled advanced materials (AdMaCat), Environ. Sci. Nano, 2024, 11(9), 3761–3773,  10.1039/D4EN00068D.
  20. A. Wiek, S. Zemp, M. Siegrist and A. I. Walter, Sustainable governance of emerging technologies—Critical constellations in the agent network of nanotechnology, Technol. Soc., 2007, 29(4), 388–406,  DOI:10.1016/j.techsoc.2007.08.010.
  21. OIA, What to include in the Impact Analysis Preliminary Assessment Form, Office of Impact Analysis, 2023.
  22. T. Widler, C. Meili, E. Semenzin, V. Subramanian, A. Zabeo, D. Hristozov, et al., Organisational Risk Management of Nanomaterials Using SUNDS: The Contribution of CENARIOS®, Managing Risk in Nanotechnology, F. Murphy, E. M. McAlea and M. Mullins, Springer International Publishing, Cham, 2016, pp. 219–235,  DOI:10.1007/978-3-319-32392-3_12.
  23. OECD, Early Awareness and Action System for Advanced Materials (Early4AdMa): Pre-regulatory and anticipatory risk governance tool to Advanced Materials, OECD Series on the Safety of Manufactured Nanomaterials and other Advanced Materials, 2023, https://www.oecd.org/en/publications/early-awareness-and-action-system-for-advanced-materials-early4adma-pre-regulatory-and-anticipatory-risk-governance-tool-to-advanced-materials_326fb788-en.html,  DOI:10.1787/326fb788-en.
  24. ECHA, Guidance on Socio-Economic Analysis – Restrictions, 2008.
  25. Global Social Compliance Programme, Environmental Implementation Guidelines, 2010.
  26. Global Social Compliance Programme, Reference Tool on Supply Chain Social Performance Management Systems, 2013.
  27. Global Social Compliance Programme, Reference Tool on Social & Labour Management Systems for Suppliers, 2013.
  28. J. E. Hutchison, The Road to Sustainable Nanotechnology: Challenges, Progress and Opportunities, 2016.
  29. Global Sustainability Standards Board, Consolidated Set of the GRI Standards, 2024.
  30. T. Kärnman, S. Schellenberger, M. Gottfridsson, M. Halling, K. Johansson and T. Rydberg, et al., Life Cycle Based Risk and Opportunity Mapping: A systematic collaborative procedure to integrate environmental and health aspects in early innovation as possible pre-screening to the safe and sustainable by design assessments, Chemistry, 2025 DOI:10.26434/chemrxiv-2025-p93rp-v2.
  31. EMPA & TNO, diamonds.tno.nl, 2025, https://diamonds.tno.nl/projects/licara.
  32. T. Van Harmelen, E. K. Zondervan-van Den Beuken, D. H. Brouwer, E. Kuijpers, W. Fransman and H. B. Buist, et al., LICARA nanoSCAN - A tool for the self-assessment of benefits and risks of nanoproducts, Environ. Int., 2016, 91, 150–160,  DOI:10.1016/j.envint.2016.02.021.
  33. S. Hansen, A. Baun and K. Astrup-Jensen, NanoRiskCat: a Conceptual Decision Support Tool for Nanomaterials, Version: 1.0, Environmental Protection Agency, 2011.
  34. D. Hristozov, E. Badetti, P. Bigini, A. Brunelli, S. Dekkers and L. Diomede, et al., Next Generation Risk Assessment approaches for advanced nanomaterials: Current status and future perspectives, NanoImpact, 2024, 35, 100523,  DOI:10.1016/j.impact.2024.100523.
  35. N. El Hage, Guidelines for Sustainability Assessment in Food and Agriculture, FAO, 2012.
  36. S. Purker, C. R. Lalyer and B. Giese, Decision support for selection of new materials considering socio-economic and broader environmental aspects, Sustain. Prod. Consum., 2023, 39, 438–450,  DOI:10.1016/j.spc.2023.05.032.
  37. M. J. Hutchins and J. W. Sutherland, An exploration of measures of social sustainability and their application to supply chain decisions, J. Clean. Prod., 2008, 16(15), 1688–1698,  DOI:10.1016/j.jclepro.2008.06.001.
  38. H. Wigger, T. Zimmermann and C. Pade, Broadening our view on nanomaterials: highlighting potentials to contribute to a sustainable materials management in preliminary assessments, Environ. Syst. Decis., 2015, 35(1), 110–128,  DOI:10.1007/s10669-014-9530-5.
  39. V. Rerimassie, D. Stemerding, E. de Bakker and R. van Est, Beyond public acceptance, Public Accept Des Soc Incubator Promis Nanotechnologies, 2018.
  40. S. Stoycheva, A. Zabeo, L. Pizzol and D. Hristozov, Socio-Economic Life Cycle-Based Framework for Safe and Sustainable Design of Engineered Nanomaterials and Nano-Enabled Products, Sustainability, 2022, 14(9), 5734,  DOI:10.3390/su14095734.
  41. SUNRISE, D3.1 Sustainability assessment building blocks for Tier 1, 2025, Grant agreement ID: 101137324, https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e51eb7a7ff&appId=PPGMS.
  42. M. Traverso, S. Valdivia, A. Luthin, L. Roche, G. Arcese, S. Neugebauer, et al., Methodological Sheets for Subcategories in Social Life Cycle Assessment (S-LCA) 2021, UN Environment Programme (UNEP), 2021.
  43. V. Adam, V. D. Battista, F. Testard, M. Persson, D. Persson, D. Gargouri, et al., Decision Support System for Safe-and-Sustainable-by-Design Advanced Materials: Case study demonstration, 2025, https://www.authorea.com/users/893079/articles/1270021-decision-support-system-for-safe-and-sustainable-by-design-advanced-materials-case-study-demonstration?commit=f09a34c2de6c3da96fe773eff7573aa34e716ea6,  DOI:10.22541/au.173991289.98651419/v1.

This journal is © The Royal Society of Chemistry 2026