Open Access Article
Lisa Pizzol,*a Gloria Fortina,b Arianna Livieri,a Fabio Rosada,a Sarah Devecchia,b Elena Semenzinb and Danail Hristozovc
aGreenDecision Srl, Cannaregio 5904, 30121 Venezia, VE, Italy. E-mail: lisa.pizzol@greendecision.eu
bDepartment of Environmental Sciences, Informatics and Statistics, Ca’ Foscari University of Venice, Via Torino 155, Mestre, VE 30172, Italy
cEast European Research and Innovation Enterprise (EMERGE), Otets Paisiy 46, Sofia, 1303, Bulgaria
First published on 11th March 2026
The European Commission's (EC) Joint Research Centre (JRC) Safe and Sustainable by Design (SSbD) framework (JRC-SSbD framework) has highlighted the importance of assessing safety and sustainability as early as possible in the innovation process, and of doing so in a pragmatic way. This has created the need to operationalise the framework with cost-effective methods and tools for simplified sustainability assessment. The development of such approaches is not a straightforward task: it requires the integration of diverse technical indicators for life cycle assessment of environmental, social and economic impacts. Despite intensive ongoing work in EU research and innovation projects, a comprehensive inventory of such indicators is not yet available. To address this gap, our study employed an AI (Artificial Intelligence)-driven knowledge extraction process to compile an extensive portfolio of 986 environmental, social, economic and functionality indicators, grouped in 103 categories relevant for both chemicals and materials. The AI output required substantial human expert intervention, as the categorisation process proved inaccurate for several indicators. In addition, an approach for statistical data analysis was developed and applied to prioritise which indicators should be considered first in simplified sustainability assessments. We expect that this work will make an important contribution to the operationalisation of the JRC-SSbD framework and will help companies anticipate what information they need to collect to assess the sustainability and functionality of their products, already in the early stages of product development. This can reduce the overall Research and Development and Innovation (R&D&I) costs of European industries and increase their competitiveness in the transition to a greener economy.
Sustainability spotlight
The findings of this paper provide a relevant foundation for the development of tools to assess sustainability impacts within the SSbD framework evaluation (i.e., Steps 2 and 3). Following the SSbD tiered approach, which starts with qualitative/simplified models and moves, as (more) data become available, to semi-quantitative and then fully quantitative assessment, this study offers an approach for prioritising the sustainability indicators suitable for the qualitative/simplified SSbD assessment. This is especially important at the early innovation stages, in which very little data and information are available for new chemicals and emerging materials. In this way, the responsible production and consumption of products that are both sustainable and competitive (in line with the EU Competitiveness Compass) is fostered.
The revision of the EC-JRC SSbD framework, released in 2025,3 represents a step forward by requiring safety and sustainability to be addressed in an integrative manner, following a holistic approach. In addition, the life cycle perspective stepwise approach proposed by the JRC-SSbD framework should be applied through an iterative process, without a predetermined starting point.4 The iterative structure of the framework allows it to be applied at any stage of the development process. It adopts a tiered methodology, which enables flexible application across different innovation scenarios and chemicals/materials/products at different Technology Readiness Levels (TRLs). This flexibility is particularly relevant in the early phases of Research and Development (R&D) of innovations, where important SSbD decisions need to be made but data availability is limited and uncertainty is very high. As the TRL increases, the framework supports the transition from simplified assessments to more comprehensive evaluations, aligning the depth of analysis with the maturity of the technology. Despite representing a foundational approach to SSbD, the JRC-SSbD framework is still in a testing phase, with stakeholders providing feedback on its feasibility and applicability and supporting its further refinement. In particular, regarding the assessment of environmental, social and economic sustainability, which are the main foci of this paper, one key open issue is the identification of sustainability aspects that are not yet fully addressed by current Life Cycle Assessment (LCA) practices; the selection of appropriate assessment indicators therefore needs to be tackled on a case-by-case basis. Another major gap is the lack of clearly defined criteria for assessing the social and economic dimensions of sustainability.
The identified methodological gaps have implications across the iterative, tiered SSbD approach, particularly in the application of simplified SSbD assessments during the early product development stages.
The primary objective of this work was to develop a comprehensive and as exhaustive as possible portfolio of indicators for assessing the environmental, social and economic impacts, as well as the technological functionality, of both chemicals and materials. The purpose of this inventory of indicators is to support the development of tools for both sustainability assessment and integrated impact assessment, applicable at different stages of product development. The development of such simplified and cost-effective tools, especially for the early stages of innovation, is much needed to adequately operationalise the JRC-SSbD framework and thereby encourage its uptake and practical implementation by industries.
Fig. 1 The workflow for developing the portfolio of environmental, social, economic and functionality indicators to support the operationalisation of the JRC-SSbD framework.
(1) Literature review: a state-of-the-art review was carried out to gather information on existing tools for assessment of safety, functionality and sustainability for chemicals and (advanced) materials and the related synthesis/manufacturing processes and products;
(2) Portfolio of indicators creation: the outcomes of the literature review were consolidated in a portfolio of sustainability and functionality indicators. The indicators were categorised by a newly developed AI methodology, grouping those indicators that overlap as they conceptually address the same or similar aspects but are structured differently. The purpose of this categorisation was to reduce redundancy and provide a meaningful overview of existing indicators to be considered for development of sustainability and functionality assessment tools.
(3) Statistical data analysis of indicators: a statistical data analysis of the categorised indicators was performed for prioritisation of the most relevant sustainability and functionality indicators to consider for tools development.
This led to the identification of 29 methods and tools for the assessment of sustainability, which were extracted from the literature and further analysed for relevant indicators. Out of the 29 tools, 986 indicators across different dimensions were identified. These tools and the respective indicators are detailed in Section 3.1.
In this study, the term “tools” refers to approaches, methodologies, literature documents, frameworks, guidelines and software solutions designed to conduct either combined (integrated) safety and sustainability assessment, or environmental, social or economic sustainability assessment alone. According to their structure and objectives, tools can include relevant aspects, indicators, or criteria suitable for the assessment of chemicals, materials, processes, or products. The assessment of health and environmental safety is out of the scope of this work.
The scope of the work encompasses the environmental, social, and economic sustainability dimensions. Environmental sustainability refers to the protection and maintenance of natural capital, which is composed of natural, biotic and abiotic resources (e.g., air, water, soil, geological resources, and living organisms and the biodiversity they support). These resources contribute to the production of goods and services for humans, today and in the future.6 Social sustainability refers to social capital, understood as the value created by each individual as a member of society and as a contributor to its functioning. Economic sustainability of innovative products is broadly defined as ensuring their economic viability. In addition to these traditional dimensions, this study also focuses on technological functionality, which is “the ability of a product to be useful and to achieve the goal for which it was designed”.7
The collected indicators went through a categorisation process that firstly allowed the embedding of the different indicators used by the tools into indicators' categories, and secondly, enabled an objective comparison between the indicators. The purpose of this categorisation was to reduce redundancy by consolidating conceptually similar indicators into broader categories (i.e., indicators' categories) representing common features within and across the examined dimensions, thereby streamlining the overview of existing indicators applied in current assessment tools.
The categorisation was achieved using AI as a starting point. Different AI models, all belonging to the family of Large Language Models (LLMs), were tested to identify the one providing the most appropriate output for this study.
Specifically, the model's suitability (i.e., the ability to process the full batch of collected indicators while maintaining contextual coherence across the categorisation task) was assessed. Additionally, suitability was evaluated based on (i) the capacity to retain the overall structure and semantic relationships among indicators throughout long inputs, (ii) the internal consistency of the resulting indicator groupings, and (iii) the stability of the categorisation logic across multiple interactions. Models that failed to preserve context over extended lists of indicators, or that produced fragmented or inconsistent categorisation schemes, were deemed unsuitable for the purposes of this study.
All the tested tools are based on LLMs, i.e., foundation models implemented as large-scale neural networks trained on extensive collections of text using self-supervised learning. These models generate and interpret language by predicting the most likely continuation of a text sequence based on the preceding context, allowing them to capture patterns and relationships in natural language. While the tested LLMs share common training principles and general capabilities, their practical behaviour in applied tasks depends strongly on factors such as model size, the amount of text that can be processed at once, deployment modality (local or cloud-based), access tier, and interaction paradigm. The AI models tested in this study were ChatGPT, DeepSeek, and GitHub Copilot.
ChatGPT is a web-based AI service accessed through an online conversational interface.11 In this study, it was used to support the categorisation of sustainability indicators by processing large sets of textual inputs and proposing groupings based on semantic similarity. An inherent characteristic of the tool is the non-deterministic nature of its outputs, meaning that identical prompts may lead to slightly different categorisation results across multiple runs. This variability required human review and consolidation of the proposed categories to ensure consistency and methodological robustness.
DeepSeek12 was tested as an open-source LLM deployed locally on the authors' machines using a reduced-size configuration.13 This setup allowed full control over the model version and execution environment, ensuring repeatability of the categorisation process and eliminating variability due to platform updates or changes in access conditions. However, the limited model size constrained the amount of text that could be processed simultaneously and reduced the model's ability to maintain contextual coherence when categorising large sets of indicators. As a result, DeepSeek's outputs required more extensive human intervention and consolidation compared to the web-based tools.
GitHub Copilot14 was used as an AI assistant embedded within a code editor environment. In this study, its integration with scripts and structured data files enabled the model to access a more stable and explicit contextual framework compared to conversational interfaces. By operating directly on code and data structures, Copilot was able to maintain contextual continuity across the categorisation workflow and to support iterative refinement of the indicator groupings. This interaction paradigm reduced context loss when processing multiple indicators and facilitated closer integration between the AI-generated categorisation and the Excel-based indicator portfolio.
The objective of the AI-based task was to instruct the AI to categorise the collected indicators according to the type of sustainability or functionality impact they address. In this work, we employed a structured prompting approach based on established prompt patterns shown to improve large language model performance. Specifically, following the prompt pattern catalog of White and colleagues,15 we used the Persona pattern to instruct the model to act as a domain expert, provided contextual information describing the data and its origin to ground the model's understanding of the problem, and defined a clear task to guide the expected output. This combination reflects recommended best practices in prompt engineering.
Once each indicator was assigned to its respective category, the portfolio structure could then be established. Indeed, the structure of the portfolio was designed to consolidate, within a single Excel sheet, each indicator together with its associated dimension, the original tool implementing it, and the category to which it belongs.
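As an illustration, the single-sheet layout described above can be mirrored in a simple tabular structure. A minimal Python sketch (the column names and example rows are our assumptions, not the authors' exact headers):

```python
import csv
import io

# Hypothetical column layout for the portfolio sheet; the actual headers
# used by the authors are not reported in the paper.
FIELDS = ["indicator", "dimension", "tool", "category"]

rows = [
    {"indicator": "CO2 emissions per kg of product",
     "dimension": "environmental", "tool": "tool 15",
     "category": "Emissions"},
    {"indicator": "Number of jobs created",
     "dimension": "social", "tool": "tool 27",
     "category": "Employment"},
]

# Serialise the portfolio to CSV, the format later fed to the AI tools.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
portfolio_csv = buf.getvalue()
print(portfolio_csv.splitlines()[0])  # indicator,dimension,tool,category
```

Keeping every indicator as one row with its dimension, source tool and assigned category is what makes the counting operations described next straightforward.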
• Total occurrences of each indicators' category within the portfolio;
• Number of tools considering a specific indicators' category;
• Total occurrences of each indicators' category within each dimension;
• Number of tools considering a specific indicator category within each dimension.
The calculations were based on a systematic counting of the information associated with each indicator, considering the indicator category assignments, the dimensions in which the indicators were classified, and the original tool to which they belong. The total occurrences of each indicator category were calculated using the formula =COUNTIF(range, criteria) (e.g., =COUNTIF(A2:A100, "Energy Consumption")), where column A contains the indicator categories assigned to each indicator. This calculation was also performed by selecting only the indicators (and their corresponding indicator categories) related to a specific sustainability dimension (e.g., environmental sustainability).
The number of tools considering a specific indicator category was calculated using the formula =COUNTA(UNIQUE(B2:B100)), applied to the rows filtered for the category of interest, where column B lists the original tools in which each indicator was implemented. Similarly, this calculation was repeated by selecting only the indicators (and their corresponding indicator categories) related to a specific sustainability dimension.
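The same counts can be reproduced outside Excel. A minimal Python sketch of the counting logic (field names and example rows are illustrative assumptions, not the authors' data):

```python
from collections import Counter

# Toy portfolio rows standing in for the real 986-indicator dataset.
portfolio = [
    {"category": "Energy Consumption", "dimension": "environmental", "tool": "tool 15"},
    {"category": "Energy Consumption", "dimension": "environmental", "tool": "tool 27"},
    {"category": "Energy Consumption", "dimension": "economic",      "tool": "tool 27"},
    {"category": "Employment",         "dimension": "social",        "tool": "tool 6"},
]

# Total occurrences of each category (Excel: COUNTIF over the category column).
occurrences = Counter(row["category"] for row in portfolio)

# Number of distinct tools using a category
# (Excel: COUNTA(UNIQUE(...)) on the rows filtered to that category).
def tools_per_category(category):
    return len({row["tool"] for row in portfolio if row["category"] == category})

# Same count restricted to one sustainability dimension.
def tools_per_category_in_dimension(category, dimension):
    return len({row["tool"] for row in portfolio
                if row["category"] == category and row["dimension"] == dimension})

print(occurrences["Energy Consumption"])          # 3
print(tools_per_category("Energy Consumption"))   # 2
```

Note that occurrences and distinct-tool counts can diverge, which is exactly why the study reports both statistics per category.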
This approach provided an overview of the prevalence and distribution of the indicators, offering a basis for identifying the most relevant ones to consider when developing methods and tools for sustainability and functionality assessment, especially screening tools for the early stages of innovation, where, for the purpose of simplification, a smaller number of overarching indicators would normally be considered.
All the tools were developed after 2007. Out of the 29 tools identified, 7 were developed specifically for advanced materials (AdMa) and 9 for nanomaterials, as indicated in the descriptions provided by the tool developers. Eight tools are related to sector-independent assessment, while three were developed for chemical assessment. Out of the two remaining tools, one is tailored to the food and agriculture sector, and the other is a generic tool designed to address emerging or newly arising problems.
The tools differ in terms of the assessment structures they employ, and they have been divided into the following classes: (i) questionnaires, i.e., assessments requiring responses to specifically formulated questions; (ii) tools, frameworks or approaches providing a selection of aspects, indicators, criteria, or parameters to be considered in the assessment, without explicitly presenting them in the form of questions; (iii) documents or guidelines that offer general directions for conducting assessments, without detailing specific questions, indicators, criteria, or parameters.
To give some examples, SUNSHINE Tier 1 (tool 27) and Early4AdMa (tool 6) are questionnaires. The JRC-SSbD framework, which defines a set of criteria for assessment, is classified under the second class of tools. The tool named in this study “Green_Chemistry_Nanotechnology” (tool 11) is a guideline which only offers general considerations when dealing with nanomaterials.
A pie chart illustrating the percentage distribution of the 29 tools among the three classes of assessment structure is shown in Fig. 2. The predominant assessment structure was the “set of aspects or indicators or criteria or parameters”, although both the format and the number of considered aspects, indicators or criteria varied considerably. The next most common assessment structure was the questionnaire, followed by the framework or guideline. Among the questionnaires, the number of questions varied as reported in Table 1. The only questionnaire comprising fewer than twenty questions is “WASP” (tool number 29); the scarcity of such concise tools highlights the need for the development of tools that require limited information for conducting safety and sustainability assessments. Similarly, the heterogeneity in the number of aspects, indicators, criteria, or parameters considered by the class “set of aspects or indicators or criteria or parameters” is presented in Table 2. Most of the tools in this class include between 21 and 60 aspects, indicators, criteria, or parameters.
Fig. 2 Percentage distribution of the 29 tools among the three classes of assessment structure detected and analysed in this study.
Table 1 Distribution of the questionnaire-type tools by number of questions

| Number of questions | Number of tools |
|---|---|
| <20 | 1/10 |
| 20–40 | 4/10 |
| 41–79 | 3/10 |
| >80 | 2/10 |
Table 2 Distribution of the tools in the “set of aspects or indicators or criteria or parameters” class by number of elements considered

| Range for set of aspects or indicators or criteria or parameters | Number of tools |
|---|---|
| <10 | 1/14 |
| 10–20 | 1/14 |
| 21–39 | 4/14 |
| 40–60 | 4/14 |
| 61–100 | 1/14 |
| >100 | 3/14 |
Furthermore, the literature review identified all the dimensions considered by the tools: safety, environmental sustainability, social sustainability, economic sustainability, functionality, regulation and governance.
Although safety indicators are not the focus of this study, safety, as a dimension, was thoroughly analysed in the literature review, as the EC-JRC SSbD framework considers safety to be a transversal aspect across all sustainability dimensions, i.e., safety and sustainability are closely related.1
Table A2 in the SI shows the dimensions assessed in each of the assessed tools. The analysis is qualitative as well as subjective, reflecting the understanding and perspective of the authors of this study. A colour-coding scheme is used to represent the degree of consideration given to each dimension within a tool:
• Purple: the dimension is the main focus/assessment purpose of the tool;
• Green: the dimension is well addressed;
• Light green: the dimension is addressed, but less so than the other dimensions covered by the same tool;
• Yellow: the dimension is partially addressed, meaning that more than one aspect of the dimension is considered, but not in a comprehensive or exhaustive manner;
• Light blue: only a single, specific aspect of the dimension is addressed, without considering other aspects;
• Red: the dimension is not considered.
In addition to the safety dimension, the three sustainability dimensions (i.e., environmental, social, and economic) and the dimension of functionality, two further dimensions of interest were detected by the literature review: regulation and governance. The regulatory dimension involves evaluating the existence of norms or legislation related to the subject under assessment. The governance dimension involves assessing a company's or organization's commitment to different sustainability principles in relation to the subject being evaluated. The boundaries between the different dimensions are not strictly defined, as certain aspects may relate to more than one dimension, leading to areas of overlap. This further demonstrates how the dimensions of sustainability are interconnected.
The tools were designed for three broad categories of main users: “regulators and policy-makers”, “innovators” and “enterprises/industries of any dimensions/any kind of organization/or sustainability appliers”. The obtained results are reported in Fig. 3.
The AI tools tested for the categorisation of the collected indicators included ChatGPT, DeepSeek (locally deployed using the open-source configuration with 7B parameters), and GitHub Copilot. After an initial attempt to provide the Excel file directly to the AI, it became evident that converting the file into a CSV format would facilitate the categorisation process, as CSV files are easier for the AI to process. All three tools were initially evaluated for their ability to generate thematic categories directly from the CSV representations of the indicator dataset.
For the three AIs, the prompt, developed after several iterations in collaboration with an IT expert, was: “Act as an expert in (environmental/social/economic/…) sustainability. This is a list of (environmental/social/economic/…) sustainability indicators. They are a collection of indicators taken from multiple papers. Some of them have the same meaning but are written with different phrasing. I am going to attach a file; each row represents a different indicator. Can you provide a list of indicators that group together the similar indicators that I have?”
In this way the AI was instructed to analyse the data contained in the previously developed Excel file and generate an initial indicator categorisation, grouping multiple indicators from the various tools into single categories when they assess the same or similar sustainability features, even across different sustainability dimensions, through a labelling process that applies a “common name” to each identified category.
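As an illustration of how such a prompt could be assembled programmatically, a minimal sketch follows; the prompt wording is taken from the paper, while the file contents, the helper name and the way the CSV is appended to the instruction are our assumptions:

```python
import csv
import io

# Prompt wording as reported in the paper, with the dimension left as a slot.
PROMPT_TEMPLATE = (
    "Act as an expert in {dimension} sustainability. "
    "This is a list of {dimension} sustainability indicators. "
    "They are a collection of indicators taken from multiple papers. "
    "Some of them have the same meaning but are written with different phrasing. "
    "I am going to attach a file; each row represents a different indicator. "
    "Can you provide a list of indicators that group together the similar "
    "indicators that I have?"
)

# Toy CSV standing in for the real indicator export.
indicator_csv = "indicator\nCO2 emissions\nCarbon footprint\nWater use\n"

def build_prompt(dimension, csv_text):
    """Combine the fixed instruction with the indicator list, one per line."""
    rows = [r["indicator"] for r in csv.DictReader(io.StringIO(csv_text))]
    return PROMPT_TEMPLATE.format(dimension=dimension) + "\n\n" + "\n".join(rows)

prompt = build_prompt("environmental", indicator_csv)
print(prompt.endswith("Water use"))  # True
```

The template fixes the Persona and task wording so that only the dimension and the indicator list vary between runs, which helps keep the categorisation logic stable across interactions.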
Among them, DeepSeek produced the most coherent and relevant category structures and was therefore selected to generate the preliminary set of categories. The local instance of DeepSeek proved suitable for identifying and labelling recurring sustainability concepts, but insufficiently robust to consistently assign nearly one thousand indicators to categories without loss of contextual coherence. Therefore, once the preliminary categories had been created, GitHub Copilot in VS Code was used to assign the indicators to the most appropriate categories. This step required the simultaneous inspection of the full CSV file, the complete list of categories, and their textual definitions, a task for which the code-editor-based interaction of Copilot proved more reliable than the DeepSeek model instance, in particular because Copilot's key strength lies in its ability to infer context directly from the source file, without requiring the user to explicitly describe the problem in a separate interaction.
To minimise the risk of inconsistencies between category generation and category assignment, the category labels and definitions generated with DeepSeek were fixed prior to the assignment phase and reused verbatim during the Copilot-assisted categorisation. GitHub Copilot was therefore not tasked with redefining or interpreting the categories, but only with assigning indicators to an already established and human-validated category set.
All AI-generated outputs were thoroughly reviewed. In cases where the indicators' category identified by the AI was deemed inappropriate, inaccurate, incorrectly labelled or insufficiently precise by the research team, manual corrections, adjustments, or label selection were performed. In cases where gaps were present, the indicators' category was carefully and directly selected by the research team. Additionally, when an indicator category was considered too specific or too broad with respect to the aims of this study, it was manually reassigned to a different indicators' category, or a new indicators' category was developed by the research team. Examples of such interventions by the human research team are provided in Table A3 in the SI. Considering that human expert intervention was applied to all AI outputs, and that only human expert knowledge could ensure the consistency of the indicator categories with the objectives of the study, the AI was not asked to produce categories at different levels of granularity.
The initial 986 indicators originate from 21 tools, rather than 29, because, as already explained, tools structured as frameworks or guidelines were excluded, as was the tool addressing only safety (i.e., tool 5). Moreover, the Licara InnovationSCAN (tool 16) questions were not included, as they represent a simpler and more qualitative version of those in the Licara nanoSCAN (tool 17).
From the original set of 986 indicators, the categorisation process resulted in 92 broader categories, called “indicators' categories”, each labelled with a name representing a more generic sustainability indicator. One of these categories was labelled “Functionality”; the majority of the indicators it contains were originally designed as social or economic sustainability indicators, according to the tools from which they originate. For this category, the research team carried out an additional categorisation process without the use of AI, resulting in 11 indicators' categories specific to the functionality dimension and bringing the total to 103 categories of indicators. The indicators' categories are reported in the tables referenced in the following section.
Both evaluations are reported in Table A4 in the SI, where the ranking of the indicators' categories differs between the two evaluations. For example, “Corporate Social Responsibility” is the most frequently mentioned indicators' category, with 57 occurrences. However, as 32 of these are counted within a single tool (tool number 28), it ranks 20th when considering the number of tools that use it, which is only six out of twenty-one. In contrast, “Waste Production and Management”, “Employment” and “Innovation and R&D” are the indicators' categories adopted by the greatest number of tools: they appear in 12 different tools, with a total of 37, 34 and 22 occurrences, respectively. The first nineteen indicators' categories used at least once by the highest number of different tools are: “Waste Production and Management”, “Employment”, “Innovation and R&D”, “Emissions”, “Market Dimension and Application Potential”, “Resource Efficiency”, “Energy Consumption”, “Supply Chain Traceability”, “Impacts on Local Communities”, “Circular Economy”, “Work Fairness”, “Workplace Conditions”, “Water Consumption”, “Consumer Benefits”, “Climate Change”, “Critical Materials”, “Data Management and Transparency”, “Social Improvement” and “Wages and Salaries”.
Table A5 in the SI identifies the dimension(s) to which each category of indicators belongs and their relative occurrences. The occurrence of each indicators' category within a specific dimension depends on the explicit inclusion of the original indicators in that dimension by the tools.
However, as highlighted in the literature review, some tools are specifically designed to assess a single dimension, and this is particularly true for the social dimension. Consequently, these tools often replicate the same indicators' categories multiple times, leading to a disproportionate increase in the occurrences of the indicators' categories associated with a particular dimension. As a result, the previously presented statistics do not reflect an evenly distributed relevance among the indicators used to evaluate the different dimensions. In particular, if we were to rely solely on these data, indicators' categories pertaining to the economic dimension would be excluded from those considered the most relevant. Therefore, the same statistical analysis (total occurrences of the indicators' category in the tools and number of tools per indicators' category) was conducted for each dimension. The results are shown in Fig. 4 for the environmental dimension, in Fig. 5 for the social dimension and in Fig. 6 for the economic dimension. For the environmental dimension, the three indicators' categories used by the highest number of tools are “Emissions”, “Waste Production and Management” and “Resource Efficiency”, each used by eleven tools; “Energy Consumption” follows, mentioned in ten different tools. For the social dimension, the indicators' category used by the highest number of tools is “Employment”, mentioned in eleven different tools. This is followed by “Impacts on Local Communities”, mentioned in nine different tools, and by “Work Fairness” and “Workplace Conditions”, both mentioned in eight different tools. However, “Work Fairness” shows a total of 28 occurrences compared with the 21 recorded for “Workplace Conditions”. For the economic dimension, the indicators' categories considered in the highest number of tools are “Market Dimension and Application Potential” and “Innovation and R&D”, mentioned in eight and seven different tools, respectively.
Next, “Other Costs” ties with “Manufacturing Costs”, both mentioned in six different tools; however, the former shows a total of 29 occurrences, whereas the latter accounts for 19.
Fig. 4 Total occurrences and number of tools per indicators' category addressing the environmental sustainability dimension.
Fig. 5 Total occurrences and number of tools per indicators' category addressing the social sustainability dimension.
Fig. 6 Total occurrences and number of tools per indicators' category addressing the economic sustainability dimension.
Concerning the functionality dimension, the related statistical data are reported in Table A6 in the SI and in Fig. 7. Among the “Functionality” indicators' categories, only two, “Durability” and “Consumer Needs”, are used by more than three different tools.
Fig. 7 Total occurrences and number of tools per “Functionality” indicators' category addressing the functionality dimension.
With respect to the regulation and governance dimensions, the statistical data analysis is presented in Fig. 8 and 9, respectively.
Fig. 8 Total occurrences and number of tools per indicators' category addressing the regulation dimension.
Fig. 9 Total occurrences and number of tools per indicators' category addressing the governance dimension.
The indicators' category “Chemicals/Materials within the Scope of Available Legislation” also appears in the environmental and social sustainability dimensions while “Data Management and Transparency” also appears in both social and economic dimensions. Similarly, in the governance dimension, the indicators' categories “Conflict Management”, “Other Costs”, and “Stakeholder Engagement” are also present in the other two sustainability dimensions.
From the analysis, it is evident that governance is the dimension least considered in the tools, followed by functionality and then regulation. As a matter of fact, the governance and regulation dimensions are not mentioned in the JRC-SSbD Framework and Methodological Guidance. SSbD as a whole, however, is a form of prevention-based risk governance, and aligning SSbD approaches with regulatory requirements is essential for ensuring their uptake and application by industry. Functionality analysis is also not explicitly required in the JRC-SSbD framework, but the JRC-SSbD Methodological Guidance suggests considering it in the preliminary identification of ‘hotspots of concern’ along the life cycle of the chemical, material or product under assessment.16
Functionality is explicitly addressed in SUNSHINE Tier 1 (tool number 27), in the “AdMa_overview” (tool number 1) and it is also discussed in terms of challenges, progress, and opportunities of nanomaterials in “Green Chemistry Nanotechnology” (tool number 11). The JRC-SSbD framework (tool number 15) explicitly states that functionality needs to be considered. Consequently, these are the four tools for which Table A2 in SI does not display the colour red in the “Functionality” column. However, among these, the SUNSHINE Tier 1 tool is the only one that treats functionality as an individual dimension and has developed specific “functionality indicators” to assess it. In contrast, the other tools explicitly incorporate functionality assessment within the broader context of the three main dimensions of sustainability. In some of the remaining tools, functionality is implicitly assessed through indicators associated with the three main dimensions of sustainability.
With regard to the main users, the analysis showed that the majority of the tools were developed to help “enterprises/industries of any dimensions/any kind of organization/or sustainability appliers” in decision-making processes. This underscores the need for companies to develop greater awareness of the concept of sustainability and of their sustainability impacts, enhancing safe and sustainable production processes with strong competitive potential. The next most addressed target users are “innovators” and “regulators and policymakers”, respectively.
It is interesting to note that, regarding the life-cycle stages considered in the assessments, the “cradle-to-grave” perspective is the most commonly recommended, aligning with a life cycle thinking approach. However, only the following tools explicitly differentiate the assessment according to specific stages of the material's or product's life cycle: “Early4AdMa” (tool 6), “ivl_LCBROM” (tool 14), “SUNSHINE Tier 1” (tool 27), “Licara NanoSCAN” (tool 17), “Licara InnovationSCAN” (tool 16), “Screening MCDA NANORIGO” (tool 21) and “SUNRISE_WP3” (tool 26). A further analysis could involve differentiating the indicators according to life-cycle stages, clarifying the point at which each indicator should be considered, thereby facilitating their interpretation and application in tools covering the entire life cycle.
An important issue emerged during this work: the various tools analysed rely on a wide range of approaches and elements (ranging from aspects and criteria to parameters, questions, and guidelines) to guide the assessment process. However, a lack of agreement regarding the meaning and use of the terms aspect, criterion, indicator and parameter across different tools was highlighted. An example of this inconsistency is the differing use of the term criteria in two tools: tool 13 and tool 21. Tool 13 defines environmental, economic, and safety as its three criteria; for each of these criteria, sub-criteria are identified (e.g. global warming potential, capital and flammability). Tool 21 uses the term criteria to indicate more detailed elements (e.g., greenhouse gas contribution, emissions), similar to what tool 13 calls sub-criteria. Conversely, tool 23 requires the consideration of “resource efficiency”, “resource criticality” and “dissipation and release” and calls them aspects. At the same time, tool 6, tool 16, tool 27 and tool 29 treat the aspects of tool 23 as indicators, using them to formulate the specific questions of their sustainability assessments. Despite this point of concern, all aspects, indicators, criteria, parameters, and questions used for assessment by the tools were collected during the literature review. The terminology issue was addressed through a categorisation process during the creation of the portfolio of indicators. However, this challenge highlights that standardisation of the terms used in the application of the JRC-SSbD framework would be beneficial for its correct operationalisation, and the SSbD community is encouraged to move in this direction.
This study organised the identified 986 indicators into 103 categories. During the categorisation process, some challenges emerged from the use of the AI. In many cases, its categorisation was inappropriate, as it failed to grasp the true meaning of the original indicators used by the tools. For instance, the AI was not always able to accurately distinguish whether an indicator referred to “Circular Economy” or to “Waste Production and Management”, a nuance that may be subtle for non-experts. Similarly, some indicators that were incorrectly categorised under the label “Climate Change” were reallocated to “Ozone Depletion”, as the two categories, although both related to emissions to air, refer to distinct environmental issues. Another example is that indicators related to “Persistent, Bioaccumulative, and Toxic” substances were often misclassified under “Eco-Toxicity”, despite the fact that both categories were developed and labelled by the AI itself. Additionally, the AI tended to generate overly specific categories for certain indicators, which were subsequently consolidated into broader categories by the research team. For instance, indicators such as “Impacts on Local Communities”, “Product Accessibility”, “Food Security”, “Local Water Access”, and “Local Health and Safety Improvement” were grouped together under “Impacts on Local Communities”, allowing for a reduction in the number of indicators' categories where appropriate. Conversely, other indicators were assigned to overly generic categories by the AI. For example, the class “Direct Costs” was further refined by the research team into more specific categories such as “Capital Costs”, “Maintenance Costs”, “Manufacturing Costs”, “Material Costs”, “Personnel Costs”, “Revenues”, “Transportation Costs”, “Use Costs”, “Waste Management Costs”, and “Other Costs”. In this case, the broad category “Direct Costs” was deemed too generic and therefore inadequate for a comprehensive sustainability assessment.
Furthermore, some category names were deemed unclear and were therefore revised by experts to more accurately reflect their content. For example, “Corporate Governance” was renamed “Corporate Social Responsibility” to also encompass elements that the AI classified as “Ethical Practices”. Similarly, “Regulatory Scope” was redefined as “Chemicals/Materials Within the Scope of Available Legislation” to provide a clearer description. Moreover, in a few rare cases, the AI left gaps in the categorisation process. These gaps were subsequently filled by the research team, who reviewed the original indicators and assigned them to the most appropriate categories developed during the whole process. Considering this deep expert-based assessment of each indicator category, the reported statistical results should not be affected by the specificity of the indicator categories (e.g., the risk that specific categories exhibit lower relevance values solely due to their specificity). Indeed, if such categories were developed, it means they relate to features that could not be appropriately grouped within other indicator categories (e.g., water acidification is not simply water pollution). Consequently, if fewer tools include them, this consistently reflects their lower relevance according to the methodological approach adopted in this study.
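The expert curation described above (merging overly specific AI categories, renaming unclear ones) can be thought of as a remapping table applied on top of the AI's proposed labels. The sketch below encodes only the examples discussed in the text; the mapping is illustrative and is not the study's full 103-category scheme.

```python
# Expert-curated remapping applied on top of AI-proposed category labels.
# Entries mirror the examples discussed in the text; this table is
# illustrative, not the complete harmonisation used in the study.
EXPERT_REMAP = {
    # Overly specific categories consolidated into a broader one
    "Product Accessibility": "Impacts on Local Communities",
    "Food Security": "Impacts on Local Communities",
    "Local Water Access": "Impacts on Local Communities",
    "Local Health and Safety Improvement": "Impacts on Local Communities",
    # Renamed/broadened categories
    "Corporate Governance": "Corporate Social Responsibility",
    "Ethical Practices": "Corporate Social Responsibility",
    "Regulatory Scope": "Chemicals/Materials Within the Scope of Available Legislation",
}

def harmonise(ai_label: str) -> str:
    """Map an AI-proposed category to its expert-harmonised name;
    labels not in the table are kept unchanged."""
    return EXPERT_REMAP.get(ai_label, ai_label)

print(harmonise("Food Security"))       # -> Impacts on Local Communities
print(harmonise("Energy Consumption"))  # unchanged
```

Splits in the opposite direction (e.g. “Direct Costs” into “Capital Costs”, “Manufacturing Costs”, etc.) cannot be expressed as a label-to-label table, since they require re-reading each original indicator, which is precisely where human expertise was indispensable.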
Indeed, while the AI proved useful in the initial stages, relying solely on its output would have resulted in numerous inaccuracies. The vast majority of the work required the intervention of the research team, demonstrating the essential role of human expertise in ensuring the accurate and meaningful categorisation necessary for this task.
Findings from the statistical data analysis reveal that, among the resulting 103 indicators' categories, there are some that do not fall within a single, well-defined dimension, as they span across multiple dimensions. For instance, “Animal Welfare” is predominantly associated with the social sustainability dimension, although it also appears within the environmental sustainability dimension. Conversely, “Waste Production and Management” and “Water Consumption” are primarily addressed under the environmental sustainability dimension but are also referenced in the social sustainability dimension. It is worth noting that three indicators' categories fall under the regulatory dimension: “Chemical/material within the scope of available legislations,” “Data management and Transparency,” and “Recognised techniques for characterisation and exposure estimation.” Among these, the first two are also considered in other sustainability dimensions. Specifically, “Chemical/material within the scope of available legislations” is also considered within the environmental and social sustainability dimensions, while “Data management and Transparency” is considered within both the social and economic sustainability dimensions. Similarly, four indicator categories fall under the governance dimension: “Conflicts Management,” “Other Costs,” “Corporate Social Responsibility,” and “Stakeholder Engagement.” With the exception of “Corporate Social Responsibility,” all these indicators' categories are primarily associated with other sustainability dimensions, specifically the social or economic ones. Therefore, nearly all indicator categories within the regulation and governance dimensions are also addressed within the main sustainability dimensions. 
Since the majority of the indicators' categories in these two dimensions are already represented within the core sustainability dimensions, they may not be considered relevant for a preliminary qualitative assessment in the early stages of product development.
The statistical data analysis generated a ranking of the relevance of each indicator based on a numerical score, facilitating the selection of the most relevant indicators for the development of new methods. This is particularly important for simplified tools for the early stages of innovation, which need to consider fewer, higher-level indicators. However, since some tools are based on or reference other existing tools, the results obtained in this study may be influenced by redundancies that could introduce systematic bias.
The proposed portfolio of sustainability and functionality indicators for SSbD serves as a valuable inventory for developing new simplified and cost-effective assessment methods and tools, which is much needed, especially for the early stages of product development. The portfolio can help users to effectively understand the sustainability and functionality indicators used in impact assessment, and to facilitate the selection of relevant indicators depending on the specific objectives of their assessment. This is further supported by the results from the statistical data analysis of indicators' categories within the portfolio, which can help to prioritise indicators for developing assessment tools with different levels of data-requirements and tools that also incorporate the regulation and governance dimensions.
Furthermore, the portfolio makes an important contribution to supporting the operationalisation of the JRC-SSbD framework, particularly in guiding its application across different assessment tiers and informing which methodological gaps need to be addressed. Such gaps include, in particular, the lack of clearly defined indicators for assessing social and economic sustainability, and the practical limitations in conducting comprehensive assessments such as LCA, Social Life Cycle Assessment (S-LCA), and Life Cycle Costing (LCC) when the available datasets are limited and/or fragmented. Finally, the portfolio can substantially help companies, especially SMEs, to determine which information they need to collect for the sustainability and functionality assessment of their products, already in the early stages of product development, which can reduce their R&D&I costs and increase their competitiveness in the transition towards a greener economy.
| This journal is © The Royal Society of Chemistry 2026 |