An interview with Prof. Gregory Mentzas


Why is FAITH considered a straightforward approach to trustworthy AI?

AI is a key driver for digital transformation, but regulatory and societal scrutiny is required to address and mitigate the associated risks and to guarantee the trustworthiness of AI models and applications. While traditional AI evaluation focuses on accuracy, trustworthiness also depends on transparency, security, fairness, and privacy. Ensuring AI trustworthiness therefore requires a lifecycle approach, from data preparation to deployment and monitoring. Key risks include bias, outdated datasets, and model unpredictability, and addressing them requires balancing privacy, explainability, and fairness.

FAITH will introduce a human-centric Trustworthiness Assessment Framework (FAITH AI_TAF) that integrates regulatory requirements, risk management, and real-world validation. Large-scale pilots across seven critical sectors (e.g., healthcare, media, robotics) will refine AI trustworthiness measures and promote cross-domain standardization. By extending risk management to socio-technical AI challenges, FAITH will enhance AI governance, supporting the development of ethical, legally compliant, and trustworthy AI systems.


What makes FAITH different from other AI trustworthiness initiatives in Europe?

FAITH brings two key innovations beyond the current state of the art. First, each large-scale pilot will enhance existing methods at both technical and organizational levels within the human-centric, risk-management-driven FAITH AI_TAF framework. Technical advancements will improve the performance of ML methods, such as real-time fleet assessment in the transport pilot. Organizational improvements will involve defining trustworthy AI requirements, categorizing threats, implementing risk mitigation measures, and leveraging FAITH tools like the AI Model Passport. This will enable the delivery of robust, trustworthy AI systems.
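To make the organizational side of this workflow concrete, the following is a minimal sketch of the kind of risk register such a framework might maintain. It is purely illustrative: the class names, threat categories, and severity scale are assumptions made for the example, not the actual FAITH AI_TAF schema.

    from dataclasses import dataclass, field

    # Hypothetical example: names and categories below are illustrative
    # assumptions, not the FAITH AI_TAF schema.

    @dataclass
    class Threat:
        name: str                 # e.g. "training-data bias"
        category: str             # e.g. "fairness", "privacy", "security"
        severity: int             # 1 (low) .. 5 (critical)
        mitigations: list = field(default_factory=list)

    @dataclass
    class RiskRegister:
        system: str
        threats: list = field(default_factory=list)

        def open_risks(self, min_severity=3):
            """Threats at or above the threshold that lack any mitigation."""
            return [t for t in self.threats
                    if t.severity >= min_severity and not t.mitigations]

    register = RiskRegister(system="transport fleet-assessment model")
    register.threats.append(Threat(
        "outdated sensor dataset", "reliability", severity=4,
        mitigations=["scheduled retraining", "data-drift monitoring"]))
    register.threats.append(Threat("membership inference attack", "privacy", 3))

    for t in register.open_risks():
        print(f"Unmitigated: {t.name} ({t.category}, severity {t.severity})")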

Second, the FAITH AI_TAF itself will evolve through the integration of innovative tools, including the AI Model Passport and System Trust Modeller (STM). Pilot projects will refine the FAITH framework by providing domain-specific insights into trustworthiness needs and threats. By incorporating these insights into a structured risk analysis approach, FAITH AI_TAF will provide an actionable framework of tools and guidelines, supporting the development of transparent, traceable, and trustworthy AI across critical sectors.
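The interview does not detail the AI Model Passport's contents, but the general idea of a machine-readable record that travels with a model can be sketched roughly as follows; every field name and value below is an assumption made for illustration, not the project's actual format.

    import json
    from dataclasses import dataclass, asdict

    # Illustrative only: the real AI Model Passport is defined by FAITH,
    # and its actual fields may differ from these assumed ones.

    @dataclass
    class ModelPassport:
        model_id: str
        version: str
        training_data: str        # dataset name and version used for training
        intended_use: str
        evaluation: dict          # metric name -> measured value
        risk_report: str          # pointer to the AI_TAF risk analysis

    passport = ModelPassport(
        model_id="fleet-assessment-model",
        version="1.2.0",
        training_data="fleet-telemetry-2024Q4",
        intended_use="real-time fleet condition assessment",
        evaluation={"accuracy": 0.91},
        risk_report="ai-taf/transport/risk-analysis-v3",
    )

    # Serializing the passport yields a traceable provenance record
    # that can accompany the model through deployment and audit.
    print(json.dumps(asdict(passport), indent=2))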


How will FAITH’s solution be useful for other research and innovation projects?

The risk-based FAITH AI_TAF could serve as a foundational tool for future research projects across various domains. By establishing a structured approach to assessing risks related to AI reliability, fairness, transparency, and security, the framework can be adapted to different industries. Future research can build on it to develop domain-specific trust assessment methodologies, integrating industry standards and regulatory requirements to ensure compliance and the ethical deployment of AI technologies.

Additionally, the framework can be expanded to facilitate interdisciplinary research on AI ethics and governance. Researchers studying human-AI interactions, algorithmic bias, or explainability can leverage this structured process to quantify and compare trustworthiness metrics across different AI models (illustrated in the sketch at the end of this answer). This can lead to more standardized benchmarks for evaluating AI safety and effectiveness, supporting collaborative efforts between academia, industry, and policymakers. By incorporating evolving risk factors, such as adversarial attacks or data shifts, future studies can refine and enhance AI trust validation techniques to address emerging threats.

Beyond technical validation, the framework can be instrumental in researching how public trust in AI can be fostered. Researchers in social sciences and behavioral studies can use it to analyze how different risk mitigation strategies impact user perceptions of AI systems, contributing to guidelines for responsible AI communication and user-centered design. Moreover, as AI regulations continue to evolve globally, the framework can support research in the legal and compliance domains, ensuring that AI trustworthiness assessments align with emerging policies and standards. This multidisciplinary applicability ensures that the framework could remain a valuable asset in the pursuit of responsible and ethical AI development.
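As a rough illustration of what quantifying and comparing trustworthiness metrics across models could look like, the sketch below ranks two hypothetical models with a weighted composite score. The metrics, weights, and values are invented for the example; they are not FAITH benchmarks.

    # Hypothetical trust-score comparison; all numbers are invented.
    candidates = {
        "model_a": {"accuracy": 0.92, "fairness": 0.81, "robustness": 0.70},
        "model_b": {"accuracy": 0.88, "fairness": 0.93, "robustness": 0.85},
    }

    # A risk-based profile weights each dimension by its criticality in the
    # target domain (fairness might dominate in education, robustness in
    # transport).
    weights = {"accuracy": 0.3, "fairness": 0.4, "robustness": 0.3}

    def trust_score(metrics):
        return sum(weights[m] * v for m, v in metrics.items())

    for name, metrics in sorted(candidates.items(),
                                key=lambda kv: trust_score(kv[1]),
                                reverse=True):
        print(f"{name}: composite trust score {trust_score(metrics):.2f}")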


As an example, how will the education LSP benefit from the FAITH AI_TAF?

Current science education reforms emphasize key competencies and inquiry-based learning, but high-stakes exams still dominate, reinforcing traditional teaching and assessment methods. These methods fail to evaluate students’ ability to think scientifically, solve complex problems, and engage in authentic STEM learning. AI-driven assessment tools offer a potential solution, continuously monitoring student progress, providing feedback, and supporting personalized learning paths.

The education Large Scale Pilot (LSP) of FAITH will evaluate the trustworthiness of AI-based student assessment in inquiry-based laboratory work. Our team at the Information Management Unit of ICCS, together with the Ellinogermaniki Agogi school, has developed an AI Learning Companion that provides real-time guidance, corrects misconceptions, and adapts to students’ needs, allowing teachers to act as facilitators rather than traditional instructors.
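The interview does not describe the companion's internals, so the following is only a schematic sketch of the kind of real-time corrective loop described above; the misconception patterns and hints are invented, and a real system would use an ML model rather than string matching.

    # Schematic, hypothetical sketch of a misconception-correcting feedback
    # loop; patterns and hints are invented for illustration.
    MISCONCEPTIONS = {
        "heavier objects fall faster":
            "Try dropping two objects of different mass and compare times.",
        "current gets used up":
            "Measure the current before and after the lamp: what changes?",
    }

    def companion_feedback(student_answer):
        """Correct a known misconception if detected, otherwise prompt the
        next inquiry step."""
        answer = student_answer.lower()
        for pattern, hint in MISCONCEPTIONS.items():
            if pattern in answer:
                return f"Let's test that idea together. {hint}"
        return "Good reasoning so far. What would you measure next?"

    print(companion_feedback("I think heavier objects fall faster."))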

Within the education LSP we will use and adapt the FAITH AI_TAF to ensure the trustworthiness of the AI Learning Companion, addressing concerns about its efficacy, explainability, ethical implications, and impact on teachers through large-scale implementation and adoption of the framework. Our pilot will involve 40 teachers and 1,000 students (ages 12–15) across multiple schools. Ultimately, this use case aims to validate the trustworthiness of AI-driven assessment as a sustainable, effective alternative to traditional student evaluation methods in STEM education.


Author(s)

Gregory Mentzas

[ National Technical University of Athens ]
Gregoris Mentzas is a full Professor of Management Information Systems at the School of Electrical and Computer Engineering, National Technical University of Athens (NTUA), and the Founder and Director of the Information Management Unit (IMU) at the Institute of Communication and Computer Systems (ICCS).