Progress summary

NEWSLETTER

During its first reporting period (M1–M18), the FAITH project made significant and well-coordinated progress, establishing a strong foundation for developing and validating a cross-sectoral framework for AI trustworthiness. The first review confirms that the project is fully aligned with its Description of Action. All 16 planned deliverables were submitted on time and to a high standard, demonstrating strong internal coordination, effective quality control, and robust scientific and technical contributions across all Work Packages.

A central achievement of this period is the delivery of the first version of the FAITH AI Trustworthiness Assessment Framework (FAITH AI_TAF). This framework represents a key conceptual and methodological milestone, providing a multidimensional structure for evaluating the trustworthiness of AI systems across technical, ethical, societal, and human-centric dimensions. It integrates a three-layered structure and extends it to capture broader vulnerabilities related to fairness, bias, explainability, robustness, and uncertainty. Complementing this, the project released the first versions of the Data Management Plan, the Legal & Ethical Impact Assessment, and the methodological foundations supporting the pilots, ensuring compliance with emerging regulatory, governance, and open science requirements.

The review acknowledges FAITH’s strong performance in project management, risk monitoring, and stakeholder engagement, highlighting in particular the active role of the External Ethics Advisory Board and the systematic dissemination and communication activities. With more than 25 public presentations, multiple scientific articles, and a well-maintained web presence, the project demonstrates significant outreach.

A major accomplishment of this period is the successful initiation of the first phase of the seven Large-Scale Pilots, which operationalize FAITH’s trustworthiness framework across diverse, high-impact domains: media, transportation, education, drones, industrial processes, healthcare, and active ageing. Each pilot has completed its preparatory phase, including stakeholder analysis, data collection planning, initial AI system development, and integration of ethical and privacy-by-design principles.

Across pilots, early insights demonstrate the applicability and relevance of trustworthiness dimensions such as fairness, explainability, robustness, usability, and accountability. The pilots are not only technically aligned but also socially grounded, particularly in sensitive domains such as media and healthcare, where responsible AI deployment is essential. Initial AI models have been delivered in several pilots, and pilot partners have established clear pathways for the upcoming replication phase. The consortium’s cross-pilot coordination mechanisms ensure consistency, comparability, and efficient knowledge transfer, which are crucial for achieving FAITH’s objective of generalizable, domain-independent AI trustworthiness methodologies.

Overall, the project is progressing very satisfactorily and is well positioned to meet its scientific, technological, and societal objectives. FAITH has moved from framework development to pilot activities, showing significant potential for impactful outcomes in the next phase.

Author(s)

Dimitrios Fotiadis

[ Project Coordinator ]
FIEEE, FEAMBES, FIAMBE

Prof. of Biomedical Engineering,
University of Ioannina

FORTH - Head of the
Unit of Medical Technology and
Intelligent Information Systems

Editor-in-Chief, IEEE
Journal of Biomedical & Health Informatics