FAITH’s vision
The FAITH Project¹ (Fostering Artificial Intelligence Trust for Humans towards the optimization of trustworthiness through large-scale pilots in critical domains, GA: 101135932) aims to foster awareness and cooperation among the multi-disciplinary stakeholders who work on different aspects of trustworthiness (e.g. technical, societal, legal, business, standards) and at different stages of the AI system lifecycle. It aims to provide practitioners and stakeholders of AI systems not only with a comprehensive analysis of the foundations of AI trustworthiness, but also with an operational playbook for continuously assessing, refining and building trustworthy AI systems. More specifically, the FAITH AI_TAF (trustworthiness assessment framework) provides a trustworthiness management methodology that identifies trustworthiness threats and vulnerabilities (not only technical but also social and human), evaluates risks, and selects mitigation actions. The FAITH AI_TAF methodology follows a six-phase approach: (i) cartography, (ii) threat analysis, (iii) impact assessment, (iv) vulnerability analysis, (v) risk analysis, and (vi) countermeasures.
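Purely as an illustration, the six phases can be thought of as an ordered pipeline that threads a shared assessment context from one phase to the next. The sketch below is our own simplification; the enum members mirror the phase names above, but the function and field names are hypothetical and do not come from the FAITH deliverables.

```python
from enum import Enum

class Phase(Enum):
    """The six AI_TAF phases, in execution order (names from the article)."""
    CARTOGRAPHY = 1
    THREAT_ANALYSIS = 2
    IMPACT_ASSESSMENT = 3
    VULNERABILITY_ANALYSIS = 4
    RISK_ANALYSIS = 5
    COUNTERMEASURES = 6

def run_assessment(system_name, handlers):
    """Apply one handler per phase, in order, to a shared context dict.

    `handlers` maps a Phase to a function context -> context; phases
    without a handler are skipped (identity). This is an illustrative
    sketch, not the actual FAITH implementation.
    """
    context = {"system": system_name, "findings": []}
    for phase in Phase:  # Enum iteration preserves definition order
        context = handlers.get(phase, lambda c: c)(context)
    return context
```

For example, a pilot could register a threat-analysis handler that appends identified threats to `context["findings"]`, leaving the other phases as no-ops until they are defined.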

Figure 1. The vision of FAITH
To this end, several complementary tools have been developed to support this methodology:
- the FAITH AI TrustGuard, a checklist-based risk assessment tool for AI-based systems in isolation,
- the FAITH AI TrustSense, which focuses on profiling the trustworthiness of the AI participant, and lastly
- the FAITH AI Model Hub, which serves as a metadata repository for AI models and datasets, integrating the notions of an AI model passport and a data passport.
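To make the passport idea concrete: a model passport of the kind the Model Hub collects could be represented as a small, serializable metadata record. The sketch below is an assumption on our part; the field names are hypothetical and the actual FAITH schema may differ.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelPassport:
    """Hypothetical model-passport record; the real FAITH schema may differ."""
    model_id: str
    version: str
    training_datasets: list = field(default_factory=list)  # links to data passports
    intended_use: str = ""
    known_limitations: list = field(default_factory=list)

    def to_record(self):
        """Serialize to a plain dict for storage in a metadata repository."""
        return asdict(self)
```

A data passport could follow the same pattern, with dataset-specific fields (provenance, licensing, collection period) in place of the model-specific ones.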
The relevance, scalability, and impact of the FAITH AI_TAF will be assessed in seven Large-Scale Pilots (LSPs). These pilots were chosen to reflect technological, societal, and regulatory challenges in AI deployment across the following critical domains:
- LSP1 – Media (AI-driven intelligent coaching application that will automatically detect disinformation and hate speech),
- LSP2 – Transportation (AI-driven monitoring of the effectiveness of public transportation and the safety and security of passengers on board and in stations),
- LSP3 – Education (Plato, an AI Learning Companion for STEM education, which supports students by providing automated guidance and feedback in STEM laboratory courses),
- LSP4 – Robotics/Drones (AI-driven maintenance of port infrastructure based on data from underwater drones),
- LSP5 – Industrial Processes (Hybrid AI models for wastewater treatment),
- LSP6 – Healthcare (AI-based automated prostate and zonal segmentation, visualization of the MR examination with the segmentations as an overlay, and batch processing of examinations), and
- LSP7 – Active Ageing (AI-driven detection of behavioral patterns of elderly individuals from sensor data in their homes, alerting families to unusually long bathroom visits that may indicate a fall).
The FAITH AI_TAF will be evaluated on the AI-based systems proposed in the selected LSPs across these domains. Each LSP will use the FAITH AI_TAF to produce domain-specific risk profiles.
Author(s)
