Abstract

The integration of Artificial Intelligence (AI) systems into critical decision-making processes necessitates robust mechanisms to ensure trustworthiness, ethical compliance, and human oversight. This paper introduces trustSense, a novel assessment framework and tool designed to evaluate the maturity of human oversight practices in AI governance. Building upon principles from trustworthy AI, cybersecurity readiness, and privacy-by-design, trustSense employs a structured questionnaire-based approach to capture an organisation’s oversight capabilities across multiple dimensions. The tool supports diverse user roles and provides tailored feedback to guide risk mitigation strategies. Its calculation module synthesises responses into maturity scores, enabling organisations to benchmark their practices and identify improvement pathways. The design and implementation of trustSense are grounded in user-centred methodologies, with defined personas, user flows, and a privacy-preserving architecture. Security and data protection considerations are integrated into all stages of development, ensuring compliance with relevant regulations. Validation results demonstrate the tool’s effectiveness in providing actionable insights for enhancing AI oversight maturity. By combining measurement, guidance, and privacy-aware design, trustSense offers a practical solution for organisations seeking to operationalise trust in AI systems. This work contributes to the discourse on the governance of trustworthy AI systems by providing a scalable, transparent, and empirically validated human oversight maturity assessment tool.