The evolving landscape of Artificial Intelligence (AI) seeks to emulate human behaviour
within socio-technical systems, emphasizing AI engineering to supplant human
decision-making. However, an excessive focus on AI system autonomy raises con-
cerns such as bias and ethical lapses, eroding trust and diminishing performance.
Such a lack of human integration in the AI decision-making loop may, in turn, leave
organisations exposed to more cyber risk than these tools and techniques are intended to mitigate.
Efforts to address these challenges involve incorporating ethical considerations and
leveraging tools such as IBM's Fairness 360 and Google's What-If Tool to enhance fairness.
Trust in AI technology is complex, involving human acceptance, performance, and
empowerment. Trustworthiness is scrutinized in relation to legal, moral, and ethical
principles, aligning with human behavioural patterns and organizational responsibili-
ties. The proposed framework integrates research from diverse disciplines to ensure
the trustworthiness of AI-driven decision support systems, accommodating both the
needs of human users and their own perceptions of trust. It extends the NIST AI Risk
Management Framework by considering users’ social attitudes and values as well as
business objectives throughout the risk management cycle. The framework advocates
co-creation and human experimentation processes at all stages, fostering continuous
improvement of trustworthiness to establish 'trustworthy' AI systems that are ultimately
and optimally trusted by users.