
Bertil Rolandsson

Visiting research fellow


Trust and stakeholder perspectives on the implementation of AI tools in clinical radiology


  • Magnus Bergquist
  • Bertil Rolandsson
  • Emilia Gryska
  • Mats Laesser
  • Nickoleta Hoefling
  • Rolf Heckemann
  • Justin F. Schneiderman
  • Isabella M. Björkman-Burtscher

Summary, in English

Objectives: To define requirements that condition trust in artificial intelligence (AI) as clinical decision support in radiology from the perspective of various stakeholders, and to explore ways to fulfil these requirements.

Methods: Semi-structured interviews were conducted with twenty-five respondents: nineteen directly involved in the development, implementation, or use of AI applications in radiology, and six working with AI in other areas of healthcare. The questions were designed to explore three themes: development and use of AI, professional decision-making, and management and organizational procedures connected to AI. The transcribed interviews were analysed in an iterative coding process, from open coding to theoretically informed thematic coding.

Results: We identified four aspects of trust relating to reliability, transparency, quality verification, and inter-organizational compatibility. These aspects fall under two categories: substantial and procedural requirements.

Conclusions: Developing appropriate levels of trust in AI in healthcare is complex and encompasses multiple dimensions of requirements. Various stakeholders will have to be involved in developing AI solutions for healthcare and radiology to fulfil these requirements.

Clinical relevance statement: For AI to achieve advances in radiology, it must be given the opportunity to support, rather than replace, human expertise. Support requires trust. Identifying the aspects of and conditions for trust allows the development of AI implementation strategies that facilitate advancing the field.

Key Points:
  • Procedural and substantial demands that need to be fulfilled to foster appropriate levels of trust in AI in healthcare are conditioned on aspects related to reliability, transparency, quality verification, and inter-organizational compatibility.
  • Creating the conditions for trust to emerge requires the involvement of various stakeholders, who will have to compensate for the problem’s inherent complexity by finding and promoting well-defined solutions.


  • Sociology

Publishing year

European Radiology

Document type

Journal article




  • Health Care Service and Management, Health Policy and Services and Health Economy


  • Artificial intelligence
  • Clinical decision support systems
  • Organizations
  • Radiology
  • Trust




  • ISSN: 0938-7994