AI IN CLINICAL DECISION MAKING
Discussion Points
▪ Distinction between AI for clinical decision support and AI making clinical decisions
▪ Clinicians should retain final accountability for decisions, with AI serving as a tool
▪ Concerns about potential deskilling of the healthcare workforce through over-reliance on AI
▪ Data quality, privacy, and sovereignty issues when training AI models
▪ Need for proper guardrails and frameworks for AI use in healthcare
▪ Challenges in explaining "black box" AI decision-making processes
▪ Varying levels of trust in AI across different clinical applications
▪ Potential for AI both to address and to worsen health inequities

Key Actions
▪ Develop education programmes on effective AI prompt creation and critical assessment
▪ Create clear policies on AI use that maintain clinician accountability
▪ Establish frameworks for measuring AI's impact on clinical outcomes
▪ Consider sustainable funding models for AI implementation in healthcare
▪ Explore options for training on New Zealand-specific data while maintaining privacy
▪ Develop strategies to improve health literacy alongside AI literacy

Additional Notes
▪ Different types of AI have different applications (rule-based algorithms vs large language models)
▪ Patients are already using AI tools for health information, regardless of clinical guidance
▪ Cultural considerations around data use differ between individual- and community-focused societies
▪ Economic sustainability of AI in healthcare remains a significant challenge
▪ Existing regulatory frameworks may be adaptable, rather than requiring entirely new ones
▪ Offline AI systems may offer better privacy protection than cloud-based options
Table sponsored by
Supported by NAHSTIG