Futurecasting in Medicine — Where is AI Taking Us?

Philip Edgcumbe
Oct 19, 2023

I recently teamed up with Dr. Ricky Hu (MD, MASc) to explore some of the future applications of AI in critical care medicine. We shared our thoughts and findings via a talk at The Hospitalist & The Resuscitationist 2023 (HR2023) conference in Montreal, Canada. Our key messages are highlighted in this article. You can also see a full recording of our talk here.

Left: Talk title slide. Right: Talk intro slide explaining what Dr. Wiskar asked us to cover in our talk.

Our objectives for our talk were to:

  1. Demonstrate uses of AI in critical care.
  2. Review large language models (LLMs) such as ChatGPT and Med-PaLM2.
  3. Develop strategies for assessing whether a clinical AI tool is safe and reliable. Specifically, we want to give physicians a basic framework they can use when evaluating a clinical AI tool they might consider adopting in their hospital or clinic.

Left: We asked DALL·E (a text-to-image model) to generate some art representing doctors peering into the future. Right: We used DALL·E to generate some art reminding ourselves that AI can help get patients out of the hospital and into their homes.

Some of the key takeaways from our talk are below:

  • In critical care, AI can help predict patient instability, capture good echocardiography images, analyze the echocardiography images and even suggest a diagnosis and treatment plan.
  • GPT-4, a large language model, can diagnose exceedingly rare conditions. However, it cannot be used without supervision due to its tendency to hallucinate, i.e., make up information. Below are some quotes from a physician who has studied GPT-4 extensively:
  • The best way to engage with large language models such as GPT-4 is to: 1) try them out, 2) develop expertise in prompt engineering, and 3) help find their limitations. One example of that, shown below, was to compare the responses of physicians with those of a large language model chatbot.
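The prompt-engineering advice above can be made concrete with a small sketch. The function below is a hypothetical illustration (not a validated clinical tool, and not from our talk) of the idea that bundling a role, patient context, and explicit output constraints into a prompt tends to produce more focused, safer answers from a large language model than a bare question would.

```python
def build_clinical_prompt(question: str, patient_context: str) -> str:
    """Assemble a structured prompt for a large language model.

    Illustrative only: the role statement, context section, and
    uncertainty instruction are examples of prompt-engineering
    techniques, not a clinically validated template.
    """
    return (
        "You are assisting a critical care physician. "
        "Answer concisely and flag any uncertainty.\n"
        f"Patient context: {patient_context}\n"
        f"Question: {question}\n"
        "If the information provided is insufficient, say so "
        "rather than guessing."
    )


# Example usage (hypothetical scenario):
prompt = build_clinical_prompt(
    question="What is the most likely cause of hypotension?",
    patient_context="65-year-old male, post-op day 1 after bowel resection",
)
```

The same structured question could then be posed to both physicians and a chatbot, which is essentially the comparison approach described above.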

Strategies for analyzing AI healthcare tools include:

  • When analyzing an AI tool, consider the following: its input data, its outputs, the validation technique used, and the strengths and limitations of the tool.
  • When assessing an AI healthcare study or tool, we recommend the PROBAST framework. It helps identify the risk of bias as well as applicability to your clinical scenario.
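To make the PROBAST idea more tangible: the framework rates risk of bias across four domains (participants, predictors, outcome, and analysis), each judged low, high, or unclear. The sketch below is our own hypothetical helper, not part of the official PROBAST tooling; it encodes the standard combination rule that any high-risk domain makes the overall rating high, and any unclear domain (with no high) makes it unclear.

```python
# Hypothetical helper illustrating a PROBAST-style risk-of-bias summary.
# The four domains and the low/high/unclear judgements follow the
# published framework; the code itself is just an illustrative sketch.

DOMAINS = ("participants", "predictors", "outcome", "analysis")


def overall_risk(judgements: dict) -> str:
    """Combine per-domain judgements into an overall risk-of-bias rating.

    Any domain rated "high" makes the overall rating high; any domain
    rated (or left) "unclear" makes it unclear; otherwise it is low.
    """
    values = [judgements.get(domain, "unclear") for domain in DOMAINS]
    if "high" in values:
        return "high"
    if "unclear" in values:
        return "unclear"
    return "low"


# Example: a study with a sound cohort but a weak analysis section.
rating = overall_risk({
    "participants": "low",
    "predictors": "low",
    "outcome": "low",
    "analysis": "high",
})
```

Walking through the domains one at a time like this is a quick way for a clinician to structure a first-pass appraisal of an AI study before digging into the details.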

Concluding thoughts:

  • We can have fun with AI as well. The images below were generated by AI! Which specialty do you resonate with?
  • We agree with what has been said many times before. Namely, physicians who use AI will replace those who don’t. Furthermore, AI will improve patient understanding, access to care and the quality of care.

Our slides and a recording of our talk are available!

  • Click here to see a copy of our slides.
  • Click here to see a video recording of our presentation. Or, watch it via the embedded video below:

Highlights of preparing this talk included conversations with Ross, Vivek, and Katie. We are incredibly grateful to them for sharing their time and insights with us!

Thanks for your interest! We look forward to your feedback!

~ Philip Edgcumbe and Ricky Hu

Left: Philip Edgcumbe and Ricky Hu at Saint Paul’s Hospital in Vancouver, BC. Right: Philip Edgcumbe, Philippe Rola, and Ross Prager at the 2023 H+R conference.



Philip Edgcumbe

UBC Radiology Resident (MD, PhD) | Singularity University Faculty | Futurist | Entrepreneur