LLMs won’t see much use in the medical field

  • LLMs give generic responses that are often useless. Patients need pointed, personalized advice and a qualified opinion
  • The main job of doctors is to apply rigorous processes to individual cases and to catch rare outliers. LLMs struggle with this because they are NLP systems: they predict the most likely next word (see the sketch after this list)
  • LLMs are prone to hallucination: fabricating facts or data that do not exist and presenting them as true. There is no room for that kind of error in healthcare
  • LLMs are not explainable, and the same can be said for most AI models. They are black boxes: nobody can trace the exact process behind a given output. Should anything go wrong, the model cannot be held accountable, cannot explain its reasoning, and no participant can deduce what happened
  • Medical data is far sparser than the datasets these models are typically trained on
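
To make the next-word point concrete, here is a minimal sketch of what a causal LLM actually computes. It assumes the Hugging Face transformers library, with GPT-2 purely as an illustrative stand-in and a hypothetical clinical prompt; the model's entire output is a probability distribution over the next token, nothing more:

```python
# Minimal sketch: a causal LLM only scores candidate next tokens.
# Assumes the Hugging Face transformers library; GPT-2 is a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The patient presents with chest pain and"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Softmax over the final position yields the next-token distribution.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
```

Whatever fluency sits on top, the underlying operation is ranking likely continuations, which is the crux of the point above.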

That said, they may have some relevance here: LLMs have shown promise in clinical note-taking and patient medical record-keeping.