Dr. AJ Kadhim-Saleh is a community family physician and clinic owner in Toronto. Dr. Kadhim-Saleh developed Pippen AI so that family physicians could claim back their time, feel supported, and have a great day at work. His mission is to help physicians discover the wonder of AI, which is poised to revolutionize how physicians deliver care.

Dr. Kadhim-Saleh, we’re hearing from all corners that AI is poised to significantly impact physician practice. Where do you think the biggest changes are going to take place?
The greatest change will be a drastic reduction in administrative tasks. Having spent 12 years in university and over $250,000 in tuition fees, I thought my time as a physician would be primarily focused on direct patient care. But the truth is, I spend a third of my time on paperwork. This is extremely inefficient, and we need to address this problem. Thankfully, AI technology is already changing this. Our company, Pippen AI, is one of several companies leveraging this technology to help physicians reduce their administrative burden through ambient AI scribes.
Beyond charting, there are many areas of the paperwork problem that remain to be solved. From referrals and forms to inbox management, doctors need more support to eliminate non-clinical work. Powered by large language models, algorithms, and agentic workflows, AI is well positioned to solve this problem.
Looking past admin work, AI is also poised to make a big impact on clinical decision support, personalized medicine, drug discovery, and more. This is exciting because the technology is supercharging research and development, enabling increased productivity and breakthroughs that were unimaginable a decade ago.
We know that administrative burden is challenging physicians across the country in different ways. What are going to be the most immediate changes and opportunities?
The most immediate changes and opportunities will focus on reducing administrative tasks. These are tedious tasks that can be done better, faster, and more accurately with AI.
Think of the steady stream of faxes that doctors’ offices receive. These faxes need to be analyzed, categorized, and added to patient charts. This task is incredibly tedious and prone to human error. In our clinic, we spend many hours on this task — hours we would much rather spend directly on patient care. This task could be done automatically with the right AI-powered technology.
Similarly, the doctor’s inbox, especially in family medicine, is a massive drain on time and energy — and a huge contributor to burnout. By sorting messages, identifying duplicates, and determining which messages are non-actionable, AI could help doctors summarize and triage their inbox to eliminate noise and highlight critical messages.
Are there some specialties that will be more affected than others?
I believe AI is already affecting all specialties — almost all industries, in fact. Whenever I obtain consent from my patients to use our AI scribe, they often tell me that they themselves also use AI in their personal and business activities. Given this, my advice is that every doctor should prioritize learning about AI and incorporating it into their daily lives.
In your opinion, what are the biggest downside risks to the widespread use of AI tools?
Like most revolutionary technologies, AI can have serious negative outcomes alongside the positive. In medicine, of course, there could be misinformation and inappropriate use of this technology that could impact patient care in a negative way. There need to be safeguards, disclaimers, and informed consent around the use of AI. But most importantly, we need to be educated about AI and become experts at using it. We need to remember to exercise our critical appraisal and judgement, which I believe is our most important asset.
What should we be doing as physicians now to prepare ourselves for the emergence of AI tools in healthcare practice?
Education, exposure, and utilization. We need to learn about AI, and we need to use AI. This will allow us to wield the technology in a positive way to protect ourselves and our patients. AI may seem like a mystery, but it can be demystified through exposure.
Like the internet, AI arrived suddenly and is already everywhere. It is present not just in medicine, such as in the use of an AI scribe, but also in the algorithms that power technology across society. Banks use AI to detect fraud. Insurance companies use it to decide on premiums. Pharmaceutical companies are using it for drug discovery. Once we demystify AI, then we can learn about it through experience and application.
Of course, the best way for physicians to learn about AI is to use it. Whether using it in their clinical or non-clinical work, or even in their personal life, experience is the best teacher.
How are you using AI currently in your own personal practice and what application has been the most helpful?
We’ve been using AI consistently for the last two years, and I am becoming an expert in its application. Specifically, I use our AI scribe to chart, draft referral letters, and help me fill out forms. I also use AI to learn, whether generating a wide differential diagnosis for a complex case or reviewing a condition and its management. It’s important to emphasize that AI can make mistakes, so it is critical to verify the information and, again, to retain and exercise our clinical judgement as physicians.
How concerned should I be about using tools that are new to the market in terms of data security, patient consent, confidentiality, reliability and accuracy?
It is incredibly important to use tools that have been verified. If you are in Ontario, there is a list of verified solutions as part of the OntarioMD Vendor of Record. Outside Ontario, organizations such as Canada Health Infoway have been providing vital leadership by vetting vendors for privacy, security, and reliability.
If solutions are not on these lists, then the physician must do their own research to verify how data is handled. For example, where is the data stored and how is it managed? Is the solution training on patient data? In terms of patient consent, organizations such as the Canadian Medical Protective Association (CMPA) provide excellent guidance on how to obtain consent from patients. I think choosing a certified vendor would provide medico-legal protections. Following advice from the CMPA is also critical. Aside from that, this technology is incredibly helpful for both physicians and patients. Speaking with colleagues, it’s clearly making a huge positive impact.
How do you feel about the ethics of AI potentially making differential diagnoses and treatment plans? Are there any potential blind spots we should be concerned about?
Like any tool, AI can have positive and negative aspects. Think of social media as an example — it has been incredibly helpful, but there are also negative impacts.
Let’s consider a complicated differential diagnosis. What tools do you have at your disposal to broaden your differential diagnosis? Google is an option, though it can yield irrelevant results and potentially misleading information. Reputable sources, such as UpToDate, are great because they provide articles to review, but as of now they do not provide information specifically relevant to a case. Some of my colleagues post on Facebook and WhatsApp (with the patient’s permission) to ask their peers, and although this option can be helpful, my observation is that we get inconsistent information, and of course there are concerns about patient privacy.
So, none of the tools we have are perfect. Similarly, AI is an imperfect tool, like all the rest. It has its limitations but can be incredibly helpful. What is important is that we verify and check sources on critical items (e.g., medication doses). In addition, we must always exercise our judgement as physicians. After all, we are the ones seeing the patient in front of our eyes. AI is just a tool to help us, but, like any tool, it should be used appropriately.
Are AI robots going to take over anyone’s job?
I do not think so. My argument is that AI and humans have complementary skills. I am a true believer that a team of physicians and AI working together is superior to a team of just AI. As physicians, we have capabilities not accessible to AI. Take our clinical judgement: to my knowledge, no AI has been shown to possess it. On an emotional level, we have the compassion, empathy, and human touch that are foundational to patient care.
On a final note, never underestimate human ingenuity, creativity, and drive.