Dr. Divyang Patel explores novel pathways of cardiovascular monitoring technologies and how artificial intelligence plays a role.
For those that don't know me, I'm Div Patel. I'm one of the cardiac electrophysiologists here, and I'm going to touch on digital health and AI in cardiology. I think as a field, cardiology is at the forefront of rethinking how to deliver care, because we have a limited supply of labor, whether it's advanced practice providers, nurses, or physicians, and because of that limited labor force it's going to be important to innovate and transform our field with digital health and AI. I'll go over some topics, and please ask questions at the end if you have any. I have no disclosures.

To give a brief outline: the first thing I'm going to talk about is the diagnosis and treatment of arrhythmias and how that's evolved from the EKG to digital health, whether it's the Apple Watch or KardiaMobile. Second, designing novel care pathways and clinics of the future; I'll touch on two studies at the Brigham which I think are important about how to deliver care when you're not actually seeing patients in the clinic. And third, how artificial intelligence can help cardiology; I've included examples from all parts of cardiology, from intervention to imaging to EP to even cardiac surgery.

So think about how medical innovation has evolved in the era of digital health. If you look at the top, this is how traditional studies were done: physicians and scientists at the Brigham or Johns Hopkins or other academic medical centers design medications, tests, and devices; then you go through the clinical trial phase and the FDA approves it. Then we go through committees, health care payers decide whether the drug or the new product is worth it, and finally it's delivered to patients. And the clinical trial part is where we've now evolved to what I would say is where AI comes into play.
What I call artificial intelligence is trying to make a computer more humanlike, to think like a human, and that comes into all of these: tests, medical devices, and medications. AI can find drugs that humans could not find before. But if you look at the bottom, with direct-to-consumer products we're skipping some of the clinical trials. So when we talk about wearables, whether it's a Garmin watch, an Apple Watch, or a KardiaMobile, we're doing small studies. Some have FDA clearance, but many don't, and so it's hard to separate the signal from the noise. But it's evolving, and AI will play a role in both pathways, whether it's medical devices, medications, tests, or wearables.

If we look at atrial fibrillation, it remains the most common arrhythmia. It's difficult to detect unless symptomatic; we know there's a lot of afib we haven't captured, because many patients don't present with palpitations or get EKG screening. There are expensive treatments, including cardiac ablation, which we're all familiar with, and if left untreated the consequences can include atrial myopathy, strokes down the line, and large morbidity and mortality.

The gold standard is the EKG, so patients present to the ER with palpitations and you see the order for an EKG. It's an expensive test. Dr. Chuufo told me the charge for his EKG was hundreds of dollars that insurance didn't cover. So the EKG is an expensive test, and I think that will evolve as time goes on. The U.S. Preventive Services Task Force says right now the data is insufficient to screen for atrial fibrillation. So if we ask, should we get EKGs on everyone over the age of 50 because we might find occult afib, there's not great evidence for it; the evidence is incomplete. But if you look at the bottom, and this is taken from the Annals of Internal Medicine:
understanding the risk of stroke associated with subclinical AF, or AF detected with the use of consumer devices, might change that calculus, and so it's important to realize that in the future we might be detecting afib with Apple Watches or KardiaMobiles.

The KardiaMobile was the first device really introduced to us as cardiologists, before the Apple Watch came out; the company was founded in 2010 around the first iPhone application. It's a single-lead EKG for detection of atrial fibrillation, and unlike other companies, they led the way in proving that their device could detect atrial fibrillation with high sensitivity and specificity.

Then there's the Apple Watch, which a lot of patients and providers are familiar with. It was introduced in 2014 by Apple, and they added an EKG feature in Series 4. It's not approved for patients with known underlying afib, and that's an important part of what I tell patients: if you have afib and you get a successful ablation, the Apple Watch detects a lot of PACs and it's not as good. So I tell them the studies mostly support detecting asymptomatic afib in patients older than 50 who haven't been diagnosed.

This was the landmark Apple Heart Study, published in the New England Journal of Medicine in 2019. What they found was that among patients older than 65, 3 to 4% were notified by their Apple Watch that they had afib, and as age and risk factors increase, that prevalence of detection goes up. In the notification group it did a good job: 43% had a new diagnosis of afib, and those with a new diagnosis had higher rates of stroke, TIA, heart failure, myocardial infarction, and bleeding, and those patients were eventually put on anticoagulation. So with good sensitivity and specificity, it was able to detect afib. This is a patient who came to our clinic with an embolic stroke of undetermined source.
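Stepping back to the mechanics for a second: at the signal level, what these wearables are flagging is an irregularly irregular pulse. As a very simplified illustration (this is not Apple's or AliveCor's actual algorithm, and the threshold here is invented), you can flag a recording whose beat-to-beat (RR) intervals vary too much:

```python
def rr_irregularity_flag(rr_intervals_ms, cv_threshold=0.10):
    """Flag a possibly irregular rhythm from RR intervals (ms).

    Simplified illustration only -- real devices use far more
    sophisticated, clinically validated algorithms over long windows.
    """
    n = len(rr_intervals_ms)
    if n < 5:
        raise ValueError("need at least 5 RR intervals")
    mean_rr = sum(rr_intervals_ms) / n
    var = sum((x - mean_rr) ** 2 for x in rr_intervals_ms) / n
    cv = var ** 0.5 / mean_rr  # coefficient of variation of RR
    return cv > cv_threshold

# Regular sinus rhythm at ~75 bpm: nearly constant RR intervals
sinus = [800, 805, 795, 802, 798, 801]
# Atrial fibrillation: irregularly irregular RR intervals
afib = [620, 910, 540, 1100, 730, 860]
```

Real devices combine many such features over minutes of data and were validated against ECG, which is why the published sensitivity and specificity figures matter.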
We do a thorough workup for patients with strokes, including TEE imaging to rule out a PFO. The patient wore a Holter monitor, which didn't show any afib, and they were being referred to EP for a loop recorder. However, the patient was smart enough to get an Apple Watch, and on the Apple Watch tracing you can clearly see, by the irregularity of the QRS complexes, that the patient self-diagnosed their own afib. They were able to skip the loop recorder phase, and this is an important aspect of how I deal with a lot of patients with a borderline CHA2DS2-VASc score.

I saw a patient in clinic yesterday with a CHA2DS2-VASc of 1, and if you look at the guidelines, at a CHA2DS2-VASc of 1 you may consider anticoagulation or not. So if a patient has had a successful ablation, I tell them, listen, the bleeding and stroke risks are pretty similar. What I would do is get a KardiaMobile and take a look: record yourself every day, and if you're not in afib, I think you're OK being off anticoagulation. These are some of the strategies people are using; we just don't have great data, and I'm hoping the future provides that data.

This is a trial Rod Passman is doing at Northwestern using exactly that: an Apple Watch to tell you when you should take your anticoagulation. Let's say you do fine for 3 to 4 months; even with an elevated CHA2DS2-VASc score, you stay off anticoagulation. Then you go into afib, and the watch tells you, hey, you're in afib; you take anticoagulation for 30 days and then come off of it once that afib episode is over. These are strategies you could use for patients with bleeding risks or bruising or other issues. We just don't have the data to inform us on that currently, but I think in the next 2 or 3 years we'll have more data, and this may decrease the number of Watchman devices being implanted. So this is a clinic we designed using
the Apple Watch and KardiaMobile. The problem was that we see a lot of patients who show up for cardioversions, in our clinic or our care unit, already in sinus rhythm. That's a waste for scheduling and a waste of hospital resources; patients get their labs drawn and may run into all kinds of other issues. So what we did was encourage patients to get a KardiaMobile or Apple Watch, and if they were in sinus rhythm, they didn't have to come in for their cardioversion. You basically self-diagnose: OK, I'm not in afib. These are some of the process improvements where digital health can play a role, encouraging your patients to get KardiaMobiles or Apple Watches so they can say, OK, I converted out of afib, I don't need to present for cardioversion.

Now I'm going to talk about a couple of ways to design clinics, or to deliver care, that aren't the traditional patient-to-provider visit in the office. We all know offices are booked out 6 months, sometimes 9 months, so here are two important studies at the Brigham which use models of delivering care without seeing patients in the office.

This one comes from Ankeet Bhatt at the Brigham and used virtual optimization of guideline-directed medical therapy (GDMT) in hospitalized patients with reduced ejection fraction. In this study, patients presented for things other than a heart failure exacerbation: they might have presented for a lap chole, for foot pain, or with a pneumonia. Basically, the Epic team notified a pharmacist and a clinician: hey, this patient has a low EF, can you titrate their medicines electronically without ever seeing them?
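The trigger logic behind that kind of virtual program can be sketched as a simple rule over chart data. Everything below is hypothetical (invented field names, an assumed EF cutoff, illustrative drug classes, not the study's actual criteria): flag admitted patients with a reduced EF who are missing GDMT classes, so a virtual pharmacist and cardiologist can review.

```python
# Hypothetical sketch of an EHR trigger for virtual GDMT review.
# Field names, the EF cutoff, and the drug classes listed are
# illustrative assumptions, not the actual study criteria.
GDMT_CLASSES = {"beta_blocker", "acei_arb_arni", "mra", "sglt2i"}

def gdmt_review_flag(patient):
    """Return the GDMT classes a hospitalized low-EF patient is missing,
    or an empty set if no virtual review is needed."""
    if not patient["admitted"]:
        return set()
    if patient["lvef_percent"] > 40:  # reduced-EF cutoff (assumed)
        return set()
    return GDMT_CLASSES - set(patient["active_med_classes"])

patient = {
    "admitted": True,
    "lvef_percent": 30,
    "active_med_classes": ["beta_blocker", "acei_arb_arni"],
}
```

The point of the design is that the rule fires regardless of why the patient was admitted, which is exactly the gap the study describes: nobody titrates heart failure meds when the admission is for something else.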
Of the 118 patients, 29 were in the usual-care arm and 89 were in the in-hospital GDMT arm with the virtual pharmacist and cardiologist. What you can see is that there were no changes in the medicines when you didn't involve cardiology: no one changed the GDMT. So even if you try to promote and teach hospital medicine and other parts of the care team, when patients aren't coming in for heart failure, no one is changing their guideline-directed medical therapy. But when you institute a virtual cardiologist with a pharmacist whose job is to review GDMT, patients actually got more GDMT on board, and at higher doses. And we know that in patients with reduced ejection fraction, increased GDMT leads to decreased hospitalization and decreased morbidity and mortality. The important point is that this care was delivered virtually; no one ever saw the patient. They reviewed labs, blood pressure, and vitals every day, and they up-titrated GDMT. These are ways care delivery is changing: instead of seeing patients, rounding on them, and examining them, people are trying to deliver care at a population level.

This was another study at the Brigham, from Marc Pfeffer's group, where they remotely delivered a hypertension and lipid program to 10,000 patients across Massachusetts. What they said was, you know, access to care is terrible, so how do we reduce disparities in care? We know that minorities receive worse care because they don't have access: they don't have a ride to the office, they just don't have doctor availability. Can we do some kind of remote program?
The remote program delivered a reduction in blood pressure and in LDL, and similar enrollment and reductions were seen across diverse racial, ethnic, and language groups. Whether it's blood pressure monitoring at barbershops or churches, care is moving out of the hospital into the community, because it's hard to deliver care in the hospital.

The last part of the talk is about artificial intelligence, and I'm going to show examples from all of cardiology. When I talk about artificial intelligence, what is it? In simple terms, artificial intelligence is making a computer seem like a human. The great clinicians of the past, we may not have in 10 to 15 years; physical exam skills are dying, and there's less of that bedside time, because the number of patients we're treating keeps increasing. So can we have computers, which don't go through the rigors of the residency and fellowship training that clinicians back in the day did, sleeping in the hospital for days on end, think like humans? That's what artificial intelligence is trying to do. It uses algorithms and rules within computers to learn about and solve the problems that are fed into it. You basically teach it pattern recognition, and it's able to learn and see patterns that you and I may not see, and give information based on those patterns.

If we think about the field, this is a nice Venn diagram: within computer science there's AI; within AI there are different types of machine learning and deep learning; and it's all based on mathematics and statistics, on regression models and pattern recognition, and from that you can get information out. I put up this picture of Jensen Huang because everyone's been following Nvidia.
It's become the most valuable company in the world because AI has evolved, and AI touches every aspect of your life, whether it's the ads you see on Facebook or a self-driving car. I think in the next 10 to 15 years we'll see AI play an important role in the clinical delivery of care.

AI can be unsupervised or supervised. Unsupervised means you let a model go and you don't refine it: if I feed something into the program and it gives me results, I don't change anything based on that. With a supervised model, I give it feedback. I say, no, this echo is wrong, just like you would give feedback to a sonographer, or flag an EKG read as wrong. The AI learns: OK, this LV measurement is wrong, I'm going to change it, and next time I'll recognize that this was artifact.

The main strength of AI is pattern recognition. In my lifetime I may read 50,000 echoes, maybe fewer, but AI is able to look at millions of echoes, and that is its advantage: experience based on millions of data points. Its ability to adapt as more data is fed into the system is also important; as you feed it more, the millions become billions, and as computing power increases, it's better able to produce an algorithm.

The first AI group and study was at the Mayo Clinic. At Mayo, everyone who comes in has to sign a research waiver, so as soon as you walk into the Mayo Clinic, all of your data belongs to Mayo. That data is fed into a Google neural network; they have a neural network and data science group doing research constantly. Paul Friedman's group was the first to look at EKGs this way, and they've done a lot of studies on EKGs. They can detect
how old you are, they can detect your race, they can detect whether you'll develop afib in your lifetime, just based on the EKG. One of the first things they looked at was QT measurement. We know Tikosyn (dofetilide) loading requires very careful QT measurement, and a lot of general cardiologists don't feel comfortable reading QTcs for Tikosyn loading. So they asked: can AI measure QT intervals correctly, better than humans?

This is why Mayo has so much data: they looked at patients from 1993 to 2017, about 670,000 patients with over 2 million EKGs. They developed the model and refined it against a validation data set, so they trained it, tested it, and then validated it with hundreds of thousands of EKGs. And as you can see, the AUC, the area under the curve, is over 0.9. Anything over 0.7 is great, 0.8 is really good, and at 0.9 you're getting to where the model is close to perfect. No model is going to be perfect, but this model is amazing.

You can see from the editorial by Rosenberg, and this is from 2021, so we've evolved four years from that: "It seems almost magical to see a computer interpret an EKG, a technique that for a human requires many years of training as well as continued practice, in less than a second." So if you can imagine Tikosyn loading, where an EP might spend a couple of minutes looking at QTcs, this could save time, because the computer tells you: you don't need to look at it, this is what the QT is. This is where the future of health care is going to evolve. It may allow an EP more time to round, more time to talk to a family, more time to do an ablation or add a case on, because you're not spending time on things you don't need to, because the computer can measure a QT better than a human. So this is from
Dr. Khera's group at Yale, and I know Dr. Brush does a lot of collaborations with him. It's ECG-GPT. What does this mean? It means that, using his website, you can upload an EKG image, and using millions of EKGs it can give you an interpretation that's better than an EKG machine's. The Philips or Siemens EKG machines don't give us a valid interpretation, which is why they require physician overread. But if you submit an EKG, this is able to tell you, with good accuracy, what the interpretation is.

You can see how it works; this paper is not published yet, but on the right, you put in an EKG image, a cardiologist labels it sinus rhythm with PACs and LVH, and the model encodes and processes it, using different kinds of networks, to give you a diagnosis. It learns based on what cardiologists feed into it. So say we have 5 million EKGs from Sentara cardiology, with the physicians' interpretations: you could feed those into the model, and then if you give it a random EKG, it's able to interpret it using what those 5 million cardiologist-read EKGs taught it.

On the left, you can see the positive predictive value is very high across the spectrum. It struggles in the same places humans struggle: atrial flutter, for example. A lot of times we struggle with whether this is coarse afib or atrial flutter, and the machine struggles with the same things. It's not that the machine is going to out-learn humans and be 100% accurate; it's that this is what pattern recognition does, and it gets about as close as we do.

This is another area: echo. This comes from the Cedars-Sinai group.
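Since positive predictive value and AUC keep coming up, it's worth being concrete about what they mean. A minimal sketch (toy numbers only, nothing from the actual studies):

```python
def ppv(tp, fp):
    """Positive predictive value: of the cases the model calls positive,
    the fraction that truly are positive."""
    return tp / (tp + fp)

def sensitivity(tp, fn):
    """Of the truly positive cases, the fraction the model catches."""
    return tp / (tp + fn)

def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) identity:
    the probability that a randomly chosen positive case is scored
    above a randomly chosen negative case, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 is a coin flip and 1.0 is perfect ranking, which is why the 0.9-plus values quoted in these studies are considered excellent.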
David Ouyang does a lot of work on AI and echo, and basically he's saying that our LVEF measurements are inaccurate, because they vary with how you do your tracings for Simpson's biplane, with which cycle of the cardiac motion you're measuring, and with where you put your borders. His AI model is able to use multiple slices of the LV; he puts them into his model and it segments the LV, which gives a much more accurate measurement of ejection fraction. Using this AI echo model, the ejection fraction measurements are very accurate, an AUC of 0.97, close to 1, and it's able to separate heart failure from non-heart failure.

Here's a video of how his model does it. On the left is a nice echo image, and you can see it's able to trace it well. But even when the echo images are poor, where we would need Definity contrast, the AI platform is better able to tell what the LVEF is. So even though on the left you have a great picture, where anyone could say the LVEF is close to normal, on the right the images are poor, and the AI program is still able to measure the LVEF.

But here's the problem, or the thing we need with AI, which his group is working on: we need to validate everything we say is right. How do we know the AI platform or algorithm we've built is correct? We need a randomized controlled trial against standard of care. That's EchoNet-RCT: a blinded comparison of sonographer versus AI assessment of cardiac function. They had 200 to 300 echoes, the sonographers could annotate, and each study was randomized: either the AI provides the initial LVEF,
or the sonographer provides the LVEF, and then a blinded cardiologist reads it and decides whether to make changes; so Dr. Robertson or Dr. Chuo saying, you know, I don't think it's this. Then they looked at which one the cardiologist changed more: the sonographer's read or the AI's? And you can see the sonographer changes were higher: 27.2% of sonographer LVEF measurements were substantially changed (by more than 5%), versus 16.8% for the AI program, with a p-value less than 0.001. So there were many more changes to the sonographer-read LVEFs, and that goes to show how well the model performs against standard of care, which is a sonographer inputting the LVEF. For the secondary safety outcome, they compared against historical cardiologist assessments, and again more changes were made in the sonographer arm than in the AI arm.

I think all of the models being developed need this kind of robust measurement against standard of care, and then you need to test them in diverse populations. What may be true at Cedars-Sinai might not be true on the East Coast, or in Germany, so having diverse patient populations to test all of these AI models will be more and more important.

And this is what he's developing now; I think it's a really nice program. They've input all parts of an echo: Doppler measurements, color, LVEF, RV function. A lot of times when we're reading echoes we don't really look at the RV; we just call it normal RV systolic function even when the RV looks dilated and the function's not great, because we don't think of it as a primary reporting tool. But when you train AI, it has to look at everything, and it applies time to everything.
So even if you're focused on the mitral valve and you miss some of the TR, or you're focused on the aortic stenosis and miss the left atrial dilatation, it looks at everything, and it doesn't get fatigued like a human does. If you read an echo on call at 1 a.m., your read isn't going to be as good as a 9 a.m. echo, and the sonographer is going to be tired. AI takes fatigue out of the equation. So he's developed this program where you put in all of the echo slices and it interprets the whole echo piece by piece, and you can see it run through each part: it gives you the LV function, then it moves on to the RV segmental analysis, then RV function, then the aortic valve and mitral valve.

Having programs like this saves the sonographer time. It's not that this program will replace human interpretation or sonographer scanning, but if your sonographer is only able to do 3 or 4 patients a day because the report takes forever to type up, this can let them do 7 or 9 scans a day, and that decreases the backlog of patients who need echoes. This is where AI, just like DAX AI in our clinical practice with Epic, is going to change how care is delivered: it's going to assist the human more than replace the human, to deliver increased throughput.

This is a study again by the Yale group and Rohan Khera, published in the European Heart Journal. They were able to detect severe aortic stenosis without Doppler. They fed in 5 years of echo data, looked at the Dopplers and all kinds of measurements, and asked: can we find a pattern where, off one slice, just the parasternal long axis looking at the aortic valve, we can tell how stenosed it is?
Is it severe, moderately severe, moderate, or mild AS? They developed a program where, on just one image, you can tell whether a patient has severe AS. You don't need 60 slices, you don't need Doppler, you don't need anything else; you just feed it one cycle in the parasternal long axis, and it can predict whether the patient has severe AS. And the predictive performance was good: an AUC of 0.942, very high.

They took that New England cohort, and what the Yale group has been really good about is testing their models in different places. If you said, I want to test your model at Sentara, they'll just give you the model; they don't want your data. They basically say, here's the model, why don't you run it with your own data? The Cedars-Sinai group collaborated with them, and the model worked for their cohort as well. So you have two geographically distinct cohorts at different time periods, and the model is able to predict whether you have severe AS.

From a screening perspective this is huge. You don't need 600 clips, you don't need Doppler measurements, you don't need trained sonographers getting multiple views. You put in one parasternal long-axis image, one cycle, and it's able to tell you this patient has severe AS and needs surgical evaluation for aortic stenosis. I think care is going to be delivered faster, better, and with better accuracy.

So I've gone from EKG to echo; this next one is cardiac CT. We get a lot of cardiac CTs. If you go into our emergency room, the probability is pretty high you'll get some kind of CT scan, whether it's looking for a pulmonary embolism or a dissection; if you say you have chest pain and shortness of breath, you'll probably get a CT.
So the Stanford group said: we get all these CTs, can we use that data to help our patients? On the left you see a coronary calcium scan, which tells patients whether they should be on a statin: if your coronary calcium score is high, you should be on a statin, and we can use medications to reduce your risk. But can we use data from scans that weren't ordered for coronary calcium to help patients and reduce disparities in care? In the middle and on the right are CT scans not gated for coronary calcium, but calcium was found on them. So if you get a pulmonary embolism scan and you see calcium in the coronary arteries, what can we do with that?

They fed these into an AI program, and the AI program said, OK, based on the calcium on these CTs, I can tell you whether this patient has coronary disease and needs to be on a statin. So it's using non-calcium-score CTs to better help and better inform patients. They took the non-gated chest CTs from patients with no previous coronary artery disease and no statin prescription, fed them to the AI algorithm, and randomized to usual care, where the PCP, or you as the cardiologist, does whatever they want with a PE scan that doesn't show anything, versus feeding it into the AI algorithm and telling people they should be on a statin. Statin prescription at 6 months post-randomization was much higher in the AI group compared to usual care. Downstream, what that means is that more patients whose LDL you haven't screened, or whom you're missing in the ER, can get statins, which will reduce disparities in care and decrease morbidity and mortality over time. So this AI program is able to use non-coronary CTs to guide care, which is important.

Now to EP. We do pulmonary vein isolation, and if we think about
when we started doing afib ablations back in '95, Haïssaguerre was one of the pioneers of EP. They put catheters in different parts of the heart, and it turned out, in a seminal New England Journal paper, that 85 to 90% of afib is triggered by the pulmonary veins. So for a patient with paroxysmal afib, the idea is to isolate the pulmonary veins, and that's been the strategy for a long time. The difficulty comes in patients with persistent or long-standing persistent afib. People have tried to isolate other areas of the heart to decrease the burden of afib: the posterior wall, the SVC. It turns out the data is not great, so beyond the pulmonary veins we don't have great data, because we don't know what we don't know.

Volta is a company, and this was an exciting paper that came out during HRS: they used AI-guided assessment of complex afib. You trigger afib and start mapping, and the software looks at dispersion of electrograms. They fed it tons of EP data, tons of ablations, and it says, OK, this is where the afib originates, this dispersion is where you should ablate and target to get rid of your patient's afib. You can see the program here; this is an HD Grid catheter, a mapping catheter, and these are what electrograms look like in afib. You're basically using this mapping software and the computer is telling you where to ablate. So instead of Dr. Gist or Dr. Kiel saying, I'm going to ablate here because these signals look good, or I think it triggered off of this with isoproterenol, this is able to tell you better than humans can. And this is what they found; they did a randomized trial, so again, people are testing their AI software against standard of care.
In the persistent afib population, which is difficult to treat, most clinicians get a 60 to 70% success rate, much lower than paroxysmal, which is about 80%. But you can see here: with anatomical ablation, 70% of patients were afib-free; with tailored, AI-guided ablation, 88% of patients were afib-free. So the AI beat what the clinician thought, using anatomy, about where they should ablate based on scarring patterns or where they thought it triggered. These kinds of things are important because they guide our care.

And this is the last part. I've talked about how AI can be used in EKG learning, echo learning, EP, and structural heart; this is how you can train a robot to do surgery, and this is where AI will go. This is the last step, I think in the next 10 to 15 years, where you could train a da Vinci robot: you put the ports in and it's able to suture; it learns what to suture and what to cut. Same thing with catheters: you put them in and the robot can do what a human can do. Let me show you this video; it comes from Johns Hopkins. It's able to learn all of this, it corrects when it misses the needle, and when they throw distractions at it, the robot can tell they're distractions.

They basically programmed the da Vinci to do all of these techniques. It can handle different environments; you can throw distractions at it and it's able to tell, this is a distraction, I have to tie the knot and suture this closed. So you could give it tasks that are too easy for a human, not worth a human's time, and the robot could do them. And this is a nice editorial, I think, about the future.
We need more randomized clinical trials of AI. We need more AI programs, but we also need to test them in diverse populations and against usual care, because just like devices are approved in medicine, think of pulsed field ablation, or the newest valve, the newest stent, you need randomized data; for drugs, you need data against placebo. For AI, the important part before we use it and roll it out is to test it against standard of care.

So, the conclusions: digital health is rapidly evolving, and cardiology is at the forefront. Care delivery is changing; it's going to evolve from us seeing patients back every 3 or 4 months to a computer, or someone virtual, up-titrating your meds without you needing to come into the office. AI is going to be aiding us, not replacing us, in the care of patients in a resource- and labor-constrained world, and it's going to reduce human error. Human error happens because people are fatigued, and AI doesn't fatigue. We also carry our own biases when we walk into a room, based on previous experience and previous patients. AI learns, but it doesn't have those biases; it does depend on what you program into it, though, so if you train it on biased data, it's going to have the same biases humans have. That's the conundrum of AI. And further trials are needed to validate AI tools, especially in diverse populations.
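That last point, that a model inherits whatever bias is in its training data, is easy to demonstrate. A toy sketch with invented numbers: fit the same trivial one-dimensional classifier on accurately labeled data versus data where some sick patients were under-diagnosed, and the learned cutoff shifts so those patients are missed.

```python
def fit_threshold(values, labels):
    """Learn a 1-D decision threshold as the midpoint between the mean
    of the positive and the negative training examples -- about the
    simplest 'model' there is."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

risk = [2, 3, 4, 6, 7, 8]           # some risk measurement (toy data)
fair_labels = [0, 0, 0, 1, 1, 1]    # disease labeled accurately
biased_labels = [0, 0, 0, 0, 0, 1]  # patients at 6 and 7 under-diagnosed

fair_cut = fit_threshold(risk, fair_labels)      # 5.0
biased_cut = fit_threshold(risk, biased_labels)  # shifted higher
```

The model isn't prejudiced; it just reproduces the labeling pattern it was shown, which is exactly why validation in diverse, carefully labeled cohorts matters.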