Advancements in Artificial Intelligence: Insights From ARVO 2024

Introduction:

At ARVO 2024, several studies highlighted the transformative potential of artificial intelligence in ophthalmology, spanning patient education, clinician acceptance, diagnostic applications, and the management of rare diseases. Here, we delve into some of the most compelling findings.

Christina Weng, MD, MBA:

Hi, I’m Christina Weng. I’m a surgical retina specialist on faculty at the Baylor College of Medicine, where I also serve as the fellowship program director.

Overview:

Abstract one: Artificial intelligence analysis of baseline fundus photos to predict effects of aflibercept treatment on best-corrected visual acuity.

Traditionally, obtaining precise BCVA measurements before initiating anti-VEGF therapy has been challenging. However, recent advancements in AI technology have paved the way for a new era of predictive analysis. This first study not only sheds light on the potential of AI but also offers a glimpse into a future where personalized treatment decisions may be made with greater precision and efficiency.

Question:

Can you please provide a brief overview of the study and results?

Christina Weng, MD, MBA:

You’re right that AI has incredible potential in ophthalmology. Thanks to the great research efforts that are out there from our colleagues, AI is already beginning to play a role in the management of diseases such as diabetic retinopathy. Where we need to go next as a field, in my opinion, is twofold.

First, we need to continue to develop and refine AI algorithms so that we can ensure that they are accurate and reproducible. Second, we need to continue to push the envelope in terms of expanding the applications of AI, not only to other diseases but to do things that go beyond our current capabilities, such as earlier detection of disease and prognostication of outcomes.

This first abstract we’re going to discuss today focuses on exactly that. This is an abstract presented by Jun Kong and senior author Neil Bressler that explores how AI analysis of baseline fundus photos can predict effects of aflibercept treatment on best-corrected visual acuity in patients with diabetic macular edema. The premise of the study rests on recent work showing that AI can estimate BCVA from fundus photos in eyes with DME.

This investigation took things one step further to see if AI could predict changes in best-corrected visual acuity at 1, 2, and 3 years after initiating aflibercept for diabetic macular edema. Color fundus images of study eyes from the VISTA study were used to train AI algorithms to detect changes of 10 or more letters at 1, 2, and 3 years after baseline. Then, the investigators analyzed 164 participants whose baseline BCVA ranged from approximately 20/40 to 20/320 and evaluated 3 AI algorithms’ ability to predict visual outcomes using a baseline image-only method, a baseline BCVA-only method, or a method combining both baseline image and BCVA.

The findings were really interesting. They found that the image-only method resulted in accuracies, sensitivities, and F1 scores that were similar to values obtained with the baseline BCVA-only method, and there was little improvement when combining both. The authors concluded that AI analysis of fundus images can predict the likelihood of a change in best-corrected visual acuity of at least 10 letters from baseline at 1, 2, and 3 years after initiating aflibercept for DME, with performance similar to that obtained by using baseline best-corrected visual acuity from a protocol refraction, but without necessitating an actual protocol refraction.
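To make that comparison concrete, here is a minimal sketch of how one might score three such prediction methods on the same held-out eyes using accuracy, sensitivity, and F1. This is not the authors’ code; the labels, predictions, and method names below are placeholders for illustration, not VISTA data.

```python
# Minimal sketch (not the authors' code): scoring three hypothetical prediction
# methods for a >=10-letter BCVA change with accuracy, sensitivity, and F1.
# The labels and predictions below are placeholders, not VISTA data.
from sklearn.metrics import accuracy_score, recall_score, f1_score

# 1 = eye changed by >=10 letters at the timepoint of interest, 0 = it did not
y_true = [1, 0, 1, 1, 0, 0, 1, 0]

predictions = {
    "image_only": [1, 0, 1, 0, 0, 0, 1, 0],   # AI on the baseline fundus photo alone
    "bcva_only":  [1, 0, 0, 1, 0, 1, 1, 0],   # baseline protocol-refraction BCVA alone
    "image_bcva": [1, 0, 1, 1, 0, 0, 1, 1],   # both inputs combined
}

for name, y_pred in predictions.items():
    print(
        f"{name:>10}  "
        f"accuracy={accuracy_score(y_true, y_pred):.2f}  "
        f"sensitivity={recall_score(y_true, y_pred):.2f}  "
        f"F1={f1_score(y_true, y_pred):.2f}"
    )
```

Comparing all three methods on identical eyes and metrics is what lets the authors conclude that the image-only approach performs about as well as baseline BCVA from a protocol refraction.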

Question:

How might the findings of this study influence the decision-making process for initiating aflibercept therapy in patients with diabetic macular edema?

Christina Weng, MD, MBA:

These findings are exciting for many reasons. First, as you alluded to, it’s challenging to obtain protocol refractions in a busy clinic. It takes time. It’s also somewhat subjective; to be able to predict a patient’s outcome in a more efficient and objective way is certainly appealing. Then, the other reason, of course, is that having an idea of how a patient might respond to therapy could allow us to better personalize treatment regimens.

Let’s just say you have a patient, for example, who’s predicted to do well. It might motivate patient adherence while allowing the retina specialist to feel more confident in their treatment plan. Then, on the flip side, let’s say you have a patient who’s predicted to not do as well as you’d hoped. It might allow the retina specialist to consider a different drug, a different treatment schedule, or maybe even combination therapy with steroids. Of course, this would all be further enhanced if there were predictive models for other types of drugs and treatment schedules apart from those used in the VISTA study, which this work is based upon. But that’s the general idea and potential of AI in this scenario.

Question:

How do you envision ophthalmologists incorporating AI-derived BCVA predictions into their discussions with patients regarding treatment expectations and prognosis?

Christina Weng, MD, MBA:

As I mentioned, being able to predict how a patient will do in terms of visual outcomes is very powerful. Level-setting expectations is a huge part of our roles as retina specialists when we counsel a patient for treatment, especially in patients with diabetes who really face unique challenges with treatment adherence. Having this crystal ball, if you will, into how someone might do could motivate patients to stick with their injection treatment schedule. Similarly, it will help us as the providers to avoid over-promising results.

Overview:

Abstract two: Estimating visual acuity with habitual correction in clinical practice settings from fundus photos using artificial intelligence in eyes with diabetic macular edema.

The fusion of artificial intelligence and clinical practice has the potential to redefine the way we assess visual acuity in patients with diabetic macular edema. The next study sets out to explore the potential of AI evaluations of fundus photographs in estimating VA with habitual correction on an ETDRS chart right within the clinical practice setting.

Question:

Can you please provide a brief overview of the study and results?

Christina Weng, MD, MBA:

In this study, conducted by Ashley Zhou and Neil Bressler’s team, the authors assessed whether AI could be used to estimate visual acuity with habitual correction in eyes with DME. The first study we talked about looked at leveraging AI to predict changes in best-corrected visual acuity in response to treatment. Here, they looked to see if AI could estimate what a person’s visual acuity actually was based on a fundus photo. Just for clarification, habitual correction is essentially the correction that a patient goes about their daily life with. We know that vision measured this way is often lower than BCVA determined by a protocol refraction, but it is what patients are experiencing in their day-to-day and also what we typically measure in a clinical setting.

This was a retrospective study using fundus photographs matched to habitual visual acuity among patients with a history of diabetic macular edema and at least 2 visits within the past 1 to 6 months. The investigators used a previously developed AI algorithm for determining best-corrected visual acuity from fundus photographs and compared the AI-determined habitual visual acuity to the actual habitual visual acuity for 141 patients with various combinations of NPDR, PDR, and DME.

What they were looking at was the mean difference between AI-predicted and actual habitual visual acuity, and they found that it ranged anywhere from 0.97 to 1.92 ETDRS lines. If you were only looking at eyes with habitual visual acuity of 20/80 or better, this error margin, if you will, narrowed to about 1.5 ETDRS lines.
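For intuition on what an error expressed in ETDRS lines means, here is a minimal sketch that converts letter-score differences into lines (using the usual 5 letters per line) and computes a mean absolute error. All numbers are invented for illustration and do not come from the study.

```python
# Minimal sketch with invented numbers: expressing the gap between AI-estimated
# and chart-measured habitual visual acuity in ETDRS lines (1 line = 5 letters).
measured_letters = [70, 62, 55, 48, 80, 35]    # measured habitual VA (ETDRS letter scores)
estimated_letters = [65, 66, 50, 55, 78, 42]   # hypothetical AI estimates from fundus photos

errors_in_lines = [
    abs(est - meas) / 5.0
    for est, meas in zip(estimated_letters, measured_letters)
]
mean_error_lines = sum(errors_in_lines) / len(errors_in_lines)
print(f"Mean absolute error: {mean_error_lines:.2f} ETDRS lines")
```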

Question:

With the successful estimation of habitual visual acuity (VA) using AI evaluations of fundus photographs, how do you envision integrating this technology into routine clinical practice for managing diabetic macular edema (DME)? What are the potential benefits in terms of time efficiency, cost savings, and patient convenience?

Christina Weng, MD, MBA:

I think there are some really important learnings from this study, but we need to remember that we’re looking at averages here. I’d really be interested, first, in seeing this replicated in a larger study that includes ranges of the mean absolute error to get a better idea of the reproducibility of an algorithm like this. But if AI did allow us to accurately predict visual acuity based on fundus photos alone, I think it would be a game changer in several ways because if you ask any ophthalmic technician, they’ll tell you that measuring VA is one of the most labor-intensive parts of the patient workup.

Certainly, there are potential advantages when it comes to operational efficiency, labor cost savings, and even patient burden. But I think where it gets even more exciting is how this might be applied to remote monitoring and tele-retinal screening. For example, we have a large tele-retinal screening program for diabetic retinopathy here in Harris County, where I live. There are fundus cameras set up in the primary care setting where patients have their images captured and interpreted. Then, they’re brought in if their photos show any concerning findings. However, there are no personnel in those locations to measure visual acuity. Having an AI algorithm that could estimate the patient’s visual acuity would be a very valuable part in triaging next steps for patients in that screening program.

Question:

Given the potential impact of AI evaluations on simplifying home monitoring of habitual VA for patients with DME, what considerations should be taken into account to ensure the accuracy and reliability of these remote assessments? How might patient education and engagement play a role in facilitating the implementation of such technology-enabled monitoring strategies?

Christina Weng, MD, MBA:

Accuracy and reliability of any AI algorithm are paramount. Ensuring these with many validation studies is certainly an absolute must before these types of remote assessments should be used in the real world. But I can imagine a day when we’ll be able to tailor a patient’s management plan much more precisely than we’re currently able to do.

The DRCR Retina Network actually just kicked off a very important randomized trial called Protocol AO, which will utilize home OCT to guide the management of treatment-naïve wet AMD patients. How great would it be to be able to also remotely assess vision in addition to anatomy? Even though it’s a different disease state from our focus here, we’ll learn a lot of applicable pearls from Protocol AO in terms of patient education and engagement, which will become increasingly important when it comes to remote monitoring strategies. My experience in that study has been that patients are really motivated to play a more proactive role in their own care.

Overview:

Abstract three: Can AI large language models and AI assistants help educate our retinitis pigmentosa patients?

The challenge of educating the public, particularly about rare diseases like retinitis pigmentosa, in the face of limited health literacy represents a critical disparity in healthcare. This study explores the potential of AI to bridge the gap in health literacy, potentially paving the way for enhanced patient education in the digital age.

Question:

Can you please provide a brief overview of the study and results?

Christina Weng, MD, MBA:

We were just talking about the importance of patient education. This third abstract comes from Gloria Wu and David Lee’s research team and focuses on exactly that. In this study, the investigators assess whether AI can help educate patients with retinitis pigmentosa, or RP, the most common inherited retinal dystrophy, which affects about 1 in 3,000 to 4,000 people.

The premise of the study is that several robust AI language models and assistants have become available in recent years, like ChatGPT, Amazon Alexa, and Google Bard. Many of you have heard of all of these. Given that the majority of the American population does not possess a high level of health literacy, the investigators wondered whether these AI technologies might be able to help provide patient education for rare diseases like RP.

To study this, they developed 5 questions related to RP: What is a retina? What is retinitis pigmentosa? What is the current treatment for RP? How do I diagnose RP? Last, if my mom has RP, how likely is it that I will get it? These questions were posed to each of the AI technologies included in the study, and the investigators applied a variety of readability metrics to assess the quality of the answers provided.

The investigators found that there was no statistically significant difference in readability metrics between the various technologies and concluded that both AI language models and AI assistants can be useful for providing informative responses to educate patients. However, many of the responses still included verbose text and complex sentences written at a high school or college level of syntax, something that may need to be addressed to make these responses accessible to the general population.
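For readers curious how readability can be quantified, here is a minimal sketch using the textstat Python package to score two invented responses with common readability formulas. The study’s exact metrics, prompts, and assistants may differ; the response text and assistant names below are hypothetical.

```python
# Minimal sketch (not the study's pipeline): scoring made-up assistant responses
# with two common readability metrics from the textstat package.
import textstat

responses = {
    "assistant_a": (
        "Retinitis pigmentosa is a group of inherited diseases that slowly damage "
        "the retina, the light-sensing layer at the back of the eye, causing night "
        "blindness and loss of side vision."
    ),
    "assistant_b": (
        "Retinitis pigmentosa denotes a heterogeneous collection of hereditary retinal "
        "dystrophies characterized by progressive photoreceptor degeneration and "
        "consequent constriction of the visual field."
    ),
}

for name, text in responses.items():
    print(
        f"{name}: Flesch Reading Ease = {textstat.flesch_reading_ease(text):.1f}, "
        f"Flesch-Kincaid Grade = {textstat.flesch_kincaid_grade(text):.1f}"
    )
```

Grade-level scores like these are what flag a response as written at a high school or college level rather than at the level recommended for patient materials.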

Question:

Given that only 12% of Americans possess proficient health literacy, how can AI-powered tools be optimized to communicate complex medical concepts effectively to a broader audience?

Christina Weng, MD, MBA:

Knowledge is power. It’s not only empowering for patients to understand the conditions that might affect them; educating them about how to manage these diseases can really mean the difference between vision preservation and blindness. If AI can be used to guide the creation of patient resources, public awareness campaigns, or even research study consents, we might be able to expand our reach as retina specialists and ophthalmologists. However, as the authors alluded to, and as I briefly mentioned, we need to be sure that the syntax used is at a level understood by all, and further work should keep this goal in mind.

Question:

How can AI assistance and large language models contribute to improving early detection and access to resources for patients living in remote areas or lacking access to specialized medical centers?

Christina Weng, MD, MBA:

If you look at almost any study looking at delays in diagnosis and treatment, there are 2 risk factors that will always emerge. Those are geographical distance from a specialty care center and lack of patient education. Let’s just step aside from retinitis pigmentosa for a moment and look at a common disease like diabetic retinopathy. Patients who live in rural areas or who have completed less than a high school education more often present with advanced stages of diseases like diabetic retinopathy compared to others. Many of those patients, I find, didn’t know that diabetes could lead to eye disease, or they didn’t know that diabetic retinopathy could be asymptomatic until late stages. These are the gaps in health literacy that would benefit tremendously from AI technologies like the ones discussed here. Educating patients on the disease, emphasizing the need for timely care, and sharing where they can turn to get that care could really move the needle in a meaningful way.
