AOHC Encore 2024
304 Navigating the AI Frontier in Occupational Medicine: Opportunities and Challenges for Clinical Practice, Workplace Management, and Program Administration
Video Transcription
Hey everybody, welcome to session 304, Navigating the AI Frontier in Occupational Medicine. A couple housekeeping items. Be sure to silence your cell phone and other devices, pretty please. You can evaluate and claim credit by navigating to the session in the event app. Look for the neon green link toward the bottom left at the end of the session. If you have not already, please download the AOHC 2024 event app located within the ACOEM Events app, or Swapcard. It'll tell you everything you need to know about the conference. If you need assistance, please visit the registration desk or the ACOEM membership booth in Pre-Function West across from the exhibit hall. Or look for ACOEM staff with the red lanyards reading Team ACOEM. And with that being said, I will allow my esteemed colleague to introduce herself first. Morning. So those of you who are just coming in, this is the AI flight. So check your tickets, come on in. So this is Navigating the AI Frontier in Occupational and Environmental Medicine. I'm Denise Clement, CEO and founder of AscendMD, and I do AI consulting. You know, CEO in a startup stands for Chief Everything Officer. And my colleague Zeke here. Hey everybody, I'm Zeke McKinney. I'm the Program Director for the HealthPartners Occupational Medicine Program in Minnesota, where I'm also faculty in the School of Public Health, a researcher. And I mentioned my own company for which I'm the VP and CTO of VIV, Inc., doing some NLP research. Anyway, let's proceed onward. Disclosures. So we don't specifically have affiliations other than with our own companies that we mentioned, though we're not selling anything related to those in this talk. All right. Why would I start off with a slide about intro to occupational medicine to a group of occupational medicine experts? Because we have to tell those outside of our sphere what occupational medicine is. I mean, my own mother used to ask me, what do you do? 
You know, and that's the common question we get when you say, I do occupational environmental medicine. So this is just a reminder of some of the things that we do that we can tell people about our field. We do prevention, we do diagnosis and treatment, we do evaluation, we do trend monitoring, we do population medicine and individual medicine, because we have to understand what's going on with the populations and then bring it down to the N of one. Because when that person comes into your office or your clinic, they don't care about the population, they only care about themselves. Well, in terms of AI, you know, context matters. And so if your AI engine doesn't know who we are either, that's gonna be a big, big limitation to getting anything done. I'd like to introduce you to some of the most famous occupational environmental medicine doctors in the world. We've got Dr. Leonard McCoy, known as Bones, we've got Dr. Beverly Crusher, and then the last one is just called the Doctor. He is the Emergency Medical Hologram, the EMH. So. Stay calm, you are all right. Now listen to me. You're gonna be fine. I need you to do as I say. Now listen. Is the EMH program still online? It should be, take it easy. That's it, that's a good one. That's why I never use one of these. Computer, activate the EMH program. Please state the nature of the medical emergency. 20 more are about to break through that door. We need time to get out of here. Create a diversion. This isn't part of my program. I'm a doctor, not a doorstop. Oh, that's all right, you can say it. So this clip, I think, covers sort of the spectrum of where we are. We have doctors who know something about what's going on. We have the patient, represented by Alfre Woodard's character, who has no idea what's going on. And we have something beating at our door in terms of technology. And then we have AI, who promptly says, this is not part of my programming. So. 
Well, and the Doctor there is what, you know, people want to think of as a future artificial general intelligence, an AI that will suddenly answer all your questions, all your problems, but it falls short when you give it something unexpected. So we will talk about how to train your AI in this presentation today. I just wanted to point out the explosive use of generative AI since 2022. You know, we're not even talking two years since, you know, the broad availability or access to this information was made. And this also talks about who's adopting when. So when technology first comes out, you got the tech enthusiasts and the really, really early adopters. And then it goes on to your early adopters and then it progresses. So where we are right now in 2024, I forgot to ask if this works. Yeah, it does. You hit that guy for the laser. Hit the guy for the laser. All right. Here we are. This is just 2024. So you look at the slope of this curve. I think ChatGPT was the fastest-growing software in the history of the world with a hundred million users in less than a year. So this is May 2024. Now I will tell you, so let me ask, if you had to create this graph, how long would it take you? You know, you gotta research the sources. You gotta find the data. Then you have to struggle with the graph to get all your data points and labels in the right place. This graph was made with AI. It took me less than two minutes, including the information at the bottom. So remember when you're in math class in elementary school, they used to say, show your work. You get this from AI, but you can ask AI to show its work. Where did it get that information? How did it build this graph? What were the components for the chart? And so you don't just get the chart, you get everything behind the chart, so. And that show your work point is actually a really important point to AI. 
We'll talk about that later, about the black box effect, because if you can't describe to people how these are working, how can you actually trust that it's gonna be reliable and reproducible for what you need in the future? All right, so the importance of AI. AI is a game changer in a lot of ways. It can speed up your productivity, like I just showed you. So if you get that four o'clock on Friday request for data, AI can help you out. So you're not spending your entire weekend trying to pull all that information for the seven o'clock Monday morning meeting. Or the 8:30 Tuesday morning meeting. So it can do large scale data analysis. Data analysis that would take you months or weeks to do in the past, you can do very quickly with AI. It's very good at looking at large data sets, aggregating that information, and giving you a summary of what's there. Large language models, we're gonna talk about that a little bit more, but I just wanna bring out that there's sort of two components of AI. There's what I call the macro AI, which is the infrastructure, where the data centers are gonna be. How much power is going to be needed for those data centers and where's all that energy coming from? Who's gonna host those data centers? Because we've got international privacy laws that are different in different parts of the world. We're not talking about the macro here, we're talking about the micro. Bringing it down to the individual level and what we as individuals can do with AI. Predictive capabilities, you've got your large data sets, you do the analysis, and now you can do trends. But you can do trends in a much more efficient manner because you don't have to go and keep pulling that data to get those trends. You can automate that process. 
And then our ability to personalize care as we collect more and more data with our electronic health records, AI can process that stuff quickly and potentially, and even has, found patterns, let's say for detecting sepsis that we would never even expect, correlating different biomarkers and clinical elements, for example, in a way we would have never seen. And by the way, talking about the macro scale, one thing to consider as occupational environmental medicine doctors is what is the environmental impact of all this increased computing power, much like we saw when people were running tons of server farms with GPUs for blockchain and cryptocurrency mining. That actually had a huge environmental impact. And this probably will too, or probably already is. But we have challenges. We certainly have data privacy challenges. In occupational medicine, we have more controls over confidentiality than the bigger medicine pool. So if you're working in a hospital, you're only concerned about HIPAA. You're not concerned about the ADA, the EEOC, GINA, OSHA. That's what occupational medicine has. We have all of these constraints on how we use data that have to be accounted for as these policies are developed. PHI is sensitive. Who has access to that data? When do they get access to that data? Occupational medicine's a little different because OSHA says employees have the right to their data, employees and their representatives. So how do you ensure that only the information for that individual is provided? But there's another thing that most people don't remember or realize is that OSHA also says if there are studies done with employees' data, employees have the right to access those results. One challenge of AI is that even in de-identifying data, AI can be, quote, smart enough to actually re-identify de-identified records. 
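That re-identification risk can be illustrated with a toy sketch: even after names are stripped, quasi-identifiers such as ZIP code, birth year, and sex can be joined against an outside roster. All names, records, and field choices below are fabricated for illustration.

```python
# Toy illustration of re-identification: "de-identified" visit records
# still carry quasi-identifiers that match an outside roster exactly.
deidentified_visits = [
    {"zip": "55401", "birth_year": 1980, "sex": "F", "diagnosis": "asthma"},
    {"zip": "55402", "birth_year": 1975, "sex": "M", "diagnosis": "NIHL"},
]
public_roster = [
    {"name": "Employee A", "zip": "55401", "birth_year": 1980, "sex": "F"},
    {"name": "Employee B", "zip": "55402", "birth_year": 1975, "sex": "M"},
]

def reidentify(visits, roster):
    """Link records whose quasi-identifiers match exactly."""
    matches = []
    for v in visits:
        for p in roster:
            if all(v[k] == p[k] for k in ("zip", "birth_year", "sex")):
                matches.append((p["name"], v["diagnosis"]))
    return matches

print(reidentify(deidentified_visits, public_roster))
# → [('Employee A', 'asthma'), ('Employee B', 'NIHL')]
```

A modern model does this kind of linkage at scale and across fuzzier matches, which is why stripping direct identifiers alone is not sufficient.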
And so some of the things y'all might have heard me talk about in 2018, where we need to really have access control lists and think about who has access to this data and where are we sharing it? Or, for example, whether or not you're running this stuff out in the world or maybe locally within your own institution. We'll maybe talk about that more. But there are a lot of ethical considerations too, aside from data privacy, about making sure to validate what assertions are being made by these probabilistic models that are definitely not human beings. Right, and the other thing is for all AI, whether it's your internal systems or the public systems, you have to read the terms and conditions. The fine print in there will get you in terms of who owns the copyright, who owns the data, who has access to data, how can you use the data? I looked at a new AI that was to help you generate images and in there it said you cannot use the images you generate for commercial purposes. That means you could not take that image and put it on your marketing brochure for your hospital or your clinic. But they could use whatever you generate for whatever purpose they wanted. Now, probably one hack is you could have the AI read you the terms and conditions or summarize it: you know, excuse me, AI, am I selling my soul by signing these terms and conditions? So there actually is a website, I think it's called Terms of Service; Didn't Read. And on there they list what a lot of these are. But AI is progressing so fast they don't have all the programs that are out there. But that is a good place because they will also let you put in a URL and if they haven't looked at their terms and conditions they will do that and then add it to their database. So there are a lot of tools out there. And of course ethical considerations. Just because you can doesn't mean you should. True that. All right, challenges, biases and errors. We all know that AI models are built on data sets that contain biases. 
Thing is, how do you recognize them? How do you control for them? And what is the impact of the errors that are coming from those biases? You've got to consider that as well. Right, because, you know, so much of our data has been skewed by bias around, you know, socioeconomic, sociopolitical inequalities that can be reflected in your data and then come out in your AI models too. So definitely validating what you're doing matters too. And then of course, when you get further along and actually are trying to make assertions, again, this is a computer just working on probabilities. And so one of the things everybody likes to talk about is AI hallucinating, which I don't think is the best word, maybe fabricating or confabulating, because it's taking these probabilities and ending up at a conclusion that actually is not realistic. And so people have, like, put in, you know, their research into an AI generator and said, hey, make me a paper. And then AI will come up with a pretty legitimate sounding paper and even have references. But if you dig deeper, and this is that validation point, you'll see some references that don't even exist. It just sounded like this would be a good reference if that thing existed in reality. Somebody should make that paper. It's actually being written. There's actually several of those, believe it or not. I'll send you some. So one of the solutions is actually to diversify your data sets. And in occupational medicine, we really have to do that because we've got the medical component, we've got the industrial hygiene component, we've got the safety component, we've got the regulatory component. So our data set needs to be much broader because of all the things that we cover in occupational environmental medicine. And probably one thing we're not gonna say a lot explicitly in these slides, but I'll tell you implicitly, I mean, AI is very much context dependent. 
And so I think there's always a push and pull between, okay, let me just let this AI run generally on a variety of different contexts, or I have a very specific use case and AI is gonna do a lot better. And people are trying to really do well to meet in the middle, which is where you get probably your best effect. So anyone who's been to a Zeke McKinney informatics talk at AOHC has probably seen this slide, but it's really probably more relevant now than ever. Everyone's scared, gee, AI is gonna take over our job and replace us and we're not gonna be able to work as physicians anymore. But to be honest, historically in informatics and even certainly now in AI, oh, thank you. It's not as much about computers replacing us, which I think is the problem that we always think about. It's about computers supporting us. So historically in informatics, people talked about, well, gee, computers are gonna take these symptoms, come up with a diagnosis or make these decisions for us. And that's not how it's supposed to be. It's supposed to be that together with you, the computer can support you in ways that you couldn't do things yourself, such as analyzing these huge data sets over weeks and months, for example. Anything you wanna add? No, go ahead. All right, challenges, regulations. So as I sort of mentioned before, OEM has a slew of regulations that other medical entities do not have to deal with. And then we also have specific guidelines. So as we are looking at developing policies and using AI, we have to consider all of these regulations. For example, standard of care in internal medicine, for family practice, surgery, is you take a family history. In occupational medicine, there are very limited situations where you can take a family history, one of them being FMLA. But what a lot of people don't know is, who is a first-degree relative under GINA? Okay, it extends to cousins, mothers-in-law. So if you've got people writing notes that Mr. 
Jones is depressed because his wife, Sarah Jones, has breast cancer, you've now violated GINA, because you can't put Sarah Jones' medical information in Mr. Jones' note. And what happens with occupational medicine, we have a lot of people who come in from family practice and internal medicine. They're used to their standard of care, not realizing that you've just violated GINA by constructing that note the way you did. And one of the things AI could do is review those notes, flag those notes that have GINA information that needs to be redacted. So that's one way to use AI efficiently versus having someone review every single note and then go back to the provider. AI can do it instantly on the spot and say, hey, you can't use that wording in this note. So potentially your compliance folks are gonna love it. Yes. So in a rare example, this is actually from my startup in our NLP engine, and I wouldn't have shown this except it was just published open access last week. And essentially all this is doing is taking, for example, an imaging report and creating ICD-10 diagnoses out of it. If you think about it, imaging is such an interesting place because we have these very discrete anatomic findings, but how much discrete data do we have in there? So I always joke and say, well, one example of where we could try to do this in terms of quality is I say, gee, Dr. Lowe sees occupational medicine patients who have knee injuries, and so do I. And it seems like, gee, every time you order an MRI for somebody who has a meniscal tear, you seem to find them. But for me, it doesn't seem like that much. And we actually look at the data, and gee, 90% of the time, he's looking for a meniscal tear, orders an MRI, finds it. For me, it's 50% of the time. So the world would say, well, how do we make Zeke more like James Lowe? But let me take you a step back. How do you find those meniscal tears in the first place? And without discrete data, we can't even do those kinds of things. 
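The GINA note-flagging idea mentioned a moment ago could be prototyped as simply as a keyword scan, though a production compliance tool would use a clinical NLP model rather than a word list. Everything here, the function name, the term lists, the sample note, is an illustrative assumption, not an actual product.

```python
import re

# Minimal sketch: flag sentences that pair a family-member term with a
# medical condition, the kind of wording GINA prohibits in an employee's
# note. Both term lists are deliberately tiny and illustrative.
FAMILY_TERMS = r"(wife|husband|spouse|mother|father|son|daughter|cousin|in-law)"
CONDITION_TERMS = r"(cancer|diabetes|depression|depressed|heart disease)"

def flag_possible_gina_text(note: str) -> list[str]:
    """Return sentences mentioning both a family member and a condition."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", note):
        if (re.search(FAMILY_TERMS, sentence, re.I)
                and re.search(CONDITION_TERMS, sentence, re.I)):
            flagged.append(sentence.strip())
    return flagged

note = ("The patient reports low mood. "
        "He is depressed because his wife has breast cancer.")
print(flag_possible_gina_text(note))
# → ['He is depressed because his wife has breast cancer.']
```

An LLM-based reviewer would catch paraphrases this regex misses, but even a sketch like this shows where the instant, on-the-spot flag the speakers describe would hook into the note workflow.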
And so this isn't exactly an AI solution. This is natural language processing. But it's talking about the challenge of how English language, which is so much of what we do in our electronic medical records, is so hard to process. And so ultimately, we did a study on this after running this engine, and showed, this is what was published, again, just last week, that having four physician reviewers try to code an imaging report like that for just one primary, like what is the most important finding out of this, they honestly did not agree very much. As you can see, this is like pairwise and multiple comparisons. But compared to that, this engine, at different lengths of ICD-10 codes, was very sensitive and pretty specific. And you can download this paper and look at it. But the point is, the AI application here is, as you are giving users these diagnoses, for example, as they're dictating imaging reports or as they're reviewing them, users can validate that data and give you that training feedback. And so part of AI, again, really very much involves human validation of data and the push and pull of training those models so they are more correct in terms of the context of what we are thinking as, for example, occupational medicine physicians or physicians in general. So we do have the opportunities to improve diagnostics like Zeke just talked about, enhance our accuracy, and it's already being used in, like, oncology to improve the specificity of reviewing radiographs for tumors. Early detection. If you can find it earlier, you can treat it earlier and often have better outcomes by doing that. And then continuous monitoring and intervention. So if we go back and look at personalized care, you've got three individuals who are going into your heat monitoring program. Should they be monitored the same? Your first person is brand new to the job, was recently diagnosed with hypothyroidism, just started treatment. 
Do you need to monitor that individual the same as the third individual who's a marathon runner and runs through the desert all the time? Should they be monitored the same? Their reactions are not going to be the same. And now you can come down, monitor them individually, identify individual trends. Is this individual hypothyroid? As they do this job, do they become better at managing their heat stress levels? Or do they not? You can then go back to their primary care physician with data and say, look, this individual does this job and here is their reaction to increased heat. And as a medical director, you guys have reviewed audiometry and spirometry and trying to look at those subtle threshold shifts and changes in spirometry over time. I mean, it's not that hard and it's not impossible, but imagine AI is doing all that for you and presenting up to you, here's your list of employees you might need to be concerned about or you might want to do more surveillance on. And so those types of applications are where suddenly this is going to be really helpful to us, for example, in occupational medicine moving forward. Opportunities, enhanced risk assessment. So we've been sort of talking about trends, but if you've got an AI bot that is going through your data on a regular basis, it can spot trends before you ever would, because you didn't have time to look at that data, and you can automate that process. The AI bot can give you a report every week, every month, give you a summary over the year, and you're not using your time to go do data pull or someone else's time to do that data pull, the AI is doing it for you. And then you can do what I call human directed AI use to go look at that information and see, all right, we're beginning to see this trend in this group of employees who are doing this new application that you probably would have missed because it would have been months or years before you went back to look at their data. 
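The audiometry-surveillance screen described above can be sketched in a few lines. The rule encoded here is OSHA's standard threshold shift from the occupational noise standard (29 CFR 1910.95): an average worsening of 10 dB or more at 2000, 3000, and 4000 Hz relative to baseline. The function name and data layout are assumptions for the sketch; a real system would also handle age correction and both ears.

```python
# Sketch of automated audiometric surveillance: flag a possible OSHA
# standard threshold shift (average shift >= 10 dB at 2000/3000/4000 Hz
# versus baseline, per 29 CFR 1910.95). Thresholds are dB HL per frequency.
STS_FREQS = (2000, 3000, 4000)

def has_standard_threshold_shift(baseline: dict, current: dict) -> bool:
    shifts = [current[f] - baseline[f] for f in STS_FREQS]
    return sum(shifts) / len(shifts) >= 10

baseline = {2000: 10, 3000: 15, 4000: 20}
current  = {2000: 20, 3000: 30, 4000: 30}   # average shift = 35/3 ≈ 11.7 dB
print(has_standard_threshold_shift(baseline, current))  # → True
```

Run over every employee's latest audiogram, a check like this is exactly the kind of "here's your list of employees you might want to do more surveillance on" report the speakers describe.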
And as regulatory policy changes, whether it's through OSHA or other entities, depending on the particular industry you're in, AI can process that stuff for you and again, try to make assessments of, are we adhering to what we need to? What do we need to look out for? And again, it can even be sifting the web as new regulation, new literature, new standards come out and actually keep you up to date on what's going on. We sort of talked about personalized medicine, but just a show of hands, how many people have a smart watch? Okay, so a good portion of the audience is already using AI. Now, whether or not you're looking at your own trends is another story, but your watch is telling you what's going on with your personal health. Now, if we extend that to, like I said, with individuals going into your heat monitoring program, you now have that information that you can relate back to the individuals like, hey, this is how you are responding to your heat stress levels, you know, and ask them, you know, are you monitoring with your smart watch? I see you're wearing it, you know, what do your rings look like if you got an Apple smart watch, you know? Are you using those applications that you actually have available to you to improve your health from that standpoint? And then you can also tie it into your wellness programs in your company. So if somebody's trying to get their 10,000 steps or their 20,000 steps, they forgot their watch, you know, but you can tell them that when you're doing this activity, here's your data and allow them to be a participant in the management of their work life and their personal life, because for the individual, there's only one life. 
Right now, we don't have, I think, a ton of use cases of people linking personalized health record data, such as what's coming off your wearable, Apple Watch or Fitbit, with your electronic health record data, but that's not very far away and your ability to give access to those systems, by the way, looking at the terms and conditions, is gonna be pretty relevant. All right, so what can AI do? This is today, this is not the future. AI can read. You can go from text to image. So you can type in, create an image of a meeting among occupational environmental medicine professionals sitting at a conference, and it will do that. You can go text to video. You can say, I need a promotional video for my clinic. Can you show a video of people coming into our office, signing in, being seen by the providers, and then reading their results? It will create that video for you. It can go text to music. So if I wanted to create a song about ACOEM, I could do it. I could talk about occupational medicine. I could talk about environmental medicine. I could talk about aerospace medicine. It would all be in the lyrics of the song, and the song would sound good. That's the other thing. It might not be a hit, but it would sound good. AI can read data files. So we've talked about large data sets. You can upload a data file to AI for it to review and analyze. It can also read documents and text files, and we'll talk about how you use those documents and text files to help train your AI. It can hear, and you can talk to AI in real time. You can have a conversation with AI, and some of the latest models also understand emotions, and they can say, hey, you sound a little stressed. Are you giving a presentation? 
Well, you know, in actual clinical medicine, the real, like, super future of AI is integrating all these things together so that in my exam room, there are cameras, there are microphones, and as I'm talking to the patient and examining the patient, it's generating all your clinical documentation for you, context-specific to what you're doing, seeing what your exam findings are. Maybe you're giving it some context clues. Oh, the patient's range of motion was 50 degrees, and, you know, forward flexion, whatever, and putting all that together for you so you don't have to do that. So what we've seen over the last 30 years, all these increased administrative and documentation requirements of clinical medicine, you put all these features together, and we're only a couple, a few years away from having some of that stuff available for us. Actually, some of it's already happening now, just not as good as I described holistically. All right, so I want to give you something practical, and so this is called how to train your AI because you've heard of the term garbage in, garbage out? Garbage in, garbage out? Well, with AI, it's plain information in, plain information out. So if you want a specific result, you have to train your AI. Oh, sorry, sorry. You have to train your AI, and you train the AI in multiple ways. You train the AI by the questions that you ask. You train the AI by who do you want the AI to emulate? So I've got sort of a recipe here for how to improve your prompting. I do something I call prompt stacking. So it's sort of a Socratic method for the AI, which is an iterative process with questions. So I will give the AI a scenario, and then I will, first of all, ask it, do you understand? Because what you think you told the AI and what the AI understands may not be the same thing. So you ask, do you understand? And then the AI says, yes, I understand. You want to do A, B, and C, and I'm going to give you X, Y, Z. 
And then you ask the AI, what else do you need from me to complete this task? And the AI will say, well, you referenced this regulation. I don't know where to find it. Tell me where to go. All right? Because all AI is not at the same point in time in terms of their references. When ChatGPT first came out, it only had information up to 2021. So if you wanted the latest paper that just came out in 2022, it would say, I don't have that information. So you always ask the AI, what else do you need from me to complete that task? So the prompt model is who or what role do you want the AI to emulate, what do you want the AI to do? And you need to be specific about what you want the AI to do. For example, if you went to a shoemaker and said, I need a pair of shoes, that's all you told the shoemaker. You come back in a week, shoemaker's got a pair of shoes, size two, and you wear size eight. You did not give the shoemaker enough information to create what you wanted to create. So you have to tell the AI, what is it you want to do? You have to tell the AI, how do you want that information presented? Do you want a list? Do you want a table? Do you want a graph? Do you want an infographic? So you have to tell the AI, what is it, how do you want that information to come back to you? Do you want all of them? It can do that. And when do you want this information? Do you want it now? Do you want the AI to do it every week, every month, every two days, every three days? So you have to tell it when. So you have to give the AI much more information than people generally give it. When you say who, who do you want the AI to emulate? You can say, you are an occupational medicine expert with 20 years of experience, and you've been tasked to create a presentation on AI. And then here's my audience. Here are the people I'm speaking to. So the AI will put it in the language level for your audience. 
If you say, I'm giving a presentation to third graders about occupational medicine, it's not going to use the same language, you know. You say, help me explain to third graders what an occupational medicine physician or nurse practitioner or PA does. It will put it at their level. So once again, you've got to train the AI. You can train the AI to sound like you, in voice and writing style, by uploading samples of your previous writing and papers. It will review that and tell you, oh, you sound very conversational. You sound very authoritative. And so the writings coming out from the AI can sound like you. You can also train it in your voice with an avatar that looks like you, sounds like you, uses your language, your voice. So if you're interacting with a patient and it's 2 a.m. on Saturday and this patient is scheduled for surgery, but they're having anxiety about going to surgery on Monday morning, it's 2 a.m. They don't have to call you if you've trained your avatar to answer their questions that they most frequently have about their upcoming surgery on Monday morning. It's a way to multiply yourself. So you give the patient access to information that's based on your protocols, based on your style of thinking. And the patient knows and trusts you. So they're going to be more familiar, more comfortable with that information. And they're less likely to go to Dr. Google, you know, and not show up Monday morning. So these are some ways that you can use AI. And then the output, like I talked about, what do you want, how the request is to be delivered, and when do you want the results? So there's your model that you can use today to improve your prompting with AI. But to distill this all down to kind of what I said at the beginning, context matters. And so if you don't tell your AI the context of what you're trying to do, who you are, who you want it to be, and how you're trying to do it, it will most likely not give you the result you're desiring. 
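The who / what / how / when prompt model just described can be captured as a small template builder. The four fields come straight from the speakers' recipe; the function name and example values are illustrative assumptions, and the closing line bakes in the "do you understand, what else do you need" prompt-stacking step.

```python
# Template builder for the "who / what / how / when" prompt model.
def build_prompt(role: str, task: str, output_format: str, timing: str) -> str:
    return (
        f"You are {role}.\n"                      # who to emulate
        f"Task: {task}\n"                         # what to do
        f"Format: {output_format}\n"              # how to present it
        f"Timing: {timing}\n"                     # when / how often
        "Before answering, confirm that you understand the task and "
        "tell me what else you need from me to complete it."
    )

prompt = build_prompt(
    role="an occupational medicine expert with 20 years of experience",
    task="create a presentation on AI for a physician audience",
    output_format="an outline followed by speaker notes",
    timing="a first draft now, then a revision after my feedback",
)
print(prompt)
```

The resulting string can be pasted into any chat model; the point is that every one of the four slots is filled explicitly rather than left for the model to guess.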
So large language models are basically what we are thinking of or talking about as, you know, AI and generative AI nowadays. These are models that have analyzed English or other languages in great detail. But the importance for OEM is, you know, we need to make sure these models are trained on the language we speak, the context that we know as occupational medicine physicians. And so if we think about the things that are relevant to us, think about our regulatory sources, OSHA, the ADA, work comp, GINA, about the standards we know about from ACOEM, NIOSH, the WHO, the CDC. And again, the point about whether or not these are gonna be things that you can upload to the cloud, because again, there's a bit of a risk there when you're, for example, gonna be uploading employee or private personalized health information, PHI, up to a cloud. But nowadays, there are opportunities and applications where you can run these things locally on your own computers, and your own computers have enough power to actually do it. So we've really reached the point where some of these things can be really right in your own pocket. Right, so just last week, OpenAI announced that ChatGPT features that had been behind a $20-a-month paywall are gonna be available to everybody. And so what that means is you can create your own custom GPTs that everybody will have access to. In the past, it was that you created a customized GPT, somebody had to have the $20-a-month subscription in order to access it. All that changed last week. And so ChatGPT-4o, not to be confused with 4.0, gives that availability to the masses, and that small o actually stands for omni. So the thing about the LLMs is we are at the point in time when it's important for OEM to be at the lead. We need to create our own large language models. It's easier to do today than it was six months ago because of these changes that have come out. As Zeke said, you can run these large language models on a local server. 
So you can train it with your data. And then the results that come out are much more pertinent to your population, because it's data that's internal to you, and it's also much more secure. So as OEM practitioners, what we're going to have to do is ask our IT people, if we build this on our own servers, is it secure enough? Because if you're at a hospital system and they go, oh, an LLM, sounds great, we'll put it in there, and then we'll mix the employee work data with the employee patient data. Boom, we've got an LLM. Well, no, there's a firewall that needs to be in between that information. But we're the ones who understand that; the IT people don't. So we're going to have to be in the room when these things are created and discussed, to tell them what features we need in OEM to make sure that information is safe and secure, but also that, when somebody requests those records, we've got 21 days to produce that information. So, you know, our requirements are broader than what a lot of the IT people are looking for. But now is the time for us to get involved with that. I've already started looking at what components we need to build an OEM LLM. Everybody okay? All right. That's how you know you're in a room full of doctors. Everybody move. I think it was a spilled drink. I was worried somebody was down, but, I mean, okay. Everybody okay? Yeah, we're good. All right. So that's where we are right now. This is today. We can start working on large language models today. The thing is, if we wait, we will end up in the same situation we're in with electronic medical records that don't understand occupational and environmental medicine, don't have the features we need, don't communicate the way we need, don't have the pertinent firewalls that we need. So we have to be on the forefront of this. Now is the time. So, how many have done research in this room?
How many people were actually research assistants? You had to go down to the library, pull the references, scan them, and all of that. How many days did you wish you had your own research assistant? All right. All right, so hopefully this is going to work; we're going to try and do this live. I want to show you a tool that will give you a research assistant to help you out. So, yeah. All right, we're going to flip this over one second and hopefully get it right here. Where's my mouse? There you go. All right. You just gotta look over your shoulder. Yeah, I gotta look over my shoulder here. So, this is a program called Perplexity.ai. If you've got ChatGPT over here, and you've got Google Search over here, and you put them together, you get Perplexity, which is an AI-driven search engine. So, let's see if we can get you in that box here. Okay. So, when you go into Perplexity, it looks just like this. And you're like, all right, so what do I do now? It's kind of blank, you know? So if we say, excuse my typing here, because it's hard to see at an angle. Actually cannot see. Please. Please, all right. Is there a space in there, or no? All right. Okay. So, the question is: please give me a list of the emerging infectious diseases, and what workers and workplaces are impacted by those diseases. And now I lost my place. That's okay. Thanks, Gene. Impact on workers, just put impact on workers. What? Yeah, we got it. Put impact on workers. All right. Thank you. All right, so that's the question, but I'm going to say, please put this in a table. And by the way, it'll probably be fine with the typos too. All right. So there's our prompt. The first thing that you notice up there is that you've got references. If you put that same question into ChatGPT right now, it doesn't give you references unless you specifically ask for them. With Perplexity, you always get the references. Okay. Here's your answer right here. Influenza, common cold.
It's already started building this table. Let's see if we can go more. Yes. We got, what is that one? I can't even see that third one. Gastroenteritis, COVID-19, other respiratory infections, zoonotic infections. But the point is, it's giving you some potentially relevant information. And as you go through this, you can give it more context. Say, no, no, no, I need you to be more specific; tell me about specific gastrointestinal diseases affecting, for example, South America, or wherever your workers may travel. But you see how quickly that information came up. If you had to go look for each individual reference, review the reference, compile the references, and then build a table, that takes hours; here, in a matter of seconds, we've already started on our table. And you can always refine your search. You can add more information. And down here toward the bottom, you can ask additional questions: no, I didn't really mean that, I really only want it for the United States, or I really only want it for Europe or South America. And so you can refine your search. You still have all this information up here; it doesn't go away when you ask another question. So you have a history of your search and your references. You can download these references, and you can get abstracts; you can actually go to the individual references and look at the abstracts. So it's all in one place. You're not looking at multiple databases to get those references. And one thing that's good here is it will show its work. And so again, on the opposite side of the black box problem (how do these things work, and how are they functioning?), what we're really shooting for in general is explainable AI, what they call XAI: an AI where it's like, okay, I can see what's inside the box, I can reproduce it, and I can let people know that this is valid information with relevance. And so Perplexity.ai is free. It's not behind a paywall.
Now, it does have some features that are behind a paywall, in a feature called Copilot. You hear about Copilot from Microsoft Bing and others; Copilot is a general term that's being used for advanced features. But you can see what you can get for free from Perplexity.ai, and that's available today. Just watching the time, so people can ask questions. We've got 15 minutes. All right, let's see if the slides come back up here. There we go. So you saw the demonstration of Perplexity, and again, we just showed you the example of making a table of infectious diseases. This was a preview we did beforehand, but a little bit cleaner. So one interesting thing I saw in the literature, just from this very year: they made a chatbot to answer occupational medicine questions. They took 12 docs, of which eight were residents, had them create questions and answers about occupational medicine, and wanted to find out how well a ChatGPT bot performed against the physicians on a five-point Likert scale. They looked at accuracy, precision, completeness, usability, and relevance. And so this star model here shows you how it ended up, where the blue was the ChatGPT bot, the orange was the ChatGPT bot given some specific context about occ med, and then the... what color is that? Green, sorry, I'm green colorblind... the green was the physicians giving their answers. I know I should have done better there. That's bad on me. But what you can see down here was that the physicians, for both completeness and accuracy, did way better than the bot. Similarly, comparing the ChatGPT bot against itself, the version given context did better. But in terms of completeness, the ChatGPT bot with context versus the physicians was actually not statistically significantly different, whereas it still was for accuracy.
So for now, we are still more accurate, but our ChatGPT bots, if you give them context, can give you as complete an answer as a doctor, I guess. Good paper. All right, so just to summarize: we've talked about challenges and opportunities for AI in occupational and environmental medicine. The vision for the future is that we have to build our own, and we can't wait for anybody else to build it for us. That's the future. And where you integrate AI and OEM, you have a chance to improve outcomes for workers, monitoring of workers, and personal health for workers, by marrying those two together. So what can we do right now? You can train all of your AIs. The model I gave you works for ChatGPT, Google Gemini, Anthropic's Claude, Perplexity; it works for all of those. It's a generic model that works across the board. Explore and adopt AI technologies. You've already got your smartwatch; keep going. And like I said, we have to build our own large language models. It's not going to be just one model; we need multiple models. We're not going to put everything in one place and say, well, this is the large language model, the be-all and end-all. It's not going to be that way, because each clinic in the future can have its own large language model built on its data. Each hospital can have its own for general medicine, but also for occupational medicine. So if you're in a combined program, you should have separate large language models based on what you need. And you know, everyone remembers the famous Shakespeare quote: brevity is the soul of wit. So when it comes to AI, context matters. Thank you all for attending. Yes. Thank you. Oh, and if you download the slides, you'll see we have some tools and resources for you at the end here, and references you can check out. And if you don't mind, we wanted everybody to scan this QR code just to answer a couple of questions about your familiarity with AI already.
Yes, and if you scan the QR code and give us your email, we will also send you information about AI as it comes up. I will send you a guide for how to build your own AI research assistant that I created for something else. So scan the QR code; if the QR code does not work, you can use that very large, ugly link below. And we will not sell your data, we will not spam you; we have to give you that caveat. No terms and conditions. Yeah, the terms and conditions are that you are asking for information about AI, and that's what we will give you. So we do have time for questions. Yeah, let's open it up. Yes. Oh, we will have to repeat your question, because we don't have a mic to pass around. Sir. One comment and one question. So one of the biggest challenges with bringing technologies into larger workspaces, especially large institutions, is institutional buy-in, especially at the operations and purchasing level. You can break that down in terms of the big stakeholders: the C-suite people, the ops people, the IT people, or even interoperability when you talk about EMR systems and so forth, and patents. So it's great to talk about AI in a bubble, in terms of what these nice tools can do when they just exist on their own. But I'd like you to comment a little on how we bridge that gap: we have this great tool, so how do we take it into the clinic and scale it up? And then secondly, you addressed it a little with your small-scale study: AI hallucinations have been in the news, and I think sometimes these tools can be over-relied upon. So how do we address the need to train clinicians not to over-rely on these kinds of tools, especially as the data they're trained on scales bigger and bigger? I'm going to be your chatbot and summarize your question, and then I want you to nod and let everybody know I did that right, and then I'll let Denise maybe start.
So he asked a two-part question. Question one was: it's great talking about AI in a bubble, but in real life, how do you get your institution to buy in, given regulatory factors, purchasing, patents, et cetera, and the legal entities that may matter? That's a good summary of your first question, sir? The second part was: despite all that, we see physicians rely on electronic tools like clinical decision support or other tools in the EHR, or UpToDate, and maybe sometimes come to the wrong conclusion when they're looking at summarized data. So how do we prevent that, or how do we address that? So: your early adopters are already using it. They're just not telling anybody. Well, they're not early adopters anymore. Because, if you think about the use of spreadsheets, initially you only had a handful of people who knew how to build a spreadsheet, and then it got to the point where everybody's supposed to know how to make a spreadsheet. You know, we're getting to that. But your early adopters are already creating ways to use AI to automate some of the repetitive tasks that they do. They're not asking for permission to build that, to automate what they're doing. So the challenge is going to be, how do you make sure that what they're building is safe and compliant with all of the laws and regulations? And that's one thing you have to take back to, you know, the higher-ups in your organizations, to get them to understand: these things are being built, and how do we make sure that we are compliant as they're being built internally? And the other thing you have are some millennials in organizations who are already asking, why aren't you using ChatGPT to create this and that? Because it's more efficient. So you have the two extremes: the folks who are running off and doing it already, and the people asking, why aren't you doing it already? But we have to make sure things are compliant. Yeah, and just to chime in on that.
So I mean, the buy-in is there, but you know, some of this is an education matter, whether it's at the level of medical school. For example, we are training medical students now at the University of Minnesota about communications and social media and how to address bias and misinformation. And this is yet another tool like that, where we are going to have to train our future physicians and our future healthcare administrators to be prepared to address these items exactly in the way you're asking, and not to over-rely on them, because they're the ones who have to be the content experts to provide that context. In the middle, sir. I've been using AI for several months now. I find it's very useful for the things that most cause burnout among physicians. So things like technical writing, responding to patients about normal lab results, things like that. So you can really use ChatGPT and other AI to simplify the laborious tasks that you have in your clinical practice to reduce burnout. It's very good for technical writing. It's not very good for creative writing, in my opinion. I do a little bit of research, and I wouldn't want AI to write my research paper, but I can write a protocol for blood draws in an outpatient medical clinic, with quality assurance, in about 10 seconds. And you can do endless permutations of that, because that's how I use it. And so I think it's really well-suited for that. Use it for what it's meant for, not what it's not. It's kind of like a dishwasher: you load up 130 dishes, and maybe one comes out with a spot to check. Before I summarize, just to clarify, that's a comment, not a question, right? No, I just want to make sure to answer your question. Anyway, so the comment was... thank you. No, and I appreciate it. So his comment was, you know, AI is really good for a lot of technical things: technical writing, writing protocols, answering normal lab results. Let AI be good at what it's good at.
It's maybe not as good yet at creative writing or applying things in research. But again, the context matters, you know. If you try to get AI to do everything, it's never going to be good at that. But if you give it guardrails and put it within a certain framework, a certain context, where you can be the overseer of it, it actually will do quite well. Dr. O'Neill, the famous Dr. O'Neill Meyer. Oh, wait, hold on a second. Denise, I'm so sorry. So let me address the creative factor, because I have created stories and images with AI. Some of it, again, comes down to the prompt and what you give it: what information, how you want it to produce that creative content. But it is impressive what it can create if you give it the right context and prompts. I've created digital images and shown them to people: is this real, or is it AI? They can't tell the difference. And I'm also a photographer, so I'm looking at those images closely. If I tell the AI that I want an image of an ice cream sundae shot with a Nikon D500 with a 50-millimeter lens, that's what it's going to give me back, as opposed to just "give me a picture of an ice cream sundae." So again, it goes back to how you train your AI to get the results that you want. So I find that it's good for training materials. You can put in there: give me a vignette of a patient with gastrointestinal pain or abdominal pain, you know, in an occupational outpatient clinic, for a nurse case manager to assess. It'll give you good clinical vignettes like that. So I agree with you on that. I disagree that it doesn't give you good images, but that's it. The commenter noted that it is good for giving, for example, clear clinical vignettes for training, maybe not as good for images yet. But Denise asserts that maybe it's better than we think. The famous Dr. O'Neill, my original program director, the one who brought me into this field from being a baby fledgling doctor. Sir. Thank you. A comment on the creative.
In my other life, I'm a screenwriter, and I've named my patients too much. And they now have what's called coverage, which is a script analysis. There's a program out; I used it last month, and it was like $45, and a half an hour later I got the best script analysis on one of my screenplays I've had in 20 years of paying people $100 to $200 and getting a result back the next week. And I don't know what the process is; they probably loaded in all sorts of stuff from award-winning screenplays, but I literally got stuff that no one had picked up on: character development, story arcs, character arcs. It's there in the creative world. Thank you, Dr. O'Neill. So Dr. O'Neill mentioned that in his other life he's a screenwriter. He's paid people $100 to $200 in the past to analyze his screenplays and give him feedback on them. He paid $45 for an AI to do the same thing, and he says he's gotten feedback he's never gotten over the past 20 years. So a good example of being able to automate, more cheaply, tasks that historically have been extremely difficult, and maybe not as accurate, relevant, or complete as he wanted. Thank you, sir. Dr. Drury in the back. Oh. I'm wondering, is there a push or action toward creating a medical language? And I ask the question because if I'm doing a search on a big file and I put in "back," I can have call back, I can have back of hand, I can have hand in my back, I can have take me back in time. So why do we have the complexity of our English language in our medical files? I mean, we are scientists, and as this gentleman indicated, for technical filing it's just chaos. And so I'm asking you, you're in the midst of this. It makes sense that if I'm talking about back pain, I don't want anything else other than back pain. And if I mean the morning, a.m., I don't want to be the bird. And so. So Dr.
Drury said, essentially: if we're searching the English language for back pain in the electronic medical record, and you search for the word "back," well, the word "back" is used in a million different contexts. How do we contextualize that? And he's right, that is difficult. I'm not a neurolinguist, so I probably can't do a great job there. Maybe go read Snow Crash by Neal Stephenson. But what I will say is, saying "back pain" and giving it that context is certainly going to do better. Speaking to the NLP algorithm I was describing, being able to use a codified framework, for example ICD-10, SNOMED, or some other hierarchical or standardized set of discrete elements, may make that easier too. Unfortunately, human beings don't really think like that, or at least not yet. And that is why AI has been helpful in trying to bridge the gap between how something that moves very linearly and logically, a computer program, comes to conclusions, versus human beings, who are actually really good at making a lot of different connections that may or may not have content relevance to each other. I didn't mention it, but you can also put negative phrases in your prompt for the search. So you would say: I want all the cases with back pain, not call back, not come back, not talk back. Ha ha ha. As a way to narrow the parameters. So again, like Zeke said, human beings don't always write in a succinct manner, or the same manner. It depends on where you were trained. Where I was trained, it was OB-GYN; where other people are trained, it's OB-GYN. You know, that's just the human nature of things, but you can refine your search by excluding things as well as including them. All right, last question. And speaking of talk back. Oh, I was going to get Dr. Nabeel over here, the AI expert himself. Get these last two. Okay. Yes. I want to thank Dr. McKinney for his presentation.
And the question about creativity: I just want to let people know, there's a guy named Tyler Perry. Do you know who Tyler Perry is? Yes. I think people know who Tyler Perry is. Ha ha ha. He wanted to expand his studio in Atlanta. He was going to spend $800 million to build out a studio for his movie production. Then he saw Sora and all this text-to-video stuff, and he canceled the whole project. He said, I'm not going to spend $800 million. So with your 50-millimeter lens and photography, you can create anything with this, if you know how to set it up. You're right, absolutely right. That's just a comment. Thank you, sir. About creativity, it's incredible. Just to summarize what he said. Hold on; sir, you're next. So he's saying, about creativity: Tyler Perry apparently canceled his $800 million movie studio expansion in Atlanta because he saw that, putting the right context around this, you can essentially make whatever it is you want for a lot cheaper. Thank you. Dr. Nabeel, last but not least, the AI expert himself in ACOEM. Thanks. I appreciate your amazing presentation on this. Great. I haven't been responding to people talking about that. I've done research in AI and... And in low back pain. And in low back pain. I call myself an expert. What has happened now with LLMs is we've enlarged the bandwidth options. That question that we struggled with, feeding the computer and preparing the data, is no longer a challenge anymore. So you can write it any way you want; I can find it. That's where we are. They are learning like we are. They're executing like we are. They are mimicking us as physicians. So for me, if I want to see a low back pain case, and I want to see if it's acute, acute-on-chronic, or chronic, it takes me four minutes, or three seconds, depending on reading through the history. That's where we are with our patients. They are deciphering low back pain, or any other condition, in a way we have never seen before.
So I urge you: I built my own LLM. So that's why. Just to summarize Dr. Nabeel's comment: he's done research in AI and low back pain, he's published it, he's built his own LLMs, and I've seen this stuff. He was saying, to Dr. Drury's point about "back" being used in so many contexts: AI historically, I mean in the last few years, wherever he's been using it, has been very sensitive, but it's now much more specific and able to parse out some of those subtle context differences in ways we've not seen. And it's only getting better; if you wait half a year to a year, it'll be even better than before. With that being said, it's 9:34 and we're over time. Dr. Clement and I will be happy to chat with you all, but thank you so much for your engagement. Dr. Clement.
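Dr. Drury's search question and the negative-phrase approach Denise described in the Q&A can be sketched as a simple keyword filter over free-text notes. This is an illustrative sketch, not an actual EMR query tool; the phrase list and function name are assumptions.

```python
# Exclusion phrases taken from the examples given in the Q&A;
# a real query set would be validated against actual chart text.
EXCLUDE = ["call back", "come back", "talk back"]

def mentions_back(note: str) -> bool:
    """Flag notes that mention 'back' after stripping the common
    non-clinical phrases the questioner listed."""
    text = note.lower()
    for phrase in EXCLUDE:
        text = text.replace(phrase, " ")
    return "back" in text

for note in ["Patient reports low back pain for three weeks.",
             "Please call back to reschedule."]:
    print(note, "->", mentions_back(note))
```

As Zeke noted, querying a coded vocabulary like ICD-10 or SNOMED avoids this ambiguity entirely; a phrase filter like this is only a stopgap for free text, and still misses cases like "hand in my back."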
Video Summary
In this session, the speakers discussed navigating the AI frontier in occupational medicine. They covered housekeeping items, introduced themselves, and dove into the importance of understanding and using AI in occupational and environmental medicine. They highlighted the need for AI training, ethical considerations, data privacy challenges, and the potential benefits of AI in improving diagnostics, enhancing accuracy, and personalizing care. They also demonstrated Perplexity.ai, an AI-driven search engine, and emphasized the importance of building occupational-medicine-specific large language models. Audience members shared their experiences with AI, including its usefulness in automating technical tasks, creative writing, and improving efficiency in medical research and patient care. Dr. Nabeel also shared insights on AI's evolving capabilities and its ability to interpret complex medical data, like low back pain cases, more accurately and efficiently.
Keywords
AI frontier
occupational medicine
housekeeping items
importance of AI
AI training
ethical considerations
data privacy challenges
benefits of AI
Perplexity.ai
large language models