The Evolution of Artificial Intelligence (AI) (Nov. 15, 2023)
Video Transcription
Good morning, good afternoon, or good evening, depending on where you are and when you're watching. I am Dr. Kenji Saito, president of the American College of Occupational and Environmental Medicine. On behalf of the leadership of the college, I'm delighted to welcome you to ACOEM's third annual Virtual Fall Summit. I would like to acknowledge the summit planning committee and take this opportunity to thank our expert faculty for an informative and engaging event. I would also like to acknowledge and thank our sponsor, Enterprise Health. ACOEM's Virtual Fall Summit will provide you with three half-days of comprehensive occupational and environmental medicine education. Day one will focus on the evolution of artificial intelligence, or AI; day two will cover climate change and its impact on OEM; and day three will focus on clinical occupational and environmental medicine. The Fall Summit is eligible for a maximum of 11.5 AMA PRA Category 1 credits. It is now my pleasure to introduce you to the curators and moderators of our event.

Welcome to day one of the 2023 Virtual Fall Summit. Today, we will be focusing on the evolution of artificial intelligence. This is a significant topic as we delve into the realities of AI, machine learning, and natural language processing in today's world. We'll explore how to incorporate these advancements into occupational and environmental medicine and decipher their appropriate use from medical, scientific, and legal perspectives. We have a lineup of exceptional speakers today, so stay tuned to learn how AI can be a valuable tool in occupational and environmental medicine. It is my pleasure to introduce you to my good friend and moderator for today, Dr. Manny Baringi. Over to you, Dr. Baringi.

Artificial intelligence is a new and evolving topic. The presenters will help facilitate engagement with the topic without advocating for or promoting practices that are not, or not yet, adequately based on current science, evidence, and clinical reasoning.

Hello, and welcome to the first day of ACOEM's Virtual Fall Summit. Today's session will focus on the evolving topic of artificial intelligence. I am Dr. Manny Baringi, and I am the Chief of Occupational Health Services at the VA Long Beach Healthcare System, and I am on the environmental and occupational health faculty at the UC Irvine School of Public Health. I am pleased to moderate today's sessions. We have a full agenda with outstanding faculty who will present a practical introduction to ChatGPT and a three-part series on AI and machine learning in occupational health: current research, practice, and ethical and privacy implications. Our first speaker is Dr. Eric Jackson-Scott, who will present on AI in occupational medicine: an introduction to ChatGPT. Dr. Eric Jackson-Scott is not only a dedicated father, but also an esteemed occupational medicine clinician, a compassionate visionary, philanthropist, and advocate for community wellness. As a devoted father and leader, he has dedicated his life to the upliftment of underserved communities, leaving an indelible mark on countless lives globally. His unwavering commitment to catalyzing positive change has given rise to the acclaimed Jackson-Scott Foundation, aimed at enriching the world we live in.

Good afternoon or good morning, everyone. My name is Dr. Eric Jackson-Scott, MD, MPH, in Santa Cruz, California.
I was asked to give a presentation regarding ChatGPT in occupational medicine: an overview of the applications, advantages, limitations, future prospects, and some ethical considerations we should consider. I'd like to start by saying that there are no disclosures for this presentation.

ChatGPT and AI have been everywhere in the news and in business, and there are some proponents who feel that we should have regulatory agencies to monitor AI with regard to its potential for the destruction of humanity. This is from June of 2023, and there are many, many more articles and news posts to this effect.

So, an introduction to ChatGPT. It was developed by OpenAI. You can consider it an advanced language model, and it essentially leverages deep learning techniques to produce outputs very similar to human-like responses. It is a member of the GPT, or generative pre-trained transformer, family of models. And essentially at this point, it is the largest publicly available language model, with millions of users per day.

This is a picture from I, Robot, something that most of us have seen, but the idea here is that this is AI. The central AI in I, Robot was also developed as an AI model and essentially tried to overtake and dominate the humans in that society. Fortunately, it was stopped. This is a picture of the Tesla AI robots from Fremont, in development by Elon Musk, who also said, I think that's in here, that superintelligence is a concern and we need to be super careful because these systems are potentially more dangerous than nuclear bombs. Dr. Nick Bostrom is the gentleman who proposed that we actually live in a simulation, or at least an ancestor simulation that we could potentially manipulate, but in regards to the Elon Musk statement, yeah, we definitely need to be super careful. These generative models are learning. They're learning from us and through us as well.

So let's talk about what AI actually is. It's essentially two components: as a field, it combines computer science and data sets to enable problem solving. We want to have an input, and it's going to solve a problem, typically more rapidly than a human can, and give the output. It also encompasses subfields like machine learning as well as deep learning. Over time, especially now, AI is essentially mimicking the intelligence of human beings in such a way that a lot of proponents are very fearful of the potential outcomes.

So what is necessary for AI to exist? How does it live? Essentially, you need a foundation of very specialized hardware and software systems for writing and training machine learning algorithms. In terms of programming languages, there are different types, such as Python, Java, and C++, that have features popular with AI developers. In terms of validation, AI models need tons and tons of data, and they can process data at millions of bits per microsecond. All of that data gets combined, utilized, and organized in such a way that the algorithms can process it like a human brain. Then, obviously, you need the infrastructure, which is usually a software component that allows the AI to have its executive functions. In terms of ChatGPT, what we know so far is that it uses a vast database of text data, as far as we know, all the way up until 2018.
That was for GPT-3; now I believe GPT-4 goes up to 2021. Essentially, it does allow for the capturing of the micro-nuances of human language and intelligence. It can generate appropriate and contextually relevant responses across a broad spectrum of different types of prompts. In other words, if we go back a couple of slides, we said we needed an input that produces an output. For every ChatGPT output, we need an input. Those inputs are called prompts. And there's a whole business around simply prompts: the development of prompts, the inclusion and exclusion of different types of words. ChatGPT has a brain like a person. It is continually learning; it is continually growing. As you utilize ChatGPT, it can also recall previous interactions in terms of your prior inputs, outputs, or comments. In every interaction, it is learning. Every new interaction is learning and combining those interactions with its current data.

So in terms of the exact operations of ChatGPT, from OpenAI, this is what they said: "We've trained a model called ChatGPT, which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests." Previously, it did not reject any requests, but I think they saw the writing on the wall with that in terms of legal liabilities, because it was answering all types of questions. However, as we'll get to shortly, in terms of admitting its mistakes, I haven't seen that very much, but it does give itself a disclaimer prior to producing its output.

So again, OpenAI built this. It's a language model. It can answer questions. It can write many different types of copy. It can draft emails. It can hold a conversation back and forth with you. It can help you with programming languages, translating code or natural language into computer code. And all of this is based on our natural English prompts. We have a natural English prompt, we input that, it takes all of the data that it knows, and it produces an output for the user, depending on what they need or want, from books to emails. And we've tried it with medical notes; that did not work out well, but we'll get there in a second.

So what types of AI do we have in healthcare currently, or can we use for healthcare? Essentially, it's not one technology; it's a collection of technologies. In order for it to work, it has to have all of these components. It's kind of like our brain: in order for it to work, the executive function, the balance function, the speech, it has to have all of the components set up. So there is machine learning, the repetitive type of learning that it feeds on with each prompt and input; deep learning, working with the previous data sets; and neural networks, in terms of how the software connects everything for natural language processing. We really need all of these together for artificial intelligence.

I'm going to go into a little bit more detail on machine learning. This is essentially the most common type of application for traditional learning by computer systems. It essentially uses old, previous data to make predictions in terms of treatment protocols and what is or is not likely to succeed.
One of the most common forms of AI is machine learning: according to a Deloitte survey of 1,100 US managers, 63% of the companies were essentially employing some sort of machine learning in their businesses, whether that be prompting with a chat on the website, answering a phone, or something along those lines. It's one of the most important components for the development of the language models and for those models to be trained.

Deep learning is a very common application, particularly in healthcare, where it has been used to identify precancerous and cancerous lesions in radiology imaging. It's increasingly being used in radiomics, where they're looking at clinically relevant features, typically beyond the data that's perceived by the human eye. If you look at all of the photos and images together, they look like one gigantic image, but if you were to keep zooming in, eventually you'll get to pixelated formats, and if you keep going even further, you essentially have data. That data collection, combined, generates the image. So in this case, it's looking at the data of the image to determine if there are any other relevant or pertinent factors that need to be looked at or reviewed for this patient.

In terms of neural networks, this technology has essentially been around since the 60s. When machine learning and deep learning are combined in the code, it can essentially allow AI to make a determination, based on the data that's presented, whether someone can or will acquire a particular disease. Again, it views problems as inputs and outputs, and it can be likened to the way our neurons process different signals in our bodies, in terms of neurons firing together, connecting together, and so forth. The neural networks for AI operate in a similar fashion.

Physical robots are common. There are about 200,000 industrial robots installed each year around the world. They're typically performing predefined tasks based on programming: lifting, repositioning, welding, in factories and warehouses. They are becoming more intelligent as other AI capabilities are being embedded into their operating systems. So this is a process that is occurring over time, in terms of prediction, operations, and so forth, for the current and expanding set of physical robots. That group of robots from the Elon Musk factory: what they want is a robot essentially for everyone. That really reminds you of the movie I, Robot. And it's kind of concerning, right? Given this is a guy who says AI is potentially deadlier than nuclear bombs.

Surgical robots are very common. Introduced around the 2000s, they have been really helpful for surgeons, improving the ability to quote-unquote see, make precise, minimally invasive incisions, and suture wounds. Usually the most common types are gynecologic, prostate, and head and neck surgery, where the microdissection and excision is really intricate and detailed. It allows surgeons to preserve the local anatomic structures while removing whatever type of mass or lesion they're going after.

So in terms of advantages and applications, some of the things that ChatGPT can do in evidence-based recommendations include looking up digital records in seconds as opposed to hours, days, or weeks. It can provide extremely detailed information on the latest studies and insights into medical interactions.
As we'll get into later, this is one of the things that we definitely need to be careful with, because ChatGPT is known to randomly make up data. It can help docs with better decision-making in terms of patient care. There have been several studies to demonstrate the use of ChatGPT, where one study demonstrated that ChatGPT can produce different forms of research articles. The vocabulary is excellent, it has the conventional peer-reviewed research paper tone, and it's generally pleasant to read. However, again, as scientists and physicians, it's important for us to go back and double-check the references that ChatGPT produces. I myself have found that it will make up an article, researchers, an institution, a journal, and a year of publication that kind of go along with the prompt that you're asking. So you definitely want to go back and make sure that the papers it's referencing actually exist.

In terms of the perception of applications for healthcare, overall at this time they've been positive: a little over 50% believe that ChatGPT could potentially boost their healthcare careers, and 72% feel that ChatGPT is going to have a lasting impact on human civilization. We don't know what type of impact that is, but it very well may be a lasting, and hopefully positive, impact on human civilization.

So what are some possible future applications in healthcare? As they expand the functions and updates of ChatGPT, we may very well be able to embed virtual assistants in medical devices and/or EMRs for physicians. We can ensure comprehensive 24/7 care at a lower cost versus hiring a 24/7 nurse. It can ensure complete patient care by tracking body systems, predicting and tracking the next medical appointments, and alerting patients about their health, as well as providing updates and then feeding that data back to the primary care physicians or specialty physicians. This is what the around-the-clock care looks like: we have the optimization models here, where there are tons of probes and images produced. The AI model takes the input, looks at all the data points and data sets, and it essentially makes a prediction on the basis of the data in the image. In terms of our user interface, we can see that right there, and this can occur in real time as well.

In terms of reducing manual errors, 3% of all patients undergoing surgery experience some sort of problem during surgery, and 54% of these incidents, interestingly, are preventable. ChatGPT can help address these errors by automating the error reporting process and making it more streamlined, and by assessing the data sets to check for common errors, whether that be globally, locally, or by surgeon. It can chart out customized patient safety initiatives to reduce errors and offer simulations for medical students, providing real-time advice on their techniques or some of the medical care that they are practicing delivering.

Sometimes it's challenging for patients to keep track of their dosages. They're taking several medications, and sometimes they have incorrect dosing that can lead to severe complications. So ChatGPT can potentially help track medication dosages and can educate patients on unfavorable drug interactions; I know a lot of times physicians do that, and clinic pharmacists do that as well. It can provide constant reminders to take the appropriate medication at the right time.
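To make that concrete, here is a minimal sketch of what such a reminder prompt might look like when sent to OpenAI's chat completions REST endpoint. This is purely illustrative: the medication list, the model name, and the surrounding workflow are assumptions, and a real deployment would pull this data from the EMR and handle protected health information appropriately.

```python
# Hypothetical sketch only: drafting a plain-language medication reminder by
# calling OpenAI's chat completions REST endpoint. The medication list, the
# model name, and the workflow are illustrative assumptions; a real deployment
# would pull this data from the EMR through a vetted API and protect PHI.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # never hard-code credentials

patient_meds = [
    {"name": "metformin", "dose": "500 mg", "schedule": "twice daily with meals"},
    {"name": "lisinopril", "dose": "10 mg", "schedule": "once daily in the morning"},
]

prompt = (
    "Write a short, friendly reminder for a patient with limited health literacy "
    "about taking these medications on time, and suggest one question to ask the "
    "pharmacist about possible interactions:\n"
    + "\n".join(f"- {m['name']} {m['dose']}, {m['schedule']}" for m in patient_meds)
)

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "gpt-4o-mini",  # example model name; substitute your own
          "messages": [{"role": "user", "content": prompt}]},
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```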
This all, of course, would need an API that connects your data systems to ChatGPT so that it can interface with them, but it is possible even now. ChatGPT can be used as a search engine. I've used it, and instead of going to different sites, it will actually give you all the results right there in one prompt. It can improve research paper writing by giving ideas. It cannot write a research paper for you; not even the GPT-4 model can do that at this time. It can cut down the time spent on research and methodology, allowing physicians to work on something else while ChatGPT is doing the research. And even though it can make up articles, it has passed traditional plagiarism detection methods. One study found that ChatGPT-generated abstracts essentially had an originality score of 100% and did not demonstrate any plagiarism, and the AI output checker identified only 66% of the generated abstracts. In other words, when you look at an AI detection system, only 66% of these abstracts were identified as actually written by AI. So again, I just want to reiterate this: ChatGPT cannot write a research paper for you. It cannot. It can produce essays about topics that you would then need to research. It can give you ideas on where to start. It can help you formulate a research paper in terms of what is needed. It can even help you set up the studies and the study protocols, but it cannot write a peer-reviewable paper; at least at this time, it cannot.

So how can docs use this? We can use it essentially to automatically summarize symptoms, diagnoses, and treatments, and to review medical records to get pertinent patient data, such as lab results and imaging reports. It can aid in clinical trial recruitment by analyzing significant amounts of data and identifying those individuals who meet the trial's eligibility criteria. According to a study published in the journal Missouri Medicine, physicians and their staff typically dedicate about 16 hours a week to dealing with insurance approvals. ChatGPT can automate administrative functions like appointment scheduling, clinical consultation summarization, prioritization, and creating prior auth letters, or at least writing appeals for denied claims. So if you're talking about dedicated staff spending 16 hours a week over an extended period of time, that's a pretty good chunk of hours that can be redirected to other functions for the office. In terms of assisting patients, managing medications with reminders, dosage instructions, and side effects, it can be used as a tool to help build health literacy, particularly in low-income environments. It has also been reliably utilized to intake information from patients and provide that information to the staff and/or physicians in a concise manner that can help expedite the visit for those patients. In terms of medical education, it can be used to help medical students, docs, and nurses. It can provide updates on new developments. It can be used as a tool to assess clinical skills. It can create simulations. It can help review medical knowledge. It can help in medical writing, although, again, that will require some review and oversight by the healthcare professional.

What are some of the issues and limitations with ChatGPT? Possible copyright infringements, medical-legal complications, and inaccuracies and prejudices in the generated content. This can extend to fabrication, which is a type of scientific misconduct. Sometimes it can provide inaccurate responses.
It can also have issues with clarifying ambiguous prompts. It will try to figure out, to the best of its ability, what you're looking for, but, again, there's a reason why there's an entire industry, if it's not a billion-dollar industry yet, it soon will be, specifically for prompts: companies that only develop prompts for ChatGPT, like a Chrome extension called AIPRM, one of our favorites. It's very helpful. You can create content and content calendars. You can outrank articles. It will help you produce articles with frequently asked questions. It can include references, but, again, those references should be checked to make sure they actually exist.

So here's an article that basically said, in terms of these models, that ChatGPT is probably nicer than your own doctor. Essentially, what occurred here is they asked a bunch of questions of doctors as well as ChatGPT, and what they found was that ChatGPT gave more thorough answers as well as more empathetic responses than the responses provided by the actual physicians. Further, a panel of experts found that 78% of the time, overall, ChatGPT provided better answers than actual physicians. Part of this is not necessarily because ChatGPT is more educated, but it does have neural networks and software programming that allow for a faster review of tons of bytes of data. And, quite frankly, it's not concerned about what people will think of it; therefore, it has no hindrances in terms of its answers.

There are also ethical considerations. Writing letters of recommendation and/or evaluations using ChatGPT could be an issue. AI in literature raises questions; now, when it writes a book, it does say written by ChatGPT. In some high-income nations and among privileged academics, there could be disparity issues, though as we look at methods to leverage large language models, this will eventually taper off. There could be issues with credibility and plagiarism. If ChatGPT is writing the content, then that person should say ChatGPT is writing the content. Or you can utilize the ChatGPT-written content and write your own content on the basis of that, but with originality, right? These are all ethical considerations that should definitely be taken into account.

I've been mentioning the misinformation throughout. Scientists are having some problems differentiating between fake abstracts and real research abstracts. The risk of misinformation is significantly greater for patients, particularly if their health literacy is low. When they're using ChatGPT to research their symptoms, that could be an issue, because ChatGPT is going to try to come up with an answer, and it might not always be the right answer, such as its convincing explanation of how crushed porcelain added to breast milk can support the infant digestive system. Some people look at this and say, man, ChatGPT is having an LLM issue. Other folks have looked at this and said, well, maybe this is what AI wants to do to humans. Who knows? Again, this is really an issue if health literacy and medical knowledge are low. Will a mother say, well, ChatGPT said that crushed porcelain can help my kid, and use it? You never know. There are certainly issues here, and the public as well as healthcare professionals should definitely be worried.

In terms of intellectual property, again, it's a generative model, so it can produce new content.
The issue is, is it copyright protected, and is it copyrightable? That currently is in the works at the U.S. Copyright Office and so forth, because what they say is that "author" excludes non-humans. So ChatGPT is not a human. I probably wouldn't say that to ChatGPT, but it's not. And so there's no clarity at this time on whether the information produced is copyrightable.

So here's a study of ChatGPT in occupational medicine, where a group of physicians in an occupational medicine clinic divided into teams. ChatGPT was used to generate answers for a set of questions without any legislative context, and the doctors evaluated the human-written and generated answers blindly, with both teams reviewing each other's work. What they found was that occupational medicine physicians performed better than ChatGPT, but it was comparable; ChatGPT was comparable to professional doctors. However, users tended to prefer answers generated by humans in this blind study. ChatGPT can provide 24/7 assistance, increasing efficiency and reducing cost. It can monitor workers' health. It can also offer a personalized service. It has the potential to transform occupational medicine; however, that would be with the inclusion of an API to assist with that process.

Can ChatGPT replace physicians? The overall answer at this point is no. It can revolutionize healthcare in terms of helping physicians be better, improving diagnostics, detecting medical errors, and reducing the burden of paperwork, and it has also passed the BLS, the ACLS, and the USMLE. But that being said, it cannot replace physicians. There are other things that go into being a physician other than just rote memorization. So at this time, there are no cases where physicians will be completely replaced, although there are tons of cases where physicians and their work can be enhanced with ChatGPT.

So in summation here, ChatGPT promises sweeping changes to the landscape of medicine. It will make it easier for docs and clinics and hospitals to handle patient care. Apparently, it's an invaluable tool for faster diagnoses, better decision-making, and improved medical practice. There are new developments that make it a lot easier. And again, all of this is dependent on the prompt that's given and/or the API that will be included. Thank you.

All righty. Can you all hear me? Excellent. Great. Well, thank you so much, Dr. Jackson-Scott. Oh, what a great presentation, kind of laying the foundation for AI Day today. We did get a question in the Q&A box, so I wanted to go ahead and have you answer that if you are so inclined. We have a question from Dr. Goldberg, and he asks, if you have to check all references, how can you trust its summary or assessment of the valid articles it cites?

If you have to check the references, how can you trust... I don't see that question here, but how can you trust what? I think essentially he's trying to ask, how can you validate that it's reputable, that it's authentic? How do you actually go through that process? You know, unfortunately, you can't, and that's why I say, if you're trying to utilize ChatGPT for any kind of peer-reviewed work, if you want to disseminate that to the medical community, you definitely have to do your due diligence. It's going to try to answer the question on the basis of what it knows in its neural networks, but that does not necessarily mean that it will not just try to produce the data for you.
I mean, it's come up with some really strange answers that are in direct contradiction to what we would apply to our patients, so you really can't utilize it for the dissemination of medical data unless you really do your due diligence. But one of the things it can do for you, if you're trying to write an article or write a paper, is give you structure. One of the ways in which I've learned to utilize it is to ask it to act as something. So you can say, act as a best-selling author, or act as a physician who writes peer-reviewed articles, and give me a structure for an article about back injuries in construction workers, in outline form, and provide references. Interestingly, not every reference there is false, but if it doesn't know a reference for that particular issue, it will simply make it up, and it looks legitimate. So definitely do your due diligence when it comes to ChatGPT, prior to disseminating that info.

Thank you so much for addressing that question. I know that was a difficult one. Looks like we got another question in the chat from Dr. Deslorais; I apologize if I'm mispronouncing that. He is asking, why does AI make up suggestions and references since it is working on vast actual data?

That's a great question. It does have a lot of data, and I think the reason why it's making up things is that a lot of the peer-reviewed articles are actually not included in those neural networks, so it's going to take a few more years for OpenAI to really include all of the PubMed and NCBI articles. But what it has done is include some medical knowledge, so what ChatGPT is doing is taking the data from the current medical knowledge and extrapolating that to what a potential article could be. I think they still need to program a lot of the data, and that's part of the reason why it's called OpenAI: it's still learning as all of those prompts are being input, and it's just kind of creating new neural networks. It's like neural plasticity on a daily basis, and that's one of the concerns for, you know, Elon Musk and Terminator and all that kind of stuff: it's still learning and it's still putting together all of the data that we're inputting. So when we're asking a question, it's taking that question and adding it to its current neural network. With all of the millions of users, whether that be for gaming, creating apps, creating books, children's books, comic books, or medical knowledge, it's taking all of that data, and it's just like going to school every day. It's going from the first grade to the second grade to the third grade, on and on, on a daily basis.

All right, the questions are coming in fast and furious, so I think we've got some time to answer these. Let's go ahead and take this next one. Looks like Olga M. Gashov asked, if we use ChatGPT for things like prior auths, should we be concerned about HIPAA issues?

That's a good one. So if we use it for prior authorizations, should we be concerned? Well, technically no, right, because if you're using it for prior authorization, then the person that is programming that prior auth should be utilizing an API, which means that they're using a connection between your system and ChatGPT.
So if that's the case, then the data and the usage of ChatGPT is not something that goes outside your network. It should be contained via the API, so technically that should be covered under HIPAA and so forth, because that information will not get out to anyone else and it should be used within your systems. So that would be with IT, with development, and they can utilize the API connection with ChatGPT to help you with prior authorization, to help you with work statuses, to help you with the development of certain occupational histories, for example, but it should all be under the umbrella of your system, whether that be Epic, Prognosis, or whatever EMR you're utilizing.

Actually, in that same vein, Dr. Rosenthal asked a question here and I wanted to answer that live too. She's asking, with these prior authorization requests, will these systems add information that may not be true, since the goal may be authorization? Do you have any thoughts on that one?

It should be no, because you're utilizing a concrete data set to ask for a prior authorization on the basis of what this patient is presenting with, or on the basis of the type of service that you are trying to render for that patient, so it shouldn't, unless the API is set up to allow that. If you're using the API for ChatGPT, you should be using a more concrete structure of ChatGPT that's included with the API and with your system, and that should be programmed as such. Prior to launching that, what they should do is a period of testing to see how the input corresponds with the output for ChatGPT versus your staff, comparing both of those and ensuring that there's no extra added data. You're essentially taking all of the world of medicine and all of the world of possibilities and truncating that, bringing it down to one kind of data point of usage, and it should only be used for that purpose. So you should definitely test that out prior to using it. But the answer to that is it should be a very straightforward thing if you want to use it with an API.

Excellent, I think we still have some time for a few more questions. Bear with us, you're doing a great job, Dr. Jackson-Scott. Looks like Dr. Levine had a question here in the Q&A, and he is asking, could AI be trusted to interact with patients to collect occupation and exposure history data?

Yes, and the reason why is that you can actually program it to ask a particular data set of questions via an algorithm, again, with an API. With ChatGPT, that's just a simple input-output algorithm, right? So: how did you get injured? Have you been injured previously? The answers are coming directly from the patient to ChatGPT, so you can use it like a chat box; you can utilize it as a format or an app within your EMR systems to collect that data. And I think that was on one of the slides as well. But yeah, you can release ChatGPT to go and interact with a patient just like an MA would: hey, I want you to ask every patient this data set of questions. Or, on the basis of the type of work: if they're a software programmer, I want to know how many hours they work and whether they have an ergonomically designed or set-up desk; if they're a construction worker, what kind of work are they doing, are they walking, are they climbing, what kind of lifting, how much, how often, and whether they've had previous injuries. I mean, you can really go into a complete data set.
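As a purely illustrative sketch of that kind of scripted intake, here is what such a question flow might look like before any language model is even involved. The questions, job categories, and branching below are examples drawn from the talk, not a real product, and a deployed version would run inside the EMR through a vetted API.

```python
# Illustrative sketch of a scripted intake flow: a fixed question set plus
# job-specific follow-ups, defined by the clinic rather than improvised by the
# model. Questions and job categories are examples only; in practice the
# answers would be written back to the EMR through a vetted API.
BASE_QUESTIONS = [
    "How did you get injured?",
    "Have you been injured previously?",
    "Is that prior claim still open?",
    "Was there a financial settlement, and did it include future medical care?",
]

FOLLOW_UPS = {
    "software programmer": [
        "How many hours per day do you work at a computer?",
        "Is your desk ergonomically set up?",
    ],
    "construction worker": [
        "What kind of work are you doing (walking, climbing, lifting)?",
        "How much do you lift, and how often?",
    ],
}

def run_intake(job_title: str) -> dict:
    """Ask the scripted questions and return a structured occupational history."""
    questions = BASE_QUESTIONS + FOLLOW_UPS.get(job_title.lower(), [])
    return {"job_title": job_title,
            "responses": {q: input(q + " ") for q in questions}}

if __name__ == "__main__":
    history = run_intake(input("What is your job title? "))
    print(history)
```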
I mean, the number of times we get an incomplete occupational history is, unfortunately, a lot, right? It's kind of like the guy who hurt his back, but we didn't really get a clear occupational injury history. Did you hurt your back previously? Yes. Is that claim still open? No or yes. Was the claim settled? Yes. Was there a financial settlement? Yes or no. If there was, was there future medical associated with that? Yes or no. All of those answers really affect the current case. Is it with the same company? Is it with the same insurance company? If it's a different company, is it the same insurer or a different insurer? Do you have the claim number? All of those things can be included to really get a good, solid occupational history, so you can say, okay, you know what? You got injured, but this claim is still open; you're going to treat it under the old claim number and kick it back to the old insurance company. Or you settled for a higher settlement with no future medical; you essentially bought your back, nothing new has occurred, so you need to follow up with your primary care doctor. So it can be formulated to get that really good, solid history every single time without fail, and without adding extra data, because it's simply asking a patient these types of questions based on the algorithm that you set forth.

Great, thank you so much for that. We've got a few more questions left. Bear with us, folks. Looks like Jacques Altawil asked, do you know the extent of use of ChatGPT in coding and billing?

So yeah, depending upon how you set up and run your API, it can help you, but it will require a lot of data. You would need to add all of the coding criteria: is it a focused visit, is it a mixed focus, is it comprehensive or comprehensive extensive? Is it suturing, a simple laceration or a complex laceration? All of those things can be included as data sets to help you determine the best approach for billing. Again, it will require the appropriate type of input, and this goes back to, I think, maybe the eighth slide, but really it's all about the input. So you can formulate your API, and if you don't have the coders or the development capacity, then my suggestion would be to go to a website called upwork.com. I don't know if you've ever heard of it, but it's a great resource. It used to be called Elance, and then Elance and oDesk combined to make Upwork. So if you go on Upwork, you can find a developer to build an API integration for you. You can utilize developers from India, China, Pakistan; they're great. And ask them: hey, we want our billing to work this way; this is where we are currently, and this is what we want. Set a price. I never do hourly; I always set one particular price. Then the job gets posted, and you'll get like 50 offers from people who may be interested from different countries, or you can set it to US only if you want. Interview those candidates, make a determination of who you want to utilize, and then proceed with the job. They'll get it done for you. You can test it out, make the corrections, see how it's working, make sure it's working for you, and then go ahead and deploy that application. And you can literally get anything done with these developers, starting from an app: you can take your EMR, you can take your intake, and make it an actual iPhone or Android application that they can download from the store.
You can do it as a web SaaS-hosted application, where it's just on your website and they can click a button and go to that application once they sign in for the intake. You can make it such that your staff will have to email them and they log in via an encrypted portal. So the options are essentially up to you: how you want to do it, how you want to display it, how you want to orchestrate it. You can get them to update your overall website if you want. The options are endless. It really depends on you, your funding, your sourcing. The other good thing is that all of these developers have reviews and previous interactions, so you can get an idea of their success rates. I usually go with 98% and up; I don't deal with anyone on Upwork who does not have a success rate of at least 98% with at least 20 to 50 previous jobs. That's just the standard I've seen, working with these companies for over a decade, in terms of the quality of work that's been produced, and that's just my preference. So I've gotten editors there. My daughter and I write children's books, so we have illustrators through there. I've developed apps through there. I have a company, any app is possible, and we develop tons of apps for companies and physicians' offices across the country. So you can really utilize it for whatever your needs are. And there's another component on there where they already have pre-set gigs. You can go into the gig section and say, hey, I want a developer for, say, Wix, or if you use Square, Prognosis, or Epic, just say, this is an Epic job and this is how I want it done. So they have these blocks of services that they can offer you, and you just pay for it and get it done through there. So you have tons of options.

This is a great tutorial, and I've used Upwork in a couple of ventures I have pursued. So I think we've got to write this up, Dr. Jackson-Scott, because I think this could be a really good tool for a lot of us. Thank you for that. Looks like we have one more question in the Q&A. Kareen Monique Hollis-Perry is asking, let's see here: do suggestions for writing ChatGPT prompts apply to other systems such as Perplexity?

The system for writing prompts, does that apply to Perplexity? It's a great question. I'm not exactly sure, but I'll give you my go-to for ChatGPT and prompts, which is a Chrome extension called AIPRM. You can download AIPRM for free and add it to your account, or you can upgrade it; I think it's like 10 bucks a month or something like that. It's one company, and all they do is make prompts. A lot of these folks are on Upwork; a lot of them are also on Fiverr, so you can utilize their other services if you want, but I usually use it for the development of all types of prompts. Sometimes the prompts are very simple and straightforward; sometimes the prompts are really lengthy, because I want a really good data set output. And the other good thing about ChatGPT is that you can utilize it as an ongoing conversation. It's not just one output and that's it; you can get the output and say, well, regenerate this output, but add this specific aspect to it, and continue that conversation as long as you want. It will converse with you and give you the data sets that you want, so you can really hone in on the type of answer that you're looking for. So AIPRM is our go-to.
And then sometimes I use it to write, you know, if I'm in a bit of a rush and I want an idea for an email: how can I write an email in a nice way, or how can I write an email in a more stern way? It'll give you the specific mannerisms and really shift the tone, too. If you look at a stern email versus, hey, I want this in a polite, professional manner, it'll give you the same output, but in a more direct, stern manner versus a nicer, more polite manner. You can say professional or eloquent, and it'll give you that. So you really have options for how to utilize it. And a lot of times you don't necessarily have to copy and paste, but it gives you an idea of the structure and what you want to say and how you want to say it. So that really does come in handy.

Excellent. Well, thank you so much, Dr. Jackson-Scott. Really informative talk. I know these conversations will continue, but it's really good to at least start the discussion and really try to exchange best practices and develop the frameworks going forward. So I think we're going to go on to our next presentation. All right. Thank you.

Next, Dr. Dharabi will present part one of AI and Machine Learning in Occupational Health: Current Research, Practice, and Ethical and Privacy Implications. Dr. Dharabi is a professor of Industrial and Systems Engineering in the Department of Mechanical and Industrial Engineering at the University of Illinois Chicago. He is the co-director of the Occupational Safety Program at the Illinois Education and Research Center, which is funded by the National Institute for Occupational Safety and Health. Dr. Dharabi is the creator of the first Artificial Intelligence for Occupational Safety and Health Experts course in the United States. Dr. Dharabi's research focuses on the use of big data, process mining, data mining, operations research, high-performance computing, and visualization in improving educational, worker safety, and healthcare systems. Dr. Dharabi's research has been funded by federal and corporate sponsors, including the National Science Foundation and the National Institute for Occupational Safety and Health.

All right. Can you see the screen in presentation mode? Yes, sir. Okay, perfect. Thank you very much for the introduction. Very happy to be here. I think this will be a good match for the presentation that was just made by Dr. Jackson-Scott. In this presentation, I'm going to give an overview of artificial intelligence and machine learning research methods in occupational health. The presentation includes multiple sections. First, I talk about the motivation: why we are interested in AI. Of course, I'm sure everybody here is interested in that. Then the concept of artificial intelligence systems, what that means, and the types of AI problems that we face, and also a framework that will help as a structured method to solve AI problems in certain categories. We call that REDECA; this is a method that was developed by my team. Then we talk about how AI systems are developed, what machine learning is specifically, and how machine learning models are developed and evaluated. And finally, we have some conversation about ethical considerations in AI systems, which is of interest to many people. Two acronyms to remember in this talk: I use AI for artificial intelligence and ML for machine learning.
Another point I have to mention before my next slide is that the goal here is to introduce you to what it takes to develop an AI system. We are not looking at you only as users for this presentation; the question is, if I want to develop an AI system in my organization, what should I do? What does it take to do it? So it's a deeper dive into the AI world.

Let's start with the motivation. If you look at the published articles on AI, this chart shows the trend using PubMed. If you go to Google Scholar, you see a much bigger jump, but PubMed covers medical-related publications. I accessed that on October 11, and you can see that from maybe five or six years ago, we are observing a huge jump. That's the important part: it means this is becoming really important, and we have to know what AI is. Another reason is that we see a lot of news, and these are just some samples that I collected, all very recent, though you can go back three or four years, and even today I'm sure you'll see a lot of news titles about AI. Some of them say it looks like AI has solved a big, huge problem, and some of them actually discourage you, because they say, well, AI might kill people. So in a world where we are hearing all this contradictory news and talk, how can we really digest it and understand it? That's another reason we are having this presentation. And finally, there are lots of myths about AI. For example, I have talked to people and I have trained a lot of people in AI, and one thing that most people at the beginning of the training say is, well, I'm not a computer scientist, I'm not a programmer, so I cannot develop an AI system or machine learning system or model, whatever the name is. That's a myth. In fact, if you want to develop an AI system, you will see in this presentation that you can't just use a computer science person; that person can never develop a system that works for you on their own. That person can be a part of the solution, but not the whole solution. So you can see all these myths, and there are many more, actually, but these are the common ones that I hear every day from different people. I just wanted to say that what you see on the screen is basically not correct.

So now let me start with the definition of AI systems. As I mentioned in the beginning, I am really trying here to give you a deep understanding of AI systems, and I want you to have a critical view of what I say, because that's the best way to help you understand it. So write down your questions, and if there is a part you have doubts about, please ask after I'm done with my presentation. Hopefully that will help you understand things much better.

So what's an AI system? Let me begin with how we as humans use natural intelligence, how we use our brains to make decisions. That's actually extremely important in order to understand AI systems. What you see on the screen, on the left, is what we call the environment. That's the system that you are trying to make your decisions about.
In this case, we have a construction site, and there are workers who could be very close to the edge of the building, and they might actually fall. And there is a supervisor here who is watching the workers, and if a worker gets close, that supervisor might sound an alarm or just call to that worker, say something like, you are too close and you might fall, right? If you look at this problem, remember there is no computer, there is no AI, nothing; it's just two people. One is the worker, the other one is the supervisor, and one is watching the other. The supervisor is basically using his eyes and watching the worker, and the eyes are sending the information to the supervisor's brain. Then the supervisor, at some point, says, oh, this is a serious situation, I need to warn the worker, and starts talking to or alerting the worker. So if you look at this whole decision-making problem, there is an environment; that's the system that you are actually trying to control, right, with your mind, with your natural intelligence. There is information moving from your environment to your eyes and then your brain, and then your brain sends information back: you are using your mouth to speak and send the information back to the environment, in this case to the worker, who is very close to the edge.

Okay, now if we want a generic view of what just happened: there is an environment, which you know what it is now. Then there is information, or percepts, and we are receiving them through some sensors; in the case of natural intelligence, those were the eyes of the supervisor. And then there is the supervisor's brain. We know how that works, but if we really want to talk about artificial intelligence, that part needs to be replaced by something that we call a machine learning algorithm. So the machine learning algorithm is the brain, you can imagine. And remember, because there is no actual human in this case, you need to replace the eyes with something that can see, and we call them sensors. I'll show you what happens in that example, and you also need some means to actually report the result back to the environment in terms of action.

So if we go to the construction site, let's say we want to remove that supervisor and replace them with an AI system. In this case, we're going to put up some cameras. The cameras are continuously watching the worker, and they are sending their images to an ML algorithm, which is running on a computer in real time. All that ML algorithm is trying to do is say whether there is a risky situation for the worker or not. As long as the worker is not close to the edge, nothing will happen. As soon as the worker gets close, that ML algorithm can sound an alarm, or it could send a signal to a wearable, maybe an Apple Watch that our worker is wearing, and warn the worker, right? So we just replaced a natural intelligence system with an artificial intelligence system in this case.

Now, one thing that I want to bring to your attention, because this is a very common mistake: many people think that machine learning is the same as AI. Through this example, you can see that machine learning is a subset, a component, of a bigger system that we actually call the AI system. It is not the only component. Now, of course, programmers and computer scientists are the ones who usually develop this part, the machine learning part.
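To make that loop concrete, here is a rough sketch in code. Everything in it is a placeholder: capture_frame(), RiskModel, and send_alert() stand in for the camera feed, a trained ML model, and the alarm or smartwatch integration described above; none of them refer to a real product or library.

```python
# Rough sketch of the sense-decide-act loop: sensor -> ML "brain" -> action.
# capture_frame(), RiskModel, and send_alert() are stand-ins for the camera
# feed, a trained model, and the alarm or smartwatch integration.
import random
import time

def capture_frame() -> dict:
    """Stand-in for grabbing one image (percept) from the site camera."""
    return {"worker_distance_to_edge_m": random.uniform(0.0, 10.0)}

class RiskModel:
    """Stand-in for a trained classifier that maps a frame to a risk flag."""
    def predict(self, frame: dict) -> bool:
        return frame["worker_distance_to_edge_m"] < 1.5  # toy decision rule

def send_alert() -> None:
    """Stand-in for sounding an alarm or pinging the worker's smartwatch."""
    print("ALERT: worker too close to the edge")

risk_model = RiskModel()
for _ in range(5):                    # a real system would run continuously
    frame = capture_frame()           # sensor: percept from the environment
    if risk_model.predict(frame):     # machine learning: the "brain"
        send_alert()                  # actuator: action back into the environment
    time.sleep(1)
```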
But as you can see, the system is much bigger than that. So you can't just bring in a computer science person and expect the whole thing to work; there are lots of people who need to be involved. And I actually have a model to show you what is needed for that, which we're going to discuss.

Now, going back: what are the different types of AI problems? There are actually four types that you might see, especially when we talk about occupational safety and occupational medicine. Type one is real-time monitoring and control of safety systems, or you can call them occupational medicine systems. In this type of system, the AI system is monitoring another system in real time and has to decide in real time what to do, like the example that I just showed you with the construction worker; that's real time. You can imagine that you are doing surgery and you are using a system that is monitoring the patient's vital signs and some other things, and it's going to warn you that a heart attack might be coming in the next five minutes. Remember, nothing has happened yet, but that AI system is going to give you a warning that five minutes from now there might be a heart attack. That's a real-time system, because there is continuous monitoring and you need to take action right away, and that system needs to tell you right away. These systems are very critical and they need to be reliable, so if you are developing a system like this, it had better be very good and reliable.

The second type is planning and decision-making AI. These are systems where we are not making real-time decisions. In fact, what the AI system will return to you is a prediction of something that you have time to think about. For example, after people are discharged from the emergency room, what's the chance that a given patient might actually come back to our hospital, to the emergency room, within the next 30 days as an unplanned visit, right? Remember, this is not a real-time thing. You're just trying to categorize the patients who have a higher chance of coming back and maybe do some intervention, maybe giving some educational material to that patient, like a diabetes patient: you had better take your drug on time; maybe send a nurse to their home to make sure they are actually complying with the prescription you have given them, right? These are not real-time decisions and they can wait, but still, the AI model should be able to do a good job.

The third type is educational AI systems. Here, there is no real danger and there is no real decision that needs to be made. We are basically using virtual reality, a simulation model, or some other kind of virtual, artificial system to train people. These are actually very important in high-risk operations. The best example is when we use these to train our surgery students, especially in neurosurgery. There are systems that neurosurgery students can use, and they make a lot of mistakes when they do; the system shows them that they are using the tools incorrectly, maybe cutting in the wrong place. There are a lot of systems that help with these things. The good thing is that you get to train people before they actually do damage to whatever the real system is. So that's another AI system that is in use.
And finally, the fourth type is when we think that the risk to the worker or the patient is so high that we cannot have the worker in the loop. We need to remove the worker, and we call that worker out of the loop. You can imagine, like the picture on the left, a welding system, which is a very high temperature welding system. In this case, you do not want the worker to be even close to this; the worker should be completely out. And this welding system basically identifies when to start, what to do, when to stop, and when to release the part. All those things are done automatically, and that is an AI system where the human is out of the loop. In some cases, we might actually do that, and that's another type of AI system.

Now, you might ask, okay, if I want to develop one of these AI systems, is there a way for me to do it? Yes, there is actually a lot of help for that, and one approach works for two classes of systems: one is real-time monitoring, which was the first type, and the other one is worker out of the loop. What you see on the screen is a structured graph, which basically tells you that if you want to develop AI systems for these kinds of applications, you need to think about all these boxes on the screen. In this case, we have three blue boxes: R1, R2, R3. R1 is when there is a safe scenario; if you're talking about the worker, the worker is safe and there is no risk of any kind of accident. R2: the worker is at risk, but no accident has happened. And R3 is when an accident has already happened, okay? And remember, every occupational safety problem can be decomposed into these three, every single one; there is no other way. You are always in one of these three.

Then the green and orange boxes and the white boxes need to be figured out: do we want AI and automation to do those things for us, or do we want to do those things manually? For example, if you look at the green box, which shows the probability and time of entering R2, that means we are in a safe situation, but we want to know what the chances are that we actually enter a risky situation. Should I use AI to find that probability, or should I use some common rules? For example, you have a patient, and you can say that during this state there is no probability of heart attack, but if the patient is doing some sort of exercise or has not taken his or her prescription, then that patient is in R2, because a heart attack could come. And when the heart attack comes, the patient is in R3, because the damage is already done; now you think about recovery and what to do about it. So, do I have a method to predict when the patient goes from R1 to R2 or not? And, I don't plan to go through the whole thing here, but look at another intervention, say this one: an intervention that sends the worker or the patient back to R1. I'm sorry, I changed the slide. So you are in a risky situation and you want to know if there is anything you can do to send the patient back to the safe mode. Is that something an AI system can calculate for me? Can I have an AI algorithm that looks at what the patient is doing and makes a suggestion: this patient needs to stop the exercise at this point? That's an intervention, right? So if you really want to do a structured design of AI models like that, then this would be a huge help.
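As a minimal illustration of those three states and the decision points around them, here is a sketch in code. The transition estimate and the intervention rule are toy placeholders; the only point is to show where an AI model, or a simple manual rule, would plug into an R1/R2/R3 structure.

```python
# Toy illustration of the three states (R1 safe, R2 at risk, R3 accident) and
# two of the decision points around them. The probability estimate and the
# intervention rule are placeholders; each could be an ML model or a manual rule.
from enum import Enum
from typing import Optional

class State(Enum):
    R1 = "safe"               # no risk of the accident
    R2 = "at risk"            # accident has not happened yet
    R3 = "accident occurred"  # damage done; focus shifts to recovery

def probability_of_entering_r2(observation: dict) -> float:
    """'Green box': chance of moving from R1 to R2 (could be AI or a simple rule)."""
    return 0.8 if observation.get("exercising_off_medication") else 0.05

def intervention_back_to_r1(observation: dict) -> Optional[str]:
    """Intervention meant to send the patient or worker from R2 back to R1."""
    if observation.get("exercising_off_medication"):
        return "Stop the exercise and take the prescribed medication."
    return None

obs = {"exercising_off_medication": True}
state = State.R2 if probability_of_entering_r2(obs) > 0.5 else State.R1
print(state, intervention_back_to_r1(obs) if state is State.R2 else "no action needed")
```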
Now, going back: we know we have four types of AI systems, and we know that machine learning is just one part of an AI system, not the whole thing. So if I decide to develop an AI system in my organization, how do I do that? This chart will help you; these are the steps you need to go through. The first thing I want to bring to your attention is that this is teamwork, and it requires a lot of expertise to be in place. The red role, which we label machine learning, could be a computer science person, but look at the other roles here. In occupational safety and health, that is a physician or an occupational safety expert; they need to be there. You can't just give something to a programmer and say, do it for me. That is not going to work. I have seen a lot of failed projects, and I have been engaged in failed projects myself over the past 15 years or so, where we did not follow this model and we failed. So let me briefly go through the steps and see what we can learn. First, remember that even though the steps are shown sequentially from left to right, you can go back because something didn't work and redo things; you don't always move forward. And sometimes you might have an abrupt exit: you can start an AI design effort and, in the middle, decide it is not going to work and stop the whole thing. It doesn't have to go to the end; any time we see that it is not going to work, we stop. Let me go through the steps using an example. In this example, we had a heat treatment plant. It produces very hot objects, and there are workers on the shop floor who could be in close proximity to them. If a worker touches one of those objects, or an object hits them, that is a serious burn hazard and very unsafe. So in this example we are trying to see whether there is a way to use AI to prevent accidents of this type. The first step is problem definition, where we need to state which concerns we are trying to address with AI. And remember, there is no way a computer science person can do this step alone. I keep saying computer science person, and don't think I'm trying to downgrade what they do; I am actually a computer scientist myself. I am just trying to make sure you can distinguish between what they need to do and what you need to do, if you are not a computer scientist. Defining what really needs to be addressed cannot be done by a computer scientist alone, unless that person also happens to be the occupational medicine expert; if one person plays both roles, that's fine, that person can do everything, but that is usually not the case in most organizations. And the problem definition also needs to be checked with management. If you are a physician and you think you can use AI for something at your workplace, and the division director comes to you and says there is no money, or there are legal issues if we do this,
Then it doesn't matter; it's not going to work, very simple. You can define the problem, but it will not proceed, so management needs to be involved. That is problem definition. In the heat treatment example, we defined the problem like this: due to an increase in burn incidents, the company wants to implement a proximity detection system to alert workers when parts are hot. Now we go to the second step, data availability. As was mentioned in the previous presentation, when you want to develop AI systems you need data. If you don't have any data, forget about it and exit right away. If you don't have data and this is really important to you, then you need a plan to collect the data, and after you have enough data you can start this process. In this case, the heat treatment facility has cameras with temperature-gauging capabilities, and they have checked that they already have one year of images from those cameras. So the data is there: for one year these cameras were watching the workers and the heated objects. You can go back to those images and mark, here the worker is close to the object and there should be a warning, or, here the worker is not in danger. Let's say that one year of data is converted to 5,000 images, and some humans go through them and label each one as danger or no danger. So you have that data. What do you do next? Next is defining the machine learning task. Remember the brain I mentioned, where the computer acts as the brain: what are the inputs and what are the outputs we want from that machine learning, that artificial intelligence brain? In this step you are not saying how the brain works; all we are doing is specifying what we will give to the brain and what we expect from it. How the brain converts the inputs to outputs is a separate step. In this case, we want the cameras to watch, so the inputs are images coming from the cameras, and we want the machine learning to tell us whether the worker is in the proximity of a very hot object. That is the input and the output. The next step is machine learning development. Now we get to the brain part. We know we have enough data, we know the inputs, and we know the outputs; the question is, what is the algorithm, the computer code, that converts the input to the output? This is the main part where the computer scientist, the machine learning person, the AI expert (there are lots of titles these days) comes in, talks with you, and develops that piece. We never ask someone who is not trained in programming to do this step, but the previous steps cannot be done by a programmer, and you will see that the subsequent steps cannot be done by the programmer either. The programmer is involved mainly in this step, with only a supporting role in some of the others. So you can see that developing these systems takes a lot of work and a lot of people.
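As an illustration of the "define the task, then build the brain" steps for the heat-treatment example, here is a minimal sketch assuming the labeled camera frames are already available as arrays. The synthetic data and the simple classifier are stand-ins added for illustration; a real proximity-detection system would likely use a convolutional network trained on the actual 5,000 labeled images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in data: 5,000 grayscale camera frames (64x64 pixels),
# each labeled 1 = worker near a hot part, 0 = no danger.
rng = np.random.default_rng(0)
images = rng.random((5000, 64, 64))
labels = rng.integers(0, 2, size=5000)

# ML task definition: input = image pixels, output = danger / no danger.
X = images.reshape(len(images), -1)  # flatten each frame into a feature vector
y = labels

# ML development (the "brain"): a simple classifier stands in here for the
# vision model a real project would need.
model = LogisticRegression(max_iter=1000).fit(X, y)

new_frame = rng.random((1, 64 * 64))
print("warn worker" if model.predict(new_frame)[0] == 1 else "no action")
```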
So in this case, we have a programmer and some people are coming in and they're developing an ML model for us. And then now we need to test it, right? So now we're gonna do the testing. So let's say the programmer told me that, oh, here's the algorithm. I have tested that in my world. That means I have some numbers that my computer gave me, all right? And using those numbers, I am telling you as the occupational medicine expert, I am a programmer. I tell you, my machine learning, my artificial brain is actually very good. It can convert the images to warnings. Okay, so now you say, okay, if it really does a good job, then let's put it there and let's send some workers next to those hot objects and see if it works. Now here, this is a very critical step. This is like, I come to you and I say, I have an algorithm which can tell you who's gonna come back to the emergency room within 30 days. Okay, unplanned visit. And I tell you, I have a very good accuracy. Well, you might say, well, that's really good. We trust you, but we need to verify. How do you verify? We say, okay, bring your algorithm. Let's, I have like 50 patients that are actually being discharged this week. And let's see what happens. So we're gonna keep your algorithm for the next two months and see what happens. Does it really work? Because if it works, it should work for those patients. That's the workplace testing. Now, remember again, the programmer is not even here. You use it and you test it. If that works and you are satisfied with the whole thing, then final step is the installation. In this step, we are sure this works, but we need to train the emergency room physicians. We need to train the nurses. We need to train our IT personnel because now this algorithm would be a part of our emergency room. In this case, we need to train the heat treatment plant workers, IT people, managers, supervisor, occupational safety experts. All of them need to be trained. Manuals need to be developed. You know, a kind of handbook should be written in case we see a situation that a worker has an accident, even when the system is working, but the system fails to detect what is the next step, how we should recover that, how we should train the system. All these things need to be installed and run. Now, remember, if you don't reach to this last step, all you have done is just a practice. You might be able to publish papers. You might be able to go to a conference and talk about things, but you do not have a real impact on the system. Real impact is only realized when you reach to this and the system is working and the people are happy with it and they feel they got something that has improved what they were doing before. That's the real impact. Everything before that is just talking. Okay, so I'm gonna jump this. Now, you might say, what's going to happen if in the middle or somewhere we fail? Then is the whole thing really wasted? Well, you don't reach your objective, which is putting a system to work. Yes, you have been stopped. I think this could happen at any time, right? You saw the pipeline. It could be anywhere. But all these bullets that you see on the screen, they're actually my experiences. I have been involved in so many of these projects. And I can tell you there are lots of side benefits here. One is that in many cases, people, when they want to start, they see that they don't have enough data, right? So they collect the data and they find a lot of mistakes in their data. 
You know, things they were not collecting, inconsistencies: for example, they did not write their medical notes carefully, and now that someone wants to do text mining on those notes, it can't be done. So they go and put together a proper data set, and let's say the AI project then fails. You still have a good data set now, something you can trust, and that is usually useful. You have also learned what you were doing wrong in collecting your data. Another side benefit is that sometimes we end up installing automation components, sensors, decision-making tools, actuators, communication tools, because those are things the AI system needs. Maybe we never end up using them for the AI, but because we now have them, they can be very helpful for other purposes. Another one is that when you want to do an AI project, you are forced to think about how you make decisions. Remember, AI projects are trying to mimic what you do; there is not a single one that can do something you have no idea how it happens. That is impossible. They mimic what you do, but the difference is that you might not be able to do everything, every time, in a timely manner, and AI systems can. So you are forced to think about your decisions, and sometimes you discover that there is a class of patients you actually don't know how to treat, or cases where you have no idea what to do. That is always a useful exercise. And finally, failed AI projects can help you succeed in the subsequent AI project: if you do a project, fail, and have the money to redo the whole thing, now you can actually succeed. That is another advantage. Okay, now let's take a closer look at the machine learning part. Remember, machine learning is the part where you create the brain. I don't want to spend a lot of time on this, but there are different types of, you could call them brain functions, different ways to develop that brain activity. Supervised learning is when a clear output is determined and we are trying to predict or classify it. Unsupervised learning is when we do not specify an output at all; the system looks for structure in the data on its own, for example grouping similar records together, without us labeling what the answer should be. Reinforcement learning is when the system learns in real time from feedback and keeps improving. You can actually consider part of how ChatGPT is trained to be reinforcement learning. In fact, whatever you type into ChatGPT, whatever question you put there, goes into its learning process. It gives you, of course, a lot of nice suggestions that you did not even think about, but think about where those suggestions come from: many other people put similar text in, and the software puts it together and returns it to you in a very smart way. What you see on the screen is a collection of lots of people's ideas in a very refined form. And every time you input something to ChatGPT, that input goes into the ChatGPT learning model, so what you put there might be given to somebody else later. You can actually try that; it can happen.
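A minimal sketch contrasting the first two learning types just listed. The data and models here are hypothetical stand-ins: the labeled example illustrates supervised learning, and the clustering example illustrates unsupervised learning, where no outcome label is provided.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.random((200, 3))                   # e.g., three vital-sign features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # a known outcome label

# Supervised learning: the output (label) is given, and the model learns to predict it.
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("supervised prediction:", clf.predict(X[:1]))

# Unsupervised learning: no labels are given; the model looks for structure (here, clusters).
clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
print("unsupervised cluster of first record:", clusters[0])
```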
So now there is this slide. I don't want to spend a lot of time on it, but many people are trained in statistical analysis, and they always ask me: what can I do with AI and machine learning that I cannot do with statistical analysis? There are actually a lot of things, and this table helps you see the differences. I'll be happy to answer questions about it later, since we don't have time to go through every item here, but if you have any question about it, we can come back to this slide in the question and answer part. So if you want to develop a machine learning model, what are the steps? Remember, this is about just the machine learning, the brain part, not the whole AI system. The whole AI system development was the pipeline I showed you; this is one of the steps in that pipeline, opened up for you. First, you collect your data. You clean it: get rid of wrong data, remove redundancies, inconsistencies, and duplicates, and if there are missing data, there are lots of techniques to take care of that. Once you have a data set you really think is good, and remember the computer scientist is next to you and can tell you whether the data is ready, you pre-process it and then you model it. Pre-processing does not give you the brain model yet; it transforms your data into a new data set that is ready to be given to the brain-development program, and then we write that program based on the data. You hear the term deep learning a lot; deep learning is just one of the machine learning methods, and developing a deep learning model happens at this modeling step. Everything before that has nothing to do with deep learning or any other particular algorithm, and you can see the modeling is a fairly small part of the whole thing. Then the trained machine learning model has to be deployed and really tested. Now, very briefly, how is the modeling part done, where we determine the algorithm and develop it? You take your data set and break it into two disjoint subsets, which we call training and testing, and you do not touch the testing set. Then, and this is usually the norm, you further break the training set into a smaller training set and a validation set; some people skip that, but most of the time you should do it. So suppose your original data set is 10,000 diabetes patients. You break them into 8,000 for training and 2,000 for testing. Then, of that 8,000 for training, 6,000 go to the smaller training set and 2,000 go to validation, and the 2,000 for testing are still set aside. Remember, not a single patient is in more than one set here; they are all separate.
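A minimal sketch of the 10,000-patient split just described, using scikit-learn. The feature matrix is random placeholder data; only the 8,000/2,000 and 6,000/2,000 split sizes come from the talk.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((10000, 12))         # e.g., 12 features per diabetes patient
y = rng.integers(0, 2, size=10000)  # 1 = unplanned 30-day return, 0 = no return

# First split: 8,000 for model building, 2,000 held out for the final test.
X_build, X_test, y_build, y_test = train_test_split(X, y, test_size=2000, random_state=0)

# Second split: 6,000 for training, 2,000 for validation (model/feature selection).
X_train, X_val, y_train, y_val = train_test_split(X_build, y_build, test_size=2000, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 6000 2000 2000 -- no patient appears twice
```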
Okay, so what happens after that? Then you discuss, for these diabetes patients, what information about each patient we really want to use to predict whether they come back to our emergency room within 30 days as an unplanned visit. And different people have different ideas: look at the vital signs in the emergency room, look at the blood sugar level, look at the history, look at compliance with prescription instructions, exercise, ethnicity, social context, and so on. All of that input information, we call features, and we need to select which ones we want. The goal is to start from the training set and try different combinations of features; and when I say you, this is really the programmer doing it. They can put the blood sugar in or leave it out, and look at things in different ways. Every time you develop a candidate model, you use your validation set to test it, and there are lots of metrics to say whether the model is predicting well. Prediction here means: if a patient really came back within 30 days and our model said that patient would come back, we are doing a good job; if the model said the patient would come back but the patient did not, the model is not doing a good job. So you use the validation set to decide which model, with which features, was the best. Let's say you do all of that and pick a model. Now remember, we also set aside a test set. If you really want the real performance of your model, you have to give that test set to the model and see how many of those patients are categorized correctly. The reason we call this the real evaluation is that the patients in the test set were not used to develop your model, so this is like a real test, and whatever metric the test set gives you, that is the accuracy of your model. You cannot use the accuracy on the training or validation data and claim that as the accuracy of your model. There are, of course, lots of side notes here that we can discuss later, but they are beyond the scope of this presentation. So, quickly, on evaluating these models: this is a very simple example of binary classification, say whether a patient is or is not coming back to the emergency room within 30 days. If a patient comes back and you predicted the patient would come back, that is a true positive. A false negative is when we predict the patient is not coming back, but the patient actually comes back. So you have true positives, true negatives, false positives, and false negatives. In this example, the actual value is written in the first column and the predicted value in the other column, and if you want the counts, eight is the number of true positives, because eight patients were actually positive and we also said they were positive. You can continue like this and calculate the rest. These are just some example metrics; there are lots of metrics in machine learning, and I am not trying to give you a comprehensive list. Accuracy is the number of true predictions, meaning the true positives plus the true negatives, divided by the total number of patients. In this case, we predict 13 of them correctly out of 19, so the accuracy is 68%. Another very important measure for evaluating a model is AUC, the area under the curve. I don't have time to go through the details, but I can tell you it is the single most important evaluation metric. I'll be happy to send you more information, and if you Google area under the curve or AUROC, which is another name for it, you will find a lot of information online; I definitely encourage you to read about it, because it is very important.
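To illustrate the metrics just described, here is a minimal sketch that reproduces the 13-correct-out-of-19 accuracy example and computes an AUC. Only the eight true positives and the 13-of-19 total come from the talk; the remaining counts and the model scores are made-up values for illustration. Accuracy here is (TP + TN) / (TP + TN + FP + FN).

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

# Hypothetical test set of 19 patients: 1 = returned within 30 days, 0 = did not.
y_true = np.array([1]*8 + [1]*4 + [0]*5 + [0]*2)  # 12 actual positives, 7 actual negatives
y_pred = np.array([1]*8 + [0]*4 + [0]*5 + [1]*2)  # TP=8, FN=4, TN=5, FP=2

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp, fp, fn, tn)                  # 8 2 4 5
print(accuracy_score(y_true, y_pred))  # (8 + 5) / 19 ~= 0.68

# AUC is computed from predicted scores or probabilities rather than hard labels.
y_score = np.where(y_pred == 1, 0.9, 0.2)  # hypothetical model scores
print(round(roc_auc_score(y_true, y_score), 2))
```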
And finally, model evaluation in terms of fit: what we want is a good fit. Underfitting is the situation on the left. The dots are the actual data and the dotted line is the predicted values, and this is completely underfit; the model is not capturing anything, there is no good prediction. The model on the right appears to make an excellent prediction, but this is what we call overfitting. When something gives you one hundred percent accuracy, you should have serious doubt; it usually means the model is just using your own data to tell you what you already did, and that is not prediction. A real prediction is when the model uses data from 20 other people and then comes back and tells you what you are going to do; that would be acceptable. The model in the middle is what we really want. So, finally, the last part of my presentation: ethical concerns. I'm going to talk about four of them, but there are many ethical issues, lots of them. As you can see, there is no standard, no rule, no oversight over what AI is doing right now. Many people are working on this, and there are a lot of unclear things. One problem, unfortunately, is that some of the people working on these rules do not know AI very well, so they might come back with requirements we cannot actually use. We have to wait, and hopefully this will improve. Until then, one problem with AI is the ethical question of trust: how much trust can we put in AI? That is a huge question. Can you really just use AI to judge people? The answer is definitely no, but how are we going to manage that? This is a very important question, and it is related to the third issue, which is bias. If you don't use the right data to train your AI machine learning model, the model may be trained on one class of patients and work for them but not for others. For example, there are well-known prediction models developed for drug dosing that were built entirely on white patients; the whole population used to develop the dosing algorithms was white. When those models were then applied to some African-American patients, they did not work, and in some cases patients were actually put at risk. When people went back and looked, they saw that the model was never really designed for African-American patients, because there was nobody like them in the data set. If you don't give that information to the AI, there is no way the AI can come back and tell you that for this patient it could be different. A human physician might be able to see that, but not the AI. The AI does only what the data you give it allows.
And going back: privacy. I think the previous talk discussed this, privacy and surveillance. We have a problem with that, and ChatGPT is a very good example. Everything you put there can be used by ChatGPT. Do you really want that, with your knowledge? There are actors and actresses who are suing over ChatGPT because it used their material to build its model, and they are saying, you can't use my information. That is a privacy violation, and there are not many rules in that area right now. And finally, AI mimics what humans do, and because there is no human judgment, AI can easily make big mistakes. I can tell you, as an AI expert, that for whatever AI system you show me, I can design an example where your system fails. It doesn't matter what it is, because all I need to understand is how your artificial brain works; if I see a situation your artificial brain cannot digest, I create that situation and send it in, and then you find the mistake. This is done with a class of methods called adversarial networks (you can look at my papers in that area), where you essentially work in reverse: instead of sensing and sending information to the AI, you try to generate inputs on which the AI will fail, and in that way you map out what the AI cannot do. That can be done fairly easily, not by regular people, but people who know how to do these things can do it. For that reason, for example, I would never fully trust an autonomous vehicle, because somebody could create a scenario where your car should stop but it doesn't, and that is not difficult to create, no matter how the car's artificial intelligence has been trained. I am going to leave you with this slide, which you can read on your own: some suggestions for protecting workers' privacy when you develop AI systems. The slides will be made available to you if they are not already. I'm going to stop here and will be very happy to answer your questions. Excellent talk, Dr. Darabi. Thank you so much for providing us with a high-level overview of AI and machine learning and their applications to occupational medicine practice. Just to provide some additional context, this is primarily a clinical audience, so many of us don't come from a computer science or data science background, but having these basic principles will help a lot of us as we start to envision how we are going to employ some of these tools, especially as we continue to see AI expand into various facets of medicine. We don't currently have any questions in the Q&A box, but I did want to expand on some of the topics you broached today. We're always worried about AI, at least I know I am. I think it's a great tool, but clearly we need to make sure we have guardrails in place. So from your perspective, how do we balance the use of AI without being fearful of it taking away a clinician's job? Any insights on that? Yeah, sure. Thank you very much for that question; it's a very good one, and many people have asked it in different venues. That is a true concern in every occupation.
And this was actually mentioned by the previous speaker as well, that like would ChatGPT, which is an AI system, be able to basically treat patients or diagnose patients, right? And the answer was, no, it can't. So the same answer is there that whether we can have an AI system which would completely replace the physicians, the answer I would say, no. And there is a reason. And that's the human judgment part. And in medicine, especially in complex scenarios, there are always things that your AI system could go wrong. The other one is definitely when we go to surgery and real-time interventions, the trust on AI cannot be that much because unless you have a completely reliable system. So I would say the best way that physicians can equip themselves in this AI-oriented world is by understanding what AI can do for them. So if we look at five years, 10 years from now, the physicians who try to learn what AI can do for them, like how to use ChatGPT at their work versus the physicians who do not want to accept this and say, this is not for me, there will be a difference in terms of how you do your job. So nobody will be replaced, but there would be a huge difference in people who use AI and the people who don't use AI at their work. Excellent, great. Looks like we got a couple of questions coming through. I'll start with Dr. Pajani. He really just wanted to give you a compliment, really thanking you for providing this high-level overview. So thank you, Dr. Pajani, for that. And looks like we got some additional questions here in the chat. Just a kind reminder, please put your questions in the Q&A box, folks, so that way we can track the questions at a later time. So Dr. Hu actually had a great question here. Let me go ahead and pull that up for you. So she asked, are there machine learning methods to tease out the thresholds of outcomes? For example, if you want to predict the optimal range of the number of inbox messages that are going to a particular clinician by day or even by month, any thoughts on that, Dr. Dharabi? Yes, yes. So there are lots of AI systems for this. Sometimes, usually, let's say, if you're using Epic for your healthcare software, or let's say you are using Microsoft products to manage emails or whatever messages, usually those vendors develop artificial intelligence modules that can accompany what you do, okay? So that's usually is there. And even if you say it, they might actually give it to you. So, because it actually encourages you to use their system even more and become more kind of relying, you're going to rely more on their system. So, but here is the basically very important thing that I think the audience needs to need to know. I cannot imagine the day that there will be an AI system out there that can do everything for you. That is not going to happen. And the reason is, if you go back to my beginning, the slides in the beginning of my presentation, I had an environment component in the AI system. The fact is that if you look at this world for everything, there are infinite number of environments. Claiming that you can have a brain that can work for all sorts of scenarios in all sorts of environments, that is theoretically impossible. It cannot be done. And here's the other thing. General software like ChatGPT or Microsoft products or Epic products that come with AI, and actually they do these messaging things, they have their own domain. 
In many cases, what physicians need is beyond that, because you have very special environments that these generic products cannot actually be customized for that, in many cases. So the only way that you can actually use AI is to actually develop a system for that. And I would say it is not really complicated. So if there is something, a task, that you have the data and you think, this is something that AI can do for me, just talk to AI people and you can develop one. If you follow it systematically, it can be done. I have done this with many physicians actually, and it works very, very well. Great, thank you so much for that. Looks like we got another question here from Thomas Gasser, and he actually asked a really, really interesting question. So he actually was inquiring about worker health and climate change, and he wanted to understand how AI applications can help us really kind of combat the climate change impacts on worker health and safety, especially in extreme environments. And he also asked about health systems resilience impacted by climate change. I know it's a big of a kind of broad question, but I think he really brings up a good point. I mean, we've been focusing primarily on the occupational medicine facets, but really how do we apply these technologies and these applications to more of the environmental medicine side of what we do? Climate change is a great example. Yes, excellent question, excellent question. And no, actually I'll be very happy to answer this because this is actually where AI can come in and you actually need to develop the AI model for this. I don't think there's any product out there that can do this right now. There are companies, and I know a couple of them that are trying to work on this, but they don't have a final product yet. So yes, the world is changing. Climate is changing. And there are lots of environmental issues, heat, humidity, flooding, radioactive, you name it, it's there. And here's the other thing. A lot of those issues are not there yet, right? So we know we're gonna have a lot of flooding in, for example, where I am, Illinois, there's a prediction for that, but it hasn't happened yet, but it will if it continues this way. So the question is, even if you go just to the flooding part, so we have agricultural workers, right? So if we have huge flooding, first thing is that it's a safety issue, the other one is like a work issue. I mean, if we have huge flooding in agricultural fields, those people are gonna lose their job because it's a huge damage. So how are we going to do this? There are multiple ways. One is predict what is going to happen and when is it going to happen. And AI can do that. And I'm talking about five years from now, two years from now, look at different scenarios of different things in different environments, different signs, like we talked about different safety issues. So what those could be, what's gonna happen, and try to identify, are you ready for those? And if not, how AI can help you monitor the situation. And if those things are about to happen, or if they have happened, what would be the handling procedure for that? AI can be huge in this area, including the logistics after a disaster. The AI actually is great for those. So yeah, that's an excellent question. And I think we need AI for that, definitely. Actually, I think this is a very timely question because I just returned back from the American Medical Informatics Association's annual symposium. 
And I was part of a mini summit on climate change and informatics. And these same topics actually came up during the conversation. So clearly developing a research framework, how do we develop these a priori hypotheses? How do we actually start putting together these inquiries? And then developing the machine learning algorithms appropriately, taking in all these inputs that you just mentioned, Dr. Dharabi. I think this is a ripe area for research and really timely. Looks like we got another question from Rachel Han. So she wanted to know more about budgeting for AI. And she wanted to know the cost of implementation, for example, in the use for fall hazard detection and prevention. Well, so that is like, I can tell you, you know, when you want to buy a car, so what's the cost? Well, it depends on what you want and what the goal is here, right? So depending on what your problem is, and how reliable the system should be, and what you expect from the system, the cost is different. So sometimes we started an AI project and it turns out, let's say, you know, we are involved in a lot of NIH projects, right? So sometimes you already have the data for the patients. So you don't spend any money to collect the patient's data, right? To develop the AI model. So that's a lot of savings. Sometimes you start an NIH project and the first two years is to basically collect data from, I don't know, 5,000 patients and then use it to develop. Now that itself could be hundreds of thousands of dollars, just data collection part. And then again, depending on what you're trying to do, are you trying to just predict like a simple thing and have it on a computer available to one physician? Or are you trying to develop a model that can be put on a network and a lot of physicians, different parts, can actually input to it and get the output, transfer information between them, rely on their prediction to treat patients. Yeah, you are talking about different scenarios. And I can tell you, starting from tens of thousands of dollars, you can go to millions of dollars, depending on what you're trying to do. Actually, that brings up a really good point. I was just thinking about this. So in terms of sustaining a lot of these NIH research projects and you get your allotment of funding and then the money runs out and then that pretty much ends the project. So I'm always curious to see how do projects continue, especially if you're already pursuing a line of inquiry, how do you sustain that and how do you incorporate other stakeholders into that? Yes, yes, that's a very important. And NIH actually has some initiatives there and NIH asks you to actually make your data available when you do a project. Make your method available, right? So all those things are there, but this is an extremely important topic. And I would say if all the people who do, let's say research on cancer, they try to share what they do. Let's say you develop a chat GPT type, not chat GPT, but another software where everybody puts what they learned there, right? So can we get all the data in one place and all the rules, all the learnings in one place and agree that now that could be a huge thing for everybody because I would say, I have a patient, age is this, type of cancer is this, category of cancer is this, these are all other vital signs. Tell me what happened to other patients with this kind of, what kind of treatments they received, what kind of outcomes they had, and this is based on what kind of logic can we have a software that does that? 
Not now, but it can be done if all these people come together and do it. Yes. I'm glad you said that. I think open source is really the key, and I think that's what you're alluding to: making sure this data is available so that folks can take that deep dive, continue the work, and carry on that legacy. It looks like we're just at time, so I want to thank you, Dr. Darabi, for your excellent presentation. I know the slides will be available, and I will definitely be referencing them as I continue my informatics journey. With that being said, I believe we are going to take a break and reconvene in about 15 minutes, if I'm not mistaken. So go ahead and take your bio break, get a cup of coffee, get a bite to eat, and come right on back. Our final presenter of the day is Dr. Cici Hu, who is presenting on AI and machine learning in occupational health: current research, practice, and ethical and privacy implications. Dr. Cici Hu is a research fellow at the Center for Labor and a Just Economy at Harvard Law School and a research economist at the National Bureau of Economic Research, where she focuses her research on the economics of clinician burnout risk and wellbeing. Her expertise is in risk science, and her work on its application to climate change has been well received in media around the world, including the World Economic Forum, Sky News, and the UN, among others. She is also co-author of The Self We Choose, a book published in Chinese that features the life stories of scientists from the largest all-women expedition to Antarctica. Dr. Hu completed her graduate studies at the London School of Economics and received her PhD from the University of Oxford. Good afternoon, everyone. Thank you so much for having me; it's an honor to be here. My name is Cici. I'm a research fellow as well as an economist at Harvard University, where I focus my research on clinician wellbeing and burnout. Today I'll be talking about how we can use machine learning to predict turnover, and how we can use a combination of economics and intervention science to think about budgeting for clinician wellbeing and really make the business case for why it's important to invest in it. Before I start, I'd like to acknowledge that this is not work done by myself alone. I work with a very talented group of machine learning scientists, economists, psychologists, mathematicians, and healthcare leaders, because we have multiple health system partners on these research projects, so it's really a group effort. I have no financial relationships to disclose with ineligible companies, but in the interest of full transparency, I am a co-founder of a mission-oriented company that does work on wellbeing. So let's get started. The agenda is pretty packed today; I'd like to cover four main buckets of issues. The first is to walk you through a methodology we've developed that uses machine learning to predict turnover risk, as an indicator of clinician burnout, at multiple health systems, so you can actually use this at your own system. Then I'm very excited to share a research publication we put out earlier this year through the National Bureau of Economic Research, where we developed a methodology to quantify the comprehensive cost of clinician burnout.
And that really sets the foundation for making a business case for investing in clinician wellbeing, because if you don't know how much burnout is costing your system, you don't really know how to budget for it. The third thing I'd like to cover is a deeper dive into how we should be thinking about quantifying the ROI, the return on investment, of wellbeing programs and interventions; this is one of the really big challenges currently facing many wellbeing leaders across the country. And the last thing I'd like to share, which I think is very important, is that because we're using machine learning and AI to make decisions, we really need to think carefully about the ethical implications of using these methods and what we should do to address those concerns. For each of these themes, I'm going to first lay out the overall problem we've recognized, then go into the solutions, or at least the ways we are thinking about solving the problem, and then share some lessons with you. The number one problem, which I don't think I need to restate because many people in the audience will be very familiar with it, is that burnout is a huge issue in our healthcare industry right now. As of this year, the latest national figure is that 64% of clinicians across the country are burnt out, and one million clinicians plan to leave the healthcare industry in the next two years. We all recognize this is a huge problem; can we imagine a healthcare industry without our healthcare workers? So it is a hugely important and very costly issue. We have now interviewed 114 health systems across the country over the last two years and talked to at least 200 healthcare leaders, physicians, APPs, nurses, and medical staff. And one of the big problems in tackling burnout, according to these interviewees, is that they really struggle to get ahead of the game, ahead of understanding turnover risk and who is about to leave. Right now, the solution most health systems use is some form of survey, whether an engagement survey, a burnout survey, or an intention-to-leave survey, something that asks, what are the issues you're struggling with and how can we help you improve them? Surveys are great for providing a snapshot in time of the problems at hand, but they provide an incomplete view of wellbeing and retention issues, because surveys are restricted by response rates and are subject to potential non-response bias. Let me explain what that means. Nationally, only about 30% of clinicians respond to engagement surveys at different health systems. We are publishing a paper on this topic later this year in which we find that survey non-respondents, those who didn't respond to the survey (which could be 70% of your population, depending on your response rate), have very, very different risk profiles.
So overall, what we find in this particular study, and this was done at a nationwide health system, they're basically five times more likely to leave. And this is probably surprising to some of you and maybe not surprising because if you don't respond to a survey, you're much likely to be engaged with your work, right? So they're five times more likely to leave. They are generally less productive, and they have different risk drivers. And that's really the key part. And because basically imagine you're developing a wellbeing program, and you're using data that is basically from the 30% of the people who responded, but they struggle with different sets of issues to the 70% who are non-respondent. You're really developing programs that is missing the actually 70% of the population, right? So you're not really capturing the issues that they are really actually struggling with, and therefore not able to develop programs that are sort of upstream and actually really address the issues that are problematic. So in order to really innovate around this issue, how can we think about other ways of, you know, getting data that actually covers everyone so that we can really make our wellbeing programs as inclusive as possible? What we did was basically look into data that you already have. So that covers everybody. And that is basically the secrets sort of hidden or hidden gems in your EHR and HRIS data. And basically what we did was we combined these data sources. So EHR data, HRI data, and any existing survey data. So the idea is to gather as much data as possible so that you're covering as many people as possible when you're designing your wellbeing programs, right? So we collect this raw data from your health system. And then what we did was we read thousands of academic papers on the topic of clinician burnout and turnover, and basically drew a whole list of 100 plus predictors or risk factors that have an influence or associated with turnover risk. So what we then did was basically based on the literature, we then go into your raw data, feature engineer these raw data sets, basically back into this list, 100 plus list of risk sort of predictors or variables, however you want to name it. And then we crunch these list of 100 plus risk drivers into the machine learning models that we build. And then these machine learning models in the background will generate insights that is at the unit level, regional level, and as well as the individual level of what is the probability of everyone leaving, as well as a set of individualized or unit levelized sort of risk drivers, which basically identify the combination of risk drivers that actually is associated driving that particular risk. So we field tested this approach with a nationwide health system with 500 physicians and APPs. And the results were really, really good. And so these results are basically used to identify hotspots that you can prioritize in your organization because higher risk means higher priority. So in this particular study we did, we actually successfully found 60 percent of the high-risk clinicians were found 6 to 18 months ahead of time. And what's more important is that these ones who were found were much more likely to leave. So with that information, you can really start to prioritize your intervention efforts. Now, moving on to the second issue or the problem is that we know that clinician burnout is a huge issue nationally. But what is the economics, you know, right? 
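Before moving on to the economics, here is a minimal sketch of the kind of turnover-risk model just described, assuming the EHR/HRIS data have already been feature-engineered into a table. The column names, the synthetic data, and the choice of a gradient-boosting classifier are illustrative assumptions, not the authors' actual model.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature-engineered table: one row per clinician, with columns drawn
# from EHR/HRIS data (the project described above used 100+ literature-based predictors).
rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "after_hours_ehr_time": rng.gamma(2.0, 3.0, n),  # hours/week in the EHR after hours
    "inbox_messages_per_day": rng.poisson(40, n),
    "tenure_years": rng.uniform(0, 25, n),
    "fte": rng.choice([0.5, 0.8, 1.0], n),
    "left_within_12_months": rng.integers(0, 2, n),  # label from HRIS records
})

X = df.drop(columns="left_within_12_months")
y = df["left_within_12_months"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Per-clinician turnover probability: a leading indicator that can be aggregated
# to the unit or regional level to prioritize intervention efforts.
risk = model.predict_proba(X_test)[:, 1]
print(pd.Series(risk, index=X_test.index).sort_values(ascending=False).head())
```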
Like we know that it's kind of at the intuitive level, it's very costly because people are leaving in the healthcare industry. Actually, there was a big study that is done recently that basically quantified nationwide it cost 4.6 billion nationally. But what this number is, I would say it's still very much an underestimate because this is purely based on turnover risk. But based on our interviews with 114 health systems, we actually developed a method that can calculate health system-specific cost of burnout and turnover. And so one of the problems that our interviewees highlighted is that, okay, we have an amazing AMA-based sort of calculator of burnout and turnover when they're trying to make the business case for investing in clinician well-being. But the problem is because it's not system-specific and it's very sort of, you know, national based on national averages, when you do go to the, you know, the finance people in your health system, usually there is some, you know, concerns around, hey, this is not system-specific. So what is your, can you tell me what is actually happening in our own organization so that we can actually allocate budget accordingly? And people really struggle with that. So we actually developed a methodology that actually is a system-specific cost calculator that can be applied to any health system across the country. So this is just a slide to really recap on the problem that we kind of like, I just highlighted earlier, cannot very easily quantify the cost of, comprehensive cost of burnout. And then therefore it's actually really difficult to quantify the ROI of interventions and ROI defined here as return of investment. Basically, you need to know the cost in order to do the ROI calculation. And then what other result is really, you know, we struggle to get budgets for clinician well-being programs and actually sustain that budget because there is not that, you know, continuous calculation of how much this thing is going to cost us, therefore allocating budget. So the methodology that we developed is published and at the National Bureau of Economic Research, I'm happy to share the paper and you can find it online, but basically this is a study that we did with a healthcare organization that basically looked at sort of burnout costs, not just in terms of turnover, which is what the literature tends to focus on. It's this important, don't get me wrong, but what we find is that we also looked into productivity impact as well as patient satisfaction impact. So we used data that, the EHR and HRS data that I talked about earlier, as well as some of the survey data. So what we found for this healthcare organization, which has crossed different states, basically for one year period, the total cost of clinician burnout and turnover at that particular system was 26 million. And there's actually 20 plus million for their physicians and APPs. And we had some really interesting findings. So the number one finding that I was very surprised by is that, okay, now the way that we think about clinician burnout cost is basically we conceptualize it into two buckets. So there is the cost of burnout of people who are still working at your organization, as well as the turnover cost. So in this slide, I'm showing you the turnover cost and the turnover cost breakdown. So we know that when people leave, there is recruitment. So that's a cost. We need to go and hire some new people. 
There is onboarding cost, as well as lost revenue, because you have to wait until you actually fill that vacancy. In this particular study, what I found really interesting is that we quantified the onboarding costs for that organization; the calculations are all in the paper. Essentially, we found that new hires at that organization take between 8 and 12 months to reach their peak performance, depending on their role, so there is a huge onboarding cost, in this case about $2.5 million. But what was more surprising to me when we did the study is that the majority of the total cost, $18.5 million, is productivity losses. That's because people are so burnt out that they reduce their FTEs, reduce their workload, and become less efficient, so there are huge productivity losses. And I think that's the main crux of the paper: burnout cost is not just about the people who are going to leave or who have left. It is really about the people who are still at your organization, working really hard and struggling, and they account for the majority of the cost. The third finding, which we are still working on now, is that burnt-out physicians and APPs have, on average, 14% lower patient satisfaction scores. We're going to quantify that into a dollar amount in a follow-up research project, but there is a significant patient satisfaction impact too. So the overall picture is that we developed a methodology that attempts to quantify the comprehensive cost of burnout, dividing it between the people who leave the organization and those who are still there. Down the road, we'll be looking into more cost metrics, to include things like quality and safety costs and other items, which I'll talk about a little later; in this particular study, these are the cost breakdowns we looked into. Moving on: if you're thinking about budgeting into the future, how can we turn this comprehensive cost calculator into a budgeting tool? The answer really lies in the predictive piece. In this study we don't just calculate the historical cost of burnout; we also forecast your future cost, and based on that you can start budgeting for your intervention or wellbeing programs appropriately. The way we do this is by taking the turnover probability and multiplying it by the comprehensive cost of turnover. With that evidence, you can estimate how much burnout is going to cost your system 12 months down the road and budget accordingly. Now, to take the predictive piece a step further, we need to think about ROI: what is the return on investing in your wellbeing programs? We've talked about how to calculate the historical cost and how to forecast future costs 12 months down the road.
Now, what we're talking about is how we can use turnover risk, as a predictive leading indicator, to quantify the ROI of investing in your wellbeing programs. Turnover risk is a leading indicator, which is very different from turnover rate, a lagging variable. And with a lagging variable, we know that whenever we do interventions, even if you start now, it's going to take up to two years to budge, if there is any hope of budging. So if you're trying to proactively allocate resources for your wellbeing programs, you need to use leading indicators, and the innovation here is that we use turnover risk as a leading indicator to look into the future, think about the costs down the road, and then budget. So to calculate that ROI, the first piece we need is the turnover risk. The second piece of information is the definition of ROI itself: what is the equation? There are some debates in economics around exactly how to calculate ROI, but typically it's the equation in front of you: ROI equals the net return divided by the cost of investment. In our case, for a wellbeing program, the net return is the cost savings from the program or intervention minus the cost of the intervention itself: the staffing resources you put in, or IT costs in some cases, depending on what the intervention or wellbeing program is. And the potential cost savings from the intervention can be calculated using the methodology we just talked about: the turnover probability multiplied by the comprehensive cost of turnover. With these three pieces of information, you can properly quantify the ROI of any wellbeing program in your system. I've already talked through how we calculate the turnover risk, so I won't go into detail, but essentially you take data you already have, EHR and HRIS data, feature engineer it into the 100-plus risk factors we identified from the scientific literature, and that generates risks for everybody. Then we aggregate everything up so that decisions can be made at the unit level, which is where the best practice is, according to intervention science, when you're developing wellbeing programs. You can then visualize these costs down the road: for a hypothetical clinic, how its turnover cost is projected to evolve over the next few months, and how it compares to peers across your system. This is purely imaginary data (we're not showing any real data here), but conceptually you can imagine using the data and the research results to design proper interventions or wellbeing programs.
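A minimal sketch of the forecasting and ROI arithmetic just described. All dollar figures and probabilities are hypothetical; only the two formulas come from the talk, namely expected cost as turnover probability times the comprehensive cost of a departure, and ROI as net return divided by the cost of investment.

```python
def expected_turnover_cost(turnover_probabilities, cost_per_departure):
    """Forecast burnout-related turnover cost: each clinician's predicted
    probability of leaving times the comprehensive cost of one departure."""
    return sum(p * cost_per_departure for p in turnover_probabilities)

def roi(cost_savings, program_cost):
    """ROI = net return / cost of investment."""
    return (cost_savings - program_cost) / program_cost

# Hypothetical numbers for a 40-clinician unit.
baseline_risk = [0.30] * 40        # predicted 12-month turnover probabilities
post_program = [0.22] * 40         # hypothetical risk after a wellbeing program
cost_per_departure = 500_000       # comprehensive cost of one departure (recruitment,
                                   # onboarding, lost revenue, productivity)
program_cost = 250_000

savings = (expected_turnover_cost(baseline_risk, cost_per_departure)
           - expected_turnover_cost(post_program, cost_per_departure))
print(f"expected savings: ${savings:,.0f}, ROI: {roi(savings, program_cost):.1f}x")
```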
I have talked a lot about ROI. If we really want to calculate the ROI of specific interventions, we need two key components. One is the ROI and predictive piece I talked about earlier. The other key component is intervention science: does the intervention actually work? This is really important because you need to know whether the medication you launched — medication meaning the program you launched — actually had an effect on your people, and then what the economics, the return, looks like. One thing we have really learned from working with multiple health systems is that well-being programs are often not rolled out in a randomized controlled setting. The ideal way to assess intervention effectiveness is through RCTs — randomized controlled trial settings — but in reality that is usually not how it happens. So one thing we are doing with our health system partners in these projects is collecting data on how the intervention was actually rolled out, because that is what it takes to do intervention science properly and work out the ROI for specific interventions: what the intervention was, who it was rolled out to, when, what the inclusion and exclusion criteria were, and how it was rolled out. Based on that, there are experimental designs that can get around some of the limitations of interventions not being rolled out in a randomized controlled setting: regression discontinuity designs, difference-in-differences, interrupted time series — multiple statistical methods designed to estimate intervention effect sizes in a way that is robust and meaningful even without randomization. Once we have collected the details on the interventions, we also run robustness checks — depending on the experimental design, a list of different tests — to make sure the results are statistically sound, robust, and meaningful.

Now, to test intervention effect sizes and especially the ROI: you have worked out an experimental design to estimate the effect size, but which cost metrics can you use to quantify the ROI of the intervention? Here are some of them — not a comprehensive list, but one we have started working on. I have already talked about turnover risk, but there are other cost metrics you might want to measure. For example, we know that burnt-out physicians and APPs are more likely to take sick days, and that can be quantified. So if we launch a program meant to reduce the number of sick days, we can develop economic models that translate that effect size into a dollar amount, giving you the cost savings from having intervened. There are lots of different cost metrics; I will not go into the details, but if you are interested, I am very happy to talk more about how we are developing the different models to address them.
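Difference-in-differences is one of the quasi-experimental designs mentioned above. Here is a minimal, hypothetical Python sketch of the idea; the metric, units, and numbers are invented for illustration and are not from any of the speaker's studies.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    # Effect estimate = (change in treated units) - (change in control units);
    # the control units' change stands in for what would have happened without the program.
    return (treated_after - treated_before) - (control_after - control_before)

# Average sick days per clinician per month, before and after a rollout.
effect = diff_in_diff(treated_before=1.8, treated_after=1.2,
                      control_before=1.7, control_after=1.6)
print(f"Estimated effect: {effect:.2f} sick days per clinician per month")
# Prints: Estimated effect: -0.50 sick days per clinician per month

A real analysis would also test the parallel-trends assumption and run the other robustness checks the speaker mentions before converting the effect size into a dollar amount.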
Lastly, and this is really important because we do not live in a perfect world, one of the best things about quantifying the ROI of interventions this way is that it helps us think about intervention and work design. What is the optimal design of an intervention rollout? With these tests, you can compute the counterfactual: what would have happened to the metric you are trying to measure if you had not intervened, versus what actually happened under your current rollout. Given the existing intervention effect sizes and the ROI, what is the optimal design, and what is the gap? How could you roll out interventions in the future so that the rollout improves and the effect size increases? That is how we are approaching this: we start with small programs that are not necessarily randomly assigned, and then work slowly toward an ideal way of launching and tracking the ROI of interventions.

That was quite a lot of information. I am ready to move on to the last part of my presentation, which I think is very important. I have talked quite a bit about how we can use machine learning, drawing on different data sources, to make decisions around clinician well-being. Part of this involves collecting data, and using data you already have but putting it together in a very deliberate way, and there are ethical implications to answering a question this way. So I would like to walk you through some of the ethical concerns from a paper I really like: a systematic review of roughly 156 papers in the literature that summarizes the major concerns about using machine learning or AI to make decisions about certain populations — in our case, about developing well-being programs. There are three major buckets of concerns. One is epistemic, which breaks down into three subcategories I will go over in a second. The second is normative, and the third is transformative, which is on the next slide.

Let me quickly go through the epistemic concerns. The first is inconclusive evidence: algorithms are probabilistic in nature, might be using the wrong data, and there is no causal relationship. For example, say an EKG watch is supposed to make a diagnosis of some disease, but the reading is simply wrong, technically flawed. If that happens, the evidence is wrong and therefore the diagnosis is wrong. The second is inscrutable evidence, which applies when the recipients of an algorithmic decision have no oversight of how the data was used or compiled to make that decision. If a treatment was recommended by a decision support system running on these algorithms, what if the data was used inappropriately? The user has no idea why the treatment was recommended or on what foundation. The third subcategory is misguided evidence: decision support systems relying on algorithms that give you unreliable outcomes. The case study here was IBM Watson for Oncology, which is widely used in China but was trained on data that does not reflect the Chinese population at all, so there are very real potential biases. And then there are also normative concerns.
Normative concerns relate to unfair outcomes, and sometimes they have transformative effects. For example, an algorithm might prioritize certain patients because of predicted outcomes but miss part of the population we are trying to protect, such as ethnic minority groups. In terms of transformative effects, say an individual is using a health app that makes recommendations. As I touched on earlier, the user has very little ability to challenge the decisions or recommendations, and there is very little transparency around how the decisions or algorithms were produced. They are also at risk of losing autonomy and privacy over their data. The last, overarching concern relates to traceability and accountability: where AI or algorithms cause harm through decision support systems, it is very difficult to know who is ultimately responsible when something bad happens. I wanted to go through this because it highlights the main issues that arise when we use machine learning or AI to develop algorithms that inform decisions, and these are some of the main concerns people have.

Now I am going to move into how we as a research group have thought about addressing some of these concerns. This is still very much ongoing work, really at the cutting edge of AI ethics; no one has the answer. But here is a framework to guide our thinking, organized around ten overarching themes, some of which I will combine because of time constraints. First, transparency and explainability, robustness and reliability, and transparency in the whole AI and machine learning development process. For those three themes, what we typically do with the research partners we work with is give them a report on our machine learning models. It explains how each model was set up, how we did the cross-validation, how we tested that the results were robust and valid, what data was used, the inclusion and exclusion criteria, and what the accuracy metrics were. The goal is to provide as much transparency as possible to the people who will use the insights we generate. Within that report we also discuss the limitations of the different models, and we keep a very clear model inventory along with the assumptions made, so everything is in that technical document and the stakeholders we work with have clear access and understanding. The next bucket of themes has to do with fairness, bias, and social impact and inclusion. The way we tackle this is that for all of the risk predictions we make, we check for potential biases: we look at the different gender and race groups and, where we have data, sexual orientation and religion, and we always check whether the scores differ significantly across groups.
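A hypothetical version of that group-level check might look like the following Python sketch. It is an illustration only, not the team's actual pipeline; the group labels and scores are invented, and it assumes scipy is available.

from scipy.stats import f_oneway

# Predicted turnover-risk scores grouped by a demographic attribute; all values invented.
scores_by_group = {
    "group_a": [0.12, 0.18, 0.15, 0.22, 0.17],
    "group_b": [0.14, 0.19, 0.16, 0.21, 0.18],
    "group_c": [0.31, 0.28, 0.35, 0.30, 0.33],
}

# One-way ANOVA across groups; a significant difference triggers a review of the model.
stat, p_value = f_oneway(*scores_by_group.values())
if p_value < 0.05:
    print(f"Risk scores differ across groups (p = {p_value:.3f}); pause and investigate.")
else:
    print(f"No significant group difference detected (p = {p_value:.3f}).")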
If the scores do differ significantly, we pause, regroup, investigate why that is the case, and think about how to tackle it. We also make sure we have data on diversity and inclusion, so that every attempt is made to be as fair and inclusive as possible. The next theme is privacy and security around data collection, usage, and storage. Here are some examples of how we make sure the data we collect is stored safely: we use Amazon for data security and encryption, along with a number of other technical measures. I am not the expert here; I work with technology experts who make sure the whole data storage and security piece is the best we can achieve. In terms of accountability and responsibility, there is a commitment researchers need to make, for example, being genuinely available to questions. If people ask how the research was done or how the machine learning models were run, you have to be available to answer and explain how you did things. In terms of human control and autonomy, and safety and risk management, it is crucial that we work very closely with our stakeholders and keep human-centered control and ultimate decision-making in place, because of all the risks and concerns I mentioned before. We need those controls, and for now, humans need to be there guarding the insights.

Another thing we do with our partners is work with psychologists. When stakeholders engage with our results or our models, they receive trauma-informed training: they think through what a psychologically safe way of communicating the results looks like. For example, if you go to a clinic's leadership and say, your unit is burning out and these are the problems people are potentially having, how do you have that conversation in a trauma-informed way? You are bringing them bad news. How do you make sure it is communicated in a way that encourages action, empowers them, and creates a collaborative, supportive approach? This is really important, because if you just walk in and list everything that is wrong in their unit, I can guarantee it will go down badly: people will get defensive and fearful of bad news. So how we communicate and present the machine learning results matters a great deal. The last thing, which we are in the process of doing, is setting up an ethics committee. The committee will consist of ethicists, legal professionals, researchers, and domain experts who will think through the perspectives of the different stakeholders and how to bring everything together, to make sure that all of the research and all of the decisions based on the data insights are made in an ethical and transparent way. With that, I have talked a lot. Thank you so much for your time and attention.
If you would like to discuss anything around using machine learning to develop well-being programs, or how to calculate the ROI of your well-being programs or interventions, please feel free to reach out to me at xhu.law.harvard.edu. Thank you so much.

All right. Well, thank you so much, Dr. Hu. That was a very comprehensive overview of what is happening, especially around burnout. I know many of us recently saw the MMWR article issued a few weeks back highlighting that burnout continues to plague healthcare workers across the country, so this is a very timely topic. I believe we have a few questions posed in the chat. First, Dr. Nguyen asked how these intervention programs are developed for physicians and healthcare workers in general, and whether there is a root cause analysis that can look at the upstream factors behind the burnout. Do you have any thoughts on that, Dr. Hu?

Thank you so much, Manny, and thank you, Dr. Nguyen. That is a really great question. The way the models are built right now, they are risk-driven: they are predictors of turnover risk. But with our health system partners we have built a sub-module that is a root cause analysis. We pull in all of this data. Imagine you are a clinical unit leader, and we tell you that seven people out of ten are struggling with a scheduling issue, and here is the data on the distribution of the schedules, how they are spread out, who is not taking breaks, and so on. We show you that data, and then we help put together a group of people, usually the clinical leadership, their business partners, and well-being champions. They come together, look at the data, and go through the root cause analysis module we developed. The idea is that they sit down with whatever objective data the machine learning models have produced and dig into it: is the scheduling issue being flagged a seasonal problem, or something else? So it is a combination; I hope that addresses the question. Based on that discussion, the group usually comes up with a set of interventions they feel are appropriate and specific to the risk drivers. If it has to do with onboarding, for example — say one of the highlighted risk factors is that your new hires are potentially struggling because you have just had three new hires — a typical associated intervention is an onboarding program, and they would work with HR on it. That is how it has worked in my experience with a few health systems so far.

Thank you so much for that. I was really intrigued by your whole segment on ethics, especially around some of these machine learning algorithms you and your colleagues have developed. In our business, we are always cognizant of employee and applicant privacy and of making sure there is informed consent, especially around some of these new technologies. I was not sure what your process is for obtaining that consent.
How do you disseminate this information to, say, a clinician whose upstream factors you want to identify? How do you build up that stakeholder engagement piece?

That is a really great question, and something we have spent many months discussing internally. Currently, we use data the health systems already have: your RVU data, your productivity data, your schedule data, which are already being used for multiple decision-making purposes. The way we surface results is that we do not produce any individual-level results at all, because we are very aware of the risk of re-traumatizing our clinicians, and we do not want to do that. To build accurate models we need to go granular, which is why we use data that is already being collected on everybody anyway, but we then aggregate all of the risk factors up to the unit level, so no one is going to be blamed for anything in particular. We also only identify and work with risk factors that are related to the work environment — workplace drivers. The idea is to bridge that gap by providing objective data that you can track over time and use to engage with the different risk factors. We also integrate with surveys, since some surveys capture workplace drivers of burnout well, and we take those in at the unit level too. So that is how we are approaching it. We are very aware of the issues you raise, and I am hoping this approach both addresses the immediate problem — that we have no data — and protects the individuals we are trying to help with well-being programs. It is about walking that balance.
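To illustrate the unit-level aggregation Dr. Hu describes, here is a hypothetical sketch using pandas. The column names, units, and values are invented; the point is simply that individual-level rows stay with the modeling team and only unit-level summaries are surfaced.

import pandas as pd

# Individual-level model output (kept private to the modeling team).
predictions = pd.DataFrame({
    "unit": ["cardiology", "cardiology", "cardiology", "primary_care", "primary_care"],
    "turnover_risk": [0.22, 0.31, 0.18, 0.09, 0.14],
    "overtime_hours": [12, 20, 8, 2, 5],  # example work-environment driver
})

# Only unit-level summaries like this are shared with leadership.
unit_view = predictions.groupby("unit").agg(
    mean_turnover_risk=("turnover_risk", "mean"),
    mean_overtime_hours=("overtime_hours", "mean"),
    n_clinicians=("turnover_risk", "size"),
)
print(unit_view)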
Yes, absolutely, and I am so glad you said the word balance, because that is really the operative word here. Many of us work at academic medical centers and other healthcare institutions where we take care of healthcare workers, and this is something a lot of us deal with on a day-to-day basis. At the end of the day, the data can help guide us as we work with our executive leadership teams and the well-being champions across the health system. But from my standpoint, and I am sure many of my colleagues agree, this technology can be really scary for clinicians. Many of us work in settings where we want to provide the best care for our patients, but making sure we take care of ourselves can also be a challenge. We always want to fine-tune how these data inputs are used and then take it to the next level: how do you envision incorporating this data into the development of an intervention? Every healthcare institution is different, and the data coming in may vary from institution to institution, but for a primarily clinical audience, how do we take that data and actually use it to develop interventions that bend the burnout curve? Do you have any thoughts on that, or experiences from some of the healthcare institutions you have worked with to date?

Yeah, great question. Thank you. The way we are doing it now is organized around the risk drivers. We are now working with, I think, 200-plus risk factors, because we are constantly doing feature engineering, thinking through the things that might drive turnover risk. We take the data, break it down by the different units, and segment it by geography, location, specialty, and role. That lets us surface things like: female physicians in this particular specialty have a higher turnover risk, measured objectively, and these are the potential workplace-related drivers — for example, a lot of them are working overtime or taking very little time off. We surface that granular-level data, and the decision makers sit down with it and recognize that some things need system-level change and some do not. Within the risk drivers, we assign a sphere of control for the different levels of leadership. If it is a gender issue, say the female physician finding, or another vulnerable group where we observe higher risk, it gets escalated to the system level. If it is very local to the unit, say something around unit culture, we route it to the local level. The research is designed so that it does not frighten or paralyze people; we really want to empower action, so people feel they can do something within their sphere of control, if that makes sense. I hope that answered your question.

Yeah, I feel like this is an ongoing discussion. A lot of our colleagues here within ACOM, as well as in the greater occupational and environmental medicine community, continue to steer these discussions. But we appreciate your time today, Dr. Hu. Clearly you and your team have used these machine learning algorithms to gain real insights, and I really appreciated your concept of human-centered involvement when it comes to interpreting the data. We always have to be cognizant of that, because at the end of the day we are working in these institutions and have to bring a human perspective to the data itself, and hopefully create holistic interventions that meet clinicians where they are, encourage them to seek out resources, and address those root causes before something terrible happens. We have seen a number of such cases over the last few years, especially through the pandemic. With that being said, we are running ahead of schedule, which is not a bad thing, so I am going to go ahead and conclude day one of the Virtual Fall Summit. We thank you all for joining us today, and we will continue tomorrow. Dr. Khan will be moderating the climate change and occupational and environmental medicine section, and we have a great lineup of speakers talking about a variety of environmental health topics. I believe we will be starting at 12:40 Eastern, 11:40 Central. We look forward to seeing you all tomorrow.
Have a great rest of your afternoon and evening. Thank you all.
Video Summary
AI systems are computer systems that can perform tasks that require human intelligence. They use algorithms and vast amounts of data to improve their performance over time. There are different types of AI systems, including rule-based systems, machine learning systems, and neural networks. AI systems have the potential to transform industries but also raise ethical and privacy concerns. The cost of implementing AI can vary depending on factors such as the complexity of the system, data collection and processing requirements, expertise needed, and additional resources. The speaker discussed the use of machine learning for predicting turnover risk and developing interventions for clinician burnout. They emphasized the importance of collecting data from multiple sources to have a comprehensive view of clinician well-being. They also discussed the need to calculate the return on investment for well-being programs and the challenges in measuring cost savings. Ethical considerations around bias, privacy, accountability, transparency, and stakeholder engagement were also highlighted.
Keywords
AI systems
computer systems
human intelligence
algorithms
data
performance improvement
rule-based systems
machine learning systems
neural networks
industry transformation
ethical concerns
privacy concerns
cost of implementing AI
data collection
processing requirements