AOHC Encore 2024
324 Holoportation: A New Frontier in Healthcare
Video Transcription
Hi, good afternoon. Welcome to our afternoon session, which I am so excited to introduce. We have faculty who are experts in machine learning, AI, technology, and the interface with health care. I'd like you to give a warm welcome to Dr. Bari Hoffman and Dr. Roger Azevedo with the University of Central Florida College of Health Professions and Sciences and School of Modeling, Simulation, and Training.

Thank you, Dr. Corretto and your conference planning team. It's really an honor to be here, and we appreciate the invitation. I'm going to get us started and list a few of our disclosures for some of the work we're going to present. During the short time that we have, we want to give you a snapshot of some of the work we're doing, and we're going to start off with some innovative work in the area of holoportation and hologram technology.

As we get started, I want to describe a little of the work and why we explored its use, in particular in health care and health care education. I'll go through very briefly how it works. There may be a few people in the audience whom we had a chance to see on site yesterday, where we actually demonstrated the holoportation technology. We'll look at a few of the characteristics that are unique to this type of display system and some of the actual use cases in which we are implementing holoportation technology at the University of Central Florida. We want to explore some thought leadership and some opportunities for use that maybe we haven't had a chance to pursue, but that you may find innovative in your own setting. And we'll talk a little about some of the emerging technologies we're using to document the learning science associated with this technology.

So as I get started, I want to describe the technology itself. It was really developed as a telecommunications tool. I first learned of it about four years ago, during the pandemic. If you know the performer Puff Daddy, he was actually beaming into his son's living room through this device to sing happy birthday; one of them was in Miami, the other in Malibu, California. When we saw this used for pure entertainment purposes, we really started to think about all the ways we could use this technology to advance our ability to train our students with a wide variety of patient experiences: train them in soft skills, and train them to learn more from patients' symptom profiles, lived experiences, and so on.

As I was mentioning, it was originally developed and implemented in the entertainment industry; now it is being utilized across a number of verticals. We were the first university to use this technology in healthcare education, and we started to develop that pathway. It's a new form of 3D capture technology: it looks as if a 3D model of a person is constructed and then transmitted elsewhere in the world in real time. Imagine a life-size Zoom call; that's probably the easiest way to describe how we use it. As I mentioned before, we really saw this as a way to capture the humanity of an individual's lived experience, understand their disability, and look in depth at their symptom profile and the severity of symptoms that can occur, particularly in long-term and chronic conditions. It allowed us to provide more on-demand skill training and to remove geographic constraints.
One of the things we're also interested in, and will share a little of today, is some of the emerging research we're embarking on, which is demonstrating that individuals who engage with technologies such as holoportation or hologram experiences have a feeling of social and spatial co-presence. As we study the use of this with our students, we hear in their feedback that they forget it's a simulation or a representation of an individual; they feel as if that person is in the room with them. That co-presence is often lost when we engage with traditional 2D platforms.

This is a picture of me in our lab. What you see is me on the left-hand side of the screen, in front of a 4K camera. There's nothing unique or proprietary about that camera; in fact, we can now also make this representation with an iPhone. With an iPhone 14 or higher, we can get the same image quality. Essentially, we take the image from that camera and transmit it into what we call the box. The technology itself, what you see in the picture, is about the size of a very large refrigerator. We call that the Epic, and those of you who were with us on site yesterday had a chance to experience that large Epic. What we brought with us today is the smaller tabletop version, which is in the center of the screen. Those of you in the back may want to work your way up front so that when we show some of the examples, you'll be able to see the subtle details. Please feel free to come forward as you're comfortable.

So that's the box. The studio, as I mentioned, is a 4K camera (we can use an iPhone), some traditional photo studio lighting, and a white backdrop. Part of the magic in how we create that volumetric image is a piece of acrylic: we use a shadowing effect, the acrylic, the white background, and the box.

The technology works two ways. What you're going to see me demonstrate today is our recorded content. We have developed a library of various patient experiences, lectures, and skill-based training, and I'll show you a sample of that. We can use it on demand. You can imagine, in higher education, semester after semester across a variety of classes: instead of having a patient come to class, with the physical logistics and the scheduling burden, and given that for some conditions it really isn't feasible or healthy for individuals to come out in front of a large group of students, we have the ability to share their experiences, their symptom profile, and the assessment with our learners. So the first version is recorded content.

The second is what we call live beam. How many of you were with us yesterday? Okay, a few of you. So you had a chance to experience live beam, where we actually had somebody from California beam into our box and we were able to have a live conversation with little to no latency; it's all give and take based on your connection. We are also able to live beam from our room. Some of the attendees had the opportunity to turn themselves into a hologram, right from one side of our room, and transmit themselves right into the box. So those are the basic components of the technology, and I'm happy to talk a little further offline if you want to go more in-depth.
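To make that pipeline a little more concrete before moving on: whether the content is recorded or live beamed, the underlying loop is capture, encode, transmit, display. The sketch below is purely illustrative, a minimal stand-in written in Python with OpenCV and a plain TCP socket; the actual pipeline behind the box is proprietary, and the host, port, and function names here are hypothetical.

```python
# Illustrative only: the real pipeline behind "the box" is proprietary.
# Assumes OpenCV (pip install opencv-python); DISPLAY_HOST, DISPLAY_PORT,
# and stream_studio_feed are hypothetical names.
import socket
import struct

import cv2

DISPLAY_HOST = "box.example.edu"  # hypothetical address of the display unit
DISPLAY_PORT = 9000               # hypothetical port


def stream_studio_feed(camera_index: int = 0) -> None:
    """Capture frames from the studio camera and push them to the display."""
    cap = cv2.VideoCapture(camera_index)
    # Ask the driver for a 4K feed; an iPhone feed would arrive the same way.
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3840)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 2160)

    with socket.create_connection((DISPLAY_HOST, DISPLAY_PORT)) as conn:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Compress each frame. Production systems use a hardware video
            # codec; this step and the network hop are where "little to no
            # latency" is won or lost.
            encoded, jpeg = cv2.imencode(
                ".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 85])
            if not encoded:
                continue
            payload = jpeg.tobytes()
            # Length-prefix each frame so the receiver knows where it ends.
            conn.sendall(struct.pack("!I", len(payload)) + payload)

    cap.release()
```

In a loop like this, the latency the live beam depends on sits almost entirely in the encoding step and the network hop, which is why, as the speakers note, the connection matters more than the camera.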
I want to take just a few minutes during my presentation to share some of the use cases we have found helpful. Again, we have really just been embarking on this area of clinical integration and research over the past three years, and our use cases have continually grown over time. This is by no means an exhaustive list, but these are some of the ways we are engaging with the technology today.

What you see in the box is actually a patient; all of the individuals in the box are volunteer patients. We are not actually using this to deliver health care at this point in time, although we'll talk a little more about that application as it emerges. We use this as a means to capture the storytelling of individuals' lived experiences, both the patient's and the caregiver's. We are utilizing it for caregiver education and caregiver support. In our simulation space, I'll show you an example of how we're using it with an embedded participant, along with some skills training, grand rounds, and case presentations. We have an application in rural health that we hope to expand into later this year, utilizing it as a clinical extender, and for health care consultation and delivery of clinical services in the near future. I'm thinking that some of you, as you see some of the images and as we talk about this, are already thinking about novel ways you might use it in your own space.

I'm going to share a couple of examples of our live beam utilization. This is actually in a large conference center at UCF that holds about 300 to 500 people. If you remember, just as the pandemic was ending in the U.S., and in particular in Florida, that wasn't necessarily the case all over the world. We were having our first in-person symposium, our Institute for Exercise Science and Rehabilitation Symposium, and the keynote speaker was coming from Australia, which was not open for travel. So we had to address some geographical barriers. We took our large Epic box, which you see there in the image from our lab, moved it across campus in a travel box, and were able to beam this particular keynote speaker from Australia to Orlando. He was talking about exercise oncology, and from his lab in Perth, Australia, he was able to show data, with his slides on the screen live, on how certain exercise was changing tumor biology in prostate cancer.

As we moved on in our utilization, this is another of our early live beam experiences: we started to engage in live training with content experts. We were fortunate to have a philanthropic gift from Brooks Rehabilitation to help us bring the technology to UCF. This is actually one of their practitioners on the floor of their neurorecovery unit, giving a lecture to students and clinicians in the space on the use of various assistive technologies for mobility support. When the lecture and the demonstration were over, we moved into a panel discussion. So imagine how you might connect individuals in various geographic areas where you may want to do case consultation or grand rounds, with live demonstration and interaction around patient-specific data.

Another fun experience we've had at UCF: this is Howie Mandel, who is actually in the picture.
For those of you who watch some of the shows he's been in, he is very vocal in the community about his battle with ADHD, OCD, and mental health. So this is actually a picture of that; the students loved getting up and doing their selfies with Howie. This was another panel presentation, where we had a variety of members from the community talking about neurodiversity and transitioning from childhood to adulthood, and Howie beamed in from California to take part in that panel. This is a little example of some of that footage.

Could you give us some sound on the video? Well, I apologize that you're not able to hear the video; you can see a bit of the screen capture. "...it's a hologram. That's how I deal with it." Let me see if I can start that over for you. "I mean, this is obvious. I have OCD, ADHD. I've got social anxiety. So the way I'm able to deal with it is I can stand in front of people as long as I'm 3,000 miles away and it's a hologram." So you get the idea.

I'm going to move on from our live beam cases and talk a little about some of the content we're creating and its implementation, particularly in health care education. This is actually one of our neurologists, Dr. Boddy, recording an individual with Huntington's. We run a Huntington's disease clinic in our area, and one of the things you can imagine with Huntington's disease is being able to capture the progression of symptoms and train our students, and, beyond Huntington's disease, exposing our students to the wide variety and severity levels of movement disorders in general. One of the things Dr. Boddy is working on with us is capturing gait. He runs an international fellowship in gait disorders, and one of the challenges is exposure to the wide variety of gait disorders. The idea and the vision is to collect recordings and expose learners via holoportation, on demand, to hopefully reduce the time it takes to gain access and exposure to those conditions.

Another example of how we're using it is as a skills lab. In this particular picture, you see the tabletop version, and we're actually running an Adaptathon. We have some of our engineering students and some of our physical therapy students adapting technology: toys with 3D-printed switches, adapted for children with mobility impairment who can't access typical toys to learn and develop fine and gross motor skills and cognitive development. Instead of having instructors at every station repeatedly go over the same information, we're able to run multiple stations with the tabletops. You can imagine that this could be a skills lab in debriding a burn or in wound care: we can set it up step by step, reduce the instructor's burden for the basic instruction, and keep that instructor available for assessment or for the higher-level needs of training.

This is actually one of the first libraries we developed. One of our faculty, who teaches our pediatric physical therapy course, had a big challenge, particularly post-pandemic, bringing a wide variety of children in for mobility assessment to train our doctoral physical therapy students. Such hands-on experiences were really no longer possible.
Also, for children with disabilities, bringing them in carries very high risk, and the logistics can be highly burdensome. We also know that the majority of our students had very little exposure to children, so bringing children in live and in person created a lot of behavioral challenges for everyone in the space. So she started to develop a library of infants and children from two months to 10 years old, where we're able to see an array of conditions: low muscle tone, increased muscle tone, atrophy, spasticity, and normal development. When we studied the students' response to the technology as compared to traditional methods, whether through video or in person, the hologram was the preferred modality over traditional recorded content. Students noted appreciation for concepts such as center of mass, base of support, and physical assistance, things that are very hard to pick up on 2D platforms.

This is an example of one of our hologram classrooms, and this is a patient who is a cancer survivor. He tells his story about HPV-related tonsil cancer: what the sequelae of symptoms were like, what getting diagnosed was like, and what his life is like now. A very robust learning experience that you couldn't capture unless you heard it from that individual. One of the things we really identified with students is that they were making a connection; they were grasping the content and the concepts in a much richer manner. So we started to embark on the study of co-presence, which we know has the ability to make that interaction even more authentic. What we found is that our participants reported more of a feeling that they could interact in the same physical space, collectively modifying their mutual reality.

Let me take a pause from the slides and give you an example of some of the cases we have; you can imagine the impact when you see this person life size. I'm going to switch over here, and I'll talk a little louder. This is a very small clip of one of our Parkinson's patients, and you should be able to see some of the subtle characteristics of hand tremor. "I'm 82 years old. I developed Parkinson's about 5 or 6 years ago. It started as a shaking in my right hand, and I thought it was essential tremor, because I had a friend who had that. Finally, my primary care physician in Chicago, where we were living, sent me to a neurologist, and she diagnosed it as Parkinson's. My balance is not as good as it used to be. I walk very carefully so I don't trip and fall. I still play golf, although it doesn't improve my golf game." "So what I want you to do is just stand feet together. And I want you to close your eyes. I'm here with you. Try to get that knee up to your waist. Close to your waist. There you go. Let's go two steps forward." So you get the concept.

I'll show you another example. This is a small clip of that cancer survivor case I was just sharing with you. "I am 51 years old. I am a survivor of stage 4 HPV-related tonsil cancer. I'm sharing my story because I really want people to understand the importance of the HPV vaccine. At the age of 44, while I was taking a financial exam for our family business, I put my hand on my face to ponder a question, moved my hand onto my neck, and felt a small bump. I'm sharing this with you because I felt perfectly healthy. I really had no symptoms."
So that gives you an example of some of those cases. While I'm standing over here and we're looking at the technology, I'm going to jump to something I was going to show you towards the end of my presentation. We're starting to work with this technology on some AI integrations. One of the things we're just piloting: in fact, yesterday was the first day we were able to demonstrate this to the group on site, and you are the second group to see it. We are able to take that same case and translate it into a different language. So now you can imagine some of the things we're able to do, and this is just with our recorded content. I'll give you an example of that same case in Chinese. And how would you like to see it in Russian? Same case; we're able to change the language, match it to the lips, and keep the same voice. So now you see why it's so much fun to go to work every day.

Okay, let me move on and talk about some of the other use cases. As we see now in the media, in our clinical settings, and particularly in higher education, there is a strong call to action to address health disparities, particularly those fueled by the social determinants of health. So we started to embark on some of the other use cases. What you see in the picture here is actually a standardized patient, where we have utilized simulation best practices to develop cases in the social determinants of health, along with a pain science curriculum, training our students in how to develop rapport in the therapeutic alliance, and training in caregiver education. And we've now developed a whole course, delivered via hologram, on the biopsychosocial aspects of aging.

This is an example from our social determinants of health curriculum. We studied this across a variety of our health care students in medicine, nursing, physical therapy, speech pathology, and social work, and we validated cases centered around PTSD, stroke, and a pediatric genetic condition. The learners viewed the individual's story, were able to interview the patient, and had a lot of context on the history, physical ability, medications, and so on. We have submitted this work for publication and hope to have it out very soon. One of the things we learned from the student engagement is that there was strong connection and high realism with this technology as compared to others they are utilizing. They had a stronger ability to understand the social determinants of health, and they felt it was incredibly immersive. In quotes taken from that data, they said it was "as if I was there"; they felt as if that individual was in the room with them.
Another stated that the hologram patient gave a realistic, up-close and personal look at a situation in which many social determinants of health were at play. And the last person said the only thing that could make it better would be to make it interactive. Well, we really took that one to heart, because we're really interested in the AI integrations and developing that.

This is another case, and in the interest of time I'm not going to be able to share this video with you, but it is actually an embedded participant scenario. What you see are some nursing students in the background, working with an infant that was just born. In comes, via hologram, a father who missed the birth; he just came back from deployment, and he's very agitated, very upset, very concerned. All of a sudden he's interrupting the nurses, and they now have to find a way to interact with him. So this is training in being able to communicate in those situations.

I also want to highlight some ongoing partnership work. We have been fortunate to work closely with the National VA Simulation Center, SimLearn, now referred to as SimVet, on a variety of different integrations within the VA system: live beam scenarios, simulation-based tools, and evaluating the implementation of the technology as a clinical extender, in contexts such as wound care and rural health, women's health, social work, and counseling. One of our very first examples of partnership with the VA is actually a live beam-in. This is one of the VA physicians, Dr. Scott Wiltz, who beamed in, actually, from his vacation home. We had planned this only a week out and didn't realize he was going to be on vacation. So we packed him up with a tripod, an iPhone, and a white backdrop, and he set it up on his own, without any tech support or IT services, just me and somebody else from our team on the phone with him. We were able to have him beam in for a lecture, very seamlessly, and that was our first time doing it successfully with just an iPhone rather than all of the other heavy technology.

But what we also did, as you see pictured here, is work with a patient with TBI and one of our doctoral physical therapy students, who was doing a neurologic assessment. Dr. Wiltz was able to become the remote medical monitor. So now think about the application in health care and specialty care: not having to be in the physical location, but bridging or eliminating that barrier, that gap, to specialty care. He was not able to touch the patient, but he was able to interact, and he was also able to provide feedback to our student learner.

So all of this looks really fun and technology-forward; it's very eye-catching. One of the things we learned very quickly as we went down this path is that we needed to understand a little more about where and why this is a differentiator. We started to realize that our students' engagement was different than it had been with traditional training, and so we wanted to study the learning science. I'm very fortunate that I was able to collaborate with Dr. Azevedo, who's going to expand a little more on what you're actually looking at here.
But this is a way of instrumenting students, instrumenting the learner, to assess what they are experiencing as they engage in this holographic simulation. Basically, what you're seeing is an individual who's instrumented with eye tracking and has a strip of sensors on their wrist and some around the neck, which are multimodal metacognitive sensors, and we're able to see what they see by tracking that. When you saw that Parkinson's patient, what you probably noticed, for those of you sitting close up, is that there was a hand tremor, and when the physical therapist came into view and started to do an assessment of fall risk, as the physical task, and maybe the cognitive load, got harder, the hand tremor increased. All we know when we look at our students, or when I look at you, is that you were looking at the box. I have no idea if you missed that teachable moment of looking at how far apart the patient's feet were, because you might still have been looking at the hand tremor, or maybe at an expression on their face. So this allows us to track that. I don't know if you want to add more to it. Do you want to switch now? Okay, yeah.

And I'll provide some more details on the actual processes we're looking at in terms of clinical decision-making and cognitive load and whether they induce additional errors. Within that framework, as Dr. Hoffman is saying, we are trying to understand basically how these new technologies impact clinical decision-making. This is just giving you an example of the eye tracking for a student who saw a patient with Huntington's disease: not only everywhere they looked, but a heat map. The last thing you see in that picture is also our integration of clinical data: we're starting to integrate imaging, we're starting to integrate AI conversations, and, as you saw, some of the translation features.

Just a couple more minutes and we're going to switch over to look a little more at other technology and assessment, but I want to share what some of our students are saying: "It looks like someone's standing right in front of me. I'm able to see every little detail, see their clothing, see how they're moving." "So I can hear you. And can you look at the tip of my finger?" "It's life-size. It's not like a computer screen where I'm looking into this other world. It's like he's here in my room, in my world, and I'm able to experience more of what he's experiencing."

So, as I mentioned earlier, our ongoing and future work is to take all that we have learned from implementing this in the education space, where we've been preparing in our labs, and implement it in actual health care delivery. We are due to take delivery of our mobile health unit at the end of the summer, and our first deployment of what we call holo health care is scheduled to be implemented in conjunction with our mobile health unit program, so that we can outfit the mobile health unit with the technology and enable our physicians, who are not on the bus and not in some of the rural areas, to access those patients for specialty care. We see this as a way to enhance engagement with some of the high-demand, short-supply specialists, deliver preventative health care, and hopefully expedite treatment pathways. So I'm going to turn the rest of this over to Dr. Azevedo, and I have plenty of content if you want to see it and play around with this after the presentation today.
I'd be happy to share more with you.

Good afternoon, everyone, and thank you so much, Dr. Hoffman, Dr. Corretto, and the organizing committee for the invitation to be here. Today you're going to be listening to the other side of this, from a psychologist's perspective. Being in the School of Modeling, Simulation, and Training, what we're interested in, as psychologists, is how these technologies impact clinical decision-making. Do they induce additional cognitive load on the clinician? Does that mean they're going to make more clinical errors? What's going to happen to patient outcomes, et cetera? So this is more from a psychological perspective.

With any kind of new technology, we ask the big question, because there is a science of technology: a lot of companies build technology and implement it across different industries and fields without actually knowing anything about the psychology of learning, reasoning, thinking, and problem-solving. Clinical decision-making is extremely important, and as Dr. Hoffman is saying, and this is why this is such a wonderful collaboration, we study self-regulation, which is part of clinical decision-making. I'm interested in how clinicians, for example, use their perceptual processes, their cognitive processes, their metacognitive processes, their motivational processes, and also their social processes, whether human-to-human, clinician-to-participant or patient, et cetera. The question is how we collect that. Collectively, in the field of artificial intelligence in education, we call that multimodal data. So we're going to talk a little about that, and also leave you with, as Dr. Hoffman was saying, the question of what the future of AI is with devices such as the one you're seeing right here. Can we use NLP? How do we use explainable AI, generative AI, artificial agents? And we're also now kind of in the business, quote-unquote, of developing human digital twins, because we have envy of the engineers. Okay? So let's jump in a little bit.

Yes, as good psychologists, we have constructs. We're not going to go through the left-hand side, but as you can see on the right-hand side, there's a typical participant. This is the kind of instrumentation we collect when we can. That could be a clinician; it could be one of Dr. Hoffman's students; it could be a College of Medicine student; it could be a nurse or a nursing student, et cetera. We try to collect as much data as possible: what they say, what they're looking at, what their facial expressions indicate, whether they're confused or frustrated, their physiological sensors, and what they are doing with that particular system, in order to understand not just what happens before and after in terms of clinical diagnosis, but also what those processes are and how they temporally unfold during the clinical decision-making process.

So how is that applied? You obviously know about clinical decision-making, so we adopt a particular model, which in this case tends to be very generic. We don't have to go through that, right?
But what's really important is that, as we capture that multimodal data while clinicians, for example, are looking at Huntington's patients or Parkinson's patients, can we capture some of those major issues that we've known for decades impact clinical decision-making: diagnostic errors, biases, lack of adherence to clinical guidelines, failure to communicate with the patient, et cetera?

So back to this question and this figure that Dr. Hoffman presented. Here we have one of our clinicians, and we have a camera. We're trying to figure out the optimal setup. If you are the clinician using the hologram, what is the optimal positioning so we can capture facial expressions? Because your negative facial expressions could be indicative that this thing is inducing extraneous cognitive load. And if we know that, then we need to understand when it's happening and why it's happening, and then build AI to actually defuse that, whether it's the patient that becomes intelligent over time or an avatar embedded in Proto that could be an external scaffolder, just as you would scaffold a more junior clinician.

Eye tracking, right? We've got the eye tracker because I want to see not just what you're saying but what you are fixating on. If you've got a patient who has facial paralysis because of a stroke, are you actually fixating on the area of interest? If you are not, that's going to be a big problem, and if we don't have an eye tracker, we don't know that the clinician is potentially not looking at a relevant finding. If you're not looking at a relevant finding, you obviously know what happens in terms of clinical decision-making. (There's a minimal sketch of this fixation-on-area-of-interest idea below.)

We try to make this suite of instrumentation portable. Here's an example: we bring it over from our School of Modeling, Simulation, and Training to Dr. Hoffman's Innovation Center, just as we did yesterday with some of our participants. Or we can bring participants into the lab; we go into Modeling and Simulation, or the Nemours Children's Hospital. It doesn't really matter where we are; we are able to collect this data ubiquitously, and that's an advantage.

Here's another typical participant, and this is the type of data you see on the top right: screen recordings of what they were doing, so that if they're interacting with a hologram, we can capture what that looks like; their verbalizations in terms of diagnostic clinical decision-making; their physiology; their eye tracking. Sometimes we have self-report measures, and of course in health care you also collect checklists, pre and post and also during clinical interventions, environmental actions, and log file data. What are they doing with Proto at the millisecond level? Because then we can make inferences about their clinical decision-making. On the bottom, and this would take too long to cover fully, is basically what we do: the data is captured, each data channel is analyzed, and inferences are made based on psychological foundations, which obviously also have roots in clinical decision-making from different areas.
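To make the fixation-on-area-of-interest idea concrete, here is a minimal sketch. It assumes gaze samples arrive as (timestamp, x, y) tuples in screen pixels and that the relevant finding, say the paralyzed side of the face, has been outlined as a rectangle; it uses a simplified dispersion-threshold (I-DT style) fixation filter. Real studies rely on the eye tracker vendor's validated filters, and every name here is hypothetical.

```python
# Illustrative sketch: detect fixations from raw gaze samples and measure
# dwell time on an area of interest (AOI). Not a vendor SDK; all names here
# are hypothetical.
from typing import List, Tuple

Gaze = Tuple[float, float, float]        # (timestamp in seconds, x, y in px)
AOI = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)


def detect_fixations(samples: List[Gaze],
                     max_dispersion_px: float = 35.0,
                     min_duration_s: float = 0.1
                     ) -> List[Tuple[float, float, float]]:
    """Simplified dispersion-threshold (I-DT style) fixation detection.

    Returns (duration_s, centroid_x, centroid_y) for each fixation.
    """
    def emit(window: List[Gaze], out: list) -> None:
        # Keep the window as a fixation only if it lasted long enough.
        if window and window[-1][0] - window[0][0] >= min_duration_s:
            out.append((window[-1][0] - window[0][0],
                        sum(p[1] for p in window) / len(window),
                        sum(p[2] for p in window) / len(window)))

    fixations: List[Tuple[float, float, float]] = []
    window: List[Gaze] = []
    for s in samples:
        window.append(s)
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion_px:
            # The new sample broke the window apart: flush what came before.
            emit(window[:-1], fixations)
            window = [s]
    emit(window, fixations)
    return fixations


def dwell_time_in_aoi(fixations: List[Tuple[float, float, float]],
                      aoi: AOI) -> float:
    """Total fixation time (seconds) whose centroid falls inside the AOI."""
    x0, y0, x1, y1 = aoi
    return sum(d for d, x, y in fixations if x0 <= x <= x1 and y0 <= y <= y1)
```

If the dwell time on the paralyzed side of the face comes back near zero, the learner very likely missed the relevant finding, which is exactly the kind of inference the speakers describe feeding back to an AI scaffold.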
And then we run it through different statistical methods, AI techniques, et cetera, to basically understand the impact of these new technologies on clinical decision-making, and also adherence and acceptance in the workforce.

This slide shows more of what each data channel looks like, which I don't have time to talk about, but here is a little snippet of the types of data we can extract from each channel. The top is interaction files. If you look at the right-hand side, the first row, gaze, gives you an example of the types of metrics we're able to extract from a clinician's eye tracking data. If they're looking at Proto, the question is what fixations, gaze heat maps, et cetera, indicate to us, what inferences we can make so we can make the system more intelligent and more supportive for clinicians, and what that looks like. So that's just one example.

When we think about training and teaching, we're obviously interested in how this would impact not only clinical decision-making but also learning and development: how someone can develop clinical skills with this kind of environment, what that takes, and, of course, practicing. When we think about teaching, there's the modeling we do as experts: if you're an expert clinician, how much scaffolding do you articulate, do you reflect, what are those processes? And then the question becomes, just like in that picture, which is in Dr. Hoffman's Innovation Center: imagine that same teaching room instrumented with cameras, ambient sensors, and walls that can hear, feel, and think, so that they can be more sensitive to the clinician's needs in real time, in addition to what could be provided within Proto using artificial intelligence.

You can think about assembling a suite of AI tools that can accelerate or support clinical decision-making by using generative AI, for example. If I'm struggling to diagnose Huntington's disease cases, I could go through three, four, five cases; then the question becomes, what's going to be my next case, and who decides that? Is it the faculty, or could it be AI that is modeled on the faculty? Explainable AI: can it explain that my clinical decision-making is faulty in certain aspects, and why? Supervision: can it generate synthetic eye-tracking data in real time, data that no clinician has ever generated, to model an expert for me on a complex case: these are the clinical findings, and I'm going to show them to you literally by showing you an expert's heat map. So those are some of the things we're embarking on and looking at.

You can also think about this, and Dr. Hoffman talked about this, as a learning and training AI-based ecosystem. Here, for example, you have the little Protos; I've tried to exemplify this by having six. With each one, imagine you are teaching: you're getting the clinical decision-making that is happening at each individual level. But now imagine, with AI, these students could be synchronously working together in the same room.
They could be geographically distributed: somebody here, somebody in the U.K., Australia, India; it doesn't really make a difference. What is the curriculum, and how does the curriculum change as each of these patients learns from all these different individual learners, based on the interactions that are happening? It could be synchronous or asynchronous, and then how do we integrate this in terms of human and AI co-learning, where not only are those patients that you've seen learning themselves using AI, because they're being exposed to more and more clinicians, but at the same time the clinicians themselves, the humans, are co-evolving as they interact with these new technologies? I think there are a lot of interesting approaches we can take, and human-AI co-learning is one of the new areas that a lot of funding, especially from NSF, is being driven to.

And here's an example, just to give you one: what you're seeing is eye-tracking data. Those bubbles are from a medical student in one of our experiments; this shows you live data. They have five cases, which are in blue, of upper respiratory disorder. We know that they have decided to look at the chest X-ray; we know what they're doing on the chest X-ray, and also with the mouse. What you see on the bottom is not an EEG: on the bottom is the camera that is coding their facial expressions of emotions. That's the kind of data we've been piloting and working on with Dr. Hoffman and her team; just to give you a snippet, that's the kind of data we have.

And let me just get to this also, because Dr. Hoffman mentioned part of it. If you look in the middle, that's actually my post-doc, Megan, Dr. Wiedbusch, and we've actually created a digital twin of her. So, as Dr. Hoffman is saying, sorry, don't leave now. We're trying to figure out, imagine, patient compliance: imagine Megan was just diagnosed with prediabetes. If we have a replica of her, in terms of her medical records, the question is, can she interrogate her future self, talk to her future self: show me, in real time or near real time, what would happen to me six months from now, a year, five years, if I don't change my lifestyle, my eating, et cetera? So that's some of the other work. If she can actually talk to her future self using natural language processing, that would be wonderful. We've got plenty of other projects, but I think we'll leave it open to questions. Yeah. Thank you.

That last bit got my attention. If you can show somebody the negative consequences of their behaviors, can you show somebody who wants to lose weight or make positive changes what they could positively look like in a period of time? That would be one of the...

So in terms of compliance, yes, absolutely. That's something we can model, right? Because nobody actually tells you what you need to do, or what you're going to look like, or the health benefits. So for example, I'm turning 58; I go to the gym five times a week for an hour, and it's brutal, right?
So the question becomes, I would love to have a human digital twin where I can say: listen, I'm turning 58, I go five times a week, one hour each time, hardcore weight training and some cardio. As I get older, do I still have to go five times? What happens if I just go four times a week, or three, or do I have to change to something else? Do I need more cardio? So imagine being able not only to see the changes in yourself, but also to have an explanation in terms of the medical data, which is very often not translated for the patients themselves.

Yes. I love the presentation. By background, I'm an engineer in a prior life; now I'm an Occ Med resident about to graduate next month. So I love everything here. The question I have is: how cost-prohibitive is it to get this technology? And the second question I have, with eye tracking: has there been any thought about using it in operating rooms, for training surgical residents and so on, to improve operating skill in a shorter period of time?

Yes. I'll answer the second; Bari can answer the first. When I was at McGill, in the modeling and simulation center that belonged to the Faculty of Medicine, we were actually collecting eye tracking data on surgeons so that we could combine their eye tracking behavior with verbalizations of their explanations of what they were doing, and then use that with residents to see if we could accelerate cardiac surgery residency training. There's been some other work in other areas, such as pathology, where they've tried this, and a lot of work in the area of radiology, especially with chest X-rays. So I can share some of that data.

And I guess my other question: with the human digital twin concept, can you take it further to simulate actual surgery? Like how a given patient would look intraoperatively, so you can simulate the surgery mentally before you go to the actual OR?

Yeah. So, potentially, yes. And actually, if you see that slide in the middle, the top, that's Dr. Hoffman's BLISS, which is the … Oh. That space is a blended learning simulation suite, so it's not just the visual environment; it includes all of the stressors you may experience, so we can do sights, sounds, smells. We can dial up the simulation as needed.

Yeah. So imagine: that's representing an ICU, but imagine you had a high-fidelity mannequin, which they do, and several of our collaborators do, and you want to do a procedure. Imagine having a digital twin on that wall, which is a three-way projection wall; imagine having a digital twin of the mannequin, so that as you are coming up with a new surgical procedure, you can simulate it on the digital twin of the mannequin. And where that procedure may take several hours, can you accelerate that to near real time, or … So that's something we've been thinking about and discussing. Yeah. Thank you.

I think your first question was about whether it's cost-prohibitive. What we see with technology, interestingly, is that over time the price goes down. We were very early in, and our large Epic, the refrigerator-sized machine, at that time, and I can only share our experience with it, cost us about $65,000 to $70,000.
Now, a few years later, they are half the price and half the weight, so the physical load and the expense of moving one are also reduced. The tabletop version just came out about a year ago, and these cost us about $6,000. So I imagine, just like anything else, with time and replication driving down the cost of the technology, it will become even more affordable. We've also had a national engineering and architecture firm start working with us on the vision of what technology such as this does to facility design: how it alters the space needed, how you account for the front-end cost of putting in the technology, and the ROI that has. That goes far beyond what I'm trained to do, but it gives you a little sample of some of the ways people are thinking about cost in particular and looking at healthcare delivery systems.

Awesome. Thank you. Great presentation. I'm so excited about, basically, sci-fi coming to reality here on so many levels, with AI and this technology. I've got two comments and then a question. The first comment is rural health and mental health. I love that you're considering it for this, because in my experience, I was in a rural hospital with zero specialists; they would show up two to three days after something horrific had happened to the patient. It would be very helpful to get that kind of help in real time in many rural places. And with mental health, people are isolated, and I love the Howie Mandel example, because there are so many people who are suffering like that and just stay enclosed in their homes. So I'm just going to applaud that and say great work and considerations on that. The other thing is, when you monitor somebody so closely, there might be some uncomfortable feelings about that, thinking about whether they're competent enough or not. So I hope that gets taken into consideration when you are monitoring people.

In terms of this instrumentation? Yes, absolutely. Typically with any kind of technology, whether we're doing this kind of work or we're in a high school teaching kids about the human circulatory system with a game or a simulation, we always try to get rid of the novelty effect: oh, I've got this thing on, people are watching me, somebody's collecting data. Once we get through the novelty effect, it just becomes: I'm wearing a bracelet, and instead of an Apple Watch, it's an EDA bracelet.

And also some of the feeling, for older practitioners, that you're evaluating how they're performing their care; younger practitioners would be more open to this.

Yes, I have experienced that as a psychologist over the last 25 years. It's like: you're coming in here to analyze how I make decisions? Who are you?

And the last question, and you're probably not at the stage to consider all this, but would telemedicine rules of coverage and regulations apply to something like this?

We don't know what we don't know yet, but all of those regulatory and state regulatory considerations are on the table now. We've had a couple of years to think about this, and first and foremost, we started to evaluate in which cases it makes sense to use this technology and in which it doesn't.
Some of Roger's work is looking at what angle and what effect lighting has on different variations in skin tone, and where you can't see an accurate image. In what cases does it make sense to evaluate? For example, we've had orthopedic surgeons in to look at the technology and to imagine whether you could reduce the barrier of having to bring somebody into the office at post-op week one or week two, looking at wound care, or looking at a variety of skin conditions and those items. But we haven't had a chance, because the technology wasn't ready with HIPAA compliance to be delivered that way; we're very close to that now. So all of those other nuances are right at our fingertips, and we're going to be learning and struggling through them. We hope to report back next year on our trials and errors and the things we learned.

Thank you all very much. I appreciate it. Yeah, you've got my head spinning too.

Hi, I'm Rikard Mohan, the founder and owner of Zella, where we provide international medical clinics in locations around the world that are as remote as possible, so you can see how I'm getting excited about this. At the moment we use video consultations in all these parts of the world, and over-the-shoulder coaching, but obviously there's nothing in comparison to this. One simple question, not related, and then one that is related: can you do EMDR, eye movement desensitization and reprocessing therapy for PTSD, through this process?

Yeah, sorry, I'm not familiar with that.

Okay, sorry. But yeah, thank you. Then a core question for me: if we were to use these in our remote sites, in the middle of the desert in Africa, the middle of nowhere, from a logistics point of view, and obviously a connection point of view, is that something that's going to be viable?

You know, the fun thing about talking about this technology in front of new groups is that we learn new applications, and that's something I think we would love to talk to you about afterward and give a try. We don't know until we try it out and see what the barriers are.

You just answered my next question, so I'll chat with you after. It's really, really exciting. We're quite new to this; we do over-the-shoulder coaching, where we may have experts in the UK with a paramedic in the field, actually talking through the video: how to do this, how to do that. And this is a much better version of that. So we need to chat. Yeah, absolutely.

Okay. Excellent discussion and talk. Thank you. One of the themes as we have gone through this conference, with many sessions about various technologies in healthcare, has been that sometimes this can take on a life of its own. So could you speak to the technology you're using, maybe some of the considerations or pitfalls you're being mindful of, and how you're guiding the process forward, to the degree that you're willing to share?

Yeah. One of the concerns we've talked about is that you are seeing a volumetric, three-dimensional human. So the question is, can we come up with a haptic device that will allow you, the clinician, to palpate and physically examine? And if you can't, what are the limitations? Back to the question you asked about eye tracking: a portable eye tracker is still very expensive; we're talking at least $10,000 to $15,000.
So if you do not have access to all the instrumentation because you have multiple sites, what is the bare minimum data you need to collect? For example, recognition of the clinician's facial expressions would be okay. Verbalizations we can now handle with ChatGPT-4o, if we have a large language model of medical and clinical decision-making. So there might be some data, for example the eye tracking data, which is very revealing, that we may not be able to collect, but we can still work on those types of technologies. Right now, those are two of the things that come to mind. Dr. Hoffman?

Yeah. I think my answer may be a little more basic; I could probably spend a couple of hours talking about what we've learned in our process over the past few years. The technology itself was out, but it wasn't shelf-ready for healthcare. So we brought it into our innovation center to tinker with it, to figure out where it makes sense, and then to translate that; for us, one of the first spaces was education. At the very beginning, just taking a new technology that isn't commercial, isn't readily available, and doesn't yet have a variety of use cases and validation data behind it, and bringing it into a large university system with its IT and regulatory requirements, was probably a good six to eight months of process alone. I imagine some of you in large health systems might experience something similar: going through our vendor risk management process, then through the evaluation of the risk of the technology on the enterprise server, those sorts of things. And then the logistics and the technical aspects: having a consistent signal, and when you need it to be live, making sure you don't have any interruptions; the plugs and the electrical components; all the things about moving it that you have to consider. So I would say going through that process.

The next phase was our "how we do it" phase. There wasn't a guidebook that says step one, step two, so we were creating that step by step. How do we create content? What is the best angle? One of the limitations we've learned: you see the image, but you don't see the back of the person, whereas with other technologies you can get that 360-degree view. So we looked at some of those limitations and how to overcome them. I could go through a lot of different categories and share our processes. And then there's the perspective of how you scale the demand: for us, it was an n of 1 getting started, and then the multiplier effect as more and more faculty and clinical champions want to do this. How do we work at scale? That's another process we had to develop.

Wonderful. Well, we want to give Dr. Azevedo and Dr. Hoffman a warm thank-you for your excellent presentation. You touched on our conference theme of innovation, collaboration, and empowering OEM. Thank you.
Video Summary
The speakers discussed the innovative use of holoportation and hologram technology in healthcare education and potentially healthcare delivery. They emphasized the potential applications in rural healthcare, mental health, and remote locations. They considered the logistics and cost of implementing the technology, highlighting the need for connectivity and regulatory compliance. The speakers also mentioned the various data collection methods such as eye tracking and facial recognition for evaluating clinical decision-making. As the technology evolves, they are exploring potential applications in telemedicine, surgical training, and personalized remote healthcare interventions. The speakers also shared insights into the challenges and opportunities in scaling up the use of this technology and navigating the complexities of integrating it into healthcare systems.
Keywords
holoportation
hologram technology
healthcare education
healthcare delivery
rural healthcare
mental health
remote locations
data collection methods
telemedicine
surgical training