Webinar Recording: Teaching with VR and AR
Video Transcription
Good morning and welcome to today's webinar, Teaching with VR and AR. My name is Heather Hodge and I'm with the American College of Occupational and Environmental Medicine. There are two features available to communicate with the panelists and other attendees. You may post general messages in the chat feature. Messages can be shared with either the panelists or all participants. There is a drop-down box to select who you want to share your message with. Go ahead and give it a try by introducing yourselves to the panelists and attendees. Let us know your role and where you are from. Questions, on the other hand, should be submitted in the Q&A box. Panelists are monitoring this box for questions, so please be sure to post all questions here and not in the chat box. If you're not familiar with ACOEM, we are a membership organization that promotes the health and safety of workers, workplaces, and environments through education, research, development of public policy, and advancing the field of occupational health. Before we get started, just a reminder that we are recording today's session and we'll email the link to the recording to all registrants. We are delighted to have Dr. Ryan Rivera with us as faculty today. Dr. Rivera is assistant medical director of the Stanford Emergency Department and a clinical assistant professor at Stanford Medical School, where he teaches health system science and is developing their digital health program. He has been heavily involved in health technology, having worked at Google overseeing all health search products, and currently is founder and CEO of SimX, one of the first and largest providers of VR medical simulation software. He has also worked extensively in healthcare policy, sitting currently on the CMA's delegation to the AMA and on the ACEP section council. He is a former board member of the American Medical Association, the California Medical Association, and CALPAC. He has previously worked for CMS, AHRQ, and the FDA.
He got his B.S. in business management from Brigham Young University, his M.D. from the UC Davis School of Medicine, and his M.P.H. in healthcare policy and management from Harvard. We are glad that you're able to join us today, Dr. Rivera, and we're looking forward to another fantastic webinar. I will turn it over to you now. All right, thanks so much for that introduction. I'm going to go ahead and share my screen here. We're going to cover a lot of ground today. What I'd like to do is provide those of you on this webinar with a general overview of VR and AR technologies, just to give you a sense of what they are and what they're capable of in their current state. Then we're going to delve more specifically into how those technologies are being used right now in education, in medical education specifically. We're going to go through a few use cases for that. Then we also want to talk about where this technology is going. If you're here on the call and this is your first introduction to VR and AR, frankly, by the time you start adopting something four or five months from now and really using it a year from now, it's already going to be a little bit different. It's a space that's advancing very quickly. We want to give you a sense of what's coming up in the near future too and what the capabilities are going to be. Of course, I'm happy to answer questions not just at the end but along the way. If you have questions about anything that we're talking about, feel free to pop those into the Q&A box and we'll address them as we go. Before we talk about this futuristic technology, let's talk a little bit about the past. If you were trained to be a doctor or a nurse or even an EMT way back in, I was going to say the 60s, but it looks like this is from the 30s, then this is what medical education traditionally looked like.
You'd spend a lot of time sitting in the classroom, hearing lectures, reading textbooks, and then you would practice your skills on some type of mannequin. Then you would really get most of your education, most of your learning, on real patients. Now, of course, it's completely different. Sure, you still have the lecture component and we still have the mannequins as well. You do get a lot of your training still on real patients, but everything's in color and the mannequins are nicer. There's that. It really is kind of remarkable how similar medical education is now to what it was 50 years ago, especially when you compare that to the fact that we have supercomputers in our pockets. You've got self-driving cars driving down the street. You've got robots doing backflips here in 2021, and yet we are still training people to save lives in the same way that we have been training them for decades. Is VR going to solve all that? Sure, I guess. I'm a bit of an optimist, but I think it is actually going to solve a lot of the problems that we're going to raise here in the next few minutes with traditional medical education. Is it also going to lead to some kind of post-apocalyptic future where everyone's disconnected from each other? Possibly, but that's a little bit beyond the scope of this presentation. I think those are interesting topics, actually, but we'll have to save those for another time. Now, as far as disclosures go, this was mentioned in my introduction. I am the founder and CEO of SimX. We are a commercial entity that partners with academic centers to make VR medical training software. I think that is important to disclose, though this is obviously not a sales pitch; I'm not here to sell you on SimX specifically. I think it is also important background in that it is true that I am an optimist in this space, so much so that I have devoted a fair chunk of my life to this type of training.
I think that's important context for all of you, many of whom I presume are educators yourselves, going into this: to realize that I'm definitely on the more optimistic side of how this technology is going to be used. I guess also by way of background, I generally believe very strongly in academic and industry partnerships. I don't necessarily see this as, oh, you're either on the industry side or you're on the academic side. I think when you're talking about brand new technology, almost always there is tremendous information sharing that goes on between those who are leading in academia and those who are leading in industry. There are very few people who are really developing comprehensive training materials in VR, and so you can't afford not to collaborate and work together so you can build off of each other. I guess one other important piece of background here is that I am an academic in that I work at an academic medical center and I do teach, but when it comes to educational philosophy, I'm not necessarily an academic. This is probably better framed as notes from the front line. I don't have a master's in educational theory. Prior to founding SimX, I mostly did quality improvement and patient safety work, and I am approaching virtual reality sim from that perspective, not from the hyperacademic lens of rapid cycle deliberate practice and other new methodologies in simulation. For better or for worse, my perspective on this is as an implementer of this technology more so than as someone who studies the educational philosophy behind it. Like I said, what actually brought me to this originally was problems like this. I worked for CMS and AHRQ developing packages and metrics to try to make care safer, some of which became the foundations of things like MACRA and MIPS.
I think what I realized eventually is that those types of programs are valuable, but they are never going to lead to the kind of very large-scale changes in safety that we need in our healthcare system. Now, these numbers that you see here are probably some of the most alarming ways to present the data. In truth, preventable deaths are very hard to calculate accurately in our healthcare system, as I'm sure many of you know. But there are far, far, far too many, and what we need to do is decrease them not by 15%, but by 75% or 90%. When you look at industries that have done this successfully, like the aviation industry, the way that they did it was not through giving pilots trackable metrics and that sort of thing. It was through simulation, and it's through simulation that was very frequent, but also very accurate, right? Their cockpits are a very accurate representation of their real practice environment, and that is key, because then all those thousands of hours of simulation translate very directly into their actual practice. I think one of the big limitations that we have in healthcare is that when we do simulation training, we have this, right? We have these kind of large CPR dummies, basically, but with a little bit of robotic capability. I don't know how many of you have experience using these. I think this is the SimMan 3G from Laerdal; it's probably the most popular mannequin in this space. You get pretty similar capabilities from the top-end products from CAE or Gaumard or any of the other big mannequin manufacturers. Really, they're pretty limited. This mannequin can't have any traumatic injuries. It can't have a stroke. It can't really have a rash. It can shake a little bit to tell you that it's seizing. It's got a speaker in its mouth, so you can talk into a microphone in another room and it comes out of the speaker in its face.
It kind of seems like they're talking to you. These things are $50,000 to $250,000 each, which I think a lot of people aren't aware of. They're incredibly expensive, but it's not just the cost of the mannequins themselves. In order to do simulation training in healthcare, you have to build at least a simulation room, and very often a multimillion-dollar simulation center, around it. It's a physical mannequin. It weighs 150 pounds. You're not going to be hauling this thing around. To get good sim, you need some sense of the environment, so you need to build the tools around it. You need to practice in a room that kind of mimics your actual clinical practice. All of that has turned simulation into an event that's very rare. Here at Stanford, for example, we have a multimillion-dollar sim center. Functionally, we have three rooms that you can use, and very often a two- to three-month wait to use those rooms. Despite millions of dollars invested in building our sim capabilities, sim is not something you do frequently, and when you do it, the realism very often isn't there. Then, looking at VR, I'm just going to show some clips; let me pull this closer. These are primarily clips of our product, because that's what I have easy access to, but we're going to go through a bunch of other products in this space as well. Just to give you a sense of what the general concept is here: there are huge advantages in terms of the realism of what can be portrayed in your patient. They can be a baby or a grandmother. They can be vomiting. They can be missing limbs. You can have four or five of them at once, and they can all have different physiology and learning goals. Then, you can have any environment. This environment is obviously a traditional hospital room. As we go through the rest of this clip, you'll see some environments where you are out in front of a burning car or resuscitating somebody on the ground. There's that one right there.
We have cases where you're in a moving transport helicopter, cases where you're in an OR or an ED. It's not just about being able to replicate the walls of that environment and the background of that environment, but also the psychosocial aspects of that environment. We have cases where you are resuscitating a child, and the parents are there, and they're crying. You have to explain to them what's happening while you resuscitate their baby. That is real life if you are a PICU doc or a NICU doc. In fact, that's the hard part of being that kind of doc. It's not just remembering your algorithms. It's remembering your algorithms while you're managing those kinds of complex psychosocial situations. Then, not just the environment, but you can replicate any tool. This case that you're looking at here is a transvenous pacemaker placement case. It's like a 45-step procedure. Actually, even just getting a TV pacemaker kit to practice with can be really tricky. You use it up, and then what are you going to do? The fact that you can have all of these tools and just use them over and over again, that you can have 10 people do it at once and do it every day without having to worry about where you're going to get the equipment, is really powerful and is a huge advantage of VR as well. This is a home health case. Again, just emphasizing that you can really replicate environments very well in VR, which is a huge benefit. This, speaking of, is actually a neonatal ICU case. It's got a functioning ventilator back there. That's a functioning transcutaneous pacemaker. They've got a Pyxis over there. They've got a bunch of syringe pumps that they can interact with. Again, the fact that you can replicate all of these tools very easily in virtual and synthetic environments is a huge boon. But there are a lot of other benefits too. This middle point is really, I think, where VR shines the most.
The fact that when you are using these all-in-one headsets, these next-generation headsets, which we'll talk about in a minute, the Oculus Quest 2 or the Vive Focus 3 or the Pico, these things set up in less than five minutes, and they set up anywhere. You don't have to plug them into a computer. You don't need external sensors. You can literally carry enough equipment in a backpack for a four- or five-person multiplayer simulation, and you can run it in an empty classroom. You can run it in a parking lot. You can run it in the middle of the woods. That flexibility in how you implement simulation is a huge benefit, and then it reduces costs, right? Just a few years ago, you would have had to pay $6,000 or $7,000 for all the equipment to run a really high-quality virtual reality training program. Now you can get headsets for under $400. And in either case, compared to a mannequin, the cost savings are tremendous, but especially now. You're talking about a drastic cost savings compared to investing in physical simulation. And then when you're talking about VR, there are huge benefits in tracking. Very obviously, it's pretty easy to track: did somebody actually listen with a stethoscope? Did they actually do their cranial nerve exam? But there's some huge potential here, which we'll talk about as we go through this presentation, in being able to track things like eye movements. So we can say, hey, you just finished that sim, and you were only looking at your patient 40% of the time that you were talking to them; the standard for your level of training is 75%, so what's going on? I think there's huge potential when it comes to virtual reality and our ability to track performance and feed that back to learners. And then information sharing is another huge benefit.
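The eye-movement feedback just mentioned (time spent looking at the patient versus a benchmark) boils down to a simple computation over gaze samples. Here is a hedged sketch of that idea; the sample format, the target labels, and the benchmark numbers are illustrative assumptions, not any real headset's API.

```python
# Hypothetical sketch of the gaze metric described above: given a stream
# of eye-tracking samples from a VR headset, compute what fraction of
# elapsed time the learner's gaze was on the patient. Sample format and
# labels are illustrative assumptions, not any real headset SDK.

def gaze_on_patient_fraction(samples):
    """samples: time-ordered list of (timestamp_seconds, target_label)
    pairs. Returns the fraction of elapsed time spent looking at the
    'patient' target (0.0 if there are fewer than two samples)."""
    if len(samples) < 2:
        return 0.0
    on_patient = total = 0.0
    # Each interval is attributed to the gaze target at its start.
    for (t0, target), (t1, _) in zip(samples, samples[1:]):
        dt = t1 - t0
        total += dt
        if target == "patient":
            on_patient += dt
    return on_patient / total if total else 0.0

# Ten one-second samples, the first four looking at the patient
samples = [(i, "patient" if i < 4 else "monitor") for i in range(10)]
frac = gaze_on_patient_fraction(samples)
print(f"Gaze on patient: {frac:.0%}")
```

A real debriefing tool would compare this fraction against a per-level benchmark, as in the 40% versus 75% example from the talk.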
Right now in physical sim, you basically have a bunch of simulation professionals all across the country, all building very similar training modules that they're deploying to their students, because there's not really a good way to share information. You can go to a conference and share PDFs with each other or share ideas through presentations. But with virtual reality and synthetic simulation environments, it's very easy to build something and then just share it across the world. So at the outset of the COVID-19 pandemic, we built a couple of sims on COVID triage and inpatient COVID management, and we were able to distribute them to 400 institutions within a couple of weeks. That ability to build something and then share it across the world is, I think, also a huge benefit of virtual reality. And then finally, one of the other benefits is just that it does appear to be better. If anyone's interested, we can send out the lit review that we have here. There is more and more evidence coming out every month that VR sim, when done well, leads to better learning retention and better skills translation than traditional simulation methods. And it makes sense, right? Just like that aviation example we talked about: the better your training environment matches up with your actual practice environment, the better those skills are going to translate. And then the last benefit here, at least that I will mention, is that there are things you can replicate in VR that you can't in traditional sim. I don't know if any of you have ever tried to participate in a physical mass casualty sim. A lot of hospitals will do these, and you'll do it out in the parking lot. You'll need 40 volunteers to run the day, and they'll do some kind of half-hearted acting about their medical problems, and you'll go around and tag them.
And it is an incredibly resource-intensive process that you can't really do more than once a year, and still there's a lot of suspension of disbelief involved. Versus with virtual reality, the ability to realistically replicate some of these environments and patient presentations is tremendous. There are things that you can simulate that you just can't otherwise, even something as simple as neurologic findings: a mannequin can't do that, and an actor can't really either. The ability to incorporate those is tremendous. Now, I started by getting you hyped, hopefully, about the potential for virtual reality in medical training. Let's now pause a bit, talk about some definitions, and then go through each of these XR technologies, as we refer to them, in a little bit of a systematic way, looking at their strengths and weaknesses in a little bit more detail. I apologize if this is very fundamental; if some of you have experience with this technology, this might seem a little simplistic, but I think it is useful for everyone to be on the same page about the terminology here. So let's talk about VR versus AR versus MR, or mixed reality. When you're talking about virtual reality, we're talking about when the entire environment is virtual. You are completely enclosed in a virtual environment. That's that video you saw, with somebody walking around inside of a virtual environment. There's no component of the real world that you are seeing. That's what we're talking about when we're talking about virtual reality. Then there's augmented reality. The terminology here is still settling out around where the lines are between augmented and mixed reality, but I think probably the simplest way to think of it, and probably how it will settle out, is that augmented reality is when the real world is the star of the show.
This example here shows somebody who's looking at some food and getting some nutritional information popping up off of it. Or, if you saw the old Google Glass ads, somebody is looking down a street and they see the Yelp reviews popping off the restaurants. That's augmented reality: the real world is the star of the show, and you're using virtual content just to augment it. And then there's mixed reality, where the virtual content and the real world are intermixed. This is actually like Pokémon Go, if you're familiar with that: you hold up your phone and you see Pikachu there, and Pikachu appears to be in the real world with you. Even though it's often referred to as an AR app, that is probably more properly referred to as a mixed reality experience. And all of these things together are sometimes called XR; that just refers to any kind of combination of the virtual world and the real world as an XR technology. Now, let's delve into each one of these a little bit. When we're talking about virtual reality, this is certainly the most developed of these spaces and has been around the longest. So there's a plethora of various types of VR headsets out there, and they can basically be divided into two different categories that are very distinct in their use cases and capabilities: you've got your 3DOF headsets and you've got your 6DOF headsets. DOF stands for degrees of freedom. A 3DOF headset will track your head movements: looking left and right, up and down, tilting side to side. But what it doesn't track is your position: forward, back, side to side. So if you take a step forward, it can't tell that you're taking a step forward; it can only tell how you're tilting your head, if that makes sense.
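To make the degrees-of-freedom idea concrete, here is a minimal sketch of the two pose types: a 3DOF pose carries orientation only, while a 6DOF pose adds position in 3D space. The field names are illustrative, not any specific headset SDK.

```python
# Minimal sketch of the 3DOF vs 6DOF distinction. A 3DOF headset reports
# only head orientation (yaw/pitch/roll); a 6DOF headset adds position
# in 3D space. Field names are illustrative, not a real headset SDK.

from dataclasses import dataclass

@dataclass
class Pose3DOF:
    yaw: float    # turning the head left/right, in degrees
    pitch: float  # tilting up/down, in degrees
    roll: float   # tilting side to side, in degrees

@dataclass
class Pose6DOF(Pose3DOF):
    x: float = 0.0  # meters stepped left/right
    y: float = 0.0  # meters crouched/stood
    z: float = 0.0  # meters stepped forward/back

# A 3DOF device can report that you turned your head 90 degrees left...
looking_left = Pose3DOF(yaw=-90.0, pitch=0.0, roll=0.0)

# ...but only a 6DOF device can also report that you stepped half a
# meter forward while doing so.
stepped_forward = Pose6DOF(yaw=-90.0, pitch=0.0, roll=0.0, z=0.5)
print(stepped_forward)
```

The inheritance mirrors the relationship in the talk: six degrees of freedom are the original three rotational ones plus three translational ones.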
These are the experiences that many of you may have tried where you put your phone into a little piece of cardboard or a little plastic headset and you can watch 360 videos through it. I think there are a lot of good use cases for this, but it is very different from the 6DOF headsets. The 6DOF experiences, this is your Oculus Rift, this is your HTC Vive, this is your PlayStation VR, are tracking not just your head movements, but your movement in 3D space. So again, that video that we showed you, that was a trainee walking around inside these environments, and it's tracking not just their head movements, but their body movements and their hand movements with millimeter precision. Those come in a couple of different varieties. There are wired headsets that need to be connected to a gaming computer, and very often those have external sensors as well that you need to set up, tracking you from the outside in. Then there are the newer all-in-one headsets, like the Oculus Quest and the Focus 3, that have cameras built into them tracking from the inside out. They basically map the world around you, and then as you move your head and your body, they look at relative changes in this mapped world to determine how you are moving. That's how they track your position in 3D space. So you've got your 3DOF headsets, and you've got your 6DOF headsets. Now, talking about some of the use cases for VR in medical education right now: one of the first use cases I think we saw is what you see in the upper left. These were virtual reality task trainers for surgical procedures. And, to be honest, there's probably not a ton of value here. As an educator, I think something like this is kind of a whiz-bang, cool thing.
But this person in this picture is doing a laparoscopic procedure, which in real life is done with a flat screen. Surgeons don't put on virtual reality headsets, and they're not looking around their laparoscopic video content, when they're actually doing the procedure. So this is one of those situations where I think the VR takes you a step back away from the realism of the simulation. But still, this was one of the first use cases, I think, that we saw. In the upper middle is another very common VR use case: VR anatomy. This is something where, if you have a VR headset, you can download some pretty well-developed commercial products in this space. And it's really cool. I think it actually does capture a lot of the value that you get out of cadaver anatomy training, because you still get that sense of space. You still get to physically see how things connect. You can very often peel back layers of muscle and see how things are stacked in 3D space within the human body. So it gives you a lot of the same benefits that drive our utilization of cadavers for anatomy training, but in something that you can do at home. I think there's actually a lot of power in using VR for anatomy training. And then in the upper right, this is a representation of the concept of 360 video, which, compared to some of these other six-degrees-of-freedom experiences, might seem relatively unimpressive. But actually, 360 video, I think, has a tremendous place in education as well. For example, you might know that when surgeons are learning a new procedure, it is relatively common for them to literally fly to other parts of the world where those procedures are being performed and then stand in the operating room and just watch. Because, obviously, if you're an experienced surgeon, what you need is not learning how to make the cuts or learning the tools.
What you need to learn is just the steps of the procedure, and you can get that by watching. And so in some places, especially areas that are resource constrained, they are replacing that with 360 videos of the operating room. It has the added benefit that you don't have to worry about the sterile field. You can really peer over what they are doing, you can very often view it from a few different perspectives, and you can learn about new procedures that way. And that actually works. 360 video is also really valuable for patient education. At the Stanford Children's Hospital, we use this for patients. Kids that are going to go in for surgery, you can put a headset on them. They can see what the OR is like. They can see what the ICU is like, and all the machines and all the colors and things like that. That way, when they are then taken to the OR and when they wake up in the ICU, that's not the first time they've seen those things. And that can actually be really valuable and really powerful. Then in the bottom left, this picture is designed to represent the concept of using VR for simulation training, which I already talked about extensively. That's what my company does. This is not a picture of my company's product; this is VR Patients, which is also in this space. I think that's a great use case for VR as well. And then in the bottom right, this is an interesting company that I have no particular affiliation with, Catechist. They are using VR in a relatively unique way where you put on a headset as an instructor, you walk around inside of a synthetic environment, and you record yourself demonstrating procedures or walking through an H&P or something like that. Then trainees can put on the headset at a different time and basically play that experience back like a video. So they see you walking around in 3D space.
They can also walk around in that 3D space, so they can look over your virtual shoulder as you're doing an intubation, or they can watch while you take a history from the patient. I think that's an interesting concept because it doesn't really have an analog in traditional training, and I think we're going to see more and more like this. Very often, whenever you have a new technology, the first things implemented are like, oh, here's the standard version of this, and now we've implemented that same concept in this new technology. But then inevitably you start to develop concepts that are native to that technology and can't be translated outside of it. I think that's what Catechist represents: one of the first types of training that doesn't really have a great analog outside of VR. So, kind of interesting. And then here's another interesting use case for VR. In the interest of time, I might not watch this entire video, and I'm going to turn the sound off so I can talk over it, but VR is great for empathy training. This is Embodied Labs, which does empathy training for helping people understand geriatric care. So, for example, you get to be Alfred. Alfred here, and I'm not an eye expert, but I think it's macular degeneration that causes this type of visual impairment where the middle of your vision is gone. You get to be this elderly patient in kind of a confusing family situation with your macular degeneration and experience what that's like. Later on here, you get to be an elderly patient taking a trip to the doctor's office, and you're sitting there expected to fill out a bunch of forms.
The doctor is asking you questions and they kind of intentionally, every once in a while, like garble the physician's speech to you so that you have to struggle to kind of understand what's being asked and you have to ask them to repeat themselves. And so I think really interesting concept. I've seen people use this for empathy training for like the emergency department as well. So you get to experience what it is like to be a patient sitting in a bed in the hallway while you hear kind of, you know, nurses talking negatively about you in the other room. You know, the doctor comes and talks to you and is there for 30 seconds and then gets pulled away by a call. And so you get to kind of put yourself in somebody else's shoes. And VR, I think, is a great way to do that, both within medicine and outside of it, actually. This has been used a lot outside of medical training to help people understand like different socioeconomic situations and things like that. And again, feel free to any questions that you have about any of these technologies, either the tech itself or the use cases, feel free to pop those in the Q&A and we can address them as we go. But let's move on into AR. So AR, definitely a bit of a less developed space. And remember, we're defining AR here as a situation where the real world is the star of the show. Sometimes it is loosely used to refer to MR as well, but we'll address that separately. And so when you talk about true AR, there's really not that many glasses on the market. There's a Vuzix, which I think is the one here in the bottom left. Google Glass is no longer on the market. There's rumors that Apple and Amazon will make glasses. So these are not real products. These are just potential future products. But there really aren't that many tools. Now, a lot of the most common tools for AR are phones, right? Because really all you need is a screen and a camera to do AR technology. 
So as far as things that are actually in use in the AR space, this bottom left is probably the most common. As you can see, this is somebody holding a phone up to a textbook, and the textbook is being augmented with 3D content that helps you better understand the diagrams. That's true AR, and it's a cool use case. Here in the upper left, what you're seeing is something that people have been doing now for some time, but to me the utility is not necessarily super clear: using AR glasses to provide surgeons in the operating room with real-time information about the surgery that they are participating in, which is education in the sense that it's real-time information being provided. So instead of having a screen there and having a scrub tech scrolling through the CT images for you, you can just have it up in the air and use hand motions to navigate it. There's probably some value there, but really, how much value is there over just having another human being scroll through a CT for you? Probably not a ton. Here in the upper right, AR has found a lot of use in just-in-time education outside of medicine, for things like equipment repair. You can imagine it's relatively easy, when you have a fixed device, to program into your headset the capability to recognize its various features. Then it can tell you, hey, unscrew this panel. OK, now that you've done that, you see these three wires; this wire is the one that you need to check, so pop that out. You do that, and then it tells you the next step after that. That is actually in use in manufacturing industries and in engine and equipment repair.
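The just-in-time guidance flow described above is essentially a fixed step sequence advanced by recognized events. Here is a minimal sketch of that control logic, with hypothetical step names and events; a real system would drive the events from computer-vision recognition.

```python
# Hedged sketch of the just-in-time AR repair flow described above: a
# fixed sequence of steps, each advanced once the headset's recognizer
# confirms the expected event. Step names and events are hypothetical;
# a real system would produce them from computer vision.

REPAIR_STEPS = [
    ("panel_removed", "Unscrew this panel."),
    ("wire_located", "Of these three wires, check the red one."),
    ("wire_reseated", "Pop the red wire out and reseat it."),
]

def next_instruction(completed):
    """completed: set of recognized events so far. Returns the
    instruction for the first unfinished step, or None when the
    whole procedure is done."""
    for event, instruction in REPAIR_STEPS:
        if event not in completed:
            return instruction
    return None

print(next_instruction(set()))              # start of the procedure
print(next_instruction({"panel_removed"}))  # after the panel is off
```

This is also why the fixed-device case is tractable and the surgical case discussed next is not: here the recognizer only has to confirm a small set of known features on a known machine.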
And in the bottom right, this is, again, a mock-up and a representation of what that could look like on humans, because people have talked about basically translating that same use case to people, such that you could open someone up and it says, all right, here's where their gallbladder is, here's their liver, and here's where the mass is based on the CT data, so this is where you need to cut. Now, there are some significant barriers to that. Human beings are just a lot less homogeneous than machines, so being able to train an AI to recognize an arbitrary human liver, for example, is extremely challenging, and even recognizing the same person's liver under different circumstances is ridiculously challenging. So we're actually not very close to this. And I will say, as someone who runs a VR and AR company, people approach us and say, hey, can you do this? We'd love to be able to overlay our MRI content onto a human body in the OR so that you could cut. But the consequences are just much higher. If you are going to overlay virtual content and make decisions about where to cut based on it, you need a very high level of accuracy, and we're just not even close. I would say this is a use case that is 15 to 20 years away at best. And the limitation is not the headsets; it's the algorithms that drive the computer vision and the ability to recognize content. So then let's talk about MR, mixed reality. When you talk about things like the HoloLens or the Magic Leap, which you might have heard of, those, even though they're colloquially referred to as AR headsets, are more properly termed MR headsets. And then Windows has a whole series of Windows Mixed Reality headsets that fall into the same category. And remember, this is where the real world and the virtual world are intermixed.
And so there are some interesting use cases happening here. In the upper left, this is a product by CAE where you can have a pregnant mannequin and overlay the virtual fetus, and you can see the station as it progresses through birth, which is kind of cool. The bottom left is also actually a CAE product; I have no CAE affiliation, but they happen to be doing a lot of interesting work in this space. You can have a fake ultrasound probe and just an empty, cheap plastic mannequin, and it will recognize the probe and give you an ultrasound feedback image that is what you would see if you were ultrasounding a real person. So it's basically internally creating organs, figuring out what your ultrasound image would look like if that mannequin had organs, and presenting that to you on an ultrasound screen. And beyond that (unfortunately, I need to find a good video of this, because this one picture doesn't really give you a good sense), it also allows you to see into this fake patient's body. So you can see in 3D form what their anatomy actually looks like, and you can also see, all right, if I put my probe here and it's fanning out in this shape, that's why I see this ultrasound image. That's, I think, a really cool idea, because there's not really any other training methodology that allows you to do that: to combine a visual representation of the 3D anatomy of a person with what their 2D ultrasound image would be, and to see how your ultrasound beam actually fans through different organs and how that translates. So I think it's just an interesting example of how MR opens up ways of making connections between concepts that you can't really make in other training methods. The upper right is, I think, another good example of that.
So this is not a commercial product, but an academic project out of Europe, where they're projecting the circulatory system over cheap CPR mannequins. As you do chest compressions, you start to see the blood pressure building up, and then eventually you can see the perfusion of the brain, which doesn't really start until you're 10 or 15 seconds in. And that's an important concept when you're training CPR, right? This is why you want to spend as little time as possible off the chest: it takes a while to build up enough blood pressure that you're actually perfusing the brain. And so you can teach that through this MR technology in a way that's really cheap and easy. And then here in the bottom right, this is something that I think we will see in the future, which is using MR to do simulation training in real-world environments. So overlaying virtual patients onto real hospital beds, and then using real stethoscopes or real ultrasound machines on those virtual patients as if they were real. That, I think, combines a lot of the benefits of VR simulation, in terms of having all the patient physiology and demographic variety that you can get through virtual patients, while still having all the tactile elements of real in-person training. And so I think that in some ways is the holy grail of simulation, right? Being able to project virtual patients onto any empty space and just do simulation training anywhere. Well, we can skip that in the interest of time; I want to make sure we have time for questions. So, you know, it might seem like, well, if that's the holy grail of simulation training, why aren't we doing more of that? And this is maybe delving a little into the details of the technology, but there are some significant limitations to mixed reality hardware.
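As a quick aside on the CPR example above, the "perfusion doesn't start until 10 or 15 seconds in" dynamic can be put in toy numbers: pressure builds gradually while compressions continue and collapses quickly during any pause. The constants below are invented for illustration only, not physiological data:

```python
# Toy model (illustrative constants, not real physiology) of why
# hands-off time is so costly in CPR: perfusion pressure rises slowly
# toward a plateau during compressions and decays fast during pauses.

def simulate_perfusion(schedule, dt=0.5, build=0.04, decay=0.30,
                       threshold=0.6):
    """schedule: one bool per time step, True = compressing.
    Returns (pressure trace, time brain perfusion first starts)."""
    pressure, trace, first_perfused = 0.0, [], None
    for i, compressing in enumerate(schedule):
        if compressing:
            pressure += build * (1.0 - pressure)  # rises toward a plateau
        else:
            pressure -= decay * pressure          # falls off quickly
        trace.append(pressure)
        if first_perfused is None and pressure >= threshold:
            first_perfused = i * dt
    return trace, first_perfused

steps = int(60 / 0.5)                    # one minute of CPR
continuous = [True] * steps
# same minute, but with a 10 s pause starting at the 5 s mark
interrupted = [True] * 10 + [False] * 20 + [True] * (steps - 30)

_, t_cont = simulate_perfusion(continuous)
_, t_int = simulate_perfusion(interrupted)
print(t_cont)  # 11.0 -- about 11 s before perfusion, even uninterrupted
print(t_int)   # 26.0 -- one 10 s pause more than doubles that
```

With these made-up constants, a single 10-second pause pushes the onset of brain perfusion from about 11 seconds out past 25, which is exactly the intuition the MR overlay is trying to teach.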
My own company actually started in mixed reality, and a lot of our early experiments were in mixed reality. Then we hit up against some hard limitations in the hardware that made us pivot to virtual reality. The way these headsets work is that they have clear lenses in front of your field of vision, a screen built somewhere into the device, and a mirror that reflects that screen onto the glass in front of your eyes. So the virtual content is coming from the screen, the real-world content is coming through the glass, and the headset is merging those two in front of your eyes. That process has some significant limitations. One is that the field of view ends up being very small. I don't know if you've ever used a HoloLens or something like that, but you can only get virtual content over the center of your field of view. So in that example I showed, where you have a virtual patient lying on an empty hospital bed, what you would really see when looking at them is their virtual head and shoulders, and even though you can see where their chest and abdomen should be, you'll instead just see empty hospital bed, because the headset can't project virtual content over that area. And then as you pan down and look at their chest and abdomen, even though you can see the pillow, you no longer see their head, because it can't show you virtual content over that space. When you watch tech demos of this type of technology, they have ways of editing that out, so it appears as if the virtual content is very immersive. But in reality, it's like looking around a dark room with a flashlight.
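That "flashlight in a dark room" effect can be made concrete with a little angular geometry. Assuming a roughly 30-by-17-degree virtual-content window (an illustrative figure; actual headsets vary), virtual content renders only where it falls inside that window around the current gaze:

```python
# Toy check of the narrow virtual-content window on optical MR
# headsets. The window size is an assumed, illustrative figure.

H_FOV, V_FOV = 30.0, 17.0   # assumed virtual-content window, in degrees

def visible(gaze_yaw, gaze_pitch, obj_yaw, obj_pitch):
    """True if a point (angles in degrees from straight ahead) falls
    inside the virtual-content window for the current gaze direction."""
    return (abs(obj_yaw - gaze_yaw) <= H_FOV / 2
            and abs(obj_pitch - gaze_pitch) <= V_FOV / 2)

# Looking at the virtual patient's head (pitch 0): the head renders.
print(visible(0, 0, 0, 0))       # True
# The abdomen, 25 degrees below the gaze, falls outside the window.
print(visible(0, 0, 0, -25))     # False
# Pan down to the abdomen: now it renders, but the head disappears.
print(visible(0, -25, 0, -25))   # True
print(visible(0, -25, 0, 0))     # False
```

With a window that narrow, a life-size patient simply never fits entirely inside the rendered region, which is the head-or-abdomen trade-off described above.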
And then similarly, when you look at tech demos from HoloLens and others, they show very opaque virtual content in the real world, but that's not what it looks like when you actually put on a headset. As you can imagine, when the tech is based on reflecting a screen onto a piece of glass while the real world is also coming through that same piece of glass, you get basically 50% opacity on your virtual content. So everything is very ghosty, and if you're trying to look at detailed content, it is hard to really make out even the expressions on a virtual patient's face. And so I think, until somebody invents a different way to do mixed reality, those limitations are going to really impact the use cases and the adoption of MR in medical training. So what's next? What we've talked about so far is what people are doing now. But again, this technology is changing and growing so fast that if you are someone who is interested in this space, and interested in adopting or facilitating technology in this space, what you really need to know is what's going to happen in the next few years. And the first thing here: I think it's all about the all-in-one VR headsets now. I've got three different all-in-one VR headsets sitting on my desk right now. They're so small, and they are as powerful as wired headsets were just a couple of years ago, and they're only getting more powerful. So if you are working at an institution that's considering adopting VR technology, I would never recommend that anybody invest in a wired headset that needs to plug into a computer at this stage. It just doesn't make any sense: the wireless headsets are cheaper, the quality is certainly as good as you are going to need for any type of important medical application, and they're drastically easier to set up.
And added on to this, there's the emerging technology of what is called edge computing, where the actual computation and graphics crunching is done in the cloud, and just the visual content is delivered to the local device. You might have seen this in the gaming industry: Google has a product called Google Stadia, a service you can sign up for, where you can play really high-quality games that usually require a gaming computer, and you can play them on anything, a tablet or your phone, because it's the Google servers that are actually running the game and doing all the graphics processing, and just the visuals are being served up to your phone. So your phone doesn't need a powerful graphics processor. This technology, I think, is only going to accelerate the trend toward headsets that are very light on local processing. So, again, all the more reason not to get any type of plugged-in VR headset; it doesn't make any sense, because the wireless ones are going to be more powerful than you could ever need very soon. Other interesting technology that's coming soon: there are headsets being developed, not on the market right now but coming out probably within the next 12 months, that will have integrated eye tracking. These are just additional sensors inside the headset that are always looking at your eyes and telling the system where your eyes are looking within that space.
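Before going on, the edge-rendering split described above, pose up and rendered pixels down, can be sketched end to end. This is a toy stand-in to show the division of labor, not any vendor's actual streaming API:

```python
# Toy sketch of the edge-computing split: the headset sends only its
# pose; the heavy rendering happens remotely, and the device just
# decompresses and displays the returned frame. The "renderer" here
# is a trivial stand-in for real GPU work.

import zlib

def render_frame_on_server(pose):
    """Stand-in for the expensive cloud-side GPU work: produce a
    64x64 grayscale 'frame' for the given pose, then compress it."""
    width, height = 64, 64
    shade = int(pose["yaw"]) % 256
    row = bytes((x + shade) % 256 for x in range(width))
    return zlib.compress(row * height)

def display_on_headset(compressed_frame):
    """The thin client: decompress and 'show'. No local GPU needed."""
    frame = zlib.decompress(compressed_frame)
    return len(frame)  # in reality, blit these bytes to the display

pose = {"yaw": 30, "pitch": 0, "roll": 0}   # sent upstream: a few numbers
payload = render_frame_on_server(pose)      # sent downstream: compressed pixels
print(display_on_headset(payload))          # 4096 pixels displayed
print(len(payload) < 4096)                  # True: less data than a raw frame
```

The point of the split is visible in the two functions: everything expensive lives server-side, and the device-side function is cheap enough to run on a phone-class processor.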
For gaming applications and the like, that can be useful to drive differences in how the graphics look depending on your eye movements, which will increase the immersion and sense of realism. But it's also very useful for performance tracking, as I mentioned earlier: being able to tell when you are looking a patient in the eyes while you're speaking to them, being able to tell when you glanced over at the monitor, and what time you checked the monitor when you went into a critical patient's room. Being able to integrate that kind of information into performance tracking is going to be really powerful. And then biometrics. There are headsets starting to incorporate biometrics: things like minute differences in sweat on the surface of your skin, minute differences in body temperature, and so on. Already, some of our company's customers, like the Mayo Clinic, are doing experiments where they're putting VR headsets on people along with separate biometric sensors, and then tracking physiologic stress levels over time. Because in an educational context, you want to keep people in that stress sweet spot: if they're not stressed enough, then obviously they're not stretching themselves, but if they're too stressed, they're not really retaining the information. So what they're doing is modifying scenarios in real time, based on the biometric sensors, to keep people in that optimal window of physiologic stress. And then you're talking about training medical trainees like you train Olympic athletes, right? You see Olympic athletes running on those treadmills with all those sensors coming off their faces, trying to keep them in that zone. Being able to keep medical trainees in that stress zone, I think, is a really cool concept. What you're also going to see is hand tracking.
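The stress-window idea just described is, at its core, a simple feedback loop: read a stress estimate each tick and nudge scenario difficulty to keep the trainee inside the target band. Here is a minimal sketch of that loop (my own construction, with an assumed 0-to-1 stress scale, not the Mayo Clinic's actual protocol):

```python
# Minimal sketch of biometric-driven scenario adaptation: keep the
# trainee inside an assumed "sweet spot" stress window by escalating
# or easing the scenario. All numbers are illustrative.

STRESS_LOW, STRESS_HIGH = 0.4, 0.7   # assumed optimal window, 0-1 scale

def adjust_difficulty(difficulty, stress, step=0.1):
    """If under-stressed, escalate (e.g. the patient decompensates);
    if over-stressed, ease off (e.g. a nurse arrives to help)."""
    if stress < STRESS_LOW:
        difficulty = min(1.0, difficulty + step)
    elif stress > STRESS_HIGH:
        difficulty = max(0.0, difficulty - step)
    return difficulty

# Replay a recorded stress trace through the controller.
stress_trace = [0.2, 0.3, 0.5, 0.8, 0.9, 0.6, 0.5]
difficulty, history = 0.5, []
for stress in stress_trace:
    difficulty = adjust_difficulty(difficulty, stress)
    history.append(round(difficulty, 2))
print(history)  # ramps up while stress is low, backs off when it spikes
```

A real system would smooth the noisy biometric signal and escalate via discrete scenario events rather than a continuous knob, but the control structure is the same.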
So in all the videos I showed you, you may have noticed that people were holding controllers in their hands. That's true of basically all VR and most MR technologies as well, which use controllers to determine the position of your hands, though there are some MR technologies that rely on hand tracking instead. Now, hand-tracking algorithms are getting better and better; all the major VR headsets now have at least beta versions of hand tracking that are pretty good. And as this gets better, I think it's going to wholly replace controllers, so you will no longer need them. That's great for things like CPR training, where right now it's really clunky to have controllers that you're trying to hold in your hands while you're doing good chest compressions, and there are a lot of other applications as well. And then, we talked about the limitations of current MR technology. I think what we are going to see within the next three years or so is a different way of doing mixed reality, where you take a VR headset with binocular cameras on the front that pull in your view of the real world, and that view is intermixed with virtual content and served up to you inside the VR headset. That, in my mind, resolves a lot of the challenges with current MR hardware and is going to make a lot of those MR use cases much more viable. But again, I think that's probably three years or so out. Then volumetric cameras. This is really cool stuff. It used to be that you needed a big ring of cameras in order to get a 3D picture or video of something. But now there are these inside-out cameras that you can place in the center of a room, and based on the light coming directly into the lenses, but also on alternative light paths, light that is bouncing off the room, they can compute a 3D model of the space from the inside out.
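Coming back to the camera-passthrough idea for a moment: because the real world arrives as camera pixels rather than through glass, virtual content can fully replace pixels wherever it exists, instead of being additively ghosted at roughly 50% opacity. A toy per-pixel compositor shows the difference:

```python
# Toy per-pixel compositing for camera-passthrough mixed reality:
# the camera frame is the base layer, and virtual content replaces
# pixels wherever its alpha mask is set. With alpha 1.0 the virtual
# patient is fully opaque -- something an optical combiner, which
# always mixes in the real world, cannot do.

def composite(camera, virtual, alpha):
    """camera, virtual: per-pixel intensities; alpha: 0.0-1.0 mask.
    Returns the blended frame shown inside the headset."""
    return [v * a + c * (1 - a)
            for c, v, a in zip(camera, virtual, alpha)]

camera  = [100, 100, 100, 100]   # real-world pixels from the cameras
virtual = [255, 255,   0,   0]   # rendered patient pixels
alpha   = [1.0, 1.0, 0.0, 0.0]   # mask: where virtual content exists

print(composite(camera, virtual, alpha))  # [255.0, 255.0, 100.0, 100.0]
```

An optical see-through combiner behaves more like a fixed alpha of about 0.5 everywhere, which is exactly the ghosting problem described earlier.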
So you can just set it in the center of a room, or in the back of a transport helicopter or something like that, snap a picture, and then you have a photorealistic 3D representation of that space that you can walk around in. So really cool tech that I think is going to allow us to bring more photorealism into virtual experiences. And then this is smell-o-vision, basically. There are a couple of companies incorporating smell into VR, which might seem trivial, but many of you probably know that smell is the sense most closely linked with memory. So being able to replicate smells that are important to a real practice environment can be very helpful in training. We do a lot of work with the Department of Defense, and this is a bit of a graphic example, but when you are working with somebody who has severe traumatic injuries and you're out in the desert, there's a smell to that experience that is shocking to people who have not smelled it before. Replicating that in a training environment means that when you get to the real-life situation, you have one more thing helping bring back your memories of your training; the experience will be less shocking, and it will allow you to operate more effectively. So actually, I think there's a lot of power in being able to incorporate smells into sims. And then everybody always asks about the tactile element, right? They say, okay, but you can't actually feel when you're putting in the IVs; you can't actually feel the stethoscope when you pick it up. And that is true in current VR technology. Unfortunately, this is something that is very far away. At first, it might seem like all you need to do is make gloves that will buzz or provide some resistance. And there are gloves that exist where you can put your hands on a virtual ball and feel that ball.
And as you squeeze your fingers, the glove will oppose your motion and make it feel like that ball has mass. But as you can imagine, you could just push your hands forward and push right through it, and how is it going to resist that? It can't, right? Unless it has pneumatics or some other technology that resists the movements of your whole arms and runs all the way across your back. And then, sure, you can make it so that you can't push against a wall, but you can still take a step forward and you'll just go right through the wall. So how is it going to stop that? So really, in order to give you a good tactile experience, you actually need something like a full bodysuit, which, as you can imagine, is not easy to develop from an engineering perspective and is not going to be very usable. And there are a lot of safety concerns with somebody putting on a bodysuit that has the capability of strongly opposing all of their joints, right? So we're actually quite far away from something like this. But I think there is a lot of room in areas that don't really require much tactile feedback, because, as all of you know, when you're training to be a physician, the hard part is not remembering what a stethoscope feels like, or what the right level of pressure is to put a stethoscope on somebody's chest. The hard part is knowing when to listen and, based on what you hear, what to do next. And that's true of almost every procedure. Even for surgeons, in a very procedural specialty, the portion of their training where they need to know what it feels like to cut with a scalpel or to sew up a wound is a very small part of their training. But there's a very long component of their career where it is valuable to train on, okay, what are the steps of this procedure? If I get here and this goes wrong, what do I do next? And all of that can be done without tactile feedback.
And so I have come to believe that there is a lot of value in these technologies even without tactile elements. I'm excited for the day when we can incorporate tactile feedback into VR sims; it'll be amazing. But it is unfortunately a little ways away. So what else is coming next? Well, that's for all of you to decide. And this is my cheesy ending slide, but it is true. My company actually started selling in 2018, and we were one of the first, so we are only about three years out from the commercial availability of these kinds of products. You can imagine if you were sitting in a presentation in the '50s, just a few years after mannequins were introduced, and people were talking about how they were going to be used in the future, they would never have been able to guess how we are using them now. I think that's true of any new technology. So I've told you how it's being used in this moment, at the very outset of this technology, but five years from now, 10 years from now, 20 years from now, it's going to be a whole different world. And it's going to be people like you, who are interested in this stuff and who start adopting it over the course of the next five years, experimenting with it and ideating around it, who are going to come up with the educational technologies that we'll be using 10 to 20 years from now. So thanks so much for your time and for letting me come and talk to you about this topic today. I'm happy to use our last eight minutes or so to answer any questions that you have about any of this. Just a reminder, please post any questions in the Q&A box. Thank you. If there are no questions, that's fine too.
I understand we have a video recording of this presentation that will be sent out, and I can also send in my slides for people who are interested in checking out the slides as well. That would be great. I'm not seeing any questions either. So, Dr. Rivera, on behalf of the ACOEM leadership, thank you so much for presenting today. As a reminder, today's webinar was recorded; a link will be sent to all registrants, and we will include Dr. Rivera's slides with that. Please check the ACOEM website often for new webinars. And lastly, to our attendees, thank you for being here today. We hope you have a great day and that you stay safe. Thanks so much. Thank you, Dr. Rivera. All right, thanks so much for having me.
Video Summary
In a webinar on Teaching with VR and AR led by Dr. Ryan Rivera, various applications and advancements in virtual reality (VR) and augmented reality (AR) technologies were discussed. Key points included the transition to wireless all-in-one VR headsets for easier setup and improved performance, the potential integration of biometrics and eye tracking for performance tracking and physiological stress management, the use of VR for empathy training to experience different perspectives, and the development of mixed reality (MR) technologies to overcome limitations in current AR hardware. The presentation highlighted the ongoing evolution of VR and AR technologies, emphasizing the importance of continued exploration and experimentation to shape the future of medical training and education.
Keywords
Teaching with VR and AR
Dr. Ryan Rivera
Virtual Reality
Augmented Reality
Wireless VR headsets
Biometrics
Eye tracking
Empathy training
Mixed Reality technologies