AOHC Encore 2022
415: Advancing ACOEM's Clinical Practice Guidelines
Video Transcription
These are going to be our three speakers. We have Dr. Kurt Hegmann, the Director of the Rocky Mountain Center for Occupational and Environmental Health at the University of Utah. He's also the Editor-in-Chief of the ACOEM Practice Guidelines. I'm Carrie Wisner, and I work for MD Guidelines. I'm the Assistant Director of Epidemiology. And then we also have Matt Thiese, who is an Associate Professor at the University of Utah. And we all are in some way related to the ACOEM Guidelines, which we will be talking about in this presentation. So our three learning objectives are: we're going to summarize the evidence-based guideline changes, we're going to identify methodological improvements, and then we're going to give you some examples from research and real life about using the guidelines. Okay, I'm going to go ahead and hand it over to Dr. Hegmann. Thank you, Carrie. I appreciate it. Thank you for the opportunity to speak with you today. Oh, there we go. So many of us, I think, have been taught that it takes 17 years to change practice. Well, you know, I always had the impression that was one anecdote, and was it really true? And I finally had occasion to start digging into that literature, and I found out, oh, my, it's not just true. It's been replicated many times. And that was one of those come-to-Jesus moments for me. And it also seconded one of my challenges, which was, and some of you have heard me speak of this before, the challenge of changing practice. What we're doing today is giving you more oomph to change. We have about 48 randomized controlled trials published a day right now, and the pace of that is increasing. Think about that workload. For us, digesting one article takes about a half hour in producing the guidelines, and that's with highly efficient people. I think you have a slight challenge. I don't know about the rest of you. I feel uncomfortable even mentioning that. I mean, certainly it's our hope that guidelines will help to change things for the better. 
And that's one of the reasons we're doing this, both in terms of constructing guidelines, but also, of course, in terms of teaching about them. We'll talk a little bit about advances in guidelines. I'll mention one of them right now. There are computer programs being developed internationally to search literature databases and so forth more efficiently. Our team, led by Dr. Thiese, has piloted those sorts of things. And as far as I know to date, they're still not efficient enough. They're not as good as a person. At some point in time it'll happen, but it's glacial at the moment. So what about our changing of practice? Occupational health nurses here in the audience. Physician assistants, physicians, MDs, DOs, psychologists, Dr. Kertay. But he's perfect. He's in the front row here. He's chair of our disability panel. Physicians' experience of changing clinical practice involves a struggle to unlearn. The results, in this direct quote, include two findings. First, practice change disturbs the status quo equilibrium, and establishing a new equilibrium that incorporates the change may be a struggle. I think it's a big struggle, but okay. And secondly, part of the struggle to establish a new equilibrium incorporating a practice change involves both the evidence itself and tensions between evidence and context. In my case, after we did the first round of comprehensive updates of the ACOEM guidelines, and I think some of you may recall this number, it took me about six years to change my practice. And I went through two high-level residency programs and was double-boarded. And I guess you could say I was completely incompetent. And that's okay. I'll accept that rock. But we'll show you some data. I think I have company. So, updates to the guidelines: the mental health part of this, and Dr. Kertay was part of that too. Anxiety, depression, PTSD; the last of those, anxiety, we updated a year ago. 
The work disability prevention guideline was released this month, last month, I'm sorry, a couple weeks ago. The COVID-19 update was a week ago. And shoulder disorders: we have already done the rotator cuff tendonitis parts of it, and most of the rest of it is in process. To finish this year: a comprehensive update of the opioids guideline. And then we're going to revise our chronic pain guideline, which is shrinking with time because we're moving most of that into other body part areas or the disability guideline, and then redo the traumatic brain injury guideline. The anxiety guideline was another interesting one. All of the mental health ones have been interesting, where I think it's fair to say everybody, all the experts on the mental health expert panel, learned a lot. And that's one of the fascinating things about the guidelines: when we take on a topic, we typically find, when you actually synthesize the evidence from numerous sources, that there are a lot of things we should know which are not widely known. But let me move past this slide into this one, which I compiled for this talk to compare what things look like between PTSD, depression, and anxiety. And every single one of these things, cognitive behavioral therapy among them, appears to have evidence of significant efficacy such that it should be bedrock treatment for these disorders. Think how that is already divergent from what happens in practice today. All right? Somebody goes to primary care, what do they get? SSRI. Script. Yep. You got it. And how long do those take to work? Yep, a month. There we go. To find out if it works, you know. And meanwhile, there are things that could be done, like CBT and aerobic exercise, which begin working early. And those things have the same level of evidence, meaning a replicated evidence base, for every one of those disorders. Later on in this talk, you may see a little bit more information on that. There are some differences, of course. 
For PTSD, there's evidence of prolonged exposure therapy, exposure therapy, being effective, with virtual reality being essentially a kissing cousin. There certainly is evidence of efficacy of the medications. And yet, of course, our guideline doesn't say run out and give them a benzodiazepine, an addictive medication, right away. People chuckle here in the audience because everybody is aware of safety-critical work, safety-sensitive work, and driving cars and thinking straight and so forth. But think of what goes on in the real world. The work disability prevention guideline summary was interesting for the dearth of quality evidence. Item after item is recommended with insufficient evidence, where the evidence base is basically a surrogate or another disorder or some aspect of disability. And I'm so glad Dr. Kertay is here, because he can answer all those difficult questions for you. But he's saying he came to heckle me, not the other way around. Well, and yet, think about how important this is, you know, how important it is to have a role and productivity in life, and to not be disabled. So, critical stuff. So we'll give you a few of the randomized controlled trial results on rotator cuff tendonitis. Initially, of course, from the prevention standpoint, and I won't talk through each one of these things, there's relatively little known of quality. Probably not a major surprise for most of you. And thus, most of the evidence is in the insufficient evidence, or I, category. Some of the things map out the way you would have guessed. Anti-inflammatories have an A strength of evidence, and there's an A also for shockwave therapy for calcific tendonitis. And ultimately, not recommended on a series of things: ultrasound, low-level laser, oral steroids, and so forth. So most of these things are as they were in the prior guidelines. There are a few nuances. 
In general, unfortunately, despite those 48 RCTs produced a day, most of the RCTs, as in probably 90%, are in the piled-higher-and-deeper category, instead of those focused things to fill the hole in the dike. I'm pleading. The COVID-19 guideline underwent a major revision with this update that was released one week ago. If I'm going to call out a few of the things out of it, one is that in this guideline we are transitioning to management as an endemic virus. We began rewriting that about the first of the year, at a time when it was pretty controversial really to be writing that. So we were lucky everything kind of panned out the way it did, so we didn't have to rewrite it again. And then another thing is related to yesterday's Patterson lecture. We are strongly advocating for after-action reviews, which are needed pretty much everywhere: in clinics, medical systems, all levels of government, all levels of industry. Everybody has had problems with management of the COVID epidemic, and we need to analyze these things so we don't reproduce the errors which were made with both Ebola and COVID, or at least minimize them. What was advocated yesterday is a continuous quality improvement type of model, and we certainly would second that. The third item for management of this thing is aerosols. We're pretty much convinced, I think, as a series of medical experts, that that's what's going on with this thing: aerosolization. De-emphasis of contact spread. So I'll give you an anecdote. You all remember, I'm sure, in January 2020 it was wash your hands, wash your hands, wash your hands. We were, meanwhile, looking at the data and saying, so if it's all wash your hands, how is it spreading so dang fast? And so my wife and I finally decided to break out and went to the Turks and Caicos, and checking in to come back at the Delta check-in at the airport, I look up and there's the government poster. It's from the ceiling all the way down to the floor. 
Wash your hands, wash your hands, wash your hands. Think about that. Think about the amount of contact and all that in terms of aerosols and droplets which had occurred, meanwhile, and how hard it is to get things to change. We're recommending N95 respirators for those significantly immunosuppressed and with reasons to use those sorts of things. Recommendations against lockdowns, recommendations to discontinue masking, schools should be open and without masking, and another big one is we need multi-arm randomized controlled trials. We need to look at the therapies to see what works and doesn't work and compare them head to head. Why? Because we are still getting, even with these Omicron surges and sub-variants, problems with people developing brain fog and that sort of thing. It's like, well, wait a minute. We need to start out with what is effective in the first 24 hours, or 24 to 48 hours, of infection to abort that process. This idea of trying to treat stuff a week later, well, if you remember your influenza treatment, let's see, when do you give Relenza? A week later? No. Not quite. On methodology, we've had a couple of things that we're working on. We are incorporating rating of epidemiological studies in a systematic way to address certain things. I put a couple of examples up here: opioid doses, risks of crash and disability. We're also piloting efficient searches for likely low-yield topics. So, for example, we're working on the trauma-related stuff regarding the shoulder, and most of that stuff is not going to have quality evidence, so how do we not waste time, so we get more updated guidelines out to you sooner? And GRADEpro, a platform for guidelines development; Lucy Shannon is here from Reed Group, and she can talk to that as well, very articulately. So, Carrie. 
All right, so I'm going to talk about more of the research side of the house, looking at the evidence-based guidelines applied over claims review to see how many times the guidelines are being used and what some of the effects of that might be. So let's talk a little bit about the impact of evidence-based medicine. The cost of overtreatment or low-value care in the U.S. every year is upwards of $100 billion. Overtreatment is unnecessary treatment for a condition that is not life-threatening and would never cause any symptoms, and it may lead to problems and harmful side effects. Low-value care, on the other hand, is defined as services that are medically unnecessary and provide no health benefits to the patient. These can add strain to our already overburdened healthcare system. So why is overtreatment happening? In a study of physicians surveyed about why overtreatment might happen, they cited fears of malpractice issues. They get patient requests: you get patients that come in and want certain things, and if it's maybe not that harmful, that can sometimes be an option, which leads to overtreatment. And also difficulty accessing medical records: if you can't get the tests that you need, you're going to have to do them again. And that's a systemic error that can be fixed with more data-sharing platforms, but at the individual level, you can't fix that; you need timely results. So that leads to overtreatment. Low-value care also stems from our fee-for-service systems, organizational culture, and the viewpoint that doing something is better than nothing. And a lot of that comes from training, the system you're working in, the people that you work for, a lot of those things that are, again, systemic and need to be addressed at a higher level, but it often comes down to clinicians doing what they think is best for their patients or what's going to be accepted in their system. 
We've been talking about evidence-based guidelines for 20-plus years. The National Academies recommended that best practice guidelines are important for having quality care, and that was back in 2001. And we're still now talking about that. So I think it just shows that we think this is a good idea in general. We would like to apply it, but like Kurt was saying, getting those applications and changing stuff over time is a challenge. And we know that evidence-based medicine is a tool. It's supposed to be helping you practice the art of medicine without having to read the overwhelming number of studies that are out there. I found a stat from the New England Journal of Medicine that said, in 1950, medical knowledge doubled every 50 years. By 2020, it doubles every 73 days. Every 73 days. So if you want free time that isn't spent reading clinical studies all the time, that's why evidence-based medicine is there: as a tool for you to have trusted sources where you can quickly look up stuff and drill down as much as you want. Reading titles of articles is great, reading summaries, digging into the methodology; all those things are important, but we know that your time is valuable, and evidence-based medicine can maybe fill some of that gap. So this is hot off the presses. This came out April 16th, and we looked at PTSD in a workers' compensation group. We looked at workers' compensation data from California over a 10-year period. We looked for anybody that had a PTSD diagnosis and no physical injury, removing the people that would have other diagnoses in there that wouldn't be attributable to a duration or treatment that was PTSD-specific. And then we separated it by workplace violence or not. Workplace violence was assault, sexual assault, harassment. Non-workplace violence was accidents or witnessing something like a suicide, homicide, or accident. 
In the workplace violence group, we saw a lot of robberies, rapes, riots. You could understand why a lot of these people have approved PTSD claims. So the outcomes that we looked at were treatments, lost workdays, and the care, and we compared it to the ACOEM guidelines. It was retroactively applied to these cases, but we built this model, a binary logistic regression, to look at what the major variables were. We found that workplace violence PTSD tended to be women, younger, in retail, and earning less than $25,000 a year. Over a third of all PTSD cases did not return to work. There are some caveats with that: you lose people to other systems and such, and you can read the full paper, but that's a pretty staggering number of people that don't return to their original jobs or employers. The average leave duration for a workplace violence PTSD claim with no physical injury was 132.5 days. For the same group of people whose PTSD was not caused by workplace violence, the average duration was 91 days, so a pretty striking difference between those two groups. So, like I said, we compared it to the ACOEM clinical guidelines. Here you can see the top 10 services and the top 10 prescriptions. I have the recommended ACOEM evidence quality: green is good, yellow is doctor's choice, and red is not recommended. The types of services were pretty similar across the two groups, so I think these groups are being treated pretty equally in terms of what they're getting, but I think you'll notice in the top section there's no red. There are no not-recommended services, nothing that in theory could cause harm from giving these services. However, in the top ten prescriptions, three out of the top five are against clinical guidelines. That's pretty shocking to see those in high frequency. 
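For readers who work with claims data, the tagging step described here can be sketched in a few lines: each billed service or prescription is mapped to a guideline recommendation status and the claim is summarized by category. This is a hypothetical illustration, not the study's actual code; the item names and statuses below are invented placeholders, not real ACOEM content.

```python
# Hypothetical sketch: tag claim line items by guideline recommendation
# status (green / yellow / red in the slides) and tally them per claim.
# The lookup table below is an invented placeholder, not ACOEM content.

RECOMMENDATION = {
    "psychotherapy_cbt": "recommended",
    "ssri": "recommended",
    "office_visit": "no_recommendation",
    "benzodiazepine": "not_recommended",
}

def adherence_summary(items):
    """Count how a claim's line items fall across recommendation categories."""
    counts = {"recommended": 0, "no_recommendation": 0, "not_recommended": 0}
    for item in items:
        # Unmapped items default to "no recommendation" in this sketch.
        status = RECOMMENDATION.get(item, "no_recommendation")
        counts[status] += 1
    return counts

claim = ["psychotherapy_cbt", "ssri", "benzodiazepine", "office_visit"]
print(adherence_summary(claim))
```

A per-claim summary like this is what would then feed a model such as the binary logistic regression mentioned above, with adherence as a predictor of outcomes like return to work.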
So 70% of the people followed recommendations for services, and again, I think part of that is because there's nothing not recommended, but only 23% followed recommendations for prescriptions. So that's a pretty big difference between services and prescriptions, and then looking across workplace violence versus non-workplace violence. So then the focus of this talk is looking at guidelines specifically, so I pulled out some general results looking at guidelines, trying to remove some of the workplace violence aspect. Leaves were pretty similar for people that did and did not follow guidelines for services, a 23-day difference or a 12-day difference in the two groups. I think some of that can be attributable to, again, there not being any services that actively caused harm. They're pretty similar, maybe innocuous. Some of that's going to go into overtreatment, but in terms of duration, maybe not that much of an impact. However, when we looked at leaves for those that did not follow guidelines for prescriptions, we saw a much bigger difference: a 118-day difference for people that followed guidelines, whose durations were much shorter, and a 102-day difference for non-workplace violence for people that followed the guidelines. So again, the impact of that not-recommended category, you can really see it in the duration. One of the limitations of the study was that we didn't capture severity, because it's ICD-based, so that could be influencing some of these. But another interesting thing that we found was that the median number of CBT sessions was two. The guidelines recommend six over a period of at least six weeks, sometimes twice a week as a starting point. These leaves are three, four, five months on average; some of them are stretching into multiple years, all in the workers' compensation group. So it's pretty striking that CBT was only at two. That's probably not a therapeutic dose and is of some concern. 
So from this study, in the future, we'd like to look at the clinician types: who's using the guidelines, are there certain clinics, types, educations, you know, who's using the guidelines. And then we'd also like to look at those who had physical injuries. When those workplace violence events occurred and became physical, those people were not included in this study, so we'd like to go tease out that data. And then also, this is only billable information, so we didn't capture alternative medicines, if these people are going somewhere else, getting something that's not going to be billable through workers' compensation. We'd like to study that. And then our take-home message was: should clinical guidelines have pieces by cause? And I know that's been part of the conversation about the PTSD guideline, the different types of cause. And we found that the durations were a lot longer for those workplace violence claims. So maybe they need to be getting special referrals or special treatments, or at least something to help bring that down. I think it's especially important for occupational people that can provide exposure in the workplace, or if the workplace violence event was perpetrated by a coworker, a boss, something like that, that can make returning to work a big issue. So there's a lot of workplace violence specialists out there that can be reached out to. But again, a lot of the mental health people have long wait lists, and getting into a lot of these treatments and finding those specialists is a challenge. So. Cool. Now we're going to do an example of musculoskeletal guideline following. Thank you. Thank you. All righty. So that one article on mental health and guidelines is, to my knowledge, the only article out there looking at a mental health guideline. So relatively new, great stuff. There's a handful more looking at musculoskeletal disorders. 
So let's go through those and be able to talk about them. We're going to start with the other, non-ACOEM-related ones, because there's only a few. Starting here: lumbar fusions, coming out of Washington State data, published in 1997. They had quite a few years of data right there; their guideline came out in 1988. They demonstrated that if their guideline, which was specifically about fusion, was followed, fusion rates declined from 26% to 3% among all lumbar operations. And the largest benefits of those were among the workers' comp population. So demonstrated efficacy there, again in a very focused aspect of musculoskeletal disorders, but still moving forward. Next, talking about imaging, also from Washington State Labor and Industries. This was a prospective population cohort of 1,700 workers. Before the guideline came into effect, 19% were non-adherent with imaging guidelines, and those who were in that category had 50% more PT and OT visits and fewer chiropractic visits. After the guideline, claims went down; it reduced MRIs, increased X-rays, reduced injections, and then also reduced both costs and disability. So helping save money and getting workers back to work. The next one, also from Washington State, looked at a utilization review program, which was heavily based on guidelines, directing them to say, okay, among all of these factors, how well are clinicians following the guidelines? They estimated, just in 2014, that there was a $7.5 million savings and a positive ROI. So again, giving recommendations and credence to say, look, guidelines are out there, they're helping, or they have the ability to help; maybe we need to use them a little bit more. Going across the pond to Ireland, the one article that we were able to identify from there was a small prospective pilot study published in 2007. Nothing has come out since then that I'm aware of. 
But it looked at acute low back pain, one of the most common concerns in terms of musculoskeletal disorders. Only 39% were compliant with referral for further treatment after that first visit, 54% were compliant for secondary care, and 50% were compliant with return-to-work recommendations. So, not high levels of compliance. Again, this is getting, to me at least, to that point: guidelines are out there, and the purpose of having a guideline is to be able to help treat patients more efficiently and more effectively, but the big challenge is getting those guidelines into the right hands and getting people to follow them. So going here, now talking about ACOEM-specific guideline implementation and research. This was a large retrospective cohort study among workers' comp claims from California over an 11-year period. We had a big chunk of people, and I'll get through that flowchart in a little bit, but it ended up being 85% of all of the people who were eligible for this, looking at acute low back pain. What we were trying to do is record review, looking to say, okay, what were the treatments that were assigned one week after diagnosis, and how did those line up with the ACOEM guidelines, as well as what was going on with lost workdays, and moving forward with that. So, just a flowchart here in terms of total eligible people at the beginning: records, we had just over 70,000, and we ended up with 85% of those at the end. We got rid of people who didn't have a medical visit or had red flags in their claims or other issues, like a prior low back workers' compensation claim. So we really tried to hone it down to a relatively pure population here to see what's going on. So diving into this, when I saw these data, I was actually really surprised, but then also a little bit not surprised at the same time. 
So, similar to what Carrie had, we have, by frequency, the different interventions or different diagnostic tools and what was going on. You can see this top one is X-ray. It is moderately not recommended for acute low back pain, yet nearly 50% of all of the patients in this large study had an X-ray within that first week. Then you go down: NSAIDs and muscle relaxants are recommended, and those were 44% and 35% respectively. And then again, in that first week, nearly 20% of the acute low back pain patients were prescribed opioids. Looking at this in terms of how things move forward, well, I'll get to that in a minute. So, talking about the interventions that were recommended, again, this is getting into those frequencies: the recommended interventions and how frequently they were used in these data. Again, NSAIDs were there at the top at 44%. There were a few in the no-recommendation section that were used in this California workers' comp data: lidocaine patches, infrared, and ultrasound. And then a host of things that were not recommended, with different levels of frequency, the big one there being X-ray at 50%, but other things like electrical stimulation at 15%. So all of these things were adding up and costing additional money. In terms of the breakdown, obviously providers have a different mix of all of these together, right? If you look at those who had only recommended treatments, that was 14.2% of the patients with acute low back pain, versus those patients who had only non-recommended treatments, which was in that same realm, 14.6%. But most of them had a mix, both recommended and non-recommended treatments, or multiple recommended treatments, or multiple non-recommended treatments, making up some of those. These are not necessarily mutually exclusive, right? So you can see multiple recommended treatments, 43.7%, and then those who had multiple non-recommended treatments. So there's an additive effect going on there. 
Looking at some figures here: the y-axis is the proportion, the x-axis is time, going from 2009 through 2018. You can see modest changes over this timeframe in terms of what was going on. NSAIDs, there in the top left corner, may actually have gone down a little bit over that time. Here are some other ones; opioids, in the bottom right-hand corner, is the big one that I want to point out, because that was a meaningful decline over this timeframe. So that does suggest that, whether it was the ACOEM guidelines or external pressures or some of both, there were some positive changes to practice there. So what did this come out with in terms of lost workdays? That was one of the outcomes. The big thing is, our comparison group was those patients who had only non-recommended treatments, right? And there was a mix of no or other medical recommendations, or they had both recommended and non-recommended. Our gold standard would be that bottom one, if they had only recommended treatments. There was an 11-and-a-half-day improvement in lost workdays among those low back pain patients who had only ACOEM-recommended treatments, and then a mixture in between when they had a mixture of both recommended and non-recommended treatments. Those are all statistically significant, large impacts. If you put that into actual numbers, that is 29% fewer lost days for all of these patients. So people were getting back to work faster. Opioid prescriptions fell 86% over this time. And I would expect that there were also cost savings there, although that's one thing that we did not look at with these data. Another study was a retrospective cohort study done here in Utah. We randomly selected workers' comp claims, again looking at low back pain, and our main outcomes were medical costs and the care, how that sorted out for their low back pain. We did this one a little bit differently. 
What we did is actually assign a value to their care. Our basic one was just the ACOEM score: if a treatment was recommended, it got a positive one; if there was no recommendation, it was zero; and if it was recommended against, it got a negative one. We also went and sat down with several providers and said, okay, our one, zero, and negative one is pretty crude; how could we improve that? So they said, okay, a progressive walking program, staying active, directional stretching, these all had a larger magnitude in terms of the expert rating, and then things like opioids or MRIs had a larger negative rating. So we had our positive one, zero, negative one total sum score for what patients had prescribed and also performed, and then also these expert ratings. Our ACOEM plus-minus-one scoring ranged from negative three to positive six, so quite a range in there. Our expert scoring, not surprisingly, had a little more distribution, going from negative 11.7 to positive 14.8. Looking at medical costs, we had costs from zero all the way up to $24,000. Obviously those, as you would expect, were highly skewed, and so they were log-transformed for analysis, to be able to look at what's going on. In terms of these patients, just over half were male, and the pain score was seven out of 10 in terms of the initial pain rating on average. Again, our range for the ACOEM scoring was negative three to six; you can see how that goes. So looking at this graphically, how did the ACOEM plus and minus score sort out with costs? And again, this is the log-transformed cost. You can see a nice downward slope here, showing that as there was an increase in the number of ACOEM-recommended treatments, costs went down. Statistically significant, resulting in an average cost reduction of $352 per unit of increase in the score. 
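The plus-one/zero/minus-one score and the wider expert rating described above can be sketched as follows. This is a minimal illustration under assumed inputs: the treatment names, statuses, and expert weights are invented for the example, while the real expert weights came from the provider panel the speaker describes.

```python
# Hypothetical sketch of the study's two scoring schemes. All treatment
# names and weights below are invented placeholders for illustration.

# Basic ACOEM scoring: +1 recommended, 0 no recommendation, -1 recommended against.
ACOEM_SCORE = {"recommended": 1, "no_recommendation": 0, "not_recommended": -1}

# Expert ratings widen the scale (e.g., a walking program weighted more
# positively, opioids and MRI weighted more negatively), per the panel exercise.
EXPERT_SCORE = {
    "progressive_walking": 2.5,
    "nsaids": 1.0,
    "ultrasound": 0.0,
    "opioids": -3.0,
    "mri": -2.0,
}

def claim_scores(treatments, status_lookup):
    """Return (basic ACOEM sum score, expert-weighted sum score) for one claim."""
    basic = sum(ACOEM_SCORE[status_lookup[t]] for t in treatments)
    expert = sum(EXPERT_SCORE.get(t, 0.0) for t in treatments)
    return basic, expert

statuses = {"progressive_walking": "recommended", "nsaids": "recommended",
            "opioids": "not_recommended", "mri": "not_recommended",
            "ultrasound": "no_recommendation"}
print(claim_scores(["progressive_walking", "nsaids", "opioids"], statuses))
```

In the study, the summed score was then related to log-transformed costs (since raw costs were highly skewed), which is how the roughly $352 reduction in medical costs per one-unit score increase was estimated.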
So every time a provider did something that was recommended, it would end up saving $352 in medical costs. When you look at the larger cost reduction, which is medical plus indemnity, that was an even bigger change, at $586. Another example to look at is pharmaceutical, looking at opioid prescriptions. Again, this was done here in Utah, over 18 months. It was a pre-post study as the large workers' compensation carrier here, Workers' Compensation Fund of Utah, implemented an opioid reduction program based on the ACOEM guidelines. The intervention was that first-fill prescriptions required utilization review based on the guidelines, making sure there was a correct diagnosis, the injury was sufficiently severe, less than 50 milligrams morphine equivalent dose, less than a 14-day supply, and short acting. The outcomes that we were looking at were the numbers of prescriptions and return on investment: is this something that is going to be beneficial? So, 54,000 claims, pretty evenly split between pre and post. With the intervention, 46% of opioids were filled as they were written, 34% were partially approved, and 20% were denied, mostly due to mild injuries. This pre-post comparison found that there was a 33% reduction in those requiring opioids, a 60% reduction in second fills of opioids, and a 63% reduction in those who were using opioids greater than 90 days. There were also big, meaningful reductions, 28% and 23%, in the 50-milligram and 90-milligram morphine equivalent doses. Statewide opioid fatalities also decreased over this timeframe, which may be partially related; fatalities from opioids are not just from what's going on in workplace prescriptions. But we were really buoyed by the fact that there were these meaningful reductions from performing this intervention early on, hopefully adjusting the trajectory of how opioids were being prescribed within this population. The next one also looks at opioids and following guidelines. 
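The first-fill screening criteria just described are simple enough to express as a rule check. A hedged sketch, with invented field names, and with the diagnosis and severity checks omitted for brevity:

```python
# Hypothetical sketch of the first-fill utilization-review rules described
# in the talk: under 50 mg morphine equivalent dose per day, under a 14-day
# supply, and short acting. Field names are invented for illustration, and
# the diagnosis/severity checks from the talk are omitted here.

MAX_MED_MG = 50       # morphine equivalent dose limit, per the talk
MAX_DAYS_SUPPLY = 14  # days-supply limit, per the talk

def first_fill_allowed(med_mg_per_day, days_supply, short_acting):
    """Apply the screening criteria to a first opioid prescription."""
    return (med_mg_per_day < MAX_MED_MG
            and days_supply < MAX_DAYS_SUPPLY
            and short_acting)

print(first_fill_allowed(40, 7, True))   # within all limits
print(first_fill_allowed(90, 7, True))   # exceeds the 50 mg MED limit
```

In the actual program, prescriptions failing one or more criteria could still be partially approved rather than denied outright, which is why the talk reports a 46/34/20 split of filled, partially approved, and denied.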
This one is looking at opioid prescriptions after carpal tunnel release surgery: 7,800 participants, 70% of them had a filled opioid after carpal tunnel release, and 28.9% were prescribed an opioid contrary to the guidelines. So nearly a third were prescribed contrary to the guidelines. A small fraction was prescribed more than a five-day supply. You can read this, but what was nice to see, at that tail end in 2014, was a meaningful and, I believe, statistically significant decline in those who were prescribed opioids, there at the top. Unfortunately there wasn't a similar decline in the red line at the bottom, those who were prescribed opioids contrary to the guidelines. Guideline adherence ended up reducing disability by two days, which was a 5% drop, and also reducing medical costs, which equates to approximately $102 million in savings if this were applied universally. So in conclusion, compliance with guidelines is associated with better outcomes. We just need to get people using them, right? Better outcomes in terms of lower costs, better return to work, better outcomes for our patients, and also consistent results across different study designs in different jurisdictions. These were done, again, a few in Washington State, several here in Utah, California, and then Ireland. But there's still a need out there. We need more information on treatment sequencing, more information on other body parts and other disorders, and the big question, I think, is how can we go about doing this? So that's it for now, and I believe we're open for questions.
While Bob moves his way to the mic, I'm going to point out this graph. You don't have to read the graph in fine detail on this slide, but think about what it should look like versus what it does look like. Personally, I don't think we should have everybody all the way to the right in a vertical line, but this is like a buckshot cloud, and the take-home message I'm taking from this is that we're lucky if 20% of people get evidence-based care for their spine. In this case it's Utah data, but I'm quite confident other areas of the country are not doing better. I might be wrong, you know. You could all prove me completely wrong on that one. Bob? Yeah, thanks. It's good to see some evidence about effectiveness coming out. So I have two questions. The first one has to do with the implementation problem. As I think we all know, the electronic medical record has the capability of having decision-making alerts, and ACOEM actually produced some clinical decision support for primary care providers, which is another point I'd like to bring up. That said, anyone who works in the real world recognizes that many physicians hate those pop-ups because they get in the way of clinical flow. So on one hand, there's the opportunity to provide just-in-time decision support to make the right decision. On the other hand, many institutions that have instituted perhaps too many of those find a lot of pushback, and they take them out of the EMR. Any thoughts? Now, I did talk to a gentleman yesterday who said that their institution had started to link UpToDate with diagnostic codes, and their physicians actually really liked that. I just wonder about your thoughts on this opportunity with EMRs to really bring the guidelines to the point of care in a way that's more successful than a reference on a computer you have to go to, et cetera. So that's my first question.
Yeah, great question. I think, like Matt was talking about, implementation is challenging. A lot of people are struggling with that, and with finding the balance between being annoying as a pop-up and being a resource that supports the art of medicine. I know a number of groups that work with MD Guidelines have integrated the data into their electronic records so that there are small amounts of information, like the recommended duration, and then there's one more click to launch into more detail. And then you can drill down into the ACOEM guidelines about the evidence, and people use that to copy and paste evidence for reports and things like that. So it's about finding that balance, and it's also a lot of front-end work to set up the records that way. You really have to have a champion in the system to say, we really want this information, it's super valuable. And then the second barrier is how we get people to use it. So I think you have to have a lot of top-down buy-in, and I certainly know that there are a lot of different personalities around those. And adopting guidelines, like Kurt was saying, takes 17 years. Even when you're trying, it's taking six-plus years. So I think getting people excited about the evidence helps, and some of our studies found that, pointing out that using the guidelines makes a difference. Getting people excited about the evidence and understanding it will help other groups. I know ACOEM is obviously very excited about using evidence-based guidelines, but other groups may not be, so linking it to other data sets, like UpToDate, where people are getting information they really care about, can help. Bob always asks these visionary questions, which I love. I'm a fan of his. I think that this is really a critical thing. Let me add one element and one more level from my own experience.
So obviously, I had a front-row seat to look at all the evidence, and you would have thought I could easily have changed as I went. What I found out is, of course, I had this rote pattern, a cookie cutter, when a patient came in, and bang, I was doing the same thing. I had to literally mentally stop, stop. And eventually, diagnosis by diagnosis, I knocked them out, the more common ones first. But this is because about half of what had been taught to me was just factually inaccurate. It was just not correct. So remember that part of it: the evidence itself being a source of conflict, and then the context. I think electronic tools are going to be one solution for sure, but if we don't deal with the context adequately, we're not going to get over this very easily anyway. So, a vision for the future, and some of this is based on my experience writing guidelines myself, not with the same methodology, not as rigorous as yours, but I think it provided some useful opportunities in terms of implementation and also further development of guidelines. When I was involved in doing this, I thought that rather than a flowchart you needed to follow, or looking something up, et cetera, you could put together what's now very easy in a medical record: an order template. And you can choose, based on guidelines, what to put on that order template. In fact, almost all EMRs and institutions have order templates for certain kinds of things, like pre-op testing or whatever that might be. So one thought, rather than clinical decision support in the usual way it's provided, is to build order templates that have the recommended things to do. You can override it, but it's right there, because what matters most, and it even mattered to you, Dr. Hegmann, is the ease of actually getting through the appointment.
Just click, click, click, or even an opt-out. It's already there. All you do is click, and there's the order set, already done. The second piece, and I'd like a reaction to both, is that when we wrote guidelines, we realized that what goes on in the exam room has only a minor effect on the overall outcome of a workers' comp case, at least for one chunk, the most expensive workers' comp claims. A lot of what impacted the outcome is what happened at the workplace. And so when we put together guidelines, we wrote them not only for the physician, but also for the employer, with a heavy emphasis on the role of the supervisor, and for the claims manager, because they also can have a big impact on the outcome. So that answered the first question, and the second question is kind of a vision for the future of the ACOEM guidelines. My encouragement would be to think about broadening beyond the role of the provider to the roles of other key components in the workers' compensation system. Question number two I will pass right away to Lucy Shannon. She helps. We meet every week to talk about things, and this is the kind of thing that is great for discussion and to think about, and I appreciate that. I do think that there are significant differences by jurisdiction that come into play with this, and so we'd have to work through that. But again, I think it's a good idea to talk about. Let me take the first crack at the other question. So say you have templates, and let's just say your practice is one of those on the left. Not this table, but remember the graph, right? And you're on the left half of it. So your standard low back pain protocol is number one NSAIDs, number two opioids at night and PRN for severe pain, and so forth.
Are you going to opt into a template which never mentions opioids, where NSAIDs are at the bottom, and at the top it talks about a progressive walking program? And next, you have to actually figure out what their directional stretch pattern is and prescribe that. What I'm getting at is, I think the answer is that we've got a significant educational piece. Because while I agree with that, I don't think it will work as a standalone; I think people are going to just ignore it and go right past it to their cookie cutter, which has no legs, no arms, and a decapitated head. Can I chime in a little bit, too? I also think that there would need to be a larger systemic change in terms of the time demands and requirements placed on providers, because this would take time to learn and to adopt. I think that it's hard; when I've talked to providers, it's the time demands and all the other competing things that go on that detract from actually being able to provide the best care for your patients. So I agree with what you say. However, what happens now, absolutely no question about it, is that within institutions, groups get together, look at the evidence, look at guidelines, and then develop templates. And along with that, there's a learning process, education, and so on, and then buy-in. I mean, it happens in institutions. You can't just say that all of a sudden it's going to happen everywhere, but it does get people, I think, to change more rapidly. Right. Right. Anyway. Yeah. It's a great discussion. In the exam room. Yeah. Great discussion. Thank you. You had it up there. Yeah, no, I was going to. Just to follow up on that, there are some organizations that have very effectively implemented clinical practice guidelines. Think about the VA/DoD guidelines that have been in place now since '98 and that are pretty much universally implemented.
People adhere 85% to 90% to those. And we forget how those were implemented. Originally, between 2000 and 2003, the EPRP program, the External Peer Review Program, hand-coded the CPRS record by the VA/DoD guideline, and then fed that back on a quarterly basis to the clinicians who were doing the work. Yes, there were those point-of-service reminders that grew by 2005, and click fatigue was a huge complaint. But the difference was the feedback to the providers. And by 2007, 2008, 85% of MI patients were on their aspirin and beta blocker. The crucial issue was feedback and the series of performance metrics. Dr. McClellan, you all have talked about the importance of a system. And the crucial thing in what we think of as occupational health care delivery, as T. Girardi pointed out the other day, is the financial system that drives it. Until we enlist the financial driver into the quality improvement process in a direct way, we won't get there. And so I wonder whether any of the state plans, California, Washington, Utah, Ohio, have thought about using a claims manager feedback process to feed back to the clinicians how well they're doing with these guidelines. Yeah, I think that's a great point. And I know with certain groups, they've tried to add the carrot: if you use evidence-based guidelines, we're going to make sure that your claim goes through faster or that it's automatically approved. I went to an earlier session that talked about using marketing to create a message for public health. So kind of doing that spin of place and price, making it fun. Not fun exactly, but making it so that people want to go use the guidelines, and making them easily available. And also, like you said, having the feedback loop. I think that's a great idea. I know some of the work that we do is about showing the impact, because it takes a while and a number of patients to really see them do well, and sometimes you lose them to follow-up.
So maybe you did a great job, but you'll never know, because they went back to work and everything was fine. So I think building those loops is a great idea, and creating kind of that viral thing of, I want to go use these guidelines because they're helping my patients. Having it individually driven, as well as those systemic changes to make it easier. So yes, a feedback loop would be great. I think incentivization is the common word around making systems function differently. Where's the incentive? Michael's very visionary, too, on this, and we didn't bring in this piece. I think the feedback is very important, and I'll give you one more anecdote. We've got Joe up for a question. I've got one major point on this. One on that. Yeah, OK. So in about 1995, they rated me, along with other people, de-identified, on how well I did at return to work. And I thought, oh, I do pretty well. And you already know the punchline: I had improvements to make. It's just one of those fascinating things, but it is part of the feedback process, and I agree that's an important point. Thank you for bringing that up. What I wanted to bring up here is, and I don't know if everybody will reach the same conclusion, OK? These are new data. I just looked at this last week, because it just came out last week, and I've not thoroughly dissected it. But 70% followed the recommendations for services. OK. If you looked at what I presented earlier about the ACOEM guidelines, what did it say for CBT? That was in the highest tier of rating, OK? What percentage are actually getting that? Less than half. Less than half. OK, how about exercise? And I'll accept either aerobic exercise or physical therapy. I'm going to be really liberal here. It's terrible. Aerobic exercise is 1.5%, and PT/OT is 4% to 5%. So if I want to be really generous and add those together at 6%, and I multiply 44% by 6%, I come up with awful. OK?
So mental health looks like it's really in trouble on an evidence basis. And that probably brought people to the microphone, too. So, Joe. Hey, thanks. Yeah, great stuff. Two questions. On the data, and I may have missed it, with the return-to-work successes, have you looked down the line? How much success do you have a year down the line, two years? Are you able to capture that? And then number two, I didn't see anything about litigation costs and impacts on that. So are you referring to any particular studies, or just in general? Yeah, so the return-to-work data is in there. There's a little bit going out, though not quite that far. And I don't think we have anything in terms of litigation costs. Yeah, well, in Utah, you virtually never have an attorney attached to an acute low back pain claim. It just doesn't occur. They would have to have moved from a state that uses that model, literally. So that's not going to be a significant variable in those data. It would be a variable in the California data, for sure. Which we didn't have. Which we did not have that variable, right. And because of the magnitudes of these differences, I'm not sure it would have had that large of an impact. But I think you're right; it's an uncontrolled variable. OK, good. And then on the return to work, I'm assuming that's to the same employer? Or could it be returning to the workforce? Acute low back pain will almost always be the same employer. There's almost no exception, because the timeframe, for at least the Utah studies, is immediate. Most of these people, if you treat them properly, will be back at work. So there are very few changes of employers. But there were, I think, a couple. Yeah. Yeah. Great. Thanks. Yeah. Thank you. I know you're almost out of time, so this is quick. I originally had a question: if 44% are getting CBT, why aren't the numbers even better in terms of return to work?
And you answered it by saying the median number of sessions is two. I would venture to say that when we do studies like this, we ought to look at whether these people are actually getting CBT, because I would venture that if you got two sessions, that doesn't count. So. Yep. I think that feeds into an online question that asked: are there studies about why providers deviate from the guidelines, whether it's patient expectations, lack of knowledge of the guidelines, or employer pressure? And I think that's a good point, and that's what we wanted to do in the next round of our study. I know there's a handful of studies that look at duration and return to work based on what type of provider you've seen. I've seen some of those, but I think there's definitely a need for more studies there. And I think that feeds into our overall talk: how do we get these guidelines into people's hands, and how do we get them to use them? Because if we're finding that only certain types of providers are using guidelines, we can change our messaging a little to make sure other groups are involved and can use the guidelines effectively in their own roles, whether that be NPs or chiropractors or other types of providers. I still have patients who have pain. Pre-guidelines, I had patients who ended up with chronic disability and dysfunction. I haven't seen a patient like that in at least three or four years, because by using evidence-based guidelines, I don't have that kind of problem anymore. The people are functional. So I guess one last question: how do these guidelines compare with the recommendations of ODG? Well, let me tell you. Yeah, so we use the ACOEM guidelines. MD Guidelines is responsible for publishing the ACOEM guidelines, so those are all done by ACOEM members, and they drive a lot of our tools. We also offer return-to-work guidelines that are based on national data, which is then teased apart by professionals from a variety of fields to account for those psychosocial factors.
Teasing those out gives you the actual recovery period. We call that the physiological optimum, and that's your best-case scenario: they got the right care at the right time, and everything went right for them. We also offer population data so you can understand what's going on with the good, bad, and ugly care in between. Those are all data-driven but reviewed by medical professionals. ODG just does national data. It just reports what's going on at that level and doesn't have that finesse of added review. Obviously, I work for MD Guidelines, so I'm a bit biased; that's my commercial interest. But methodologically, it's ACOEM versus just data. Yeah. On the treatment side, ACOEM is more refined. Yeah, exactly. We've got a methodology, and we follow the standards in the industry. We go to Guidelines International Network meetings and try to keep everything totally up to date. If I'm going to cut to the chase: it's the old question of what do you want? Do you want the precision of a treatment to get better faster, or is generic OK? Well, I think precision and detail is really the ultimate answer, and that's where I think we've excelled. It's not just "exercise" or "keep active" or "regain activity" or something. The description actually says: this is what you need to do in terms of aerobic exercise and directional stretch and CBT, what works, and how it compares to other things. So I think your summary of the detail is exactly correct. And the transparency: all the ACOEM methodology is out there. You can go read it and follow along. Cool. And that's it. We're out of time. Thank you for sticking around for this last one. I appreciate it very much. Thank you. Thank you. Cool. Nice job. Yeah. Thanks for inviting us.
Video Summary
In this video presentation, Dr. Kurt Hegmann, Carrie Wisner, and Matt Thies discuss the use of evidence-based guidelines in occupational health and the challenges of implementing them. The speakers highlight the importance of guidelines in improving patient outcomes and reducing costs. They focus on the ACOEM (American College of Occupational and Environmental Medicine) guidelines and discuss the need for healthcare professionals to follow them for better results. They share data from various studies that demonstrate the effectiveness of guideline adherence in areas such as low back pain, mental health, and opioid prescription. The speakers also emphasize the importance of feedback and education in promoting guideline use. They discuss the potential use of electronic medical records (EMRs) to provide clinical decision support and integrate guidelines into the point of care, and they suggest creating order templates based on guidelines to simplify implementation for healthcare providers. The speakers acknowledge the challenges of changing provider behavior and the need for continued efforts to improve guideline adherence. Overall, the presentation underscores the value of evidence-based guidelines in occupational health and encourages their widespread implementation.
Keywords
evidence-based guidelines
occupational health
implementing guidelines
patient outcomes
ACOEM guidelines
healthcare professionals
guideline adherence
low back pain
mental health
opioid prescription