AOHC Encore 2024
117 A Call for Research: How to Start a Study
Video Transcription
So, yeah, our session today is called A Call for Research, and we have nothing to disclose. But before we get started, I want to tell you how this session came about. I have the honor of being here with my two esteemed colleagues. I have listened to these two through the years, and they've inspired my research and my work. I am now council chair of External Relations and Communications, and as part of that role, I also run this cabinet of councils. We get together here once a year and then periodically throughout the year to see what's going on with each of our councils. This idea was created by my predecessor, Dr. Kessler, who thought, you know, we do a lot within ACOEM, but we don't always know what each other is doing. There's a lot going on, a lot of overlap that can happen, a lot of common need. So we were talking at one of the meetings during the winter, and the three of us had a very lively discussion, which actually continued after the meeting, about how much we need research in occupational medicine, how important it is, and how we can help our members to do it, to think about it, and to add to the body of occupational medicine knowledge. This session arose out of that, and we thought this was a good way to present it to you. Special thanks to Erin Ransford. Many of you might know her; she's the person who keeps this whole thing running, the one who tells me, okay, it's time for a meeting, let's get everybody together. So I wanted to give a shout-out to her as well. Within the course of our practice, we see lots of different patients, and there are so many opportunities for finding out more. I want you to think about that as you go through this session: all the people you've seen, what you would have liked to know about them, and how we can improve their care. That's really a lot of the inspiration for research. I won't read through the objectives, but I'll give you an idea of how this is going to go. Dr. McKenzie will give you an overview of research design. She gave a talk a while back on how to study a study that was very well received, and we thought we'd do an abbreviated version of that. I basically asked her to teach all of epidemiology in 30 minutes, so she's going to do her best. Then Dr. Nabeel will tell you what's going on with the Council on OEM Science and about some current research that might give you ideas, and maybe you can get involved. And then I'm going to tell you about a project I'm doing and also try to inspire you about why research, if you haven't already come to the conclusion that it's good for your own mental health and career. So, yeah, without further ado, objective number one.

Thank you, Dr. Kowalski, and thank you for inviting me to participate in this session. My name is Judith McKenzie, as Dr. Kowalski said, and I'm going to talk to you about study design, writing a paper, and critically evaluating the research that's out there, including yours. Research, study design, and contributing new knowledge to the literature are important. We see a lot in our practice, and we probably think: how can I write this up? What system would I use to submit it to JOEM or JAMA or some other journal? How would I go about doing that? We also need to review critically what's out there.
You know, someone may say, oh, I found the cure for AIDS, and we think, oh, that's wonderful, it's all over the news. And then when you read the article, you realize that there are a lot of flaws and that's not really what happened. So it's important to be able to review critically, so that we don't accept something at face value. And it's good for our daily work, for problem-solving, and for moving our field forward. I'm going to start with writing your manuscript: the history of the scientific paper, a definition, how to organize it, and the language of the scientific paper. The first journals were published in 1665, in France and England: the Journal des Sçavans in France, and the Royal Society of London published the Philosophical Transactions. I'm kind of a history person, so I like to throw a little history in there. Since then, journals have been the primary means of communication; thousands are published each year. In the 1850s, work came to be critiqued more. Louis Pasteur had his experiments, and peers were saying, well, how do we know this? How do we know that? He had to give more detail about what he did so other people could reproduce it, and that started the tenet of reproducibility in medicine. Suddenly there was an outpouring of people writing papers, and paper was scarce and expensive, and papers were taking up too much space. So the IMRAD format was created to reduce redundancy and shorten papers. IMRAD means Introduction, Methods, Results, and Discussion, which I'll say again later. Now we hear about reproducibility, that everything has to be reproduced. The National Academies addressed this a few years ago, in 2019, concluding that replicability and reproducibility are useful, but there are other ways to gain confidence in scientific knowledge. If evidence from different areas points to the same thing, that's one way to gain confidence; you don't necessarily have to reproduce things exactly. So reproducibility, which we learned in school, has not gone by the wayside, but there are certainly more robust ways to gain confidence in science over time. A scientific paper must be published in a permanent format. You can write a beautiful paper and stick it in your folder, but it's not really a scientific paper, because it's not published; no one can see it. It must be retrievable, published once in an appropriate forum, and written and published describing your results: an effective first disclosure. We have the poster session later today, where attendings and residents present the first disclosure of whatever they found, so you can visit that. That's an effective first disclosure. Others should be able to see what you did, repeat the experiments, and at the end see whether the conclusions are justified. Did you really find the cure for AIDS? The format is introduction, materials and methods, results, and discussion, and it should be logical, clear, and precise. The intro provides a rationale for the study, usually with background information. A lot of people start the intro while they're doing the research, so you might take notes as you go. In the old days, when I wrote my thesis in college, I had little index cards in a long box. We don't do that anymore; you can keep your notes in a Word document, so make it easy on yourself. And if there's preliminary work, you may just want to say that.
And if you're using specialized terms, define them. I gave a talk this morning and used terms like N-O-O-W, which means never out of work. You have to define your terms, because people can't read your mind. In the introduction, you want to be brief. What is your objective? What is your aim? You want to say what the problem is, whether there are gaps in the literature, and how you will address those gaps. In the context of what's out there, where do you fit in? What are you offering? Materials and methods: the study design, the population. Is the population the group of people in this room? Is it schoolchildren? What is your population? Talk about the statistics you used. When I write papers, sometimes I mix my results into my methods, so go back and double-check everything. It's like learning to write your first SOAP note, where you mix up subjective and objective, or at least I did. Then you become facile at it and no longer do that. And describe enough to be reproducible, as I said before, although reproducibility has a more robust definition now. The results section is the most important, because this is new knowledge. Sometimes that's a little scary to write, because you think, oh, I just discovered this thing. Should I tell people about it? Be confident: this is what you found, and you used robust methods, so go ahead and tell your new knowledge. It probably won't be as long as the rest of your paper, but present what you found. Do your tables and your figures. We may have a tendency to make a table and then write two paragraphs about it. The table should be self-descriptive; write a little about it, but don't repeat everything. That's redundant, and it takes up more space. Try to be as clear as you possibly can, and don't repeat things between methods and results and so on. Also, don't interpret your results in the results section; that's for later. The data should be in line with the story you're telling. Sometimes when we do our research, we get all this data, and it's a lot of work, and we think, I don't care, I'm putting everything in. Well, that's nice, but your reader is going to get very confused. You may have a lot of data, but streamline and present what your point is. Don't distract the reader or yourself; maybe hold that data for another paper. And if it's irrelevant, don't give it. Discussion: discuss the results, don't rephrase them. That's also difficult to do, I think. And if something doesn't fit, don't just exclude it because it doesn't fit, because then the reader will ask, where did that go? People are going to critique your paper, and you're going to critique other people's papers, so just point it out. Show how your work agrees with other work, and talk about the limitations. It's always nice to read a paper where the authors discuss the limitations before I find them, and I think, oh yeah, they found that. In general, there's no perfect study. And don't go beyond what your data show. You didn't find a cure for AIDS, so you don't say you found a cure for AIDS. Be clear and succinct. What is the evidence for your conclusions? And check that your conclusions match your objectives.
Sometimes at journal club with the residents, we look at the conclusion and the objectives, and they're totally different. The authors say, this is what we planned to do, and then at the end they didn't do it, and they talk about something a little different. That's one of the things you want to be careful about. You have a storyline; follow your storyline, write about that, and tell your story to your reader. In terms of language, it's very hard to write short, I think. I had a friend in college who could write a paper the night before and get an A. I could never do that; I had to write and edit and rewrite to get where I needed to go. So try to be direct. You may start roundabout in the beginning, but be direct. Use your key terms over and over, not elegant variation. You may recall in English class your professor said you need to use synonyms. Not here. You stick with what it is. This is scientific, technical writing; you want the reader to move through as smoothly as possible. And write short, as I said before: slash and burn. Write long, then write short. Unless you're amazing and can write short immediately, then that's good too. Don't make the reader expend too much energy figuring out what you're trying to say, needing too many cups of coffee to get through it. It may be beautiful technical work, full of big words, but if it's too hard to grasp, it doesn't help. So help us out when we read your paper. Transitional words are good, and there are a bunch of them. Between sentences you might use first, second, then, also, in addition, furthermore, nevertheless, therefore, in contrast. Within sentences: and, whereas, because, since, although. Others are phrases like in general or surprisingly. My epidemiologist doesn't like the word surprisingly in scientific writing, so I tend not to use it around her, but there are words you can use. Be prepared to revise. Writing a paper is not easy. It looks easy when someone else writes it, and as you authors know, it's not. Cut down on long words for easier reading. Be brief; this takes effort. Think of the reader and cater to the reader's demographic. If you're writing a paper for occ docs, that's fine, but if you think people from other specialties are going to read it, think of your audience and define your specialized terms well. Tell the story of your work, how it answers the question, and how it fits into and contributes to other people's work. So that's the first part of what I'm going to talk to you about. It's very brief, but those are tips on writing a paper. Now I'm going to shift gears a little and talk about critical review. You write your paper, and now you get to review other people's papers. So this is the critical review piece. Take a deep breath. How do you organize a critique? There is a structure that I use and that I teach our residents. Look at the title and authors. What's the hypothesis, the aim? Look at the methods. What is the study design? What's the population they're using? How are they collecting the data? And what are the statistics? Are the statistics appropriate for what they're doing? You may want to consult a statistician, if you have a friend who is one, or maybe Google it or ask AI nowadays.
Look at the results, the authors' conclusions, and then the strengths and weaknesses. Is the study internally valid and externally valid? I'm going to talk about how to look at those things. Looking at the article: is the journal peer-reviewed? Is it NEJM? Is it JOEM? Is it JAMA? Or is it a trade journal that may not be peer-reviewed? Do the authors have a history of research in the area? Are they people who've written on COVID for a long time, or on lead, or are they brand-new researchers? This may give you an idea of how to approach the paper. And is there a reason for them to be biased? Is there funding from somewhere that could affect what they're saying? Those are things you want to look at. The research question: what is the aim, the objective, the goal? Is it descriptive? Is it explaining why, if A happens, B happens? Is it predicting that if A happens, then B will happen? Is it to show impact, to show that something is twice as likely if something else happens? Is it to show effect or cause? All of this depends on the study design, because only certain designs can show these things; we'll talk a little more about that. You may read articles where there's no aim, no objective, no goal, and these tend to be hard to follow. Think about that when you read articles, and make sure that's not you. So what are the hypotheses? Are they reasonable? How will they be addressed? And is the study designed to address the hypotheses that are put forth? Study design. You want to see: what is the study design? From least strong to most strong: a case report, which is one case; case series; cross-sectional; case-control; cohort; experimental. I'll go into a little more depth. I didn't mention ecological studies. An ecological study is one where, geographically, there's an event, and you say, because this area has this exposure and most people there have this outcome, there's a cause and effect. There's an area of the world where a lot of babies are being born and there are a lot of storks; therefore storks bring babies. Not necessarily. Maybe they do, but you can't say that just because the two occur together geographically. Case reports, case series, and cross-sectional studies are called descriptive studies. Case-control and cohort studies are called observational analytic studies. And the randomized controlled trial and the before-after study are called experimental studies. So what is a case report? A case report is a careful, detailed report of a single patient. Can anyone think of a famous case report from history? Perfect, yeah. So that's a case report. A famous example is PCP found in young men in 1981. There were five cases, really five case reports, and that was sort of the beginning of discovering HIV and AIDS. A case report describes an unusual medical event. Like you said, rabies is a little unusual, but certainly that presentation was unusual, so that was a good case report. Sometimes it can give you a clue to a new disease, like PCP and HIV. A case series is a group of case reports combined. That picture you see is actually Kaposi sarcoma, also from the HIV epidemic, when Kaposi was occurring in young men; it usually occurs in older men, and that was very curious. So that helped us toward a definition of AIDS. A case series documents unusual medical occurrences. Here we have angiosarcoma and vinyl chloride, you may recall that exposure. And also COVID-19: the first cases near the Wuhan market, which then snowballed across the world.
So, cross-sectional studies. I like cross-sectional studies because you can send out a survey; it's pretty inexpensive, and if you get a good response rate, you can feel pretty happy about that. But there are limitations. A cross-sectional study looks at one point in time, so the exposure and the disease outcome are determined at the exact same time. Is it the chicken or the egg? Which one came first? With one point in time, there's no temporal relationship. And it generates what we call prevalence data, not incidence data; it's prevalence survey data. But the thing is, you can study more than one outcome. You can ask as many questions as you want, hopefully not such a long questionnaire that no one answers it, but enough to hone in on what you need, preferably a questionnaire that someone has already validated, unless you want to make up your own and pilot test it; it's easier to use one that's validated. It's inexpensive, and it's good for raising a question. But you'll have recall bias. If your baby is born with an abnormality and you're asked about your exposures, you think of every single exposure; if your baby is normal, you had no exposures. So there's recall bias, and there's non-responder bias. If your response rate is 10%, that's not good: why didn't people respond? Sixty percent is considered okay, and higher is even better. Case-control studies: the hallmark is that they begin with people with the disease, the cases, and compare them to people without the disease, the controls. So you have cases, and you compare them to controls. I'm sort of blowing through this, forgive me. The cases have the disease or the outcome, the controls do not, and then you look for the exposure retrospectively, after you find the cases and the controls. It can be done quickly, you may need fewer subjects than in a cross-sectional study, and it's less expensive than a cohort study. I find these difficult to do personally, but they're all over the literature. Cohort studies. Cohort studies are kind of cool to me. They're analytic and longitudinal: you get a group, a cohort, and you follow them over time. They share the same exposures, as in a coal miners study. You follow them over time, and they're classified by presence or absence of the exposure. The researcher does not control the exposure, but you know the temporal relationship. They start working at this place May 1st, and ten years later these things happen, so you can say the exposure led to this, more or less. There's no randomization. There are different labels for cohorts. There's a prospective cohort: you start today and follow for ten years, or for two months, or I take you all as a cohort, give you a questionnaire, and follow you up over time. There's a retrospective, or historical, cohort, where you start today but look back at the exposures of the people who were working in the plant and bring them forward to the present. Or you can have retrospective and prospective together. You can also have a closed cohort, where I'm studying you all and no one else can enter, or a dynamic cohort, where I'm studying this group but people can come in and leave over time. Experimental studies: the investigator controls the exposure, as in drug trials, clinical trials.
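Before going deeper into experimental studies, here is a minimal sketch, with entirely hypothetical counts, of the measure of association each observational design yields: prevalence from a cross-sectional survey, an odds ratio from a case-control study, and a risk ratio from a cohort. The function and numbers are illustrative only.

```python
def two_by_two(a, b, c, d):
    """a = exposed with disease, b = exposed without,
    c = unexposed with disease, d = unexposed without."""
    prevalence = (a + c) / (a + b + c + d)      # cross-sectional: how common is the disease now?
    odds_ratio = (a * d) / (b * c)              # case-control: odds of exposure, cases vs. controls
    risk_ratio = (a / (a + b)) / (c / (c + d))  # cohort: risk in exposed vs. unexposed over follow-up
    return prevalence, odds_ratio, risk_ratio

# Hypothetical counts: 40 exposed cases, 160 exposed non-cases,
# 20 unexposed cases, 780 unexposed non-cases.
prev, or_, rr = two_by_two(40, 160, 20, 780)
print(f"prevalence={prev:.2f}, odds ratio={or_:.2f}, risk ratio={rr:.2f}")
# prevalence=0.06, odds ratio=9.75, risk ratio=8.00
```

Which measure is valid depends on the design, exactly as the talk notes: a case-control study, for instance, cannot give you prevalence, because the investigator chose how many cases and controls to enroll.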
The RCT is the gold standard, right? There's also the before-and-after study, which we tend to do in occ med, because you can't exactly say, you work in the bad part of the plant and you work in the good part of the plant. Instead you say, historically this happened, and now we have this, like the vinyl chloride and angiosarcoma study. That was a before-and-after study: once they stopped the releases of vinyl chloride, the angiosarcoma rate went down pretty much to zero. So that was a good before-and-after study. Those are done a lot in occ med, because that's more ethical than randomizing people to an exposure. But certainly in drug trials, randomization is good. In experimental studies, the groups are assigned at random, so there's control for confounders: theoretically, if you randomize, the confounders are randomized across the groups as well, so you don't have to worry as much about confounding. I'll talk about that a little more later. But experimental studies are expensive and time-consuming. The Framingham study is in its third or fourth generation right now, I think; the original participants have probably passed away, but it's still going on. Experiments are hard in an occupational setting, as I said, and assigning the exposure may be potentially dangerous; you have to be careful. So when you read an article, ask these questions. Does the design make sense? How will the study be conducted? Who will conduct it? Where will it be conducted? In the field? In this room? What are the strengths and limitations of the study design, and how would they affect the conclusions? Does it make claims the design can't support? For example, from a case series you cannot claim cause and effect. From an ecological study, you can't say storks bring babies; you can say there's an association, but you can't really say anything about cause and effect. So depending on the study design, be careful when the writer comes up with, yeah, I found the cure for AIDS, when it's not necessarily supported. And again, the study population is defined at the start of the study. You have inclusion and exclusion criteria. Say you decide, I'm only going to include these people and exclude those people, and then you don't have enough subjects, and you think, let me go back and change my criteria. You can't do that. You have to stick with what you have and keep going, so that the reader sees it's logical and clear and stays the same throughout. Then, do the subjects represent the population they're supposed to represent? Does everyone in this room represent everyone at AOHC? Can I study this group and say, AOHC people did this and that? Are you representative? We need to know that. Was a comparison group used? Are there enough subjects for the study to have adequate power? With a larger sample size, if you find a difference, it's more likely to truly be there. But if you don't find a difference, it doesn't mean the difference isn't there; your project may not have enough power. Data collection: what measurements are planned, who will make them, and what will be collected? Will a medical student collect your data, or a trained nurse or a trained doctor? All of these things you need to think about. Dr. Sackett, a clinical epidemiologist, said that too often the conclusion giveth, but the materials and methods taketh away. You have a great conclusion, then you go back and look and, oh, the study design wasn't that great and doesn't really match.
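Since the power question comes up whenever we ask whether there were enough subjects, here is a small sketch of a sample-size calculation, assuming the Python statsmodels package and entirely hypothetical outcome rates.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical scenario: we expect the outcome in 20% of exposed workers
# vs. 10% of unexposed, and want 80% power at a two-sided alpha of 0.05.
effect = proportion_effectsize(0.20, 0.10)   # Cohen's h for two proportions
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"about {n_per_group:.0f} subjects per group")  # roughly 100 per group
```

With only half that many subjects, a real ten-point difference would often be missed, which is exactly the trap above: failing to find a difference does not mean it isn't there.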
So whenever you read an article, be skeptical, because when you start discussing it with your patients and your colleagues, you want to be able to say, I really critically reviewed this article, and this is what I came up with. Do the tables and figures support the conclusion? Are the results appropriate for the study objective? And how do the results affect the conclusions drawn? Look at the statistical analysis: what was planned? What did they do? I work with a statistician. I did my own stats during residency, but I haven't since then, so I'm good. Although I hear that AI is pretty good at it now; still, AI is one thing, but how do you interpret the output? You still need the skill set for that. What are the conclusions? Are they justified by the results? How do the study limitations affect the conclusions? Are the results reproducible? And is the title appropriate? Now I'm going to shift gears a little. What is internal validity? I won't ask you, but it's there: when the results of a study reflect the true relationship between the exposure and the outcome, the study is said to be internally valid. So if you say storks are associated with babies, is that relationship true? If so, it's internally valid; if not, it isn't. I'll go a little further on that; I saw some quizzical looks on people's faces. An association may be valid but not causal. Just because you find an association doesn't mean cause and effect; storks don't necessarily bring babies. So there's an association, but it's not necessarily causal. When you look at internal validity, you want to ask: is the association causal, or is it due to chance? Is it just by chance that storks seem to bring babies? Or is it bias? Bias is a systematic error. I can be biased against something; I'm biased against shellfish because I know people who are allergic to it, but that's not bias in this sense. Bias is a systematic error, like using the wrong scale to weigh people. The scale in my doctor's office is a problem, because my scale at home is right, and when I go to my doctor, it reads higher. So I think that's a problem. That's systematic error. Confounding is error introduced by a third factor. I like confounding; I think it's really cool, and I'll talk about it a little more. Hill's criteria for causation: you can look at an epidemiology book and go through them for the article. Consistency, strength, specificity, dose-response, temporal relationship, plausibility. Chance is one possible explanation for the outcome, and you use statistical tests to determine whether the result is due to chance. Bias, as I said, is systematic error, and there are different types. There's selection bias: I do this cross-sectional study and select you all, but you're not representative. That's selection bias. Or I want to survey people about whether they go to the gym and are active, but I only survey people at the gym; that's selection bias, because I should be sampling a wider population. Information bias includes recall bias: can you recall exposures from ten years ago? Can you recall what you ate for breakfast this morning? So there are different types of bias. Now confounding, which I said I'd talk about: that's when a third factor is associated with both the exposure and the disease, so it seems like two things are causing the outcome and you're mixing them up. For example, in the early 80s, I think, a study came out saying that women who have a lot of kids are less likely to have breast cancer.
But when they went back and looked at the data, it was age at first child. It wasn't high parity; the causal factor was the age at first child. At the time, I had no prospects and no kids, and I thought, well, I'm doomed. But then when I learned it wasn't parity, I thought, okay, I'm doomed anyway, so it doesn't matter; I don't have to go have ten kids just to avoid breast cancer. Confounding comes from the Latin confundere, to mix up. Another example: gray hair and MI. Gray hair causes MI? Well, actually, a lot of older people have gray hair and have MIs, but it's other things, age and cardiovascular factors, that are in the causal pathway, not the gray hair. That's confounding. External validity is pretty easy: the extent to which the study population represents the population it's supposed to represent. Do you all represent all of AOHC? I would say no. If I did a study of just you and said this is all of AOHC, it would not be externally valid. And then finally, say you have this beautiful study, perfect in every way. Is it important? Is it going to change what you do? Is it consistent with the other literature? We talked about reproducibility and replicability. Will it change what you do in your practice? So, in conclusion: reviewing the medical literature is expected of us by our teachers, colleagues, patients, and families. Contributing new knowledge is expected too. It would be nice if we all published more, but everything takes time, and writing takes a long time and is not easy. Still, having a framework to critique what other people write will help you and add value to our work as OEM physicians. Thank you.

Thanks so much for being here. I have questions for you. By a show of hands, how many people are doing research? Oh, okay, I'm excited. And by a show of hands, how many people are clinicians? Ninety percent in the clinic. Oh my God, I'm blown away now. And you're here to look at the topic and figure out how you can contribute. I'm in your league as well: I'm a clinician, and 90, 95 percent of my time is spent seeing patients. So it's hard to do research. It's hard to ask a question. I applaud you for doing that. I want to make the case that the things you do right now in clinic can be translated into research, and that's what we need. That's the basis of this presentation. There are a lot of questions we're asking, and we're not asking them critically enough to get a critical answer and respond to the queries that are put to us. So why do we need original research? Why do we need research in our own domain? The important thing is that if you don't do research in your own domain, nobody else is doing it for you. No one else can work out the nuances of the specialty, the new conditions, the new things that are emerging right now. It's important for effective interventions: if you want to change something, you need research to figure out how to change it. And if you find an effective way to deal with things, you can actually impact policy, at the state level, the federal level, all the way up. And the best part, which I enjoy, is that you get healthier working populations, because you've done something that has a significant impact on those populations. So where do we start? People know about asbestos? Of course, show of hands, yes. A really important substance that induces pleural thickening and mesothelioma.
And the span over which it impacts people is about 30, 40-plus years, but that knowledge was gained and understood over time. So we need to identify old and emerging hazards. In particular, people have heard about countertop silica exposures? Of course. There is a cohort of workers who have been heavily exposed and have developed silicosis. If you, as a clinician, do not identify these cohorts, these small things, you are not able to help these individuals get better. And all of this is important because it leads to evidence-based change in the workplace and in policy. I'll give you an example. Recently, within ACOEM, some of us looked at the lead levels and petitioned OSHA to change them, because no level is considered safe, at least in my mind, and the research is showing that really well. But the problem is, how do you justify lowering the level to one that's acceptable to industry, to OSHA, acceptable federally? All of this requires research and understanding. The other important thing is how we improve workplaces. I gave the example of COVID. People who work at academic medical centers or hospitals have come across patients dealing with long COVID. There's disability associated with it, and there are issues with returning to work. These are important questions we need to ask. So a couple of us came together and published one of the guidelines on long COVID on behalf of the college. The paper came out, I think, last month in JOEM, the Journal of Occupational and Environmental Medicine. So think about these things. You are seeing them, you are dealing with them; think about how you can make things better for others. The other thing is long-term monitoring of your patients. People are impacted by climate change; there's an impact on exposures. How do you look at these cohorts or populations longitudinally? In my case, I look at the World Trade Center population, which has been affected and followed for 20-plus years. We collect information about them, and it helps us make policy decisions and understand how things are evolving. And there are lots of things people are doing. In the Army and in VA hospitals, there are burn pit registries you can sign up for. Why is that important? Because we need to understand what the exposures were and how they are impacting people now, and understanding improves as we learn more about the exposures that happened in the past. There are also economic benefits in improving the health of your workers. For example, if I lower the lead standard, I reduce the health costs for my worker population, and they are less likely to experience the toxic effects of lead. And the research you're doing, presenting, or bringing forward through your clinical work actually has a global impact. Take COVID again: COVID did not just affect us locally, it had a global impact, and the workers who were impacted were impacted globally. So the work you do here, and how you improve the lives of your workers here, can have translational effects across the globe. The last thing, which is very, very important, is education. Once you learn about something, you can educate policymakers, your workers, and your patients, and help improve awareness of a condition, particularly an occupational medicine related condition.
So how do we translate all of this, the things in front of you, into publishable research? Once you publish your first article, your first manuscript, that's a high, I think, that you will get; it's one of the most exhilarating things you can experience. And we have a journal right here, the Journal of Occupational and Environmental Medicine. So I urge you to think about it. With that, thank you.

Thank you. So, yeah, just as my colleagues emphasized, research is not easy, but it is very much worthwhile and very much needed. I don't know how many of you are familiar with Tait Shanafelt from Stanford, but he does a lot of research on wellness, specifically physician wellness and burnout. And this particular piece hit home with me for this talk: individuals who spend at least 20 percent of their professional effort dedicated to the activity they find most meaningful are at markedly lower risk for burnout. Hence, research. Just as Dr. Nabeel was emphasizing, when you see these people, you think, how can I make a difference? Small things can lead to bigger things, and just publishing that first article, getting that first study off the ground, is huge, so I want you to consider that. Funding always comes up; in my wellness work, they don't give me dedicated time. Well, sometimes you're not going to get funding or dedicated time right off the bat. You just have to start. There's that quote: start where you are, use what you have, do what you can; it will be enough. I always liked that one. It's hard, I know, but it's worth it. This is something that inspired me to start the project I'm currently working on. It was actually a paper Dr. McKenzie was on many years ago. As I started in family medicine and then went into occupational medicine, I was always frustrated during my practice that we didn't take the workplace into account in family medicine. I would get notes from family med docs and have no idea what my patients did. That article was based on the patient-centered medical home, which was rising in popularity at the time. It has since evolved into the project I'm working on now: integrating occupational data into the electronic health record. So back to the patients we see. We see all of these people, and they don't all just stay at work, or just at home, or just in public health. There's an intersection where we can reach them, and if you look at the workplace circle, half of it is public health and primary care. So we need to do outreach to public health and primary care. The research I'm on started with the Occupational Data for Health letter in 2011. NIOSH sponsored this letter, which basically affirmed that we needed to do this: incorporating occupational data would improve the quality, safety, and efficiency of care, reduce health disparities, engage patients and families, improve care coordination, and improve population and public health. It came up with ten recommendations to advance the likelihood of incorporating occupational data for health into the EHR while protecting privacy. And this has evolved into the Occupational Data for Health project. The big win we had over the past several years is that it's now a requirement in USCDI version 3: as of January 2026, all EHRs will be required to have interoperable occupation and industry information in medical records. So this is huge for research.
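As a concrete illustration of what coded occupation and industry fields make possible, here is a minimal sketch assuming the Python pandas library; the field names, industry labels, and counts are invented for illustration and are not drawn from any particular EHR.

```python
import pandas as pd

# Hypothetical extract of EHR records with a coded industry field.
records = pd.DataFrame({
    "patient_id": [101, 102, 103, 104, 105, 106, 107, 108],
    "industry":   ["Dairy Cattle Farming", "Dairy Cattle Farming", "Grocery Stores",
                   "Fire Protection", "Dairy Cattle Farming", "Grocery Stores",
                   "Fire Protection", "Dairy Cattle Farming"],
    "asthma_dx":  [1, 1, 0, 0, 1, 0, 1, 0],  # 1 = respiratory diagnosis on record
})

# Share of patients with a respiratory diagnosis, by industry:
by_industry = records.groupby("industry")["asthma_dx"].mean().sort_values(ascending=False)
print(by_industry)
```

Once occupation and industry are interoperable across systems, the same few lines of grouping logic work on any record set, which is what makes the USCDI requirement such a big deal for research.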
So now we can mine this data. This is something I want you to be aware of, and if you're looking for a place to start, this is really a good one. Again, you see a patient, you want to help them, and some things we don't know. So let's do a study on firefighters or grocery store workers or whoever you happen to see. The idea, and I don't have to tell you this, is that EHRs can provide tremendous benefits: you can identify risks, track patterns of disease transmission, track trends, and provide opportunities for intervention. The whole idea is that once we find these things out, we can create clinical decision support tools. Dr. Nabeel brought up asbestos and silica. How many of our silica patients are actually showing up in urgent cares, and does anyone ask them what they do? Maybe it comes to light when we finally get a chest X-ray, but if we can get clinical decision support in there based on the research we do, that's huge. I had the opportunity to write a position paper, this past summer I think it came out, with Dr. McKenzie and others, on ACOEM's support for incorporating occupational data for health. Privacy always comes up, and we addressed some of the privacy issues. We also advocated for other things that electronic health records should contain, not just in occupational medicine but throughout all of medicine. During my work with NIOSH, I had the opportunity to work on a pilot project, and this slide tells you how we started on it. Again, get the idea clear: what is it you are trying to show? Dr. McKenzie really highlighted this: what's the context, what's the whole point of the article or the study you're trying to do? When you think about the work we do, it's not just the exposures, it's everything, really. This slide shows how work affects your retirement benefits, your sick leave, your vacation, your health insurance. It affects everything. It's a social determinant of health, and that's where we went with the project I'm doing now: emphasizing how important a social determinant of health work is. And here's what they did: this was a federally qualified health center that had a lot of dairy and cattle farm workers as patients. The call to action, once they found that out, was to start onsite COVID vaccine clinics. They offered vaccines between work shifts, because a lot of their patients were farmers who couldn't get vaccines during working hours, and family members were invited. The conclusion, basically: knowing the risk of COVID for people with asthma and respiratory issues, the CASA staff found that occupational data for health guided preventive care decisions. And this was just with the data we had, which wasn't a lot. It was basically looking back at who had respiratory tract and asthma issues; then we had the occupation and industry, and we could see that a lot of them were dairy and cattle farm workers. So this was a work category that could be tracked. We're going to give you some opportunity to come up and ask questions after this, but I just wanted to highlight, again, that you see these patients in your clinic, and you want to help them. You want to find out the answers.
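To picture the clinical decision support idea mentioned above, here is a hypothetical Python sketch; the occupation list and the advisory text are invented for illustration and are not from any real CDS system or guideline.

```python
# Occupations with recognized silica exposure risk (illustrative list only).
SILICA_RISK_OCCUPATIONS = {
    "stone countertop fabricator",
    "sandblaster",
    "tunnel worker",
}

def silica_alert(occupation: str) -> str | None:
    """Return an advisory if the coded occupation carries silica risk, else None."""
    if occupation.strip().lower() in SILICA_RISK_OCCUPATIONS:
        return ("Occupation suggests silica exposure risk: "
                "ask about dust exposure and consider chest imaging.")
    return None

print(silica_alert("Stone countertop fabricator"))
```

The point is that a rule like this only becomes feasible once occupation is captured as a coded, interoperable field rather than buried in free-text notes, so that a patient presenting to an urgent care can trigger the same prompt anywhere.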
I know, for me, another research project I got to do: I was working in Pennsylvania back when the opioid crisis was coming to light, and I was seeing a lot of my patients on opioids, and that was very frustrating. It almost burned me out, but I had the opportunity to research it and then get involved with policy after that. So that's huge. I always like this quote: nothing has the power to broaden the mind as the ability to investigate systematically and truly all that comes under thy observation in life. So with that, we'll let you guys ask questions.

One more thing: use ChatGPT or whatever platform you use for artificial intelligence, put your question in, and ask the AI to create a sort of research protocol for you. You'll be surprised. You'll be amazed at what you can achieve with the little time you have. It writes the abstract, it writes the question for you, and then you can figure out whether it's the right thing to do or not. But that's a start. That's where the world is moving. We need to use a lot of tools, we need to understand how to do research, we need to understand the statistical methods. Things are converging in a way that has never happened before. The tools are in front of you, and most of them are right on your computer. That's it. Question?

Hello. Good afternoon. Thanks to all of you for your presentation. Much appreciated, and a great talk. My question is maybe a bit unique. I work in a major corporation, and my area of interest is the health and wellness of the blue-collar worker, in manufacturing and such. There's not much out there in terms of programs to improve or maintain the health of that population. But the thing is, major companies want to have a competitive edge, so sharing the knowledge from a research project can be a bit challenging, because sharing that knowledge is almost like divulging all the great stuff we have for our employees. Can any one of you, or anybody here, offer recommendations, or would anyone like to join me in a research project where two similar populations are looked at for physical wellness or mental well-being? Do you have any recommendations for industries that might not be so interested in sharing information they got from, let's say, a study?

That's a great question, and I'll start off; other people may have ideas. I was the residency director at Penn for over 20 years, and as our residents came through, they were required to do an epi project. And we had corporate projects that they could not publish. So before you start, I would talk to whoever you need to talk to, to see whether it's something you would be able to publish or not. But I feel your pain, and I think a lot of people have wellness initiatives that they just have not published, for the same reason we talked about. I know I did one, and I thought it was awesome. I wanted to publish it, but time is limited. So that goes even further to say: publish what you have. Even if it's a short communication, even if it's in the JOEM forum, which is not as stringent in terms of study design and is more a descriptive account of what you did, get it out there, because it will help other people.
I know I'm not really answering your question, but what you're saying is real; companies don't necessarily share. I do want to say one more thing, I'm really sorry, about ChatGPT. JAMA wrote an editorial a few months ago about how, if you use ChatGPT, you have to report it; you can't just add it as an author. And you have to be very careful with ChatGPT as well. I mean, it's an excellent suggestion, but from what I understand, there are hundreds, if not thousands, of articles being submitted to journals that are not written by humans. So that's another thing we have to be careful about when we read articles. ChatGPT is not always 100% accurate. It's a start, but I would be careful with it. Michelle, did you want to add anything?

I just wanted to chime in, as one of the residents Dr. McKenzie was talking about. With those projects, all is not lost when they're not published. Depending on whom you present them to, i.e., the C-suite, funding comes then, and then you can replicate the project in a way that can be published. So I would encourage that type of work as well. We did a lot of those as residents, and it's a great way to start, because they're your population; they're who you're seeing. When I was at Penn as a resident, I didn't have any connections other than them, so I started there and then got more funding as time went on. So you published within the company, or? No, presented to the C-suite. Okay.

And I just want to share my example. When COVID was happening, we designed and built a system to collect COVID information for the hospital. At the time, the hospital administration was not in favor of publishing that kind of data, the numbers. But what we ended up doing was publishing the methodology of how we built the system. So there are ways to go around it: not publishing the data, but publishing the methodology of how you constructed the work, which is also very important, because I can use your strategies, implement them in my population, and figure out what I can do next, right? So there are different ways of doing it. There's always a way, right?

Hi. Thank you for the enlightening talk. I have two questions. First off, my name is Chidima. The first question is for Dr. McKenzie. In your presentation, you mentioned that you should use key terms over and over again. Could you explain what you mean by that? What are examples of key terms? And my second question is about ChatGPT and using AI to write. I've seen some papers online with quotations from ChatGPT that weren't even caught during publication or journal review; you see a quotation and you clearly know a human didn't write it. So as a researcher, how do you find the fine line between authenticity and writing with ChatGPT or AI generally? I want to use AI to help me, but do I really trust it? Should I? That's what everyone is doing right now, but how do I see the fine line between authenticity and the use of AI?

I'm going to respond to the ChatGPT question, then let Dr. Judith McKenzie respond to your first question. So, do you use a calculator? Do you use Google? Yes. Yes. How do you vet the results? You look through what your results are.
Are you looking in the right direction? Is Google or the calculator giving you the right answer? All the LLMs you're describing, from ChatGPT to LLaMA models and others, are different chat models that people are using. The most important thing that has changed is that they provide you with a direction for where you want to go. For example, if I have a question about which statistical methodology I need to use, I can start with ChatGPT to explore that question. Can I use this type of statistical method, or might a different kind be appropriate? Geospatial statistics, for example, is different from regular standard statistics. Those are the kinds of starting questions where you can look to ChatGPT and other platforms. Then when I go to talk to my statistician friends, I can start with: well, I know a little bit. Am I on the right track? Can I move forward? How can you vet this information? So using the tool does not mean you rely on it completely and it writes for you. You use it to start connecting the dots. I say this because I do research in artificial intelligence; that's why I'm telling you to start working with these things. They're becoming as important as Googling something or calculating something with an R package or a calculator. Why not? But be deliberate about it. Do I believe every single thing ChatGPT says? No, because there's an element of hallucination in generative AI, which is well documented and well known. You need to figure out and decipher what's right. That's why we're humans: we want to harness the capability of AI.

That was very helpful. Thanks. I think the second part of your question was how to discern among the articles that are published? Yes. Yeah. Because there have been articles published where the references don't line up, or a reference is wrong. I'm no expert on ChatGPT, but just from being on the board of JOEM and trying to stay abreast through JAMA, I would probably try to focus more on reputable journals, and not just when reading. There are a lot of predatory journals out there. Recently I was at a talk at ELAM with the editor of JAMA, Dr. Domingo, I think her name is, and she gave a list of predatory journals that publish very many papers whose information is not reliable. So you have to be discerning, like Dr. Nabeel said: be discerning for yourself. Don't believe everything ChatGPT tells you, because there's hallucination, but also be discerning about the journals and what you read. But your other question was, what are key terms? Okay. In English class, you're told to use synonyms when you write: you've used a word once, so you find a synonym and don't use it again, right? In technical writing, you want to use the same words over and over. So if you're writing a paper about COVID, you may have COVID-19 as one of your key terms. Whenever you publish, the journal asks, what are your key terms? COVID-19, and, help me out, what's another keyword? And you stick with those. I'm trying to find good examples. Most of the journals basically ask you for keywords.
But you put in the keywords. I think you just wrote an article? Yeah. Some of my keywords were future of work, AI, use of AI, and COVID-19, actually. So, some of the keywords. Now I understand. I didn't understand it when you presented, but I get it. Yeah, that's basically what it means: what are the keywords you use over and over in the paper? Oh, that makes sense. Okay. I understand it now. Thank you. And that's how you get indexed, basically. If I have an article on long COVID and I use that keyword, it goes into the long COVID pool that people can retrieve from. If I work in AI and LLMs, the article won't be grouped with machine learning models, because those are distinct areas to write papers on. Yeah, I understand that. Thank you. You're welcome. I wanted to add one thing about keywords, which also inspired the research I'm doing now, because I did an earlier paper that involved searching keywords in PubMed. As many of you who have done that know, it's not easy; sometimes it comes up with stuff I didn't ask for. The keywords don't always pull up what you want them to. That, again, is where occupation and industry as interoperable, computer-understood terms are going to be huge: when I search firefighter, it's not going to pull up, you know, a park ranger. With interoperable terms, we're getting there. And like Dr. Nabeel said, the more you put in, the more you advance the science. Any other questions?

The other thing is, look at your domain and see where you are. You all are experts, and you have done this so many times. Clinical medicine comes easily to us, and we feel very confident treating our patients; we don't think twice about prescribing or making a plan for them. Research is at the same level. You're an expert already; you just have to find the question and try to answer it. It could be a small question. But if you start with that, there's a next level, and people come together and understand what you're trying to do. At least I believe that, as clinicians, we are among the most influential people for researchers, because they look to us for questions, and they can help us answer those questions as well. Thanks.

Yeah, just to start: if you are working in a clinic and have a case that is interesting, or you want to do research on a certain population, whom do you approach first? If you're in a clinic, do you approach your chief medical officer to ask about starting research, or what do you do?

Yeah, it depends. If you're in a large health center, there might be an institutional review board. I don't know that you necessarily need to ask permission; if you do, you do. But if you think of a question, you may want to collaborate with someone and come up with your study design, and then you can apply to the IRB. You may not have an IRB if you're a freestanding clinic, in which case you may want to partner with someone in academics. You may want to call Dr. Kowalski or Dr. Nabeel or somebody in your area. I don't know what state you're in. Washington State. Washington State: there's the University of Washington, and we have occ docs there who are academic. So you could partner with them and get the IRB through them.
The IRB actually reviews what you're doing and tells you whether your research is exempt, in which case you can run with it, or whether you need to do certain things in order to move ahead. You don't want to just publish something; in any event, the journal would ask, where's your approval, right? And I think having to get IRB approval is helpful, because they have a standard application, and that forces you to think about your question and how you will attack it. So that would be the first step. Yeah. When you're a resident, you have the IRB in your residency program, so it's easy, but when you're at a freestanding clinic, you need to approach somebody in academics. Right.

So there are two types of projects. One is quality improvement work, where you're improving your practice, and one is more quantitative, patient-based research. For quality improvement, you can reach out to your administrators and say, hey, I want to improve the quality of care for this subset of the population, and you can do a simple collection of information and understand how the procedure or drug or methodology affects this population. You could also do a case-control study or other designs. So there are different ways of going about it. Whatever your practice is, you are able to collect information, but you need to vet it in ways that are acceptable to everybody, so you know what you're doing. Quality improvement versus research is a distinction you have to account for, and if it's human subjects research, there's definitely an IRB involved. Most retrospective analyses are exempt, because you're not harming anybody. The other thing I would highly recommend: if it's a data set that's been published by the federal government, publicly, without any identifiers, most of that research is exempt from IRB review. That's a great point. I neglected to talk about QI projects. For me, having grown up in academics, even if it's a QI project, I go through the IRB; I'm not taking chances, and most of what I do is exempt. However, if it's quality improvement or administrative, you don't necessarily need IRB approval. It's hard for me to advise you on that, because I don't know your situation, so maybe one-on-one. But that's absolutely right: for quality improvement, you don't need an IRB; for human subjects, yes.

I just wanted to give one more shout-out to ACOEM. These are some of your benefits of membership: if you're not sure, the listserv is very helpful, and within your component, like Western Occupational and Environmental Medicine, there are people you can reach out to. If you have questions, I myself would be happy to help; I'm sure these guys would too, and so would lots of people in ACOEM. So thank you. Great points. Thanks so much.
Video Summary
In summary, when starting a research project, choose key terms and use them consistently throughout your paper for better searchability. Publish in reputable journals, and be transparent and discerning about AI-generated or ChatGPT-assisted writing. Involve an IRB for ethical approval, especially for human subjects research. And remember, resources like ACOEM and academic partnerships can provide valuable support and guidance throughout the research process.
Keywords
research project
key terms
searchability
reputable journals
authenticity
AI-generated articles
IRB
ethical approval
ACOEM
academic partnerships