AOHC Encore 2023
320 Important Elements to Writing, Reviewing and Submitting a Manuscript
Video Transcription
And thank you so much for coming to this talk. We're going to talk to you today about elements to consider when writing, reviewing, and submitting a manuscript for publication, hoping that you will consider submitting or resubmitting to JOEM and, of course, any other journal that you'd like. We're going to look at the perspective of the writer, the reviewer, and the editor. My co-presenters are Dr. Jonathan Borak and Dr. Paul Brandt-Rauf. Dr. Brandt-Rauf was not able to be here today, so I will be presenting his slides. So I'm going to start, let's see. Again, my name is Judith McKenzie, and I'm a medical officer at OSHA and also a professor at the University of Pennsylvania. Dr. Borak is a clinical professor of medicine at Yale. And Dr. Brandt-Rauf, as we all know, is the JOEM editor and a distinguished university professor and dean of the School of Biomedical Engineering, Science, and Health Systems at Drexel. Dr. Borak and I are on the editorial board for JOEM. So why this session? We're doing this to offer guidance on writing a manuscript for publication. You may have already written many papers, or you may not have written any, and this can serve as a roadmap for you. What does the peer review process look like? How do you interact during the peer review process and the publication process? So we're going to have three sections: writing the manuscript; what is peer review, by Dr. Borak; and then the editor's perspective from Dr. Brandt-Rauf, meaning I will be presenting those slides on his behalf. Okay, so first, writing a scientific paper. I have no disclosures to make. Oh, the slides are not moving. Hello in the back, thanks for letting me know. We need some technical help, please. Okay, so I'll go back quickly. That's the first slide that you saw, I think, with the speakers and why this session, which I talked about. Section one is writing the paper, no disclosures. So I think we're caught up. Thank you for telling me. Okay, so in this first section, we'll talk about the history of a scientific paper, the definition, what language to use when writing a scientific paper, and how to organize it. What are the elements, and how do you organize them? So a scientific paper is a written, published report describing original research results. It's original research results, not a rehash of what's been said before. And it's published somewhere, so it's in a permanent format that one can access. It must be an effective first disclosure. Often the first disclosure might be at a scientific meeting, such as this meeting here in Philadelphia, where there's a poster. I went to a talk yesterday on a heat surveillance program, and they said, we're presenting data from a paper that has not yet been published. So that was their first disclosure: the paper, not yet published, that they plan to publish. The first journals were published in 1665, in France and in London, and since then, they've been the primary means of communication among scientists. In the 1800s, work became more critiqued, and experiments had to be described in more detail to allow reproduction. So during the time of Pasteur, people were saying, well, we don't believe you; can we reproduce this? And at that point, you had to write the paper such that someone could redo exactly what you said and come up with the same result.
And also, paper was not exactly abundant, so the journals wanted a specific format, so that people would not go on and on and on and never come to the point, because there was no space in the journal for that. Still isn't. They needed enough information so that readers could see what happened. What did you do? What were the observations? They can repeat what you did, and then they can see, well, you came up with this conclusion. Is it justified? And I think that whenever we read articles today, we need to think about that. So you have some wonderful title, some wonderful abstract. This is what we did. This is what we found. And then you read the paper, and it doesn't match. So that's one of the things to look for when you critically review what other people have done. It should be logical, clear, and precise. No one wants to read a paper that's not clear and precise. And it usually follows a format, which is introduction, materials and methods, results, and discussion. Each journal will have its own specific requirements, but in general, this is what they all ask for. It should be published in a permanent format and be available to the scientific community, retrievable. Now, Google is a great search engine. In the old days, you had to go to the library, go through all the cards, figure out what you wanted to find. Google is certainly a good start, and a lot of people use it. But there are also PubMed and the other databases that you can get from your library; now it's all electronic. It should be retrievable. Publish just once. If you look in journals' instructions, they'll say, if this was published before, do not publish it again. Or they ask the question, was this published before? It should be effective communication. That's essential for you to convey your message. Make your point clearly, and emphasize what you want to emphasize. Sometimes you might read a paper where the author goes around and around the point, and you never really get the point. And it might be a really good, important point. So just try to make the point that you want to make clearly, so that when anybody's done reading it, they know exactly what you're trying to say. It might be really important for us. What is your take-home message? And write to the level of the audience. Think about, who is my audience? Is it only OEM physicians? Is it internists as well? Is it family medicine? Write to the level of your audience. Is it high school students? Who are you writing to? And for the language, be direct. It's really difficult, I think, to be direct and write short. It may take many, many revisions. I remember when I was in college, I had a friend who could write a paper the night before and get an A. And I could never do that. So know yourself. Be direct. Write short. There's this saying, slash and burn. Like in the cane fields, you slash and burn. So go back and slash and burn the things that you really don't need to say, or that you can say much more concisely. And don't cause the reader to expend too much energy to figure out what you're trying to say. That's called drinking too many cups of coffee. And if you read that last bullet, that was kind of a mouthful. It could have been shortened somehow. So try not to write like that. And use the key terms over and over. We're OEM physicians, and we have our own lingo that we understand. And in English class, they said, you know, try to use different words.
You don't have to use the same word all the time. And that's nice for English class; it's very flowery. But for science, you just need to be very precise and specific. Use the keywords. Most journals will ask, what are your keywords? And those are also the searchable terms. So decide what your keywords are and reuse them. And don't feel that you're deficient in the English language. Don't use elegant variation; that's for prose and poetry. Use transitional words and sentences as needed. Within a sentence, you may use and, or, whereas, because, since, although. Just for flow. Think of your reader. If you were reading this, what would you like to see? And also use words and phrases between sentences and clauses: first, second, then, in addition, furthermore, nevertheless, therefore. If there's no first, second, third, then don't jump to a second or an in addition. Make sure it flows with what you want to say. And other words such as overall, in general, surprisingly. The epidemiologist I work with does not like the word surprisingly. I think she just likes to keep anything we find sort of low key. So I don't use it, but I kind of like that word. So it's up to you. And other characteristics: the introduction states the problem that was posed. The materials and methods describe how you answered it, and it should be reproducible. Someone else should be able to go back, use those materials and methods, and redo exactly what you said. The results give the findings, and the discussion states what the findings mean. So the discussion doesn't rehash the results. And sometimes I think when we find something, we're a little shy to talk about what it means. But these are your findings, so say what they mean. Of course, don't go too far and claim something that you didn't find. But certainly say what you mean and what the implications are. The instructions for authors will tell you exactly how you need to organize the paper. And again, it's organization versus literary skills. It's really a format, a rubric. In high school, to write an essay, you got a rubric. This is basically the format the instructions tell you to use. This is how they want it done. And organization is more important than literary skills. The title will be read by thousands of people, so use as few words as possible. Don't use abbreviations in the title. This is the label of your paper, so the title should say what your paper is about. And I know that I said it should be as few words as possible, but sometimes titles tend to be long if you need to convey what you need to convey. For example, the article for the camera war, the title was very long, but it conveyed what it needed to convey in the paper. So maybe I should say, do as I say, not as I do. But as long as it's saying what you need to say, it's okay. And it's understandable. The abstract is a mini version of the paper. It's a brief summary of each of the sections. And again, see the instructions for authors. I would say that if you have something you'd like to publish, the first thing you should do is look at the instructions for authors and see what format they want. That will save you time in the long run. Otherwise you might write a 10-page introduction when they want two paragraphs. And also, the abstract shouldn't say you found some amazing thing that's not in the paper. Sometimes abstracts tend to be a little bit sensational, and then you read the paper and you get really let down. It's often written last.
You know, it's the shortest piece, and it's hard to write short. If you can do it first, that's great, but doing it last might be more efficient. And the popular formula again is PMRC: purpose, methods, results, and conclusion, and see the journal's instructions. So for the introduction, you want to start writing. You're going to do the background research for your paper, and I think it's good to start writing before you do the research, or while you're doing it, so you don't have to go back. In the old days, we had index cards, boxes of index cards, that we would then type up on the typewriter. You can see how old I am, right? But nowadays we have computers. So cut and paste. Stick things where you want to stick them. And make sure you reference. You don't want to be caught with someone saying you plagiarized. So reference what you do. And give the background information. It provides a rationale for the study. The end of the introduction is a good spot to state the purpose. Say, this is what we plan to do, so it's clear. Even if someone skips your introduction and goes to the bottom, they go, oh, okay, that's what they're going to do. Okay, I get that. And you set up the reader's expectations. I remember when I wrote my MPH thesis, I wrote a beautiful introduction and submitted it to the chair of the department. And he wrote, this is a great paper with a very lengthy introduction. That was his nice way of saying it's too long. So for the background, try to keep it short, because that's not your paper. This is not a summary paper; this is a research paper. You're presenting your research, and you want to set the stage for it. If other people did work in the area, or if you did work in the area, you just want to say that. And if you're using specialized terms like DOT, what's DOT? You want to define it. You may know what that is, but other people may not. Department of Transportation? I don't know. It could be anything. Okay, so let's look a little bit more into the nitty gritty. You want to have a research question. Sometimes you may read an article and wonder, well, what's the question? There's no question. There's no hypothesis. They just started writing and writing, and you have no idea. So think about what your research question is going to be. If it's just descriptive, like you're describing a case, clearly you don't need a research question for that. But if you're trying to explain or predict something, you would want a research question, a hypothesis. If it's to show impact, effects, a causal relationship, yes, you would need a hypothesis or research question. The research question can be made into a simple hypothesis. Make it concise. Make it reasonable. And tell, in your methods, how you are going to address your question, how you are going to address what you're hypothesizing. And then, what type of study is it? Is it a case report, case series, cross-sectional? Those are the descriptive studies. Is it analytical, meaning observational, which are the case-control and cohort studies? Or is it experimental, a randomized controlled trial or a before-after study? In order from least strong to most strong: case report, case series, cross-sectional, case-control, cohort, and experimental. So a case report is a description of a single case that might be unusual. To be published, it would have to be unusual.
Maybe the first case of monkeypox that was seen; that's a more recent example. A case series would be like five cases of pneumonia found in young men in San Francisco. What was that? That was only seen in older immunocompromised people, and now it's in five young men. So those things give you a trigger. Something needs to be done next, and the CDC looked into that and came up with a case definition, and so on and so forth, and you all know the history of that story. So a case series is a group of cases combined, and both usually document unusual medical findings. Someone's hypertension, you could do a case report on that, but it's probably not of much interest to the journal unless there's something unusual about that hypertension case. Cross-sectional studies look at one point in time. I like cross-sectional studies. I like survey research. They're easy to do, in a sense, because they don't take a lot of money. I should say they're inexpensive. They can be very difficult to do in terms of quality control. The exposure and the disease or outcome are determined at the same time. There's no temporal relationship; what happened first? It's also called a prevalence study, because you may not have the total N. You may send a survey to a group of people, say ACOEM. Well, you have an N for ACOEM, but you may not have the total N to get incidence data from that. It's also called a survey study, and you can study more than one outcome. You can ask a lot of questions and get answers to a lot of questions. It's inexpensive, and it's good for raising a question. It can't really show causality, yeah? The case-control study is an analytical, observational study. It looks at a relationship between an exposure or disease and the outcome. The cases have the disease or the outcome that you're thinking about, and the control comparison group does not, and they're compared. The cases are compared to controls. According to Gordis, the hallmark of the case-control study is that it begins with people with the disease, the cases, and compares them to people without the disease, the controls. So that's a simple way to think about it. Cohort studies are super expensive, and they're an analytical study type, a longitudinal study looking at people over time. A cohort was originally one of ten divisions of a legion in the Roman army, consisting of 300 to 600 men. The cohort is followed until members develop the disease or outcome of interest. So, uranium miners and lung cancer: you follow them over a long period of time and see who develops lung cancer. Then you can determine the incidence of lung cancer to see if it's higher than in the general population. Cohort studies are more expensive than the others. You classify the subjects on exposure. Do they have it or do they not? You do not control the exposure. The time order is clear, because when they start in the cohort, they're free from the disease, and then you pick them out one by one as they develop it. Recall bias is not an issue, because you're not asking them on a questionnaire; in a questionnaire study, you might get recall bias. You get incidence data. Sometimes it's a several-year follow-up, like the Framingham Heart Study, right? That's on, what, its third generation right now, following this cohort. And it can be expensive, with loss to follow-up when people move away. Experimental studies are tough in our environment. The randomized controlled trial is usually the gold standard, and you see a lot of clinical trials with drugs and such that are RCTs. You can allocate people at random into different groups.
You can control for confounders, because theoretically whatever confounders there are will be equal across the groups you randomize into. But it's very expensive and time-consuming, and in the occupational setting you can't exactly say, okay, you guys, I'm going to randomize people to the part of the plant that has a hazard and the part that does not, because the exposure might be dangerous. However, you can do a before-after study, which is also experimental in a sense. For us, for example, there's the vinyl chloride story. Among people who were working with vinyl chloride, it was found that there were cases of angiosarcoma, which is hugely unusual. The CDC got involved and did more analytical studies and found that, yes, there's a relationship. The processes were changed to reduce the exposure, the vinyl chloride emissions, and the incidence went right down. So that's a before-after study that, again, helps you to show, yes, there is a relationship, and it's probably causal. We know it's causal today. So describe your study design in enough detail that someone else can reproduce it, and also don't make claims a design can't support. For example, if you have a case series, don't claim causation. I like the word association, because for a lot of things that you do on a smaller scale, you can say this is associated with that. For example, smoking and lung cancer took many studies to show that there's a causal effect. But starting with one study showing there's an association is a start. You say, oh, something's going on; let's do more studies to look into this. So your design affects your results and conclusions. The results are the most important part, because they're new knowledge. Try to be clear and simple. Sometimes the results are the shortest part, and you're like, oh, I did all this work and it's so short. But it might be important, so that's okay. When you write the results, try not to repeat the data that you put in the tables and figures. This is really hard, because you have the data in short form and you'd like to describe it in your results, but that just takes more space and is redundant. So I think this is a very hard part, at least for me. And don't repeat the materials and methods. It's like a SOAP note when you first learn to write one: initially you mix up the subjective and the objective and so on and so forth. So just try to put everything in the section where it belongs. And don't give excessive data. Sometimes when we do work, we have so much data, we did so much work, that we want to put everything in one paper, and some of it is irrelevant. It confuses everybody. So figure out what needs to be there, and put that in. And make sure your data are in line with the story you're trying to tell. State your conclusions clearly. Summarize the evidence for your conclusions. Add your implications and speculations. Don't say you found something you didn't, but you can say it implies such and such. And realize that finding an association doesn't mean it's causal. It's an association. It may not be causal, and there may be other explanations for it. Causal may be one, and maybe you're onto something, and then other people can support what you found, and then, yeah, we'll say it's causal. But it may be due to chance, to random error. And how do you address that? You use statistics; p-values and confidence intervals help you with that. Then there's confounding by a third factor.
Gray hair causes heart attacks? Gray hair does not cause heart attacks, but older people are more likely to have heart attacks. You might make that association, but it's actually confounding: age is the confounder. Older age is associated with gray hair and with heart attacks; it's associated with both. The fact of the matter is there are other things, like genetics, and things that happen in older age, like coronary artery disease, and so on and so forth. So confounding I find very difficult sometimes: is it a confounder, or is it a real association? What exactly is happening? And bias is systematic error. It's not bias as in, I'm biased against eating ice cream. It's a systematic error that might be found in the way you do your measurements. For example, if you measure blood pressure and your sphygmomanometer is inaccurate by two millimeters, whatever it is, and you use it over and over, then the results you get are off. That's systematic. That's bias. And internal validity is when the results of your study reflect the true relationship between the exposure and the development of the disease or outcome of interest. It's a true relationship, not due to chance, or bias, or confounding. The study is said to be internally valid. External validity refers to the extent to which what you found can be generalized to another population. So you find that people in ACOEM have a certain characteristic. Does it hold for all members of ACOEM? Does it generalize to all members of the AMA? Is it applicable to the general population? The methods used to select study subjects can affect generalizability. And if you do something like a survey study, try to find out who didn't respond. Are they different in age, or gender, or whatever? That would affect your study results. Discuss your results; don't rephrase them. If something doesn't fit, point it out. Don't bury it. Just say, hey, this came up, but it doesn't fit. And someone else may say, oh, I did the same study, and I found this or that. And I think it's great to think about your limitations, especially when you submit. You want to tell the reviewer what your limitations are, because trust me, they're going to find them. But if you find them first and say why they're limitations, that helps. And then show how your work agrees with other published work, and discuss the theoretical implications. So tell the story of your work clearly and succinctly. Show how it answers the question you posed, how it fits into or contributes to other work in the field, and try to fulfill your reader's expectations. And now we're going to move to Dr. Borak. Thank you. Great. Clicker, that's forward, that's back. Good afternoon, everybody. Let me just start by saying I've worked on occupational medicine publications for a number of years. I acknowledge, with some surprise, that one of the guests in front of me, Bob McCunney, a former president of this organization, first got me into this so long ago that it's beyond any of my recent memory. It may still be in his, because he's a young man. We were involved in the editing of something called OEM Reports, which goes back practically into the infancy of this organization. So it's kind of a segue into what we're talking about, and I'm just delighted to recognize this historical thing. It's personal. If this works... I have no disclosures. I have slides. Okay.
So I'm supposed to be talking about peer review, what it is and what it isn't. And it has a certain analogy and redundancy to what Judith so well just told you about how to write a paper. This is not how to write it, although hopefully when you write a paper, you write it with the understanding that somebody's going to peer review it and will be looking, hopefully, for some of the things that she's just told you to insert deliberately. Let me start by talking about what it isn't, if I can get this clicker to click. Come on. That's my thumb. Okay. Here's a quote. It's not from last week, but I've used it a couple of times before. It comes from the American Council on Science and Health, and the statement is: peer review is what validates data, defines science, and allows us to call anything else pseudoscience. This is junk. If there is such a thing as junk science, it's epitomized by this statement, okay? Here's a second, just a source of perspective, published a month ago in Nature: a statement that around 4% or more of 7,800 nucleotide sequences reported in 376 papers in two high-impact journals contained errors. I've got more war stories I'll tell you, more grit and blood. The fact is that peer review is not perfect. It's not even competent much of the time. And so when you're doing it, you need to be careful, and when you have been peer reviewed, if it's done well, you should be appreciative, okay? Just as a start. So what is it? Peer review is a means of achieving some quality control. Not everything, but at least something, okay? And it's a check on relevance, as I think Judith was telling you. If the submission you're reading doesn't seem relevant, then you've done your job as the peer reviewer right off the top, and you can avoid looking at the statistical methods. What is it not? It is not a cross-examination of the authors, it is not a re-evaluation of the raw data, and it's not an independent literature review. It may be necessary, if you're a peer reviewer, to pull literature because you don't fully understand what the paper is about. I get to see papers in the area of toxicology about chemicals for which I have some knowledge, but not a lot of knowledge. And before I peer review such a paper, I invariably pull some papers to learn about what the hell I'm reading about. Otherwise, I can't really do my job. But I do not attempt a global literature review of the subject that I'm reviewing as a peer reviewer, and you should not feel that you need to if you're invited to peer review. But you should have the energy to look at what's been published and have some sense of what you're reading and writing about. Peer review does not validate science. Science is validated by replication. So it's really important to keep in mind that as a peer reviewer, what you're doing is quality control on a piece of written material, and there are some basic questions, not the most profound, sometimes fairly dull, but useful if you work through them carefully. Start with the anatomy of a publication. Judith did that in four segments; I add two more just to keep you on your toes. And I will walk you through, if you were the peer reviewer, what you should be looking at in sequence. So we start with the abstract. And the first question is, are the methods listed in the abstract consistent with those that are listed in the report that you're reading? Surprisingly, not always.
More importantly, do the results and conclusions in the abstract agree with those in the paper? It is surprisingly common that the results of a study are couched in careful, thoughtful, nuanced terms in the paper, and enthusiastically embraced otherwise in the abstract. A finding which the paper merely suggests can be translated in the abstract into a definite finding. And the abstract often travels more widely than the paper itself, and can often be misleading. Just check and make sure. If there's no consistency, there's probably a problem. The introduction: is there a reason for doing the damn study? If there is, it should be clear. Are the goals adequately described at the beginning? And are they realistic and reliable? If it's a small study and the author claims it to be a big study in terms of its importance, you should be suspicious. And if there's humility in the author's approach, that's a small thing, but it seems worth looking for. This is a small issue; please read my paper. That's a great introduction. There's nothing wrong with that. Not everybody can win the Nobel Prize every time. The materials and methods: are they sufficiently detailed that somebody else could replicate them? If they're not replicable, they're not adequately described. I don't mean that it has to say what size beaker or how many test tubes, but you should have clear guidelines for how to do what has been done, so you understand. If you're looking at a paper, this again is as a peer reviewer, but you could be the author; you can take it either way. If it's a review, a meta-analysis, or a compilation, a pooling of studies, be sure that the authors are clear about how they selected the papers they're including. There's nothing wrong with doing a convenience sampling of papers, so long as you don't claim it to be a systematic review. Just clarity. Don't mislead people, and as the peer reviewer, make sure that nobody's being misled. In research reports, you want to make sure that the methods are appropriate. This can be very difficult if it's out of your domain. If you are not an experimentalist, you may not know whether it's appropriate. Are the outcome measures objective and validated? Are the statistical methods appropriate? And was randomization, if it was used, appropriate and unbiased? Quite often I have written, in doing peer reviews, that I am unable to evaluate such and such. I don't know whether this discussion is valid and correct. Please make sure, Mr. Editor, Ms. Editor, that somebody who knows what is going on looks at it. If it's a level of statistics that is beyond my simple mind, I want the editor to know that I haven't validated the methodology that I don't understand. I don't think any editor will feel aggrieved at you if you are clear in declaring your own limitations as a peer reviewer. Otherwise, you're not really doing your job very well. Are the results clearly stated? Are they justified by the data? Are there conclusions which are a jump and a skip away from the data and, for example, overly generalize the small bit of data that is actually statistically meaningful? Have the results been cherry-picked? A very difficult thing. Sometimes there are lots of data, lots of findings, and the conclusion picks only the two or three that happen to be at the extreme of the spectrum. That may not be exactly fair. Usually it's not. Is there evidence of bias, confounding, or other methodological flaws? And finally, what about the possibility of multiple comparisons?
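To make the multiple-comparisons concern concrete, here is the standard back-of-the-envelope calculation (a generic illustration, not a figure from any particular paper): if an author runs $m$ independent tests, each at significance level $\alpha = 0.05$, then

$$P(\text{at least one false positive}) = 1 - (1 - \alpha)^m, \qquad 1 - 0.95^{20} \approx 0.64 \;\text{ for } m = 20,$$

so there is roughly a two-in-three chance of a spurious finding somewhere among 20 comparisons. The Bonferroni correction guards against this by testing each comparison at $\alpha / m = 0.05 / 20 = 0.0025$, which is what makes it so conservative.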
An author does not have to do, for example, a Bonferroni correction, which is very conservative, to accommodate the fact that there have been multiple comparisons, but the author should acknowledge that there were multiple comparisons, whether or not an adjustment was made. Failure to acknowledge that is a fault of the author. Just so everybody's clear, the idea is to make this information as useful and transparent as possible. Are the references appropriate? Look, it's difficult if this is not your primary domain. I would imagine if I sent Bob McCunney a paper that had to do with particulate air pollution, he would know of a hell of a lot more appropriate references than I would. And so if I'm asked, I have to look and see, do they seem reasonable? I might call Bob, but, you know. Are they representative? Are they not cherry-picked? Oftentimes there's a literature that spans from yea to nay. Have the authors picked only yea, because they have a perspective on the issue and they don't want the other perspective to come out? Have key reports been neglected, either by malice or by oversight? And do the references say what the authors claim? This one is the hardest of all. I sometimes have to go back and read papers that have been cited to see whether they really say what the authors said they said. It's a pain in the ass. Most peer reviewers don't do a lot of that. But it's something you should be prepared to do if you're going to do a good peer review and be useful in contributing to the science. That leads us then back to what Paul Brandt-Rauf might have talked about, and Judith will sub for later, which is the problems and limitations of peer review. These include the qualifications and skills of the individuals who have been invited to do the peer review, their general knowledge and their specific knowledge. How appropriate is the peer reviewer to the paper that is being submitted, okay? Conflicts of interest. Are you peer reviewing your colleague or collaborator, or your enemy and competitor? Are you competing for grants, patents, market share, and God knows what? All of these things should be clear. That doesn't mean you disqualify the paper for those reasons. But if it's not clear, it raises a lot of concerns. And then the other is commitments of time. Paul Brandt-Rauf two days ago at a board meeting said that he's gotten to the point of having to ask 25 people before he can find somebody prepared to do a peer review. Because people are busy, and doing it takes time. So you as the peer reviewer have to be prepared to commit time. And as the author, you have to understand that the peer reviewer is committing her or his time. And if you can write clearly and well, you make it easier for everybody, which is good for you. Notes of caution; I'm going to tell you some more war stories. Some years ago, the editors of the British Medical Journal, concerned about the quality of peer review and so forth, did a number of interesting studies. One of the things they did was make the statement that editors should not assume that reviewers will detect major errors, particularly those concerning the context of the study. In other words, people could pick up structures of the paper that were faulty, but not know whether a certain test should be positive or negative and what its sensitivity and specificity would be. And short training packages seemed as though they might be useful, although they found they didn't have a hell of a big impact. Here's what they did.
They took 607 BMJ peer reviewers, people who had been peer reviewers, and gave them a training program. This was before things were really online, but it was the equivalent of that. And then they inserted major and minor errors into each of three papers, okay? And then they sent those out to be peer reviewed by their own peer reviewers as part of the study they were doing; this was not actually for publication. And what they found is that, on average, the trained peer reviewers identified not much more than three out of nine major errors in each paper. Now, we're not talking about peer reviewers that you pick up on the street as you're walking along, looking for somebody to do this dirty job. We're talking about people who are committed to it, committed enough to have taken the extra time to do the self-training and so forth. So you have to be prepared to recognize that even well-intentioned, well-oriented peer reviewers make mistakes. And so it becomes really difficult to know what you get out of peer review. But what you get out of it is, at least hopefully, what you put in. Be thoughtful when you're a peer reviewer. Be detailed. Be clear when there are problems. Do your best. And try to be the mentor you wish you had, so that if you see a problem, you use it as a teaching point, so that the problem becomes clear to the author, who thereby will be encouraged to improve her or his writing. So let me tell you some war stories, and then I'm going to go back to Judith to deal with her Paul Brandt-Rauf impersonation. These stories are all very recent; I pulled them out in the last week or so just because I wanted to color this presentation with a few curious facts. I don't know how many of you heard about the professor, I won't tell you his name, at the University of Cordoba, who happened to be in environmental chemistry and was recently suspended without pay for the next 13 years. It was found that while he was full-time at the University of Cordoba, he was also on faculties in Saudi Arabia and Russia. That wasn't the problem. The problem was that between January 1st and March 31st of this year, he published 58 studies. One every 37 hours. It puzzles, it puzzles. Or the professor at the University of Southern California, a well-regarded oncologist, who just withdrew his textbook after Medscape published a story showing that it contained more than 95 overt acts of plagiarism. And Retraction Watch, a very interesting organization that follows the problems of corruption and misdeeds and so forth, had information that John Wiley and Sons' subsidiary Hindawi withdrew 1,900 papers from 16 journals and delisted 20 journals, all for cheating on special issues. PLOS ONE, this paragon, retracted more than 100 papers because of manipulation of peer review in the last year. And IOP Publishing, they do physics and math and materials science, not biologicals, retracted 494 articles. They said they've seen no evidence that reliable peer review was conducted on these articles, despite the clear standards expected, and yada yada. So the point is that peer review is really a critical part of turning your creative writing into scientific communication, and it's critical to making your writing accepted as scientific. And with the development of AI and chatbots, GPT and so forth, it's going to change everything. For one, we're not going to know who wrote things.
But there are machines now that are going to see if you're plagiarizing and pull that right out, okay? And if you're quoting papers that don't exist, they're going to say so right off the top, because it's going to be very simple for the Journal of Occupational and Environmental Medicine to screen papers for that kind of crap. So understand, that's really not what you should do as a peer reviewer. But be aware that peer review is really at the crux of trying to figure out whether this machine age is going to corrupt or improve publishing. Thank you. Thank you, Dr. Borak. I hope that doesn't dissuade you from submitting papers or from being a peer reviewer. All right, so I'm going to discuss Dr. Brandt-Rauf's slides now. He's not able to be here; he's not feeling well. No disclosures. So why publish? If your result is not published, you haven't done anything (M. Haber). Is that me? I don't think I've done anything. Yeah. And Ms. Popper, I think, worked for the journal in the very beginning; she was a legend. Scientific objectivity, well, you can read as well as I can: it does not emanate from the minds of scientists but from the scientific method of open criticism. Can you hear me now? Okay. So the editor is the gatekeeper, as Dr. Brandt-Rauf would say, helping to maintain the open criticism of scientific objectivity, and if it's done right, that's great. It's a big responsibility and a difficult task. I can't imagine reviewing the many journal articles that come across his desk every year, deciding which ones to triage and which ones not to, and once they're triaged, which ones to accept. I think that's a really, really big job, a lot of cerebral work. As an author, your job is to make the editor's job as easy as possible (M. Blumberg), and Jonathan alluded to that in his talk. Just try to make everything easy for the people reading it. Writing matters. Study how good papers are written. Look through journals, see papers that are well written, and maybe try to emulate them in a sense. Tell a story. It's more fun when you're reading a story, and it really is a story that you're telling. Be clear. Distinguish results from inferences. What you found is what you found, and what you infer is what you infer. Don't make too big a jump; don't overinterpret or oversell. And revise, revise, revise. Remember, we talked about that. It's very difficult to write short. Some people have that gift, but I think a lot of us don't, so don't feel badly if you have to revise a lot. And one of the things you want to do as you write, or even after you write the paper, is go back and critique your own paper. Think about Dr. Borak reading it as the reviewer, and try to catch things before he would. Yeah, edit your grammar, because people don't really want to read a sloppy paper. Tips from the pros, writer's tips. The key to effective writing: you are only as good as your last paper. In other words, your previous success doesn't guarantee that you're going to be accepted. You've got to hook the editor with the abstract; I think Dr. Borak alluded to that. Maybe don't delete your files, because what you wrote in a previous version might be more relevant and may help you get accepted. It's a very long learning curve. Many authors say they're still getting better after several decades. The most significant work is improved by subtraction.
Keep the clutter out. Get your central message across. You have lots and lots of data; you don't need to put all your data in there. Declutter. Be clear. And write every day if possible. And once you've written what you want to convey, you can stop there. Another thing that Dr. Brandt-Rauf wanted to convey is that first impressions matter. So make your title and abstract appealing, so the editor will say, oh, this looks relevant, and we might want to do something with this. Editors read the abstract and give it a thumbs up or thumbs down. Imagine getting hundreds of journal articles across your desk every year. How can you read everything in detail? So you want to create a positive impression from the very beginning. Check recent content to see if your subject area is emphasized. And when in doubt, you may just want to contact the editor and say, hey, is this something you'd be interested in? I remember as a resident, I wrote a paper and submitted it to JOM, and they didn't take it. I'm like, what? Then I put it somewhere else, and they took it immediately, because it was more relevant to that other journal. So if it's not relevant to the journal that you're submitting to, they're not going to take it, but that doesn't mean it's not a good enough paper. So don't just choose the journal with the highest impact factor. Choose the journal that fits what you're doing; your audience will read that journal too. And in terms of impact factor, I think I figured out what it means. The impact factor is the number of citations in a given year to the articles a journal published in the previous two years, divided by the number of articles published in those two years. So of all the articles published in those two years, how many citations did they get, divided by the number of articles? That's how you determine the impact factor. The higher-ranking journals get the message out better. Matthew Stanbrook from the University of Toronto tracked what happened when 12 medical journals published a joint statement, the exact same statement. Over the next 26 months, the highest impact factor journal received 100 times as many citations of that same statement. So it was the same statement in all these journals, but some got a lot more press, a lot more readership. For example, JOEM may be read mostly by OEM and environmental people, whereas JAMA might be read by us plus everybody else. So that helps with the impact factor in terms of your audience, and that also helps with citations. So the impact factor covers a two-year period, and all citations are given equal weight. And the system can be gamed if someone cites themselves in every paper: they write a paper and cite 10 of their own papers in that one paper. Dr. Brandt-Rauf likes to talk about the eigenfactor, which is over five years. The eigenfactor assumes the influence of a journal is best measured by the number of independent citations it attracts from other influential journals over an extended period of time. The citations are weighted by the importance of the citing journal using eigenvector centrality. And don't ask me to explain this, but I'll read it to you; you can read as well as I can. It's calculated recursively, such that values are transferred from one journal to another in the network until a steady-state equilibrium is reached, like Google's PageRank. So the eigenfactor is based on citations made in a given year to papers published in the prior five years. The bottom line is that it's over a longer time period. So the impact factor is over two years.
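As a worked illustration of that two-year arithmetic (the numbers here are invented, purely for illustration), the 2023 impact factor of a journal is

$$\mathrm{IF}_{2023} = \frac{\text{citations received in 2023 by articles published in 2021 and 2022}}{\text{number of articles published in 2021 and 2022}} = \frac{440}{100 + 120} = 2.0$$

for a hypothetical journal that published 100 articles in 2021 and 120 in 2022, which together drew 440 citations during 2023.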
The eigenfactor is over five years. So maybe look at both before you say, you know, the impact factor is too small. And follow the rules. Format your paper the way they ask you to format it. It's organization, organization, organization. And make sure you have all the supporting documents and that they're complete. When you submit, they're going to ask you for different things; get them in. Don't have the editor and his staff coming after you over and over again for things that should have gone in. It's just more work for everybody. And write a nice cover letter. Concisely tell the editor why you think your paper should be published. And he says here, you want to recommend reviewers. Dr. Borak just mentioned it's getting harder since COVID to get reviewers, and not just for JOEM; I think it's affecting a lot of journals. So if you have reviewers in mind, go ahead and suggest them. And don't give up. Just because your paper wasn't accepted the first time doesn't mean you can't revise it and resubmit it, or maybe submit it to a journal that's more relevant to your topic. Revision is the norm. It's very rare that the editor will say, oh, hey, this is perfect; we're taking it as is. So if you get feedback, hopefully it will be feedback, as Dr. Borak said, from the mentor you wish you had, said very nicely and not harshly. And then respond to the criticism. You may not always agree with the reviewer, and you can say that, but say it in a nice way, right? You can challenge them; they could be wrong. And make your letter clear and succinct, so that the editor can make his own decision rather than having the paper re-reviewed. Answer all the questions, whether you agree or disagree; it's fine to disagree. Because if you don't answer all the questions, it may have to go to review again and prolong the process, not just for you, but for everybody. And you can also volunteer to be a reviewer. Now that you've heard Jonathan's talk, you have a lot of tips on how to be a reviewer. You can also write letters. If there's a paper that you have comments on, you may want to write a letter to the editor if you have issues with it. And so that's the end of Dr. Brandt-Rauf's talk. Before I end, I just wanted to bring up ChatGPT. I think I would be remiss if I didn't. How many have heard of ChatGPT? Yeah, it's big in the news now, right? You see it all the time. JAMA just put out an editorial on ChatGPT in January talking about non-human authors. And I think it's still being debated right now. Some journals say ChatGPT cannot be an author, so you can't say the authors are Tom Brown plus Janet Brown plus ChatGPT. The authors have to be responsible for what's in the paper. But if you do use ChatGPT anywhere, in your methods or anywhere else, you need to acknowledge that you used it. You can read the JAMA editorial of January 31, 2023, for more on that. I think this is the future, right? And also, there are reports that note that ChatGPT can be wrong. Wrong in terms of what's referenced. Wrong in terms of the information. So as an author, you are responsible for everything in the paper. I don't know where this is going. You can talk to your friends and neighbors about it, but I just thought I would be remiss not to bring it up. So that is the end of our presentation, and we have about five minutes for questions, if there are any. Thank you.
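As an aside on the eigenfactor idea mentioned above: the notion of citation value being transferred from journal to journal until a steady-state equilibrium is reached can be sketched as a simple power iteration, the same machinery as PageRank. The sketch below is a minimal illustration under simplifying assumptions, with an invented toy citation matrix; it is not the actual Eigenfactor algorithm, which adds refinements such as a five-year citation window and weighting by article counts.

```python
import numpy as np

# Toy cross-citation matrix: entry [i, j] is the number of citations
# that journal j gives to journal i. The diagonal is zero because the
# Eigenfactor method ignores self-citations. Numbers are invented.
C = np.array([
    [0.0, 30.0, 10.0],
    [20.0, 0.0, 40.0],
    [5.0, 15.0, 0.0],
])

# Column-normalize so each journal hands out one unit of influence.
M = C / C.sum(axis=0)

# Power iteration: keep passing influence through the citation network
# until the scores stop changing, i.e., the steady-state eigenvector.
scores = np.full(3, 1.0 / 3.0)
for _ in range(100):
    updated = M @ scores
    if np.allclose(updated, scores, atol=1e-12):
        break
    scores = updated

# Relative influence of each journal at equilibrium.
print(scores / scores.sum())
```

The recursion is visible in the loop: a citation from a journal that itself ends up with a high score transfers more influence than a citation from a low-scoring one, which is why, unlike the impact factor, not all citations are given equal weight.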
Any questions? Good afternoon, my name is Shalom Shem, from Israel. I wanted to raise a question which was not discussed much in this lecture, about which journal to send your article to. And why is the question important in the field of occupational medicine? Because other specialties have their own category, their own list of journals, which includes only journals in their subject, like rehabilitation, neurology, hematology. Occupational medicine is grouped together with public health and environmental medicine. Now, if you take a journal with an impact factor of four, in rehabilitation it's in the upper quartile. But in occupational medicine, public health, and environmental health, it is in the third quartile. So if you have a paper you want to send, and you want your colleagues in occupational medicine to read it, you will send it to an occupational medicine journal. But if you have to take into account the academic demands that say you have to publish several papers in upper-quartile journals, you send it to a group of journals outside occupational medicine. I had two papers last year on cardiology and occupational medicine; I published them in rehabilitation journals. With an impact factor of four, four and a half points, they're the third and fourth journals out of 61, while in occupational medicine, grouped together with all the other fields, that's only Q3 and not Q1. My question is, is it possible to divide this and get a category only for occupational medicine? Or will we continue in an unfair fight, where environmental medicine, public health, and research institutes have many such journals, while we as clinicians lack this opportunity? I'm not exactly sure of your question; I couldn't understand a lot of what you said. I think part of your question is about the impact factor, that you should try to publish in journals with higher impact factors. I'm not really sure. I'm sorry. A score of four points, a four-point impact factor. So your question is, should you try to publish in a journal with a higher impact factor? No, my question is, is it not time that occupational medicine had a different category, separate from public health and environmental health? That the journal should be in a different category? Yes. I will ask Dr. Brandt-Rauf. I don't know. Thank you for the question. Thank you. Tanya Walker, chief medical officer at Netflix. Question. During the pandemic, we found that we were relying quite heavily on articles that were still in preprint and hadn't been peer reviewed just yet, and it was helping to guide patient care, policy, strategy. So I'm just curious. If you could predict the future, which none of us can, what's going to be the role of the preprint in the future? Now that even the lay public acknowledges that there's this preprint and this peer review, what is the meaningfulness of it moving forward? Yeah, that's a good question. I don't necessarily know the answer. I know that I also relied heavily on them. Jonathan, do you want to come up here? Sure. Thanks. The issue of preprints has become very important. They're not going to go away. The journal has the question of whether it would allow a preprint to be used as a reference in an article that is now submitted to be peer reviewed. And I think that's a very important question. And the answer was, as of this weekend, no. But you could reference it as a web link. Let me explain that.
In the same way that you might, in the context of writing an article, cite an organizational web page, you can cite the preprint as a web resource, which might not be in the reference list; it might be in a footnote. That's getting into a lot of details I'm not clear about. But as of this weekend, Paul's view was that he did not want preprints to be included in the formal reference list at the back of the article. Thank you. So my question, also maybe a suggestion for the next iteration, just something to add, is about our community of peer reviewers, who are also our community of authors. Peer reviewers are very busy, unpaid volunteers, and it's a professional obligation that we have to one another. But I get probably three or four requests a week to review, and at the same time, I have papers that are languishing for lack of a response. So one thing that I think we all need to do is just say quickly, yes, I can review, or no, I can't. You often get two weeks to decide if you're going to review it. And if you don't respond to at least say, I'm too busy right now, then that poor author is sitting there for two weeks waiting for it to go out to the next peer reviewer. So let's all respond quickly. But secondly, what can we do? Do you have any thoughts on what we can do to further incentivize careful and good peer review? I've received comments from reviewers that say things like, nice paper, I changed the title, and that's it. So that's like, yay, but that doesn't really help me improve the paper. And then there are others who are that mentor that I wish I had, and I know that person spent two hours on that paper. How do we better incentivize? I mean, this is a highly skilled group of people volunteering their time out of the good of society. How do we incentivize it more? The journal publisher is arranging a mechanism in which CME credits will be given to reviewers. In principle, the editor has to give the peer review a grade. We gave him a lot of crap because he didn't have criteria yet. But it was like, if your peer review got over a 70, then you got three CEU credits. That was an effort to respond to what you're raising, which is that there's a lot of effort and there's not necessarily a lot that comes back. It's a problem. Yeah, also the journal, at least JOEM, has a very rapid turnaround. I can't say that for other journals; I have languished with other journals. But at least with JOEM, there's a very rapid turnaround. But I definitely see your point. And with the CME being offered, that should help incentivize good peer review, graded CME. I think good peer review also comes from knowing your topic and knowing your epi and your biostats, which we should in occupational medicine. There's a lot that goes into it, but I feel your pain. I was not disparaging JOEM; my languishing did not happen with that particular journal. Thank you. So I have a question for Dr. Borak, and this may be a simple question or maybe not. The issue of multiple peer reviewers: does a journal need to have multiple peer reviewers? What's the reason for having multiple peer reviewers? And how is that decided? I love this idea of how it was decided; let me start that way. If you have had the situation as an author of getting back multiple peer reviews, then you know that they are rarely duplicative, though often partially redundant. It's not unusual, at least in my experience, to have somebody who is critically attuned to one section of what I've written and not something else.
And a second reviewer who points out that second area but not the first. So I think part of it has to do with the fact that we're all limited, and the idea is that with several people, you cover the waterfront better, so to speak. There are some journals, Critical Reviews in Toxicology, for example, that invite five reviews. And these are usually very, very long, state-of-the-art systematic reviews, so you could spend an incredible amount of time. When you think about the amount of investment that goes into it, it's incredibly inefficient. And it is likely that ChatGPT is going to replace all of us as peer reviewers. It would not take a very smart machine to say that your grammar is lousy. It would not take a very smart machine to tell you that your statistics were wrong. And you could also probably get ratings on relevance and so forth. So it may be that we're looking at the end of a process. I don't know that; I'd better stay agnostic here. John, I would like to add a little bit to that. I just served as special editor-in-chief of a special issue of the journal Frontiers in Public Health, entitled Particles in Health. The journal had a system such that we published 20 papers as part of that special issue, with 70 authors, and the journal insisted on three peer reviewers per paper. They also had an electronic system: when a request for review went out, if the person did not respond within two weeks, just as the earlier questioner recommended, they were off the list. And they had a stable; this is the Frontiers publication system. There's Frontiers in Public Health, there's Frontiers in Neurology, Frontiers in this, that, whatever. They had a stable of about 500 or 600 peer reviewers that they circulate among the various journals. But part of it is left to the discretion of the editor, as Jonathan will tell you. Some editors will accept a paper on their own. Some will have one, two, or three reviews. It really varies. But I found that the Frontiers system, with the circulating stable and staying on top of it, worked well. And you make a good point: no author wants to submit a paper and wait six months for a decision. Thanks very much for coming. I think our time is over. Thank you. Thank you.
Video Summary
In this video, Dr. Judith McKenzie and Dr. Jonathan Borak discuss elements to consider when writing, reviewing, and submitting a manuscript for publication, with Dr. McKenzie also presenting slides prepared by Dr. Paul Brandt-Rauf, who was unable to attend. They cover the perspective of the writer, the reviewer, and the editor. They emphasize the importance of clear and concise writing, organizing the paper effectively, and being mindful of the journal's specific requirements. They also address the peer review process, highlighting the need for quality control, relevance, and the role of the editor as the gatekeeper. Dr. Borak talks about the challenges and limitations of peer review, including the qualifications and skills of reviewers, conflicts of interest, and time commitments. He emphasizes the importance of thoughtful and detailed peer review and suggests being a mentor to authors by providing constructive feedback. Dr. Brandt-Rauf's slides discuss the significance of choosing the right journal for publication, considering impact factors and suitability for the target audience. They also advise authors to follow formatting guidelines, make the title and abstract appealing to editors because first impressions matter, and revise papers thoroughly before submission. The speakers further recommend studying well-written papers and being aware of the goals and expectations of scientific communication. Overall, they emphasize the importance of clarity, quality, and relevance in scientific writing and the need for ongoing improvement and revision.
Keywords
scientific writing
manuscript publication
clear and concise writing
organizing paper effectively
journal requirements
peer review process
quality control
editor's role
challenges of peer review
choosing the right journal