Finding, Evaluating and Applying Baseball Research (Part 2): Guest Blog By Dr. Ed Fehringer



We came up with several questions about baseball-related research and presented them to Dr. Ed Fehringer, a highly regarded orthopedic surgeon and researcher. We included his answers to the first three in our last article. Here, we'll cover the next three.

Our R&D Coordinator, Jordan Rassmann, will handle question 7 in a couple of days. He'll discuss the most commonly used statistical tools, and he promises to make them super easy to understand.

Dr. Fehringer will wrap it up in a few days with the answers to questions 8-11. Here are the questions. The answers to 4-6 are below the list.

1. Why do we need good research in baseball?
2. Why is it important for a coach/instructor to be able to review, understand and critique research?
3. How do I find relevant research (online search hacks)?
4. What are the parts/design of a typical study?
5. How do I evaluate the quality of a study?
6. What are the dangers of only reading the abstract?
7. What are the typical statistical tools used and what do they mean?
8. How do I evaluate the authors’ methods, discussion and conclusions?
9. What are the most common pitfalls in reading research?
10. How do I begin to apply the results of research to help my players/team?
11. What are the possible consequences of not learning to search out and evaluate quality research?



4.  What are the parts/design of a typical study? 

Typically, studies are made up of four or five parts. In the first (the Introduction), the author(s) present background on the topic and usually point out an area of relative research weakness that they propose to study. The authors ask a question; a hypothesis and/or purpose is usually stated. The Materials and Methods section (Methods) describes the "meat": how the research was conducted. The Results section speaks for itself. In the Discussion, the authors discuss their most significant findings and relate or compare them to prior work. In the Conclusion, the conclusion(s) are stated, and they must be supported by the data in the Results.

Medical literature is often written such that it appears complex to the untrained reader. Yet, like so many areas in life, the genius is in simplicity. Extremely thoughtful authors care about the readership. They want the readership to understand every sentence. However, if research is not presented well, often the problem is not the level of understanding or education of the reader but the lack of understanding and/or clarity from the author.

5.  How do I evaluate the quality of a study?

This is a difficult question to answer. As mentioned previously, the better articles typically appear in the better journals. Yet one must keep in mind that no research is perfect. No reviewer is perfect. No editor is perfect. No journal is perfect. As one becomes more adept at evaluating research, it becomes easier and easier to put holes in anyone's research, whether it is basic science (typically in the lab) or clinical (typically in a doctor's/therapist's office or operating room/training room). Simply putting holes in research, however, does not make one an expert or even a researcher. Recognizing the holes and yet finding the one or two or three nuggets that may exist within each study: that's where the money lies, in my opinion.

Perfect research is impossible because of the infinite number of human variables (as well as many other variables). Ideally, in clinical research, a group of subjects would be similarly sized, have similar health profiles, have similar health habits, have the same diagnoses and then, of course, receive exactly the same treatment with the same follow-up, etc. As one can quickly see, this is impossible. It would also be ideal for every clinical study to have a control group: a group extremely similar to the treated group, but either not treated during the study period or treated with something known to have no effect beyond a placebo effect. Completely voluntary control groups, while ideal, are nearly impossible in orthopaedic medicine. Few patients will volunteer to take part in something along these lines (if a procedure is performed on the treated group, as an example). While one could hire a group of subjects to act as a control group, the payment alone introduces a bias into the study, as those receiving payment, especially if they have any inkling of the study design or purpose, may behave in ways the research cannot account for.

When evaluating the introduction, ask: does the author ramble? As a general rule, the harder the authors work to sell their case, the more concerned I get. Often, the simpler and more directly the study is introduced, the higher the degree of believability. Quality research projects advance in baby steps. When the authors try to do too much or solve too many problems, they generally fail. Their goal should be to answer one question. If they are able to do that, great. Sometimes they answer more than one question or pose additional questions. Sometimes they simply cannot answer the question posed.

If a clinical study is prospective, that means the question is asked before the study is designed, and the data are collected as the study goes along. Prospective studies are much stronger than retrospective studies, which are prone to many more biases. Retrospective studies are not useless, but asking how an intervention affected the outcome is difficult to truly answer when it's done in the proverbial rearview mirror, after the treatment has already been rendered. Yet most of our medical literature is replete with retrospective studies. So, one must be careful about making claims or drawing conclusions that are unsupported by the data, or supported only by data from a retrospective study, which happens frequently.

Data collected on subjects after an intervention is referred to as outcomes data. Data collected before the intervention is referred to as ingo data. Studies that have both ingo and outcomes data are more powerful than those with only outcomes data, because the ingo data represent a baseline, or "before intervention," measurement. ("If we don't know where we are, we cannot know where we are going.")
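To make the point concrete, here is a minimal sketch of why baseline (ingo) data matters. The players and velocity numbers are entirely invented for illustration; nothing here comes from an actual study.

```python
# Hypothetical throwing-velocity data (mph) for six players.
# All numbers are invented for illustration only.
ingo = [78, 81, 74, 85, 79, 82]      # baseline, before the intervention
outcome = [80, 84, 75, 85, 83, 84]   # measured after the intervention

# With outcomes data alone, we can only report the post-intervention average,
# which says nothing about whether anyone actually improved.
avg_outcome = sum(outcome) / len(outcome)

# With ingo (baseline) data as well, we can report each subject's change,
# which is what actually speaks to the intervention's effect.
changes = [post - pre for pre, post in zip(ingo, outcome)]
avg_change = sum(changes) / len(changes)

print(f"average outcome: {avg_outcome:.1f} mph")
print(f"average change from baseline: {avg_change:+.1f} mph")
```

The same post-intervention average could come from a group that improved or a group that was simply faster to begin with; only the baseline comparison distinguishes the two.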

The most powerful clinical research studies are prospective, randomized, controlled studies in which subjects are randomly assigned a treatment protocol unknown to the subject (blinded), or unknown to both the subject and the provider (double-blind), prior to the treatment being applied. As one can imagine, while these sound great in theory, they are often impractical. Few patients want randomization. So we are often left with much-less-than-perfect clinical studies.
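The mechanics of random assignment are simple enough to sketch. The roster IDs below are invented, and this is only an illustration of the idea, not a real allocation procedure from any study: subjects are shuffled, split into treatment and control groups, and the group key is withheld from the people who must stay blinded.

```python
import random

# Hypothetical roster of study subjects (IDs invented for illustration).
subjects = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]

rng = random.Random(42)  # fixed seed so this sketch is reproducible
shuffled = subjects[:]
rng.shuffle(shuffled)

# Random assignment: half the subjects to treatment, half to control.
half = len(shuffled) // 2
assignment = {s: "treatment" for s in shuffled[:half]}
assignment.update({s: "control" for s in shuffled[half:]})

# Blinding amounts to withholding this assignment table: the subjects
# (and, in a double-blind design, the providers) never see it until
# after the data are collected.
for s in subjects:
    print(s, assignment[s])
```

Randomizing first and assigning second is what keeps hidden variables (age, health habits, prior injuries) from piling up in one group, which is exactly the bias a retrospective study cannot rule out.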

So, in my opinion, different articles have different strengths and weaknesses. Again, trying to sort through the research to identify what can and cannot be taken from it is often up to the discretion of the reader.

6.  What are the dangers of only reading the abstract? 

An abstract tells a condensed version of the story of the research. Unfortunately, it's often like reading the CliffsNotes. One can gather the general gist of the research, but it is easy to "hide" variables in the body of the manuscript that won't be evident in the abstract. I use abstracts to determine whether a subject is one I want to read about in more detail. If it is, then I read the remainder of the work.

Thanks again, Doc. You nailed it.

In a couple of days, our R&D Coordinator Jordan Rassmann will discuss the most common statistical tools in research and make them easy to understand. Then Dr. Fehringer will wrap up this series with the answers to questions 8-11.

See you in a couple of days.

Randy Sullivan, MPT, CSCS
