We’ve all heard the saying that the randomized controlled trial is the “gold standard” method of conducting a research study. What does that mean? And if the randomized controlled trial is so good, why does anyone ever do studies any other way?
In this series on understanding research, we will look at the major methodologies for conducting studies, why researchers choose each one, and the strengths and weaknesses of each.
In general, research methodologies can be broken down into two major categories: quantitative and qualitative. Quantitative research uses statistical techniques to show that certain outcomes happen because of certain variables. Qualitative research seeks to understand, find common themes in, and explain the way people and societies think, behave, and function. More and more studies take a mixed approach, combining qualitative and quantitative methods in the same study.
One common concern that drives how researchers choose and conduct their methodology is validity. A well-done study is designed so that threats to validity are minimized. There are three main kinds of validity that relate to methodology:
Construct validity is the idea that the study really does measure, describe, or explain what it claims to. This is the main area where a researcher carefully considers methodology in order to avoid problems. Amy Romano posted a discussion of a study in which she challenged the construct validity here, raising the question of whether microtrauma was a valid measure of perineal function.
Internal validity is the degree to which the study accurately shows that the outcomes really are caused by the variables being studied, rather than by confounding factors. Researchers sometimes go to great lengths to avoid problems with internal validity. Control groups, blinded studies, and observational studies in which the researchers do not identify themselves are all ways of strengthening internal validity. A classic example of a threat to internal validity is a study that uses a pretest and a posttest to measure how effective an education program is. If you give the same test before and after, the students know from the pretest which questions will appear and are more likely to remember those answers on the posttest, artificially inflating the measured effectiveness.
External validity describes how readily the study's results can be applied in practice. Because readers or other researchers may apply the results in different places, times, and populations, the results may or may not be transferable. This is why studies need to carefully describe the population and settings included, and why readers need to carefully consider whether the results apply to their own practice. Essentially, if a study has external validity, you would get the same results if you ran it again in a different population or at a different time.
As a side note, there is another kind of validity, statistical validity, which has more to do with the statistical analysis afterward than with the methodology we will cover in this series. It asks whether the statistical techniques used were appropriate and accurately support the conclusions. While researchers do generally plan their analysis when designing a study, we'll talk more about statistics in a future Understanding Research series.
When researchers set up a new study, they need to choose the methodology that maximizes validity within the constraints of ethics, cost, and other practicalities. Understanding why a particular methodology may have been chosen can help you better judge a study's strengths and weaknesses.