Introduction to Evidence Based EMS – Part 1
by Jason Merrill. Last modified: 31/01/14
“Evidence based” has been a buzzword in EMS for a while, but in a lot of places there doesn’t seem to be much understanding of what “evidence based EMS” actually means. In this article we’ll examine what it means to be “evidence based,” what some types of evidence are, how to understand and evaluate evidence, and ultimately how to apply it to EMS practice.
What Does It Mean to Be “Evidence Based”?
When we talk about an EMS practice being “evidence based,” what we’re really saying is that there is scientific evidence, also known as “empirical” evidence, that the practice does what we want it to. Scientific evidence is a very specific type of evidence that is not based on theory or “common sense,” and understanding science requires a change in thinking from how we understand things in popular culture. Specifically, scientific evidence is evidence that comes from observations made using the scientific method, but we’ll cover that in greater detail in a moment. The idea that health practices should be based on observations isn’t a new one: the word “empirical” actually comes from a group of ancient physicians who believed that medicine should be guided by observational evidence (Harper, 2001), but the current movement towards evidence-based health practices only began to ramp up in the 1970s. The phrase “evidence based medicine” didn’t actually appear in the medical literature until 1991! (1)
The fact that scientific evidence comes from observations has some important implications. First, it means that in evidence based EMS, our theories about how or why something works don’t matter as much as observations about whether it actually works or not. For a real-world example of this, we can examine the practice of giving all chest-pain patients with a suspected cardiac origin 100% oxygen by non-rebreathing mask (NRB). In school, I was taught that every patient who has suspected cardiac chest pain should be given 100% oxygen by NRB because this would cause more oxygen to be dissolved in the blood and therefore available to injured cardiac muscle. I think it’s fair to say that most of us were probably taught this, and it’s an idea that’s still showing up in brand-name paramedic textbooks (AAOS, 2012).
I think it’s also fair to say that this is a very common-sense theory: myocardial infarctions happen when an area of heart muscle isn’t getting enough oxygen-rich blood and starts to die, so it makes sense to try to get more oxygen to that muscle by increasing the amount of oxygen the patient breathes in. Unfortunately, the scientific evidence on giving 100% oxygen by NRB to cardiac patients has shown that it is actually harmful to them (Canadian Prehospital Evidence Based Practice Project, 2014). Patients who receive high-concentration oxygen for uncomplicated myocardial infarction (MI) do not do any better than other patients during the first 24 hours after the MI (2), and uncomplicated MI patients who are given 100% oxygen have larger infarcts (3). In evidence based EMS, the theory that 100% oxygen by NRB helps patients experiencing MI matters less than the scientific evidence that shows it’s harmful.
The Scientific Method
Scientific observations are different from the ordinary observations we make in our everyday lives because scientific observations are made using something called “the scientific method.” The scientific method is a structured way of making observations about the world that allows us to gain a highly accurate, reliable understanding of the world around us, and it has four distinct steps.
Step 1: make an observation about the world.
As a historical example, in 1847 a physician named Jakob Kolletschka died after nicking his finger during an autopsy on a woman who had died from an infection after childbirth, called puerperal fever. Kolletschka’s good friend, Ignaz Semmelweis, observed that the signs and symptoms Kolletschka experienced before his death were nearly identical to those of the women dying of puerperal fever on whom he had performed autopsies.
Step 2: form a hypothesis to explain the observation.
This basically means to make a guess about why what we’ve observed is happening. In our example, Dr. Semmelweis hypothesized that Dr. Kolletschka died of the same disease as the woman he had performed an autopsy on because tiny particles of matter had carried the disease from her body into his. (It’s important to remember that while this probably seems like an obvious, common-sense conclusion to us today, that’s because we have the benefit of a hundred and fifty years’ worth of work to confirm and disseminate the idea; in 1847 the idea was new and not well-accepted.)
Step 3: make a prediction based on the hypothesis.
At the time of his friend’s death, Semmelweis was the director of an obstetrical clinic at a hospital in Vienna. The hospital had two clinics for the delivery of children. Semmelweis was in charge of one which was run by physicians and medical students. The other was run by midwives and midwife trainees. As many as 18% of the mothers who gave birth in the clinic run by physicians and medical students died, but on average only about 2% of the mothers who gave birth in the midwives’ clinic died. Based on his hypothesis that tiny particles transferred from person-to-person were the causative agent of these infections, Semmelweis predicted that if the physicians washed their hands, the mortality rate of patients in his clinic would drop and be similar to that of the midwives.
Step 4: test the prediction by comparing two groups.
Semmelweis’s prediction was fairly simple to test: he initiated a mandatory hand-washing policy for all the physicians and medical students in his clinic. Mortality among his clinic’s patients dropped to about 2%, compared with 13-18% before the handwashing policy was put in place, supporting Semmelweis’s hypothesis.(4,5) Note that the comparison of two different groups is an absolutely key feature of this step.
Thus we have the basic anatomy of a scientific, or empirical, observation.* If an observation or study doesn’t include all four elements, it might still be true, but it can’t be called scientific evidence. For example, if Dr. Semmelweis had drawn his conclusion about an infection being transmitted from a cadaver to his friend and left it at that, he still would have been correct, but he wouldn’t have had scientific evidence to support the idea.
An important addendum to the scientific process is that the result is published in a peer-reviewed journal or book and the test of the hypothesis is repeated by others for confirmation. Peer review is a way of screening out studies whose design flaws are serious enough to prevent scientifically valid conclusions from being drawn, or which have ethical problems, and peer-reviewed journals are how EMS and other medical evidence is usually communicated.
Once in a while an important piece of evidence will come from a non-peer-reviewed but still authoritative source, such as a government body, but evidence published outside peer review has to be treated with somewhat more suspicion. Repetition of research by others is a way of confirming that the results are actually valid, and in an ideal world it would always happen. In real life, however, it often doesn’t: sometimes studies are too expensive to repeat, for example, or there isn’t enough interest in a given topic.
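The heart of Step 4 is comparing outcomes between two groups. As a rough modern illustration of how that comparison can be quantified, the sketch below computes a risk ratio and a simple two-proportion z-test for Semmelweis’s clinics. The absolute patient counts are hypothetical, chosen only to match the article’s approximate mortality rates (18% vs. 2%); they are not Semmelweis’s actual figures.

```python
import math

# Hypothetical counts matching the article's approximate rates.
physicians = {"deaths": 180, "births": 1000}   # ~18% mortality
midwives = {"deaths": 20, "births": 1000}      # ~2% mortality

p1 = physicians["deaths"] / physicians["births"]
p2 = midwives["deaths"] / midwives["births"]

# Risk ratio: how many times more likely death was in the physicians' clinic.
risk_ratio = p1 / p2

# Two-proportion z-test: is the difference larger than chance alone explains?
pooled = (physicians["deaths"] + midwives["deaths"]) / (
    physicians["births"] + midwives["births"]
)
se = math.sqrt(pooled * (1 - pooled)
               * (1 / physicians["births"] + 1 / midwives["births"]))
z = (p1 - p2) / se
# Two-sided p-value from the standard normal distribution.
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"risk ratio: {risk_ratio:.1f}")  # → risk ratio: 9.0
print(f"z = {z:.1f}, p = {p_value:.2g}")
```

With numbers like these the p-value is vanishingly small, which is exactly what makes the two-group comparison scientific evidence rather than an anecdote: we can show the difference is far too large to be a coincidence.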
Evaluating Evidence Quality
Things in the real world are often messy and contradictory. Thinking back to school, I remember doing a lot of scenarios that were very calm and organized from start to finish, where every sign and symptom I found pointed towards a single, neat conclusion about what was going on, patients seldom had more than one problem at once, and it was always possible to figure out what was wrong with a particular patient from my assessment and the tools available in an ambulance. Real-life EMS isn’t like that, however. Scenes are usually not calm or organized until we arrive and impose order on them, patients often have contradictory signs and symptoms, and very often it takes a lot more than what I can do in the back of an ambulance to figure out what’s wrong with a given patient. Because science is based on evidence instead of theory it’s a lot more like real-world EMS, which is messy and sometimes contradictory, than it is like classroom scenarios, which are usually nice and neat.
As an example, I remember doing a call once where I was dispatched for a male lying on the sidewalk, apparently intoxicated. When my partner and I arrived we found a patient who made regular use of EMS and was frequently transported in an intoxicated state, so it was a pretty reasonable guess by the dispatcher that he was probably intoxicated that day, too. However, it was a very hot summer day, and I found that he was wearing a heavy winter coat. Because of this, I guessed that he might have been experiencing a heat-related injury. I moved him to the ambulance and took his coat off, and discovered a sucking chest wound, apparently from being stabbed.
Science can be a lot like this: just as the dispatch information that this patient was intoxicated didn’t turn out to be the reason he needed an ambulance, in science the first theory isn’t always right, even if it seems like a good guess. And just as my clinical impression changed from ETOH intoxication, to heat-related injury, to penetrating trauma as I assessed more and found new things, in science we very often find new evidence that changes what we think about EMS practices.
Not all evidence is created equal.
In the example above about my stabbing patient, the evidence was fairly easy to understand, and it’s a good example of different evidence having different worth. The first piece of evidence was the dispatch information. Anyone who’s worked in EMS for very long knows that dispatch information is pretty weak evidence: we’ve probably all done calls like this one where we were told one thing by the dispatcher and arrived to find something completely different. The fact that the patient was wearing a coat on a hot day is pretty good evidence that he might have a heat-related injury, but it’s not perfect. He could have put the coat on a few seconds before we arrived. For all I knew at that point, he could have just stumbled out of a walk-in freezer and actually had hypothermia!
Once I discovered the presence of an open wound on his chest with blood and bubbles coming out of it, however, it would have been hard to argue that he didn’t have a sucking chest wound. Basically, each of these pieces of evidence had different levels of quality: the dispatch information could be called low-quality evidence, the presence of a coat on a hot day could be called medium-quality evidence, and the presence of a bleeding chest wound with bubbles coming out of it could be called high-quality evidence.
*It is important to note that in certain disciplines such as philosophy, observations don’t have to follow the scientific method to be considered “empirical observations.” However, in science and medicine we can generally consider “scientific observations” and “empirical observations” to mean the same thing.
- Harper D. empiric (adj.) [Internet]. Online Etymology Dictionary. 2001 [cited 2014 Jan 30]. Available from: http://www.etymonline.com/index.php?term=empiric&allowed_in_frame=0
- American Academy of Orthopedic Surgeons, Caroline NL, Elling B, Smith M. Nancy Caroline’s Emergency Care in the Streets. Jones & Bartlett Publishers; 2012. 2506 p.
- Canadian Prehospital Evidence Based Practice Project. Suspected Cardiac Origin [Internet]. EMS Prehospital Evidence Based Protocols. [cited 2014 Jan 30]. Available from: https://emspep.cdha.nshealth.ca/LOE.aspx?VProtStr=Suspected%20Cardiac%20Origin%20%20%20&VProtID=144#High flow oxygen
- Claridge JA, Fabian TC. History and development of evidence-based medicine. World J Surg. 2005 May;29(5):547-53. PMID: 15827845.
- Rawles JM, Kenmure AC. Controlled trial of oxygen in uncomplicated myocardial infarction. Br Med J. 1976 May 8;1(6018):1121-3. PMID: 773507.
- Wijesinghe M, Perrin K, Ranchord A, Simmonds M, Weatherall M, Beasley R. Routine use of oxygen in the treatment of myocardial infarction: systematic review. Heart. 2009 Mar;95(3):198-202. PMID: 18708420.
- Best M, Neuhauser D. Ignaz Semmelweis and the birth of infection control. Qual Saf Health Care. 2004 Jun;13(3):233-4. PMID: 15175497.
Jason is a Primary Care Paramedic/EMT-A in Western Canada. He has worked in EMS and critical care since late 2000, in settings ranging from high-volume urban systems and acute care hospitals in the United States to remote/wilderness and SAR settings in Canada.