
Evidence-Based Practice Guide

A step-by-step overview of the evidence-based practice (EBP) process, from asking a clinical question to disseminating the results.

Step 1: Ask clinical questions in PICOT format

Questions in this format specify the patient population of interest (P), the intervention or area of interest (I), the comparison intervention or group (C), the outcome (O), and the time frame (T). The PICOT format provides an efficient framework for searching electronic databases, one designed to retrieve only those articles relevant to the clinical question. Using rapid response teams as an example, a question about whether their use improves outcomes might be framed as follows: "In acute care hospitals (patient population), how does having a rapid response team (intervention) compared with not having a rapid response team (comparison) affect the number of cardiac arrests (outcome) during a three-month period (time)?"
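A fill-in-the-blank template can make this structure easier to apply; the wording below is one common version of the intervention-question template, not the only valid form:

  In ______ (P), how does ______ (I) compared with ______ (C) affect ______ (O) within ______ (T)?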

Step 2: Search for the best evidence

The search for evidence to inform clinical practice is tremendously streamlined when questions are asked in PICOT format. If the nurse in the rapid response scenario had simply typed "What is the impact of having a rapid response team?" into the search field of a database, the result would have been hundreds of abstracts, most of them irrelevant. The PICOT format helps to identify key words or phrases that, when entered successively and then combined, expedite the location of relevant articles in large research databases such as MEDLINE or CINAHL. For the PICOT question on rapid response teams, the first key phrase to enter would be "acute care hospitals," a common subject likely to return thousands of citations and abstracts. The second term to search would be "rapid response team," followed by "cardiac arrests" and the remaining terms in the PICOT question. The last step is to combine the results of the individual searches, which narrows the results to articles pertinent to the clinical question, often to fewer than 20. It also helps to set limits on the final search, such as "human subjects" or "English," to eliminate animal studies and non-English articles.
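As an illustration, the search history for this question might look something like the following; the numbered statements, subject terms, and limits shown here are hypothetical, and the details will vary by database and interface:

  #1  acute care hospitals
  #2  rapid response team
  #3  cardiac arrests
  #4  #1 AND #2 AND #3
  #5  #4 with limits: humans, English

Statement #4 performs the combination described above, and statement #5 applies the final limits.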

Step 3: Critically appraise the evidence

Once articles are selected for review, they must be rapidly appraised to determine which are most relevant, valid, reliable, and applicable to the clinical question. These studies are the "keeper studies." One reason clinicians worry that they don't have time to implement EBP is that many have been taught a laborious critiquing process, including the use of numerous questions designed to reveal every element of a study. Rapid critical appraisal uses three important questions to evaluate a study's worth.

  • Are the results of the study valid? This question of study validity centers on whether the research methods are rigorous enough to render findings as close to the truth as possible. For example, did the researchers randomly assign subjects to treatment or control groups and ensure that they shared key characteristics prior to treatment? Were valid and reliable instruments used to measure key outcomes?

  • What are the results and are they important? For intervention studies, this question of study reliability addresses whether the intervention worked, its impact on outcomes, and the likelihood of obtaining similar results in the clinicians' own practice settings. For qualitative studies, this includes assessing whether the research approach fits the purpose of the study, along with evaluating other aspects of the research such as whether the results can be confirmed.

  • Will the results help me care for my patients? This question of study applicability covers clinical considerations such as whether subjects in the study are similar to one's own patients, whether benefits outweigh risks, feasibility and cost-effectiveness, and patient values and preferences.

After appraising each study, the next step is to synthesize the studies to determine if they come to similar conclusions, thus supporting an EBP decision or change.
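A simple evidence table is one common way to organize this synthesis; the column headings below are typical examples rather than a required format:

  Citation | Design | Sample and setting | Intervention and comparison | Outcomes measured | Findings | Worth to practice

Giving each keeper study its own row makes it easy to scan down the findings column and see whether the studies point to the same conclusion.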

Step 4: Integrate the evidence with clinical expertise and patient preferences and values

Research evidence alone is not sufficient to justify a change in practice. Clinical expertise, grounded in patient assessments, laboratory data, and data from outcomes management programs, is an equally important component of EBP, as are patients' preferences and values. There is no magic formula for how to weigh each of these elements; implementation of EBP is highly influenced by institutional and clinical variables. For example, say there's a strong body of evidence showing a reduced incidence of depression in burn patients who receive eight sessions of cognitive-behavioral therapy before hospital discharge. You want your patients to have this therapy, and so do they. But budget constraints at your hospital prevent hiring a therapist to deliver the treatment. This resource deficit hinders implementation of EBP.

Step 5: Evaluate the outcomes of the practice decisions or changes based on evidence

After implementing EBP, it's important to monitor and evaluate any changes in outcomes so that positive effects can be supported and negative ones remedied. Just because an intervention was effective in a rigorously controlled trial doesn't mean it will work exactly the same way in the clinical setting. Monitoring the effect of an EBP change on health care quality and outcomes can help clinicians spot flaws in implementation and identify more precisely which patients are most likely to benefit. When results differ from those reported in the research literature, monitoring can help determine why.
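For the rapid response team example, such monitoring might track a few indicators before and after implementation; the measures below are illustrative, not an exhaustive or required set:

  • cardiac arrests (codes) per 1,000 discharges, for equivalent periods before and after the change
  • unplanned transfers to the ICU
  • number and timing of rapid response team activations

Comparing the same indicators over the same length of time before and after the change makes it easier to attribute any difference in outcomes to the new practice.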

Step 6: Disseminate EBP results

Clinicians can achieve wonderful outcomes for their patients through EBP, but they often fail to share their experiences with colleagues and with their own or other health care organizations. This leads to needless duplication of effort and perpetuates clinical approaches that are not evidence based. Ways to disseminate successful initiatives include EBP rounds in your institution; presentations at local, regional, and national conferences; and reports in peer-reviewed journals, professional newsletters, and publications for general audiences.
