
Increasing system-wide implementation of opioid prescribing guidelines in primary care: findings from a non-randomized stepped-wedge quality improvement project

Abstract

Background

Clinician utilization of practice guidelines can reduce inappropriate opioid prescribing and harm in chronic non-cancer pain; yet, implementation of “opioid guidelines” is subpar. We hypothesized that a multi-component quality improvement (QI) augmentation of “routine” system-level implementation efforts would increase clinician adherence to the opioid guideline-driven policy recommendations.

Methods

Opioid policy was implemented system-wide in 26 primary care clinics. A convenience sample of 9 clinics received the QI augmentation (one-hour academic detailing; 2 online educational modules; 4–6 monthly one-hour practice facilitation sessions) in this non-randomized stepped-wedge QI project. The QI participants were volunteer clinic staff. The target patient population was adults with chronic non-cancer pain treated with long-term opioids. The outcomes included the clinic-level percentage of target patients with a current treatment agreement (primary outcome), rates of opioid-benzodiazepine co-prescribing, urine drug testing, depression and opioid misuse risk screening, and prescription drug monitoring database check; additional measures included daily morphine-equivalent dose (MED), and the percentages of all target patients and patients prescribed ≥90 mg/day MED. T-test, mixed-regression and stepped-wedge-based analyses evaluated the QI impact, with significance and effect size assessed with two-tailed p < 0.05, 95% confidence intervals and/or Cohen’s d.

Results

Two-hundred-fifteen QI participants, a subset of clinical staff, received at least one QI component; 1255 patients in the QI and 1632 patients in the 17 comparison clinics were prescribed long-term opioids. At baseline, more QI than comparison clinic patients were screened for depression (8.1% vs 1.1%, p = 0.019) and prescribed ≥90 mg/day MED (23.0% vs 15.5%, p = 0.038). The stepped-wedge analysis did not show statistically significant changes in outcomes in the QI clinics, when accounting for the comparison clinics’ trends. The Cohen’s d values favored the QI clinics in all outcomes except opioid-benzodiazepine co-prescribing. Subgroup analysis showed that patients prescribed ≥90 mg/day MED in the QI compared to comparison clinics improved urine drug screening rates (38.8% vs 19.1%, p = 0.02), but not other outcomes (p ≥ 0.05).

Conclusions

Augmenting routine policy implementation with targeted QI intervention, delivered to volunteer clinic staff, did not additionally improve clinic-level, opioid guideline-concordant care metrics. However, the observed effect sizes suggested this approach may be effective, especially in higher-risk patients, if broadly implemented.

Trial registration

Not applicable.


Background

Chronic non-cancer pain is a common, disabling condition [1] for which opioids have been increasingly prescribed over the past decades, despite a paucity of research on long-term benefits and strong evidence for dose-dependent harm [2, 3], compounded by inappropriate opioid prescribing practices [4, 5]. Implementation of evidence- and guideline-based management of opioid therapy in chronic non-cancer pain can reduce inappropriate prescribing and harm from opioids [4,5,6,7,8,9,10,11]. Adopting guideline-concordant management of opioid therapy in primary care is particularly important because primary care clinicians prescribe approximately half of opioid analgesics [12]. However, research on effective ways to disseminate and implement practice guidelines is insufficient, shows mixed results, and does not provide clear guidance on methods for successful adoption of guideline-recommended practices, including in primary care [13,14,15]. In general, guideline implementation has been challenging; guidelines characterized by high clinical complexity and limited supporting research evidence, as is the case with opioid guidelines [3, 16,17,18,19], show particularly low adoption rates [13, 14].

In February 2016, one academic health system implemented a guideline-driven [16,17,18] opioid therapy management policy (opioid policy) in primary care via informational sessions for clinicians and clinic teams (routine rollout) [20]. Our project goal was to assess if a multi-faceted, clinic-level quality improvement (QI) intervention in addition to routine rollout, compared to routine rollout alone, would improve guideline- and policy-concordant care and associated metrics [16,17,18], assessed via de-identified clinic-level electronic health record (EHR) data.

Methods

This report follows the SQUIRE 2.0 standards for QI reporting [21]. Details on the routine rollout, QI protocol development, components and implementation are described elsewhere [20]. The Institutional Review Board deemed the project a QI initiative, not constituting human subjects research, as defined under 45 CFR 46.102(d).

Context and setting

The academic health system in Wisconsin, USA included 35 primary care (internal medicine and family medicine) clinics in rural, suburban and urban settings, caring for 200,000 adult primary care patients. In February 2016, it introduced a guideline-driven opioid policy on the management of long-term (≥ 3 months) opioid therapy in primary care adult outpatients with chronic non-cancer pain ("target patients"). The opioid policy provided multiple guideline-concordant recommendations for safety, treatment response monitoring, and management in this target population [20]. The health system first pilot-tested the rollout methods with 3 clinics in the fall of 2015, prior to the system-wide rollout in February 2016 [20]. The health system's policy rollout consisted of a one-hour in-person meeting for clinicians; a one-hour online training session for clinic staff; and two follow-up tele-meetings to address comments/questions from clinic staff [20]. The health system's leadership approved this QI project to determine if augmenting the system's routine rollout would improve patient outcomes.

Clinic selection

The health system included 35 primary care clinics. Nine of these clinics were involved in other opioid QI initiatives and were excluded from the recruitment pool, yielding 26 clinics eligible for the QI project and included in the analyses. Among these 26 clinics, those with the highest number of target patients were approached first. Three of the approached clinics declined (“too busy for new QI efforts”). The first 9 consenting clinics (convenience sample) were enrolled into a non-randomized stepped-wedge QI project. The QI intervention was initiated immediately after the health system’s routine policy rollout in February 2016. It was delivered over 4–6 months at each clinic and implemented in three waves (March–July 2016; September–December 2016; January–June 2017), with 3 clinics per wave; the QI wave assignment was non-random, based on each clinic’s preference. The non-random approach to both clinic selection and the timing of each clinic’s participation in the QI intervention resulted from the health system and clinics’ requests to allow maximum flexibility for each clinic’s involvement.

Participants

The QI participants were volunteer clinical staff (prescribers, nurses and others) at each intervention clinic. The evaluation subjects (target patient population) were identified by the search of EHR-based data from the problem list, encounter, and billing records, using the health system-developed criteria: age ≥ 18 years old; active-patient status (seen at the clinic in the past 3 years); primary care provider within the health system; no diagnosis of malignant neoplasm (except non-melanoma skin cancer) or palliative or hospice care status; and meeting at least one of the two criteria: 1) ≥1 opioid prescription issued in the prior 45 days and ≥ 3 opioid prescriptions issued in the prior 4 months; or 2) ≥1 opioid prescription issued in the prior 45 days, and presence of a chronic pain diagnosis and a controlled substance agreement. For the analyses, buprenorphine was excluded from the “eligible opioid” list due to its primary utility locally as a treatment for opioid use disorder.
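The eligibility logic above can be sketched in code. This is an illustrative sketch only: the field names and record layout are hypothetical, not the health system's actual EHR schema or query.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    # Hypothetical, simplified record; field names are illustrative only.
    age: int
    active: bool                 # seen at the clinic in the past 3 years
    has_system_pcp: bool         # primary care provider within the health system
    malignancy: bool             # malignant neoplasm (excluding non-melanoma skin cancer)
    palliative_or_hospice: bool
    rx_last_45_days: int         # opioid prescriptions issued in the prior 45 days
    rx_last_4_months: int        # opioid prescriptions issued in the prior 4 months
    chronic_pain_dx: bool
    has_agreement: bool          # controlled substance agreement on file

def is_target_patient(p: Patient) -> bool:
    """Apply the inclusion criteria described in the Participants section."""
    if p.age < 18 or not p.active or not p.has_system_pcp:
        return False
    if p.malignancy or p.palliative_or_hospice:
        return False
    # Criterion 1: >=1 prescription in prior 45 days AND >=3 in prior 4 months.
    criterion_1 = p.rx_last_45_days >= 1 and p.rx_last_4_months >= 3
    # Criterion 2: >=1 prescription in prior 45 days AND chronic pain
    # diagnosis AND controlled substance agreement.
    criterion_2 = p.rx_last_45_days >= 1 and p.chronic_pain_dx and p.has_agreement
    return criterion_1 or criterion_2
```

A buprenorphine exclusion would be applied upstream, when assembling the "eligible opioid" prescription counts.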

QI intervention

The intervention was developed by the project’s multidisciplinary team, following the recommended approach to QI in primary care [15], and with input from the health system leadership and clinicians to ensure alignment with opioid prescribing guidelines and the health system’s policies, procedures and resources, as detailed elsewhere [20]. Briefly, the QI intervention at each clinic consisted of: a) one 1-h academic detailing session, delivered by the project physicians, outlining the project, the national guidelines, and the health system’s opioid policy recommendations; b) two 20–21 question online educational modules: one on shared decision-making in the context of opioid therapy for chronic pain, and another on the guideline and health system policy recommendations for opioid therapy management; and c) six 1-h practice facilitation (PF) sessions delivered at each clinic over 4–6 months by the project’s trained facilitators. The PF sessions focused on optimizing clinical workflows to promote clinician adherence to the guideline and health system policy recommendations with measurable outcomes (“QI targets”). The selection of QI targets for the PF sessions was driven by each clinic team’s preference [20]. Participating clinical staff were eligible for up to 23 American Medical Association Physician’s Recognition Award Category 1™ Credits for completing the intervention.

Measures

Process measures

Quantitative and qualitative process or explanatory measures related to the QI intervention were collected to better understand the processes underlying the hypothesized change in the main outcomes. They included attendance and completion rates of the intervention components, pre-post intervention surveys of the participating clinicians/staff about their confidence, attitudes, barriers and facilitators toward the recommended management of patients with opioid-treated chronic pain, and surveys of participating clinicians/staff about the intervention components.

Outcome measures

Outcome measures were extracted and assessed monthly from the EHR (Epic Systems Corporation) from baseline (January 2016) through project end (December 2017). De-identified EHR data were analyzed at the clinic level. Outcome measures were selected based on the opioid prescribing guideline [16,17,18] and policy recommendations, and availability of the corresponding EHR data entered as a part of the health system's routine care [20]. The percentage of target patients with a "current" (signed within the past 12 months, and assessed monthly) treatment agreement was chosen as the primary outcome based on the health system's opioid policy, which recommended its routine completion, followed by annual updates [20]. Secondary outcome measures were: a) "current" urine drug testing (UDT) and b) "current" depression screen with a two- or nine-item Patient Health Questionnaire (PHQ) [22, 23] (a positive screen using a two-item questionnaire automatically triggered completion of a nine-item questionnaire); c) "current" prescription drug monitoring program (PDMP) database check; d) completion of the opioid misuse risk screen with the Diagnosis, Intractability, Risk, Efficacy (D.I.R.E.) tool [24]; and e) the rate of opioid-benzodiazepine co-prescribing in at least one of the past 3 months. Secondary outcomes a-d were based on the health system's policy. The co-prescribing measure was not a part of the health system's policy, but was included based on guideline recommendations advising against co-prescribing [16,17,18].
Additional measures of interest commonly used to assess opioid interventions [15,16,17,18] included: a) percentage of target patients relative to the total adult clinic panel; b) daily opioid dose, calculated per target patient as an average morphine-equivalent dose (MED; milligrams/day) by adding up the doses of all opioids (except buprenorphine) prescribed for outpatient treatment in the prior 90 days and dividing the sum by 90; and c) percentage of target patients prescribed MED ≥90 mg/day (past 90 days). Opioid and benzodiazepine prescription data were extracted from the medication list.
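The daily MED calculation described above (sum of all morphine-equivalent opioid doses, excluding buprenorphine, over the prior 90 days, divided by 90) can be sketched as follows. The input layout is an assumption for illustration; in practice each prescription's dose would first be converted to morphine-equivalent milligrams using standard conversion factors.

```python
def daily_med(prescriptions, window_days=90):
    """Average daily morphine-equivalent dose (mg/day) over the window.

    `prescriptions` is a list of (drug_name, total_med_mg) tuples for opioids
    prescribed for outpatient treatment in the prior `window_days` days, where
    total_med_mg is that prescription's total morphine-equivalent milligrams.
    Buprenorphine is excluded, mirroring the measure definition in the text.
    """
    total_mg = sum(med_mg for drug, med_mg in prescriptions
                   if drug.lower() != "buprenorphine")
    return total_mg / window_days
```

For example, a patient dispensed 4500 morphine-equivalent mg of oxycodone over 90 days would have a daily MED of 50 mg/day, below the ≥90 mg/day high-dose threshold used for the subgroup analysis.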

Statistical analysis

Nine clinics were predicted to provide 80% power to detect a clinically meaningful (20%) increase in the use of a treatment agreement (see Additional File 1 for a detailed sample size discussion). Primary and secondary outcomes were defined a priori. SAS® Version 9.4 was used for statistical analyses. Baseline (January 2016) and project end (December 2017) data were collected using clinic-level averages, weighted by target patient panel size per clinic, so that the change in the clinic-level variables was independent of the clinic's target patient panel size. Descriptive statistics were used to describe these data, using numbers (percentages), and means with standard deviation (SD) or standard error (SE). Single and two-sample means tests evaluated outcome changes between baseline and exit data within and between the intervention and comparison clinics, respectively. For the PDMP outcome measure, the end date was changed from December to March 2017 due to changes in state law and health system requirements, which led to approximately 100% PDMP check documentation across the clinics starting in April 2017. The primary evaluation of intervention impact was conducted using a mixed-effects regression model. The model identified three distinct project periods (pre-, during, and post-intervention) for each QI clinic, leveraged the monthly EHR data, and accounted for the timing of intervention delivery in the QI clinics by contrasting clinic-level pre-intervention data with data collected during and after the intervention (stepped-wedge analysis). The stepped-wedge analysis was further augmented by adding comparison clinics' monthly data during the same assessment period.
Linear curves were fitted to the monthly outcomes as fixed effects, with baseline values and slopes of change separately estimated for QI intervention and comparison clinics. Additional fixed effects were included to allow the slopes of fitted curves for intervention clinic outcomes to change in relation to the intervention and post-intervention periods. Random effects were included at the levels of both the primary care provider (PCP) within each clinic and the clinic as a whole to account for correlation among monthly observations from the same PCP or the same clinic. Observations were also weighted by the number of target patients within each PCP’s monthly panel. Estimates of the differential slopes (pre-intervention, intervention and post-intervention) for the QI clinics and a single, study-long slope for the comparison clinics were used to assess the specific impact of the intervention on the QI clinics’ outcomes. Baseline differences between QI and comparison clinic characteristics, and between clinic and PCPs within each study group, were accounted for in differential baseline intercepts and slopes, and with random intercept and slope effects, respectively. See Additional File 2 for details of the mixed effect model and result interpretation.
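The panel-size weighting described above (observations weighted by the number of target patients in each PCP's or clinic's panel) amounts to a standard weighted mean. A minimal sketch, with illustrative inputs:

```python
def weighted_clinic_mean(clinic_values, panel_sizes):
    """Weighted average of a clinic-level metric (e.g., fraction of target
    patients with a current treatment agreement), weighted by each clinic's
    target patient panel size, so larger panels contribute proportionally.
    """
    total_weight = sum(panel_sizes)
    if total_weight == 0:
        raise ValueError("panel sizes must sum to a positive number")
    return sum(v * w for v, w in zip(clinic_values, panel_sizes)) / total_weight
```

For instance, two clinics with agreement rates of 0.50 and 0.00 and equal panel sizes yield a weighted mean of 0.25; if the first clinic's panel were three times larger, the weighted mean would rise to 0.375.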

A subgroup analysis was conducted among target patients treated with MED ≥90 mg/day, because of their increased risk for opioid-related harm [18].

The significance and magnitude of changes were assessed with p values (significance level: two-tailed p < 0.05), 95% Confidence Intervals (CIs), and/or Cohen’s d effect size (ES, 0.2–0.4: small; 0.5–0.7: medium; ≥0.8: large) [25].
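The Cohen's d effect size and the categories used above can be expressed computationally. This is a standard pooled-SD formulation for two independent groups; the report's bands leave the intervals 0.4-0.5 and 0.7-0.8 unlabeled, so the cutoffs below (0.2, 0.5, 0.8) are one reasonable mapping, not the authors' exact rule.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def effect_size_label(d):
    """Map |d| onto the small/medium/large categories cited in the text [25]."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"
```

With equal group SDs of 1, a one-unit difference in means gives d = 1.0, a large effect by these thresholds.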

Results

Process-related findings

A total of 215 unique health care providers, including 73 prescribers and 142 other clinic staff from the enrolled 4 family medicine and 5 internal medicine clinics completed at least one component of the QI intervention (QI participants; Table 1). Among the QI participants, 48.4% completed half or more of the intervention components; 44.7% completed at least 4 of the 6 in-person practice facilitation sessions; 31.2% completed the opioid prescribing and 23.2% completed the shared decision making online modules (Table 1). The intervention participation was voluntary, and not all clinic health care providers participated; although data on the total clinic staff were not collected or available, it was estimated that fewer than 50% of each clinic’s staff received the intervention.

Table 1 Completion of the intervention components among the quality improvement (QI) intervention clinics’ staff

Other process-related findings are detailed in Additional File 4. Briefly, pre-intervention, the clinicians/staff identified responsible opioid prescribing practices, shared decision making, and the management of patients with chronic pain as areas of educational need. Based on the post-intervention evaluation, this need was met through the QI intervention’s components. Overall, the intervention participants identified learning through the project about the conduct of a QI process and its impact assessment, and working better as a team as important outcomes of the project and useful skills that are “transferrable” toward other initiatives.

Baseline characteristics of the target patient population

Across the 26 evaluated primary care clinics, 3148 target patients (58.1% women, mean age 53.3 (SD 13.8) years) were identified, with 1431 in the QI and 1717 in the comparison clinics. These patients comprised 1.9% of the QI and 2.0% of the comparison clinics’ adult patient panels. A comparison of weighted, clinic-level baseline characteristics showed that the target patients in the QI and comparison clinics (Table 2) did not differ in a statistically significant way in relatively high daily MED doses and the rates of opioid-benzodiazepine co-prescribing. They also did not differ in their overall low rates of “current” treatment agreements, urine drug testing, completed opioid risk assessment, and documented PDMP check. However, the QI clinics, relative to the comparison clinics, had higher rates of completed depression screening (8.1% (SD 10.4) vs. 1.1% (SD 1.3), p = 0.019) and of patients prescribed MED ≥90 mg/day (23.0% (SD 8.8) vs. 15.5% (SD 7.3), p = 0.038). No statistically significant differences in baseline characteristics were noted in a subgroup of target patients treated with MED ≥90 mg/day in the QI (N = 359) and comparison (N = 283) clinics.

Table 2 Baseline characteristics of the target patient population

Primary outcome analysis: mixed-effects regression analysis

Regression analysis of monthly outcomes by clinic and prescriber accounted for variations in the timing of the QI intervention from clinic to clinic within the QI clinic group. This analysis did not reveal statistically significant changes in outcomes in the stepped-wedge pre-post analyses, which contrasted the pre-intervention period with the combined intervention and post-intervention periods. Augmentation with data from the comparison clinics did not impact these findings. See Additional File 3 for detailed results.

However, when the evaluation specifically focused on the intervention period, separating it from the post-intervention period, several statistically significant changes were noted in the QI clinics, after accounting for the trends in the comparison clinics (see Additional File 3). The completion rate of new treatment agreements (incidence rate) increased in the QI clinics during the intervention months both in the overall target population (by 9.4%; p = 0.023; 95%CI = [0.028,0.159]) and in the subgroup of patients treated with MED ≥90 mg/day (by 15.9%; p = 0.044, 95%CI = [0.029,0.289]); these differences were not sustained post-intervention in either group. In addition, in the QI clinics, among the overall target population and in the high-dose subgroup, MED decreased post-intervention (by − 2.75 and − 6.50 mg/day, respectively), but not during the intervention period (0.80 and 12.43 mg/day, respectively).

Secondary outcomes: means tests

Both the QI and comparison clinics improved on all outcomes among the target population (Table 3), except the prevalence of opioid-benzodiazepine co-prescribing, which improved in the comparison clinics only (p = 0.006). When comparing the change in outcomes between the QI and comparison clinics, no statistically significant differences were noted. However, Cohen's d effect sizes favored the QI clinics, except for opioid-benzodiazepine co-prescribing (a small ES in the intervention clinics and a moderate ES in the comparison clinics).

Table 3 Target Patient Population: Change in Outcomes

Among the subgroup of target patients prescribed MED ≥90 mg/day (Table 4), all outcomes tended to improve in both the QI and comparison clinics. Comparing the change in outcomes, prevalence of urine drug screening increased twice as much in the QI clinics (38.8% (SE 4.4) vs. 19.1% (SE 7), p = 0.020). While there were no other statistically significant differences between the QI and comparison clinics, Cohen’s d effect sizes again favored the QI clinics.

Table 4 Target Patient Population Treated with ≥90 mg/day of Morphine-equivalent Opioid Dose: Change in Outcomes

Discussion

The goal of this project was to rigorously evaluate whether a multi-component QI augmentation of "routine" system-level implementation of opioid prescribing policy would increase clinician adherence to the policy recommendations and improve opioid prescribing practices, assessed via clinic-level EHR data. Strengths of this study include its large sample size, breadth of application, pragmatic approach to testing under actual clinical conditions, and conservative approach to outcome analysis conducted at the global clinic level. Those strengths, however, also led to the primary limitations of this QI project, which did not account for potential confounders, such as differences between the QI clinics in staff engagement (the percentage of each clinic's staff participating in the QI initiative; completion of the intervention components by the participating staff), leadership support, or the improvement targets selected at each clinic. Not all prescribers received the intervention; yet, because all target patients within a clinic were evaluated as an aggregate EHR data set, we were unable to separate the outcomes of patients whose prescribers and other health care providers received the intervention from those of patients whose clinicians did not. In addition, the participation numbers indicate that prescribers were less engaged in the QI intervention than other clinical staff. These limitations make all the more notable the statistically significant difference in urine drug screening among higher-risk patients (those treated with high-dose opioids) and the consistent favoring of the QI clinics, relative to the comparison clinics, by the Cohen's d effect sizes for change in outcomes.

Another limitation was the absence of randomization in clinic assignment to intervention versus comparison groups. Randomization would have avoided potential selection bias; however, because clinic participation was voluntary, a convenience sample approach was necessary. Although we excluded clinics actively engaged in other opioid-related QI initiatives, the approached clinics that declined to participate may have done so because they had more fully embraced the system-wide policy changes, such that their addition to the comparison group artificially elevated that cohort's average changes. In addition, each intervention clinic selected its own targets for focused improvements, potentially further diluting the intervention effects on any one measure.

State-wide legislative changes during our QI project represented an additional potential confounder, which could have substantially impacted clinician behavior [26]. In April 2017, a state law went into effect requiring clinicians to check a patient's PDMP record before prescribing any controlled substances, including opioids. As a result, many health systems across the state, including the evaluated health system, implemented a system-wide requirement and related workflows to document a PDMP check prior to issuing prescriptions for controlled substances. This led to essentially 100% adherence to this practice starting in April 2017. In addition, in 2017, the State Medical Examining Board introduced a new requirement that all prescribers complete 2 h of approved Continuing Medical Education (CME) on opioid prescribing guidelines for chronic pain. While we ameliorated the impact on our PDMP metric by changing that measure's end date to March 2017, there is little question that these two external factors, together with the general increase in public discourse on opioid-related harms during the QI period, likely diluted our intervention effects. The legal and licensing changes, and the PDMP check requirements, can lead to a decrease in the number of opioid prescriptions and in prescribed opioid doses [24]. However, these changes do not necessarily translate into reduced overdose admission rates, which have in fact increased in the state [25]. This suggests that legislative and other system-level environmental changes with "coercive" components (e.g., potential for legal or disciplinary consequences in the absence of PDMP check documentation prior to providing an opioid prescription) can alter clinician behavior, a hypothesis supported by the behavior change framework developed by Michie et al. [26].

Our baseline and exit data demonstrated that there was substantial room for improvement in the monitoring of patients treated with long-term opioids, indicating that developing methods to increase clinician adherence to opioid prescribing guidelines remains an important area of research to improve patient outcomes. This QI project did not show a statistically significant impact of our intervention on the clinic-level primary or secondary outcomes in a non-randomized stepped-wedge analysis. However, the intervention did yield a marked, statistically significant increase in the urine drug screening rate among higher-risk patients treated with high-dose opioids. It also yielded a transient increase in the incidence of new treatment agreements during the intervention period. Further, the magnitude of change in outcomes for all target patients, as assessed by Cohen's d effect size values, suggests that the intervention may help reduce high-dose opioid prescribing in primary care patients treated with long-term opioids for chronic pain. In addition, the QI intervention was well-received and rated as useful by the participating clinicians and other clinic staff, though completion rates suggest that in-person PF sessions may have higher utility than the self-directed, online educational modules. Intervention participants also identified learning how to formally conduct a QI initiative and assess its impact, and working better as a team, as important outcomes of the project and skills that are "transferrable" toward other, future QI initiatives. Primary care clinics with QI orientation and skills are more likely to continue working toward improving their care delivery and patient outcomes [15].
For these reasons, our results suggest that a similar intervention, favoring in-person delivery and implemented clinic-wide, could succeed if tested under more rigorous conditions, for example, by randomly assigning clinics to the intervention arm and delivering the QI intervention to all clinic prescribers of moderate- and high-dose opioid therapy, or by evaluating results only for patients whose prescriber(s) received the intervention.

Both guideline content and construct can impact implementation [27]. Complex guidelines with low “implementability” characteristics, such as the opioid prescribing guidelines and the recommendations targeted by this QI project, have a reduced likelihood of successful implementation [27]. Revising the opioid prescribing guideline with attention to the implementability of recommendations would likely increase its adoption.

Conclusions

Augmenting routine policy implementation with targeted QI intervention, delivered to volunteer clinic staff, did not additionally improve clinic-level, opioid guideline-concordant care metrics. However, the observed effect sizes suggested this approach may be effective, especially in higher-risk patients, if broadly implemented.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available due to institutional policy on patient data sets, but may be available from the corresponding author on reasonable request.

Abbreviations

CI: Confidence interval

EHR: Electronic health record

LB, UB: Lower bound, upper bound

MED: Morphine-equivalent dose

MEDD: Morphine-equivalent daily dose

PC: Primary care

PCP: Primary care provider

PDMP: Prescription drug monitoring program

PF: Practice facilitation

QI: Quality improvement

SAS: Statistical analysis system

UDT: Urine drug testing

References

  1. 1.

    Institute of Medicine (IOM) of the National Academies. Relieving Pain in America: A Blueprint for Transforming Prevention, Care, Education, and Research. 2011. Accessed on Jul 20, 2020 at:https://www.nap.edu/catalog/13172/relieving-pain-in-america-a-blueprint-for-transforming-prevention-care.

    Google Scholar 

  2. 2.

    Agency for Healthcare Research and Quality (AHRQ). The Effectiveness and Risks of Long-term Opioid Treatment of Chronic Pain. AHRQ Publication No. 14-E005-EF. Rockville, MD: AHRQ; 2014. Accessed on Jul 20, 2020 at: https://effectivehealthcare.ahrq.gov/topics/chronic-pain-opioid-treatment/research.

    Google Scholar 

  3. 3.

    Manchikanti L, Abdi S, Atluri S, et al. American Society of Interventional Pain Physicians (ASIPP) guidelines for responsible opioid prescribing in chronic non-cancer pain: part I--evidence assessment. Practice Guideline. Pain Physician. 2012;15(3 Suppl):S1–65.

    PubMed  Google Scholar 

  4. 4.

    Volkow ND, Jones EB, Einstein EB, Wargo EM. Prevention and Treatment of Opioid Misuse and Addiction: A Review. JAMA Psychiatry. 2018. https://doi.org/10.1001/jamapsychiatry.2018.3126.

  5. 5.

    Rose AJ, Bernson D, Chui KKH, et al. Potentially inappropriate opioid prescribing, overdose, and mortality in Massachusetts, 2011-2015. J Gen Intern Med. Sep 2018;33(9):1512–9. https://doi.org/10.1007/s11606-018-4532-5.

    Article  PubMed  PubMed Central  Google Scholar 

  6. 6.

    Cochella S, Bateman K. Provider detailing: an intervention to decrease prescription opioid deaths in Utah. Pain Med. 2011;12(Suppl 2):S73–6. https://doi.org/10.1111/j.1526-4637.2011.01125.x.

    Article  PubMed  Google Scholar 

  7. 7.

    Franklin GM, Mai J, Turner J, Sullivan M, Wickizer T, Fulton-Kehoe D. Bending the prescription opioid dosing and mortality curves: impact of the Washington state opioid dosing guideline. Am J Ind Med. 2012;55(4):325–31. https://doi.org/10.1002/ajim.21998.

    Article  PubMed  Google Scholar 

  8. 8.

    Chen JH, Hom J, Richman I, Asch SM, Podchiyska T, Johansen NA. Effect of opioid prescribing guidelines in primary care. Medicine. 2016;95(35):e4760. https://doi.org/10.1097/MD.0000000000004760.

    Article  PubMed  PubMed Central  Google Scholar 

  9. 9.

    Fox TR, Li J, Stevens S, Tippie T. A performance improvement prescribing guideline reduces opioid prescriptions for emergency department dental pain patients. Ann Emerg Med. 2013;62(3):237–40. https://doi.org/10.1016/j.annemergmed.2012.11.020.

  10.

    Quanbeck A, Brown RT, Zgierska AE, et al. A randomized matched-pairs study of feasibility, acceptability, and effectiveness of systems consultation: a novel implementation strategy for adopting clinical guidelines for Opioid prescribing in primary care. Implement Sci. 2018;13(1):21. https://doi.org/10.1186/s13012-018-0713-1.

  11.

    U.S. Department of Health and Human Services. Healthy People 2010: Understanding and Improving Health. 2nd ed. http://www.health.gov/healthypeople.

  12.

    Levy B, Paulozzi L, Mack KA, Jones CM. Trends in opioid analgesic-prescribing rates by specialty, U.S., 2007-2012. Am J Prev Med. 2015;49(3):409–13. https://doi.org/10.1016/j.amepre.2015.02.020.

  13.

    Agency for Healthcare Research and Quality (AHRQ). Translating Research Into Practice (TRIP)-II: Fact Sheet. Rockville, MD: AHRQ; 2001. Accessed on Jul 18, 2020 at: http://archive.ahrq.gov/research/findings/factsheets/translating/tripfac/trip2fac.html.

  14.

    Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8(6):iii–iv, 1–72.

  15.

    Agency for Healthcare Research and Quality (AHRQ). Quality Improvement in Primary Care. Content last reviewed April 2020. AHRQ, Rockville, MD. Accessed on July 18, 2020 at: https://www.ahrq.gov/research/findings/factsheets/quality/qipc/index.html.

  16.

    Manchikanti L, Abdi S, Atluri S, et al. American Society of Interventional Pain Physicians (ASIPP) guidelines for responsible opioid prescribing in chronic non-cancer pain: part 2 – guidance. Pain Physician. 2012;15(3 Suppl):S67–116.

  17.

    Nuckols TK, Anderson L, Popescu I, et al. Opioid Prescribing: A Systematic Review and Critical Appraisal of Guidelines for Chronic Pain. Ann Intern Med. 2014;160(1):38.

  18.

    Chou R, Fanciullo GJ, Fine PG, et al. Clinical guidelines for the use of chronic opioid therapy in chronic noncancer pain. J Pain. 2009;10(2):113–30. https://doi.org/10.1016/j.jpain.2008.10.008.

  19.

    Dowell D, Haegerich TM, Chou R. CDC Guideline for Prescribing Opioids for Chronic Pain - United States, 2016. MMWR Recomm Rep. 2016;65(1):1–49. https://doi.org/10.15585/mmwr.rr6501e1.

  20.

    Zgierska AE, Vidaver RM, Smith P, et al. Enhancing system-wide implementation of opioid prescribing guidelines in primary care: protocol for a stepped-wedge quality improvement project. BMC Health Serv Res. 2018;18(1):415. https://doi.org/10.1186/s12913-018-3227-2.

  21.

    Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (standards for QUality improvement reporting excellence): revised publication guidelines from a detailed consensus process. Jt Comm J Qual Patient Saf. 2015;41(10):474–9.

  22.

    Kroenke K, Spitzer RL, Williams JB. The patient health Questionnaire-2: validity of a two-item depression screener. Med Care. 2003;41(11):1284–92. https://doi.org/10.1097/01.MLR.0000093487.78664.3C.

  23.

    Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606–13.

  24.

    Belgrade MJ, Schamber CD, Lindgren BR. The DIRE score: predicting outcomes of opioid prescribing for chronic pain. J Pain. 2006;7(9):671–81. https://doi.org/10.1016/j.jpain.2006.03.001.

  25.

    Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Erlbaum; 1988.

  26.

    Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(1):42. https://doi.org/10.1186/1748-5908-6-42.

  27.

    Bellg AJ, Borrelli B, Resnick B, et al. Enhancing treatment Fidelity in health behavior change studies: best practices and recommendations from the NIH behavior change consortium. Health Psychol. 2004;23(5):443–51. https://doi.org/10.1037/0278-6133.23.5.443.


Acknowledgements

The authors would like to thank Drs. June Dahl, Nathan Rudin, Rodney Erickson and France Légaré for reviewing and providing feedback on the content of educational modules and other components of the intervention.

Funding

This work was supported by an unrestricted researcher-initiated grant from Pfizer (Pfizer Independent Grants for Learning and Change, #16213567, January 2015 – December 2017). The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.

Author information

Contributions

All authors substantially contributed to this work in accordance with editorial guidelines. AEZ contributed to project conception, design, and conduct, and drafted the manuscript. JMR contributed to design, analysis, and interpretation of data, and edited the manuscript. RMV, PS, MWA, KN, DB, WJT, and DLH contributed to project conception, design, and execution, and edited the manuscript. RPL contributed to the manuscript preparation and write-up. All authors read and approved the final version and agree to be accountable for their contributions.

Authors’ information

AEZ is a Professor who is board-certified in family medicine and addiction medicine, and provides both primary and specialty care; her research focuses on improving care and outcomes in patients with opioid-treated chronic pain or opioid addiction. JMR is Director and Senior Scientist at the Center for Health Systems Research and Analysis. RPL is an Associate Professor who is board-certified in family medicine; his research focuses on patient application and policy interventions to reduce the burden of chronic pain and opioid overuse. PDS is a Professor who is board-certified in family medicine, and provides primary care; his research centers on patient-provider communication. KN is a program manager and MWA the director for an organization with an over 100-year history of providing advanced medical education. DB is a research coordinator at a primary care research network with experience in practice facilitation. WJT is a database analyst specializing in extracting complex clinical information from electronic health records. RMV is a doctorally trained program manager for a primary care research network with experience in primary care study design and implementation. DLH is a senior scientist who is board-certified in family medicine and directs a primary care research network.

Corresponding author

Correspondence to Aleksandra E. Zgierska.

Ethics declarations

Ethics approval and consent to participate

The University of Wisconsin-Madison Institutional Review Board determined that this project was a QI initiative and did not constitute human subjects research as defined under 45 CFR 46.102(d); informed consent was therefore not required.

Consent for publication

Not Applicable.

Competing interests

The funding for this study came from a competitive peer-reviewed researcher-initiated unrestricted grant from Pfizer, awarded to the Interstate Postgraduate Medical Association in partnership with the University of Wisconsin; Pfizer was not involved in the design and conduct of this project or the manuscript write-up. Some of the authors (AEZ, JMR, PDS, DB, WJT, RMV, DLH) were employed during the project by the University of Wisconsin (UW); the UW Health primary care clinics were the target for the described QI intervention. In addition, some of the authors have received funding support for scientific projects in the past 3 years from the following external agencies: National Institutes of Health (AEZ), Patient-Centered Outcomes Research Institute (AEZ, WJT), U.S. Department of Justice (AEZ, JMR), Wisconsin Department of Health Services (AEZ), Wisconsin Department of Transportation (JMR), and Pfizer (AEZ).

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Sample Size Estimation

Additional file 2.

Mixed effect model and sample results

Additional file 3.

Summary of mixed effects model results

Additional file 4.

Process measures and related findings

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Zgierska, A.E., Robinson, J.M., Lennon, R.P. et al. Increasing system-wide implementation of opioid prescribing guidelines in primary care: findings from a non-randomized stepped-wedge quality improvement project. BMC Fam Pract 21, 245 (2020). https://doi.org/10.1186/s12875-020-01320-9

Keywords

  • Quality improvement
  • Primary care
  • Chronic pain
  • Health care delivery
  • Physician behavior
  • Opioids