Building capacity for continuous quality improvement (CQI): A pilot study.

BACKGROUND AND OBJECTIVE
Little is known about the feasibility, effectiveness, and sustainability of CQI approaches in substance use disorder treatment settings.


METHODS
In the initial phase of this study, eight programs were randomly assigned to receive a CQI intervention or to a waitlist control condition to obtain preliminary information about potential effectiveness. In the second phase, the initially assigned control programs received the CQI intervention to gain additional information about intervention feasibility while sustainability was explored among the initially assigned intervention programs.


RESULTS AND CONCLUSIONS
Although CQI was feasible and sustainable, demonstrating its effectiveness using administrative data was challenging, suggesting the need to better align performance measurement systems with CQI efforts. Further, although the majority of staff were enthusiastic about the approach and reported provider and patient benefits, many noted that dedicated time was needed to implement and sustain it.


Introduction
Substance use disorders (SUDs) are a significant health problem. An estimated 21.7 million individuals in the United States aged 12 or older needed substance use treatment in the past year (Center for Behavioral Health Statistics and Quality, 2016). Substance use disorders have numerous health, legal, and social consequences; the National Institute on Drug Abuse (NIDA) estimates substance use disorders cost more than $700 billion annually in health care, crime, and productivity costs (National Institute on Drug Abuse, 2015).
In 2006, the Institute of Medicine (IOM) recommended the institution of quality improvement strategies for improving SUD care (Institute of Medicine, 2006). Continuous quality improvement (CQI) is one potential strategy for sustainable quality improvement. Although definitions of CQI vary, three minimum features are systematic, data-guided activities; design with local conditions in mind; and iterative development and testing (Rubenstein et al., 2014). CQI could be an effective strategy for improving SUD treatment because examining data on an ongoing basis could help motivate program staff to try new strategies to improve outcomes, and tailoring these approaches to local conditions could increase staff buy-in and engagement in the process (Roosa, Scripa, Zastowny, & Ford, 2011; Schierhout et al., 2013).
Although CQI has been a successful approach to improving quality and outcomes in the U.S. manufacturing sector, where it was initially developed, results have been mixed in health care. Systematic reviews report varying levels of effectiveness (e.g., Schouten et al., 2008; Nadeem et al., 2013), and some researchers have reported that CQI initiatives fail about half of the time (Humphreys et al., 2012). Furthermore, it is often unclear what CQI entails. One review examined studies of Plan-Do-Study-Act (PDSA) cycles and found that few adhered to key features of the method: an iterative cycle framework, prediction-based and small-scale testing, use of data over time, and documentation (Taylor et al., 2014).
Moreover, relatively little is known about CQI effectiveness in nontraditional health care settings, such as community-based SUD treatment. One exception is the well-studied Network for the Improvement of Addiction Treatment (NIATx) approach (Gustafson et al., 2013; Hoffman, Ford, Choi, Gustafson, & McCarty, 2008; McCarty et al., 2007; Quanbeck et al., 2011). NIATx sought to improve specific treatment processes (i.e., wait times, program admissions, and retention) by using five principles to support organizational change: PDSA cycles; understand and involve the customer; fix key problems; pick a powerful change leader; and get ideas from outside the organization (Hoffman et al., 2012). The NIATx approach was shown to reduce wait times and increase retention, but not to increase admissions, indicating some success in using CQI methods to improve treatment processes (Gustafson et al., 2013; McCarty et al., 2007). However, the NIATx effort required data tracking on a number of different processes, which many participating agencies reported as burdensome (Wisdom et al., 2006), raising concerns about feasibility. Also, little is known about how CQI influences longer-term provider and patient outcomes in SUD treatment settings, and how often CQI is sustained once a researcher-led intervention ends. Such questions may be particularly relevant, as previous studies have noted the importance of adapting CQI efforts to local priorities to enhance effectiveness (Schierhout et al., 2013).
Our study was guided by a stage-based treatment development approach (Rounsaville, Carroll, & Onken, 2001). The CQI intervention had been previously developed (Chinman, Hunter, & Ebener, 2012) but had not been rigorously studied. Key research questions were whether this CQI approach was feasible and sustainable and whether preliminary evidence of its effectiveness could be generated. To determine feasibility, we assessed CQI training participation, the extent of CQI implementation, and staff perceptions of the approach. To measure preliminary effectiveness, we conducted a group-randomized pilot study in which programs that received one year of CQI training and technical assistance were compared to programs that did not receive the intervention. Study hypotheses were that the CQI intervention would lead to staff- and patient-level improvements. CQI sustainability was explored by examining whether CQI activities continued one year following the end of the CQI intervention.

Study design
Figure 1 outlines the study design. Eight treatment programs were randomized to receive either an immediate or delayed CQI intervention that lasted for one year (Hunter, Ober, Paddock, Hunt, & Levan, 2014). Randomization was stratified by service modality, with two residential and two outpatient programs assigned to the immediate intervention (labeled "Cohort 1") and two of each to the wait-list control condition (labeled "Cohort 2"). Preliminary effectiveness was examined by comparing Cohort 1 and Cohort 2 outcomes after the first intervention year. Feasibility was examined using data from both cohorts pre- and post-intervention. Sustainability was explored by examining data collected from Cohort 1 participants one year after the intervention had ended.

Study setting
The study setting was a large non-profit SUD treatment agency in Los Angeles County that received a mix of public and private funding. The programs served a diverse patient population (i.e., 60% male; 26% White, 29% African American, 40% Hispanic, and 5% Asian and/or other) with each program treating an average of 197 residential patients or 320 outpatients annually. Staff from eight of the agency's largest SUD treatment programs were asked to participate in the study representing four residential and four outpatient programs.

Participants
As part of the intervention, two staff from each program were asked to attend monthly CQI meetings held at the agency's headquarters: one member representing administrative staff (program director and/or clinical supervisor) and one clinical team member (i.e., counselor). Following randomization, program directors were asked to select a counselor who had been employed at least one year to attend the meetings. The size of the clinical staff at the eight sites ranged from 2 to 12 (median = 7).

CQI intervention
This intervention focused primarily on the use of Plan-Do-Study-Act (PDSA) cycles for quality improvement. The intervention incorporated an empowerment evaluation approach (Fetterman & Wandersman, 2005), where clinical program staff primarily led the development and execution of the "CQI Actions", that is, specific improvement strategies, rather than organizational leadership or outside entities. The CQI actions were based on a systematic assessment of process and outcome data as part of the "Plan" phase of PDSA. Participating staff were supported by monthly in-person meetings facilitated by study investigators (SBH and AJO) and agency leadership (i.e., the Quality Assurance Coordinator).
The process and outcome data included admission rates, patient length of stay, patient satisfaction, and patient discharge status, along with agency or program funder benchmarks (e.g., 80% of patients will stay in treatment at least 30 days). The "Plan" phase consisted of two 90-minute monthly meetings to discuss the process and outcome data from the past two fiscal years. In these meetings, program staff reviewed their program's data and compared them to other agency programs and to established benchmarks, guided by worksheets that prompted staff to identify areas in which the program was performing better than, at, or worse than expected.
At the third monthly meeting, staff were instructed to identify a process or outcome indicator to address with an improvement strategy (labeled a "CQI Action"). The identification process included an assessment of how feasible it would be to execute a change in their program over the next 90 days. Next, staff were instructed to specify key CQI Action tasks, including who would be responsible for the tasks, what resources would be needed, and the timeline. Extra meeting time was allocated for program staff to describe their CQI Action plans with the other meeting attendees before implementation (i.e., 180 minutes was allocated for this meeting).
Following a decision to execute the CQI Action, participating staff continued to meet monthly with the CQI support team (study PIs and agency leadership) and staff from other participating programs, to discuss progress, share lessons learned and receive guidance on the next PDSA phases. During these monthly meetings, staff from each program were asked to report on what changes they had attempted, how staff and/or patients had responded to the changes, what their next steps were and whether they needed further assistance.
During each meeting, staff had access to worksheets from the CQI implementation toolkit (Hunter, Ebener, Chinman, Ober, & Huang, 2015) that helped to document their program's progress in each PDSA phase. Once staff documented completion of the "Act" phase, they were congratulated and asked to consider developing a new "CQI Action" using the process and outcome data they had initially reviewed or more recent information about program performance. More information about the CQI Actions is documented elsewhere (Hunt, Hunter, & Levan, 2017).

Following randomization, the first cohort received the intervention. A year later, Cohort 2 received the intervention. Staff data were collected annually, immediately prior to the start of the intervention and at the end of delivery. Length of stay data were examined prior to when the study began and compared to the intervention period when the CQI Actions were taking place. Discharge status was available by agency fiscal year (i.e., July-June).

Measures
We first present the measures related to the feasibility and sustainability study aims and then the staff and patient outcome measures related to the effectiveness aim.

Feasibility
CQI meeting attendance: Attendance at the monthly in-person CQI meetings was monitored. For programs receiving the intervention, we tracked whether an administrative representative and at least one clinical staff member from each program attended the monthly CQI meetings.

Number of PDSA cycles accomplished:
Investigators coded how many of the PDSA phases (1-4, representing the Plan, Do, Study, and Act phases, respectively) and how many total PDSA cycles program staff achieved. This was accomplished by reviewing the worksheets that staff completed as part of the CQI intervention. The information was verified using field notes from the monthly meetings where staff reported on progress, and from the semi-structured interviews completed by trained field interviewers at the 12- and 24-month time points, which asked about status with regard to the PDSA cycle.

Staff perceptions of CQI:
As part of the interviews, staff rated their enthusiasm about CQI participation on a one- to ten-point scale, where "1" represented "not at all enthusiastic" and "10" represented "very enthusiastic". After the intervention, staff rated how difficult it had been personally to work on the program's CQI Action, with "1" representing "easy" and "10" representing "most difficult". Following these ratings, an open-ended question asked participants to explain the reason for their rating. After the intervention, participants were also asked what worked well, whether they thought CQI had an impact in their programs, and what key issues or barriers arose.
Following the interviews, staff were prompted to complete a web-based survey. The survey questions included items from the Innovation Attribute scale (Moore & Benbasat, 1991), which was designed to measure perceptions of adopting a new innovation. The following sub-scales were used: relative advantage over usual practices (5 items; α = 0.90; e.g., "Using CQI improves the quality of the work I do"); compatibility (3 items; α = 0.86; e.g., "Using CQI is compatible with all aspects of my work"); ease of use (4 items; α = 0.84; e.g., "CQI is clear and understandable"); and observability/demonstrability (4 items; α = 0.79; e.g., "The results of using CQI are apparent to me"). These subscales are consistent with intervention feasibility definitions (e.g., see Proctor et al. (2011)). Response options ranged from "1" (representing "extremely disagree") to "7" (representing "extremely agree"), with higher values representing perceptions that CQI was more advantageous than existing practices, was compatible with their agency, was easy to use, and had an impact that was easily observable/demonstrable.

CQI sustainability:
At 24 months, Cohort 1 staff were asked in the interviews whether the different components of the CQI intervention had continued, such as maintenance of their CQI Action, the development of new CQI Actions, and CQI meeting participation.

Staff attrition:
Staff employment rates at each of the programs were monitored to track attrition rates throughout the study using the agency employment records. Prior to data collection and randomization, staff rosters at each of the eight selected programs were obtained. Attrition was calculated as the percentage of workers employed by the program at pre-intervention who were no longer on the rosters at 12-months.

Job satisfaction:
The job satisfaction scale consisted of six items that were part of the Texas Christian University Survey of Organizational Functioning (TCU SOF) instrument (Lehman, Greener, & Simpson, 2002). An example item is "You are satisfied with your present job". Response options ranged from "1" for "Strongly Disagree" to "5" for "Strongly Agree". Average scores were multiplied by 10 to rescale final scores from 10 to 50 (e.g., an average response of 2.6 became "26").

Job morale:
Two subscales of the Maslach Burnout Inventory (Maslach & Jackson, 1996) were used to assess job morale: the emotional exhaustion subscale and the personal accomplishment subscale. The emotional exhaustion scale consisted of nine items (e.g., "I feel emotionally drained from my work"). The personal accomplishment scale consisted of eight items (e.g., "I deal very effectively with the problems of my clients"). Response options ranged from "0" representing "never" to "6" representing "every day". Responses were summed and scores were categorized into low, average, and high levels based on scale norms (i.e., for the exhaustion scale, low <=16, average = 17-26, high >=27; for the accomplishment scale, which is scored in the reverse direction, low >=39, average = 32-38, high <=31). Low morale is characterized by high scores on the exhaustion scale and low scores on the personal accomplishment scale.
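As a concrete illustration of the scoring rules above, a minimal sketch follows. The rescaling rule and category cutoffs come from the text; the function names and sample responses are invented for illustration and are not from the study's analysis code.

```python
# Illustrative scoring for the staff measures described above. The rescaling
# rule and category cutoffs follow the text; function names and the sample
# responses are hypothetical.

def score_job_satisfaction(items):
    """Six 1-5 items; the mean response is multiplied by 10 (range 10-50)."""
    assert len(items) == 6
    return sum(items) / len(items) * 10

def categorize_exhaustion(total):
    """Sum of nine 0-6 items, categorized per the norms given in the text."""
    if total <= 16:
        return "low"
    if total <= 26:
        return "average"
    return "high"

def categorize_accomplishment(total):
    """Sum of eight 0-6 items; scored in the reverse direction, so a high
    accomplishment total falls in the 'low' (burnout) category."""
    if total >= 39:
        return "low"
    if total >= 32:
        return "average"
    return "high"

print(score_job_satisfaction([3, 2, 3, 2, 3, 2]))  # 25.0
print(categorize_exhaustion(28), categorize_accomplishment(40))  # high low
```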

Patient outcomes
Length of stay: Patient length of stay refers to the time from treatment admission to the date of last service. The proportions of clients staying 3 days or more, 30 days or more, and 90 days or more were examined. Three days or more identifies patients who were successfully engaged following admission; the 30- and 90-day measures align with reporting requirements for both the outpatient and residential programs. These criteria were used rather than mean length of stay because the average value may be biased by censoring (i.e., some patients had not completed treatment by the end of the observation period).
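Under these definitions, the retention measures reduce to simple threshold proportions. The sketch below illustrates the computation; the episode dates and the `retention_rates` helper are invented for this example and are not from the study.

```python
from datetime import date

# Illustrative computation of the retention measures described above.
# The 3/30/90-day cutoffs mirror the text; the sample records are invented.

THRESHOLDS = (3, 30, 90)  # days-in-treatment cutoffs

def retention_rates(episodes, thresholds=THRESHOLDS):
    """episodes: list of (admission_date, last_service_date) pairs.
    Returns, for each cutoff, the proportion of patients staying at least
    that long. A threshold proportion avoids the censoring bias that a
    mean length of stay would carry for still-enrolled patients."""
    rates = {}
    for t in thresholds:
        n_met = sum(1 for adm, last in episodes if (last - adm).days >= t)
        rates[t] = n_met / len(episodes)
    return rates

episodes = [
    (date(2013, 2, 1), date(2013, 2, 2)),   # left after 1 day
    (date(2013, 2, 1), date(2013, 3, 10)),  # 37-day stay
    (date(2013, 2, 1), date(2013, 6, 1)),   # 120-day stay
]
# Two thirds of these patients stayed 3+ and 30+ days; one third stayed 90+.
print(retention_rates(episodes))
```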

Treatment completion status:
Treatment completion status at discharge was coded by each patient's primary treatment counselor. Positive treatment compliance was derived by summing the proportion of patients that successfully completed the program and the proportion of patients that left the program prior to completion with satisfactory progress. These data were aggregated at the program level each fiscal year.
Data collection procedures

Administrative data: Data were collected from the agency on staffing, patient length of stay, and treatment completion status across pre-specified 12-month intervals. The staffing information was aggregated and shared with the research team before the intervention and at the 12-month time point.
Staff interviews: Staff selected to participate in the CQI intervention were interviewed by phone by trained field interviewers at baseline, 12, and 24 months. The baseline interviews were conducted before randomization.
Staff surveys: Program administrative staff and clinical staff were asked to complete a web-based survey at three time points: baseline, 12 months, and 24 months. The baseline surveys were conducted before randomization.
Following the three data collection periods, participating staff received $25. Study procedures were approved by the research organization's Institutional Review Board prior to data collection.

Analytic strategy

Staff attrition: Attrition was evaluated using a difference-in-differences approach, comparing changes in attrition between cohorts from baseline to 12 months using t-tests.
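The difference-in-differences comparison amounts to contrasting per-program change scores between cohorts. The sketch below illustrates the idea with a hand-rolled Welch t statistic; the attrition values are invented, and the study may have used a different t-test variant.

```python
from statistics import mean, variance

# Sketch of the difference-in-differences comparison described above:
# per-program change in attrition from baseline to 12 months, contrasted
# between cohorts with a two-sample (Welch) t statistic. All numbers are
# illustrative, not study data.

def did_t_stat(changes_a, changes_b):
    """Welch t statistic comparing mean change scores of two groups."""
    na, nb = len(changes_a), len(changes_b)
    se = (variance(changes_a) / na + variance(changes_b) / nb) ** 0.5
    return (mean(changes_a) - mean(changes_b)) / se

# Baseline-to-12-month change in staff attrition (proportion) per program
cohort1_changes = [-0.05, 0.00, -0.10, -0.02]  # intervention programs
cohort2_changes = [0.03, -0.01, 0.04, 0.02]    # waitlist programs

print(round(did_t_stat(cohort1_changes, cohort2_changes), 2))
```

A negative statistic here would indicate attrition fell more in the intervention cohort than in the waitlist cohort; with only four programs per arm, the degrees of freedom are of course very small, which is why the paper frames these analyses as exploratory.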

Qualitative data: All interviews were recorded by professional field interview staff and transcribed. Next, two members of the research team independently reviewed each transcript. The researchers derived themes that were common within and across the interviews and resolved discrepancies before summary analysis was conducted. The main themes were based on the interview questions and a priori study hypotheses, and sub-themes were derived from participant responses. Definitions of the main and sub-theme codes were documented.

Survey data: Responses from staff who participated in the monthly CQI meetings were examined. CQI feasibility was evaluated using the difference from pre- to post-intervention responses across both cohorts. CQI effectiveness was evaluated using a difference-in-differences approach, comparing changes in job satisfaction responses from baseline to 12 months by cohort using t-tests. Proportions of responses in the different job morale categories were compared by cohort over time.
Patient length of stay: Length of stay was compared across two periods: 1) a pre-intervention period before the study began, and 2) a first intervention period when CQI Actions were taking place in Cohort 1 (February-August 2013). Chi-square tests were used.

Patient treatment completion status: The percentage of patients with positive compliance was compared across cohorts using chi-square tests. These data were available by agency fiscal year (July-June).

Programs and participants
Representatives from all eight programs participated in interviews at baseline, 12, and 24 months. Among the 24 administrative and clinical staff employed in the four Cohort 1 programs at pre-intervention, 9 (37.5%) participated in CQI meetings and interviews (i.e., 4 program directors and 5 clinical staff members). Among the 33 staff employed in Cohort 2 programs at 12 months, 9 (27.3%) participated in CQI meetings and interviews (i.e., 4 program directors and 5 clinical staff members). One Cohort 2 administrative staff member left during Year 2, which limited the sample of respondents with repeated assessments.
Fourteen of the 18 staff who participated in the CQI intervention completed both the pre- and post-surveys, for a 78% response rate. There were no statistically significant differences in response rates between cohorts.

Intervention feasibility
Feasibility was assessed via CQI meeting participation, PDSA completion rates, and information from interviews and surveys. Illustrative quotes related to feasibility are displayed in Table 1.
Participant attendance: As planned, typically two staff members from each program attended the monthly CQI meetings. Ten group meetings were held over the course of the 12-month intervention period. The average number of meetings attended by participants was 6.62 (SD = 3.07), with no differences found between Cohorts 1 and 2.
Number of PDSA cycles completed: Seven of the eight programs completed at least one PDSA cycle during the intervention period, with half of the programs completing two PDSA cycles. The mean number of PDSA phases completed, where one cycle comprises four phases (i.e., Plan, Do, Study, and Act), was 6.63 (minimum = 2, median = 8, maximum = 9). The one site that did not complete a PDSA cycle experienced Program Director turnover twice during the intervention period.

Perceptions of CQI: Prior to the intervention, enthusiasm was relatively high (M = 8.50; SD = 1.85). Following the intervention, enthusiasm decreased but remained relatively high, significantly above the scale mid-point (M = 7.93; SD = 2.24). Regarding perceptions of difficulty, most participants indicated that implementing CQI was relatively easy (M = 4.09, SD = 2.30). Table 2 shows pre- and post-intervention values on the Innovation Attribute measure. Pre-intervention perceptions were somewhat positive, indicating CQI was perceived as advantageous, compatible, easy to use, and demonstrable. Post-intervention, perceptions were statistically significantly different from pre-intervention on all scales, demonstrating more positive attitudes. On average, perceptions increased by one point on the seven-point scale, with the greatest increase on the results demonstrability subscale.
CQI facilitators that emerged were: 1) Staff and leadership buy-in; 2) CQI format and technical assistance provided; 3) team work; and 4) understanding of patient benefit. CQI barriers were: 1) Time; 2) staff resistance; and 3) resources (other than time). Staff from at least two programs mentioned each of these facilitators and barriers.
When staff were asked about their perceptions of impact, three themes emerged: 1) patient satisfaction and patient retention (reported by 7 programs); 2) staffing (reported by 6 programs); and 3) service delivery improvements and a sustainable process (i.e., PDSA cycles; reported by 5 programs). More specifically, several respondents stated that CQI had benefitted patients by improving retention in treatment and patient satisfaction. Participants also thought that the CQI actions had a positive impact on program staff, such as holding them accountable to activities, raising awareness about resources and finances, and creating a better work environment. With regards to the CQI process, respondents discussed desire to use PDSA cycles again.

Intervention effectiveness
Intervention effectiveness was examined by comparing change from baseline to 12-months between the Cohort 1 and Cohort 2 programs.

Morale
Emotional exhaustion: At baseline, 63% of Cohort 1 respondents' exhaustion ratings were in the "low" category, whereas 80% of Cohort 2 respondents were in the "low" category. Only one person in each group changed ratings over time: one respondent in Cohort 1 shifted from a "high" to an "average" rating, and one respondent in Cohort 2 shifted from an "average" to a "low" rating.
Personal accomplishment: For this measure, we examined the percentage of respondents with a "low" score as compared to an "average" or "high" score in each cohort. At baseline, 38% of Cohort 1 ratings were in the "low" category, whereas 80% of Cohort 2 ratings were in the "low" category. Only one person in each group changed ratings over time, shifting to a lower rating at the 12-month time point.

Patient length of stay: Table 3 presents the percentage of patients in each study condition over time that met the three retention criteria. Patients in both conditions had similar and relatively high rates of meeting the 3-days-or-more criterion, and this rate changed little from baseline to the 12-month period. However, patients in the Cohort 1 programs had significantly higher rates of meeting the 30-days-or-more and 90-days-or-more retention criteria at baseline. The Cohort 1 rates did not change significantly over time, whereas the rates among the Cohort 2 programs improved but never reached the levels achieved by the Cohort 1 programs.

Patient treatment completion status: The average percentage discharged with positive compliance was initially higher in the Cohort 1 programs (baseline: Cohort 1 = 53.9%; Cohort 2 = 39.6%). The average difference-in-differences, however, was small, with the Cohort 1 programs decreasing slightly over time but still remaining significantly higher than Cohort 2 at 12 months (Cohort 1 = 49.9%; Cohort 2 = 40.1%).

Sustainability
At the 24-month time point, respondents from the Cohort 1 programs reported sustaining six out of the eight activities that were developed as part of the intervention. The sustained actions included: increased gender-specific programming, a more systematic intake process, new procedures for transitioning patients to aftercare, creation of a standardized curriculum, establishment of monthly staff meetings, supervision and trainings, and development of a family group. The two discontinued CQI Actions were the new patient orientation group and monitoring units of service. Staff reported discontinuing the two actions because they were no longer needed.
When asked how CQI could be sustained, respondents reported: ongoing training, monitoring and reinforcement; staff support and buy-in; creation of policies and procedures; and allocating staff time (see Table 1). Barriers to continuing CQI included time, resources, and staff support.

Discussion
This study found that CQI is feasible to implement in community-based SUD treatment settings. Participating staff remained enthusiastic after the intervention and rated CQI as relatively easy to implement. Most programs implemented more than one PDSA cycle over one year. Staff and organizational buy-in, technical assistance, teamwork, and recognizing benefits to patients were cited as CQI facilitators. Participants also identified a few barriers, including lack of time, resources, and staff resistance. Regarding effectiveness, there were no significant changes over time in staff outcomes between the two experimental conditions. Patient outcomes likewise remained similar across time in both groups, although the intervention group had higher rates both pre- and post-intervention.
This study contributes to the literature in several important ways. First, in accordance with recommendations generated from systematic reviews (Nadeem et al., 2013;Schouten et al., 2008), this manuscript provides specific details about the CQI intervention to facilitate identification of the key components. The detailed description of the intervention and related implementation toolkit will support future research and replication efforts. Additionally, this study used an innovative study design that maximized the opportunity to examine feasibility, effectiveness and sustainability within a relatively short study timeframe (i.e., two years). This design is highly applicable to the National Institutes of Health's intervention testing funding mechanism (R34) that provides three years of research support. Examining feasibility, effectiveness and sustainability within one trial may reduce the time between intervention development and translation into real world practice settings, which is well documented to be slow (Balas & Boren, 2000), especially with regard to community based SUD treatment (Lamb, Greenlick, & McCarty, 1998).
This study is also unique in that we examined staff and patient outcomes and used mixed methods to examine feasibility and effectiveness. In a previous review, approximately half of CQI studies used only self-reported staff measures (Schouten et al., 2008). Varying understandings of CQI and the pressure organizations feel to participate in such efforts may result in biased assessments (Counte & Meurer, 2001). We used two strategies proposed by Counte and Meurer (2001) to address this concern: we included respondents at both the administrator and clinician levels, and we assessed the extent of CQI implementation through program documentation, staff reports, and in-depth interviews conducted by trained field staff rather than the CQI researchers, thereby reducing the potential for bias. Moreover, we collected survey and administrative data at the organizational and patient levels to explore whether the intervention influenced staff and patient outcomes.
The feasibility results show that CQI can be implemented in SUD treatment settings. All but one program accomplished at least one PDSA cycle with most programs completing two PDSA cycles within the one-year intervention period. The one program that did not complete a PDSA cycle experienced administrative turnover twice during one year preventing consistent participation and support.
In general, staff perceptions were positive, and staff remained enthusiastic after the intervention. Roosa et al. (2011) also reported that SUD treatment providers found participating in quality improvement to be of high value to patients and staff. The positive experience with CQI may have an additional benefit down the line, as Schierhout et al. (2013) reported that prior positive experiences with CQI contributed to collective efficacy, an important mechanism for CQI effectiveness. Although this intervention did not appear to impact staff retention, a longer-term study with a larger sample might be better positioned to examine this question.
In terms of developing CQI for SUD treatment settings, staff identified several feasibility barriers. For example, staff suggested that there needed to be dedicated and billable time for CQI, which would require an administrative change in terms of how contracts are organized and performance standards are measured. Staff resistance was also noted as an issue, particularly for programs that were already undergoing changes or periods of disruption due to leadership changes or staff turnover. Moreover, during the study period the agency was making broader procedural modifications as a result of Medicaid changes. These barriers highlight the need for CQI to be internalized into the process of care, not to be viewed as something outside the current practice or in addition to the current workload.
Regarding sustainability, 75% of the CQI Actions were continued for another year demonstrating that the activities appeared valuable to program staff and the changes were maintained without continued external support. However, staff suggested more assistance would increase CQI activity in their programs and argued for allocated time to help support its use. More specifically, the current environment in which CQI is not a "billable" activity prevented some administrative staff and especially clinical staff, from devoting more time to CQI activities. These findings are consistent with the emerging literature suggesting that external contextual factors such as reimbursement as well as internal contextual factors including staff training and organizational support are needed to sustain evidence-informed practices in community service settings (Aarons, Hurlburt, & Horwitz, 2011;Hunter, Han, Slaughter, Godley, & Garner, 2017).
Although our findings are not conclusive, this is not surprising given the modest intensity of the intervention, the exploratory design with its small sample size, and the limitations of the available data. For example, the CQI intervention lasted one year, with the launch of the CQI Actions occurring in approximately the third or fourth month of that period, after staff had studied existing process and outcome data and developed a vetted plan to make a change. One would not expect changes in patient outcomes to be immediate given that the average time in treatment was over 90 days. Moreover, despite favorable self-reported ratings of CQI, more objective measures such as attrition rates, job satisfaction, and morale ratings did not appear to be influenced by the intervention, perhaps because the CQI Actions were not specifically designed to address these issues.

Limitations
There are several study limitations. First, the use of patient outcome data to explore CQI effectiveness proved challenging. For example, pinpointing the time points most closely associated with CQI's hypothesized impact was difficult. Moreover, the program data were often aggregated, making them difficult to tease apart for such purposes. For example, discharge data were available only by agency fiscal year due to limitations in the electronic record systems, which did not align with intervention timing. Second, many of the data from staff were based on self-report. Third, despite randomization, there were baseline differences between the two experimental conditions. We used difference-in-differences approaches to account for this, but these do not address potential ceiling or floor effects. Moreover, a lack of clinical improvements is not uncommon: a recent systematic review of quality improvement collaboratives covering 24 articles found that the greatest impact was at the provider level, whereas patient-level findings were less robust (Nadeem et al., 2013).

Conclusions
It is feasible to implement CQI in community-based SUD treatment settings. Detailed documentation of the CQI approach used in this study is available for future use and replication efforts, as recommended in prior research. Clinical as well as program management staff were enthusiastic about engaging in CQI and reported that they thought it improved the patient experience. However, allocating the time and staffing to conduct CQI in these typical treatment settings was noted as a potential barrier. Moreover, demonstrating CQI's impact on providers and patients using administrative data sources was challenging. These findings suggest that more work may be needed to align performance measurement systems with CQI efforts in order to empirically demonstrate effects.

Feasibility: Facilitators
Staff and Leadership Support and Buy-in: "Getting the complete and total buy in from program directors. Directors having the ability to disseminate to the staff … once everyone fully understood that it wasn't just extra work, possibly decrease some things, then everybody got involved."
Format and Technical Assistance: "This [CQI project] gave a guideline with the follow-up meetings and the communication and working with other team members, or sites, that were on the project it allowed them to maintain focus on the change that they were trying to make. That was the most helpful. A lot of times we put stuff down on paper, and it doesn't always pan out like that. So, to be able to make those adjustments, it had a lot to do with the feedback from the assistance we received from [the CQI facilitators] and other team members on the project."
Understanding of Patient Benefit: "It might sound pretty funny, but I feel awesome. I truly do because, I got to do something that not only is going to change my clients' outcome of their recovery, but it's something that my company is willing to implement company-wide, and I get to be a part of that. That's pretty awesome for me."

Feasibility: Barriers
Time: "In this particular field, it's hard to make the time for it. Even though it was once a month, it was still an hour and a half out of the day where we could be running a group, I could be meeting with clients, I could be doing intakes or writing notes. Meeting with my clinical director and talking about it again, everything takes away from the client. That's the bottom line."
Staff Resistance: "We are overwhelmed with what we're already doing. Too much change is coming on. We're overwhelmed. We're a little resistant to change right now. I think that we just need to stay afloat, keep on working on the [CQI Action], and then get feedback."
Resources (other than time): "Always fiscal challenges that prevent from moving as quickly as you'd like. Imagine fiscal challenges will always be there. The process of things that needs to happen because we're part of a huge organization. Steps that need to happen. Takes longer than you would like."

Perceptions of CQI Impact
Impact on Program/Staff: "From what I saw, staff awareness. Bringing them in the loop of what their services and how their services are translated financially. Gave them opportunity to see where they were at. Staff awareness."
Impact on Patient Outcomes: "It helped with retention … I could stay in touch with those clients better and they could get to know me better and gain some trust with me and could retain them."
Improvement in Patient Satisfaction: "Satisfied patients. Patients are getting the services they need, working with counselors. They're being treated well. They're lighting up while they're here."
Created Sustainable Process or Action: "We ask questions like "is there anything that you feel can be done different?". We use the CQI techniques that we have learned from [CQI facilitators], and we have adapted it in-house; getting others input, seeing questions that we may have wanted to ask/other topics that we might have wanted to bring up for the [CQI Action] and how we can improve it."

Sustainability: Facilitators
Ongoing Training and Reinforcement: "When we have turnover, maybe do a training again … maybe a refresher (for continuing staff). Constant refreshing. Similar to retail staff or hospital staff where they have mandatory training." "Probably the PDSA cycles need to be reinforced with the staff-the positive results. If there's more pressure from the corporate office, follow-through. We have so many projects we have to work on, so it is like which ones do I have to focus on. I think it is possible to maintain CQI at our agency, we just need a little extra pushing."

Table 2
Staff perceptions of CQI's innovation attributes pre- and post-intervention (n = 14)