Does place matter in the implementation of an evidence‐based program policy in an Australian place‐based initiative for children?

Abstract Policy‐mandated requirements for use of evidence‐based programs (EBP) in place‐based initiatives are becoming more common. Little attention has been paid to the geographic aspects of uneven market development and urbanicity in implementing EBPs in large place‐based initiatives. The aim of this study was to explore geographic variation in knowledge, attitudes, and experiences of service providers who implemented an EBP policy in Australia's largest place‐based initiative for children, Communities for Children. A cross‐sectional online survey of Communities for Children service providers was conducted in 2018–2019, yielding 197 participants from all of Australia's eight states and territories. Relationships between two measures of ‘place’ (thick and thin market states; urbanicity: urban, regional and remote) and study‐designed measures of knowledge, attitudes, and implementation experiences were analyzed using adjusted logistic and multinomial regressions. Participants from thin market states (outside the Eastern Seaboard) were more resistant to the policy and experienced greater implementation challenges than those from thick market states (Eastern Seaboard). Regional participants reported greater knowledge about EBPs but experienced greater dissatisfaction and implementation challenges with the policy than both urban and remote participants. Our study found that place does matter when implementing EBPs in a place‐based initiative.

In Australia, state-based differences in resources and governance interact with the market-based approach to public service delivery used by governments to promote choice and competition (Carey et al., 2017). Macro-level influences have resulted in geographic variations in the 'market thickness' of public services (Girth et al., 2012).
'Market thickness' refers to the level of competition in the market and can apply to the workforce, services, and products. 'Thin markets' are sometimes referred to as 'noncompetitive markets' (Girth et al., 2012; Warner, 2006). When applied to public services, thick markets have a large and diverse array of services and staff from which to choose, while thin markets have more difficulty accessing the services, staff, and technical support necessary to build capacity.
In Australia, thick markets are more likely located along the Eastern Seaboard comprising the most heavily populated states and largest cities (New South Wales, Victoria, Queensland) and the centre of the federal government (Australian Capital Territory). Thin markets are more likely to occur in the remaining states and territories (Northern Territory, South Australia, Western Australia, Tasmania), which have a larger combined land mass, smaller populations, and greater distance from the federal government centre (O'Neill & McGuirk, 2002).
What is known about this topic
• Policy mandates can help accelerate the use of evidence-based interventions; however, implementation requires careful consideration to ensure program effectiveness and sustainability.
• Uneven geographic development in Australia has created 'thin' markets, resulting in a scarcity of resources to support implementation in some areas.
• Little is known about how 'place' influences the implementation of evidence-based programs (EBPs) in place-based initiatives.

What this paper adds
• Place is a key factor in service providers' knowledge, attitudes, and experiences of implementing EBPs in Australia's largest place-based initiative for children.
• Services located in geographically 'thin' market states and non-urban areas face greater barriers to policy implementation than those in 'thick' market states and urban locations.
• The policy environment must recognise and respond to the unique service delivery context in 'thin' markets brought about by uneven geographic development.

Established by the Australian government in 2004, CfC is a place-based initiative to support children (birth to 12 years) and families in 52 disadvantaged communities ('sites') across Australia (Figure 1). On average, CfC sites are characterised by higher rates of unemployment and lone parent households, lower education levels, and greater cultural diversity, including high numbers of Indigenous families (Katz et al., 2007). In 2015, a policy was introduced requiring 30% of direct service delivery funding be spent on approved EBPs, rising to 50% by 2017. Sites could meet this using pre-approved programmes listed in the CfC Guidebook, or by collecting evaluation evidence for their home-grown programmes to be classified as 'Promising'; see Table 1 for criteria. Sites are responsible for selecting and funding programmes, including any associated staffing and training. Prior to the introduction of the policy, there were no programme stipulations other than that they met identified local needs.

Place-based initiatives such as CfC are founded on an understanding of the health impacts of local place-based variations in socioeconomic factors. Specifically, they focus on disadvantaged communities and seek to improve child health outcomes by providing programmes and services selected to meet identified community needs. When implementing such approaches, scant attention has been given to the potential effects of broader geographic differences, such as macro-level variations by state or urbanicity.
Additionally, few studies have directly examined the experiences of service providers implementing EBPs in place-based initiatives via a public policy mandate, and those that do are largely confined to checking programme fidelity (Goff et al., 2013). For example, in CfC, the Australian government has mandated that while communities can take a place-based approach to program choice, 50% of funding must be spent on EBPs. Our recent qualitative study found that government personnel tasked with overseeing implementation of the EBP policy in CfC perceived implementation as more challenging outside Eastern Seaboard states and in regional and remote areas, due to limited staff availability, high staff turnover, and the concentration of training and support services in major cities on the Eastern Seaboard (Figure 1).

Participants were eligible to participate if they worked in a CfC Facilitating Partner or Community Partner organisation. Given their occupation, it was assumed participants were over the age of 18 with sufficient English to complete the survey. Participation was anonymous, and potentially identifiable data were coded before analysis to avoid inadvertent identification.

| Recruitment
A cascade approach to recruitment was used, where DSS sent email invitations, including the survey link, to the 52 Facilitating Partners for further distribution to their staff and to their Community Partners. Consent was obtained electronically within the survey.

| Measures
The survey comprised study-developed measures to assess service providers' knowledge, attitudes, and implementation experiences of the CfC EBP policy (Table 2). These measures were directly informed by our earlier qualitative study and refined through consultation with key DSS personnel. Measures were knowledge about EBPs; understanding of policy rationale; attitude towards the policy and the 50% target; programme fidelity confidence; programme fidelity capacity building support; adequacy of Guidebook programme range; limitations in Guidebook programme fit; adverse impact on programme offerings; and staff and training challenges.

TABLE 1 Criteria for programmes to be included as part of the 50% evidence-based quota in Communities for Children

a. Evidence-based program criteria
A programme needs to meet the following criteria to be classed as evidence-based in CfC:
• The objectives of the programme are in line with the objectives of the CfC Facilitating Partner model.
• The programme is primarily targeted at children aged 0-12 years and their families.
• The following documented information about the programme is readily available:
  • aims, objectives and a theoretical basis for the programme;
  • a programme logic or similar;
  • the target group for the programme is clearly articulated; and
  • elements/activities of the programme and why they are important.
• The programme should include a training manual or documentation that allows for replication within Australia.
• Evaluation of the programme has been undertaken with the following characteristics:
  • Impact: At least one high-quality evaluation has been conducted that showed positive impacts on the desired outcomes of the program(s), and no negative effects were found. The programme must have been evaluated in a cultural setting that is similar to Australia.
  • Design (one or more of):
    • A randomised controlled trial or quasi-experimental design that has a sample size of at least 20 participants in each of the intervention and control groups.
    • A high-quality qualitative evaluation that includes at least 20 participants. The assessment of quality relies on availability of information about factors such as the selection/inclusion/recruitment processes, the nature and representativeness of the sample, the process for administering data collection tools, and the degree of independence from the programme developer/implementer.
    • A high-quality combination of the above (mixed methods).

b. Promising program criteria
Promising programmes must meet the following 5 criteria, representing minimum standards for a quality programme:
• A programme must have a documented theoretical and/or research background.
• A programme must have a clear program logic, which reflects good practice both in terms of logical pathways from activities to outcomes and in meeting the needs of the intended target group.
• The activities undertaken in the programme are documented, and activities generally match good practice in addressing the needs of the target group.
• One or more evaluations of the programme have been conducted (with a minimum total of 20 participants) that establish the programme as having positive benefits for the target group, with at least pre- AND post-testing of participant outcomes (ideally with a validated outcomes measurement tool), and a report is available.
• Staff members that run the programme are sufficiently qualified and/or trained.

Knowledge
Knowledge about evidence-based programs: 2 items, e.g. "I understand what is meant by the term 'evidence-based' in Communities for Children (CfC)"; "I am aware of which programs in my CfC meet the evidence-based program requirement". Rated: 1 "Not at all", 2 "Somewhat/partially", 3 "Yes - good knowledge". r = 0.59. Summed item scores: Good (6), 76.4%; Other (1-5), 23.6%.

Understanding of policy rationale: Single item, "I understand the rationale for this policy". Rated: 1 "Strongly disagree" to 5 "Strongly agree".

Attitudes
Attitude towards the policy: 5 items, e.g. "I support this policy"; "This policy has been a change for the better". Rated: 1 "Strongly disagree" to 5 "Strongly agree".

Implementation experiences
Program fidelity confidence: 3 items, e.g. "I know how programs can be adapted and implemented with fidelity". Rated: 1 "Not at all" to 5 "Very great extent". Mean item scores: Not at all to moderate (<4), 46.1%; Great to very great (≥4), 53.9%.

Program fidelity capacity building support
Single item: "I would like more information and support about program adaptation and implementation". Rated: 1 "Not at all" to 5 "Very great extent". Internal consistency: n/a. Single item scores: Not at all to moderate (1-3), 62.2%; Great to very great (4, 5), 37.8%.

Adequacy of Guidebook program range
3 items, e.g. "We have had no trouble choosing programs from the Guidebook that meet our community's needs"; "The Guidebook does not have enough suitable programs for our families". Rated: 1 "Strongly disagree" to 5 "Strongly agree"; two items reverse coded so that a higher score indicates greater adequacy. α = 0.84. Mean item scores: Range is not adequate (≤3), 81.2%; Range is adequate (>3), 18.8%.

Limitations in Guidebook program fit
Single item: "Meeting the 50% evidence-based program requirement sometimes means selecting programs from the Guidebook that are not an ideal fit for our community's needs". Rated: 1 "Strongly disagree" to 5 "Strongly agree". Internal consistency: n/a. Single item scores: Strongly disagree to neither agree nor disagree (1-3), 38.9%; Agree or strongly agree (4, 5), 61.1%.

Adverse impact on program offerings
Three items about the impact of the policy on the range of programmes on offer to communities, e.g. "This policy has meant we have ceased providing programs that worked for our families". Rated: 1 "Strongly disagree" to 5 "Strongly agree". Internal consistency: n/a (a). Frequency count of items rated agree or strongly agree: Little impact (count 0-1), 61.7%; Adverse impact (count 2-3), 38.3%.

Staff and training challenges
Five items about staff recruitment, turnover, and cost/availability of training, with the item stem "To what extent has your site/organisation experienced the following challenges to implementing evidence-based programs?", e.g. "Finding suitable staff with the required skills to deliver evidence-based programs"; "Limited availability of training". Rated: 1 "Not at all" to 5 "Very great extent". Internal consistency: n/a (a). Frequency count of items rated great or very great: Low challenges (count 0-1), 28.7%; High challenges (count 2-3), 71.3%.

(a) Internal consistency not applicable for checklists.

The survey was piloted with CfC sites, with minor modifications made to improve clarity and reduce length.
Two measures of place were used. State was classified as Eastern Seaboard (New South Wales, Australian Capital Territory, Victoria, Queensland) or Other (South Australia, Western Australia, Northern Territory, Tasmania). Urbanicity was measured by the 'relative remoteness' of the site location using the Accessibility and Remoteness Index of Australia (Australian Bureau of Statistics, 2018), derived by measuring the road distance to the nearest urban centre in five population ranges (major city, inner regional, outer regional, remote, very remote). The two regional and two remote categories were combined to create three population ranges for sites: urban (located in suburbs in or adjacent to major cities), regional (located in medium to large country towns and nearby communities), or remote (dispersed population located a vast distance from other cities or towns) (Figure 1).
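The recoding described above can be sketched as follows. This is an illustrative mapping only; the dictionary and function names are ours and are not part of the study's analysis code:

```python
# Collapse the five ARIA remoteness categories into the three-level
# urbanicity measure used in this study (names follow the text above;
# this helper is a hypothetical illustration, not the authors' code).

ARIA_TO_URBANICITY = {
    "major city": "urban",
    "inner regional": "regional",
    "outer regional": "regional",
    "remote": "remote",
    "very remote": "remote",
}

def classify_site(aria_category: str) -> str:
    """Map a site's ARIA remoteness category to urban/regional/remote."""
    return ARIA_TO_URBANICITY[aria_category.strip().lower()]
```

For example, a site in an inner regional town and one in an outer regional town both fall into the single 'regional' group used in the analyses.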

| Participants
Of 245 people who consented and commenced the survey, 48 (19.6%) dropped out prior to questions about the EBP policy and were excluded from analysis. The final sample comprised 197 participants: 70 from Facilitating Partner organisations and 127 from Community Partner organisations.

| Data preparation and analysis
Statistical analyses were performed using Stata 16 SE. Knowledge, attitudes, and implementation experiences were assessed using multi-item and single-item measures (Table 2). Two multi-item measures were derived as frequency counts: adverse impact on programme offerings (three items) and staff and training challenges (five items). For the remainder, internal reliability was examined using Cronbach's alpha and inter-item correlations. Imputation (to the neutral or mean level) was undertaken to account for item-level missing data (if <30% of items were missing per measure). Tests for normality showed most measures were skewed, with values close to a natural limit. Continuous and ordinal Likert scales were therefore converted to two- or three-group categories, with cut-points determined based on response distributions and to allow for meaningful interpretation (Table 2). All dependent variables used in analysis are categorical.
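As an illustration of the reliability and categorisation steps described above: the analyses in the paper were run in Stata 16 SE, but the same calculations can be sketched in Python. The function names and the cut-point example are ours, not the authors' code:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item measure.

    items: one list of responses per item, with respondents in the
    same order across lists. Uses the standard formula
    alpha = k/(k-1) * (1 - sum(item variances)/variance(totals)).
    """
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

def dichotomise_mean(scores, cut=4.0):
    """Collapse a mean item score into two groups, as in Table 2
    (e.g. fidelity confidence: mean >= 4 = 'great to very great')."""
    mean = sum(scores) / len(scores)
    return "great to very great" if mean >= cut else "not at all to moderate"
```

With perfectly consistent items, `cronbach_alpha` returns 1.0; lower inter-item agreement lowers alpha towards 0.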
Summary descriptive statistics were used to describe the place-based, organisational, and individual characteristics of the sample, stratified by organisation type (Facilitating Partner and Community Partner). Relationships between the two place-based factors (state and urbanicity) and all measures of knowledge, attitudes, and implementation experiences were initially analysed using chi-square tests. For multi-item measures, logistic regressions were conducted to further evaluate differences by place. Binary logistic regressions were performed separately for dichotomous dependent variables (knowledge about EBPs, fidelity confidence, Guidebook programme range, adverse impact on programme offerings, and staff and training challenges), and a multinomial regression was performed for one variable (attitudes towards policy change: positive, negative, and ambivalent), with 'ambivalent' as the comparison group. For each regression, place-based factors (state, urbanicity) were entered into the model (Model 1) and then adjusted for relevant individual and organisation characteristics (role, hours per week working at CfC, years working at CfC, organisation type, and education level; Model 2). Regression data are presented as adjusted odds ratios (aOR) and adjusted relative risk ratios (aRRR) with 95% confidence intervals (CI). All models were repeated with the inclusion of an interaction term between the two place-based factors. No significant interactions were found and results are omitted for brevity.
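For intuition about the reported effect sizes, the following sketch computes an unadjusted odds ratio and a Woolf (log-method) 95% confidence interval from a hypothetical 2×2 table. This is not the paper's adjusted model, which was fitted in Stata with covariates; the function name and counts are illustrative only:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% CI from a 2x2 table.

    a/b = outcome yes/no in the comparison group (e.g. regional),
    c/d = outcome yes/no in the reference group (e.g. urban).
    CI uses the Woolf (log) method: se of log(OR) is
    sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, 20/10 versus 10/20 gives an odds ratio of 4.0; adjusting for covariates, as in Model 2, would generally shift both the point estimate and the interval.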
Qualitative responses to open-ended questions were provided by 78 respondents (61%) and exported to NVivo for directed content analysis, reported elsewhere (Burgemeister, 2022). This is a deductive method that helps extend knowledge or understanding of existing theory or prior research (Hsieh & Shannon, 2005). The lead author reviewed all responses and applied the codes and categories generated by our previous qualitative study of government-level CfC staff to the data. Text that could not be coded with the predetermined coding scheme was given a new code, and some codes and categories were re-worded to better reflect survey responses. Coding was confirmed for 10% of responses by the third author, with a high degree of agreement (~80%). Content illustrating the constructs examined in this study was extracted, and verbatim quotes are presented throughout the results section to elucidate participants' views and experiences.

| Sample characteristics
Characteristics of participants from Facilitating Partner organisations (n = 70) and Community Partner organisations (n = 127) are presented in Table 3. Most were female, born in Australia and had a Bachelor degree or higher. Half (53%) had been working in CfC for 3 years or less and around 20% were direct service providers.
Participants from Facilitating Partner organisations were more […]

TABLE 3 Sample characteristics by organisation type

| Knowledge, attitudes and experiences: Multi-item measures
In this section, we describe the geographic variations in knowledge, attitudes, and experiences for all multi-item measures, with proportions presented in Table 4 and logistic and multinomial regressions in Table 5. Findings for single-item measures of knowledge, attitudes, and experiences are presented in the subsequent section.
'Good' knowledge about EBPs was reported by three-quarters of participants (76%) (Table 4), with statistically significant differences by urbanicity but not by state. After adjustment for other factors (Table 5), the odds of reporting good knowledge were three times higher for regional participants than urban participants (aOR: 3.00; 95% CI: 1.14-7.89; p = 0.03). Resistance towards the evidence-based policy was expressed by a small proportion of participants

TABLE 3 (Continued)
Notes: a. Prefer not to say/missing categories omitted; some percentages do not total 100%. b. Urbanicity is based on whether participants were working in remote/regional sites. Where participants worked across more than one site with differing densities, remote was prioritised first, then regional, then urban. c. CfC, Communities for Children. d. Participants could select more than one category. e. Includes evaluation data submitted to the Australian Department of Social Services (DSS), pro bono evaluation support, assistance from the Australian Institute of Family Studies (AIFS), those who did research and evaluation in-house without a dedicated research unit, and participants who were unsure. f. Non-binary, intersex, unspecified omitted. g. Includes admin, data entry, research and evaluation, project/program support, community development, and contract support.
with differences by state but not by urbanicity. In the adjusted multinomial regression (Table 5), […]

EBP is a little bit like One Size Fits All.

(Eastern Seaboard State, Regional Facilitating Partner)

Confidence about programme fidelity and adaptation was reported by 54% of participants (Table 4). Differences by place did not remain in the adjusted model (Table 5). Participants commented in the open-ended questions that adaptations were necessary for programmes to succeed, but they lacked specific evidence and government support to do this. Ongoing support was reported to be valuable but costly:

The most successful implementations with strong fidelity have been where there is capacity to "check in with experts" post training for questions of adaptation and its impact on fidelity during implementation… This is a more expensive model… but the quality and results are more reliable.
(Eastern Seaboard State, Urban Facilitating Partner)

The 'adequacy' of the Guidebook program range was reported by only 19% of participants (Table 4). In the adjusted logistic regression model (Table 5), […]

(Other State, Regional Facilitating Partner)

Adverse impacts of the policy on programme offerings were reported by 38% of participants (Table 4). Both state and urbanicity were predictors in the adjusted logistic regression model (Table 5). A high level of staff and training challenges (defined as two or more challenges) associated with implementing the policy was reported by 71% of participants (Table 4). In the adjusted logistic regression analysis (Table 5), the odds of reporting a high number of challenges were 2.42 times greater for regional than urban participants (95% CI: 1.23-6.96; p = 0.02).
Many respondents commented about staffing and training issues such as staff turnover, high training costs, the background skill set of staff, which did not prepare them for this type of work, and difficulty accessing external support to supplement internal skill deficits:

Unforeseen staff turnover and the high cost of retraining new staff in EBPs has been difficult for CfC and our funded community partner organisations to manage.
(Eastern Seaboard State, Urban Facilitating Partner)

Some programmes in the Guidebook limit Community Partners due to the qualifications/training required by an individual to be able to implement the program, eg, music therapist, university degree, and psychologist. Not for profit organisations in rural communities can struggle to afford employing someone with these qualifications.

(Other State, Regional Community Partner)

CFC FPs are predominately Community Development backgrounds and not evaluators but have had to adapt very quickly to this huge shift. We received limited training and support around this.

(Other State, Regional Facilitating Partner)

| Knowledge, attitudes, and experiences: Single-item measures
Proportions for single-item measures of knowledge, attitudes, and experiences are presented in Table 6. 'Good' understanding of the rationale for the policy was reported by most participants (90%), with no difference by place. While 'resistant', 'ambivalent', and 'positive' attitudes towards the 50% target for EBPs were each reported by around a third of all participants, remote sites had a significantly smaller proportion of 'resistant' participants (10%) than their urban (32%) and regional (41%) counterparts (p < 0.0001). Overall, 38% of participants wanted more information and support about program adaptation and implementation, with no difference by place. Over 60% of participants agreed or strongly agreed that there were limitations with the fit of some Guidebook programmes, again with no difference by place.
All open-ended comments about the 50% target expressed concern that it adversely impacted service provision or that an increase beyond the current target was on the policy horizon. Some participants voiced a preference for an "evidence-informed" rather than "evidence-based" approach to service delivery to allow greater flexibility in the types of programs that could be delivered: 50% impacts way too much on our service delivery… Evidence informed practice is where we need to focus.
(Other State, Regional Facilitating Partner)

…[50%] is a good benchmark and I would not like to see it go any higher to leave room for innovation and place-based programme development.
The resistance reported in our study may reflect broader attitudes and experiences of service providers, given the centralisation of federal government services in Eastern Seaboard cities and the limited evaluation and program implementation support available locally. Previous research has shown that when resources are centralised, opportunities for all to participate in training and education activities are reduced (Schmidt et al., 2020).
In terms of urbanicity, participants in regional sites were more likely than urban participants to report good knowledge of EBPs but were also more likely to report staffing and training challenges, and a greater proportion were resistant to the 50% EBP target. In contrast, participants from remote sites were less likely to report adverse programme impacts and more likely to report the programme range was adequate than those in urban locations, and a smaller proportion were resistant to the policy target. This is somewhat surprising, as we had expected remote areas to report the greatest implementation challenges due to difficulties attracting and retaining a skilled workforce and accessing training and implementation support. It is possible that remote areas are used to working with limited resources, are more familiar with implementing programs in challenging environments, and are therefore more resourceful and adaptable. For example, Fixsen et al. (2016) suggest that some organisations are able to use certain implementation drivers to compensate for deficiencies in other drivers. Thus, it may be that remote sites employed other drivers of implementation, such as strong, adaptable leadership and an enthusiasm for using evidence in practice, where drivers such as training, coaching, and sharing of knowledge were less prevalent. Further exploration of these hypotheses is required.
Our findings highlight the importance of recognising and responding to broad geographical differences within a complex national initiative that is administered centrally but delivered locally.
The first step is to address the knowledge and skill deficit outside the Eastern Seaboard and in non-urban areas due to the limited availability of qualified staff and high turnover. Previous studies have found community-based personnel in regional and remote areas are less qualified to use evidence-based approaches (Patelarou et al., 2013). This suggests a need for regular and repeated provision of EBP education delivered in formats that all can access irrespective of location. There is also the need to find meaningful and sustained ways for producers (i.e. academics or researchers) of EBPs to interact and collaborate with service providers evenly across the initiative (Rycroft-Malone et al., 2011).
Three elements critical to program sustainability, especially in disadvantaged communities, are 'good fit' for the target community, an available workforce, and the ability for programs to be adapted to suit the local context (Hodge & Turner, 2016). Previous studies have shown that adherence to key features of effective programmes (such as program length and structure) can be poor when EBPs are transferred to real-world settings (Bumbarger & Perkins, 2008). […] (Li et al., 2019). This will help build workforce and community capacity in areas where staff availability is low and turnover is more common. Careful planning and additional support should be considered to assist in the development of regional networks and partnerships for the sharing of skills and expertise, with additional intensive support provided where required.
We note some limitations of our study. As there is no central record of the number of services and people who work at CfC, we relied on a snowballing approach for recruitment. Overall, however, we had good participant representation across all states, with adequate sample sizes for both Facilitating Partner and Community Partner organisations. Measures were mostly study-designed, and all were self-reported with the possibility of reporting bias: participants self-rate knowledge higher than objectively assessed knowledge (Snibsøer et al., 2018), and attitudinal self-ratings can also be unreliable (Lavrakas, 2008). The small sample and wide confidence intervals indicate caution is required when interpreting the findings about remote differences. Nevertheless, the consistency in findings for remote participants, both within this study and against our previous work, tends to support the robustness of the results. Future studies could explore the experiences of community stakeholders and programme recipients (families) to understand their views about the range of programmes on offer.
The notion that place matters when considering the distribution of health services and its impact on health outcomes has been well-studied. However, many studies tend to focus on local context and ignore broader geographical differences which may also impact on health. Our study shows implementing EBPs in a national place-based initiative requires careful consideration of geographic factors. The goal of place-based initiatives is to close the inequality gap between intervention areas and the rest of the population. Yet, well-intended policies have the potential to entrench or exacerbate inequalities if attention is not also given to the broader scale contextual factors that influence implementation.

AUTHOR CONTRIBUTIONS
All authors contributed to the study conception and design. The survey was constructed by FB in Qualtrics, with support provided by SH. Quantitative data analysis was performed by FB, with support provided by SH and JN. Qualitative data analysis was performed by FB, with support provided by SC. The draft manuscript was written by FB, and all authors reviewed and edited. All authors read and approved the final manuscript. Supervision for all aspects of this study was provided by SH, SC, NH, and JN.

ACKNOWLEDGEMENTS
The authors are appreciative of the assistance and support of this

Fiona Burgemeister has an Australian Government Research Training Program PhD scholarship. No funding was received for conducting this study.

CONFLICT OF INTEREST
The authors declare that there is no conflict of interest.

DATA AVAILABILITY STATEMENT
Data are available on request due to privacy/ethical restrictions.

ETHICAL APPROVAL
Ethical approval for this study was granted by the La Trobe University Human Research Ethics Committee (HEC18198). Study approval was also obtained from the Australian Department of Social Services.

CONSENT TO PARTICIPATE
Informed consent was obtained from all individual participants included in the study. Informed consent was obtained from all individual participants to publish their data.