Subjective Outcome Evaluation of the Project P.A.T.H.S. (Secondary 2 Program): Views of the Program Participants

A total of 196 secondary schools participated in the Secondary 2 Program of the Full Implementation Phase of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes). After completion of the Tier 1 Program, 30,731 students responded to the Subjective Outcome Evaluation Form (Form A) to assess their perceptions of the program, instructors, and perceived effectiveness of the program. Based on the consolidated reports submitted by the schools to the funding body, the research team aggregated the consolidated data to form a “reconstructed” overall profile on the perceptions of the program participants. Findings demonstrated that high proportions of the respondents had positive perceptions of the program and the instructors, and roughly four-fifths of the respondents regarded the program as beneficial to them. Correlation analyses showed that perceived program and instructor characteristics were positively associated with perceived benefits of the program.


INTRODUCTION
With the growing emphasis on quality assurance, and with greater pressure on service providers to measure service effectiveness and outcomes, client satisfaction has emerged as an important outcome indicator in the evaluation of human services [1]. Client perceptions have, therefore, been afforded prominence when evaluating the services provided. Reviews of the existing literature indicate that the client satisfaction approach to evaluation, based on subjective outcome indicators, has a long history in human services across different cultures [2,3,4,5]. A subjective outcome evaluation or client satisfaction survey is commonly used to assess the perceived benefits of a program and the participants' degree of satisfaction with its different aspects. Although there are arguments against the use of subjective outcome assessment, there is mounting evidence that subjective outcome measures are correlated with objective outcome measures [6,7,8,9,10,11,12]. As one aspect of program evaluation concerns how a program can be improved, information-rich data from clients help to evaluate the effectiveness of the program and to effect substantial improvements in service delivery [13].
Interest in adolescent positive youth development programs has increased markedly following reports that adolescents in Hong Kong face a number of developmental problems, such as abuse of psychotropic substances [14], adolescent suicide [15], deliberate self-harm [16], and school violence [17]. Coupled with a notable lack of positive youth development programs in the territory, there is a pressing demand for such programs [18,19,20]. Against this background, a territory-wide project entitled P.A.T.H.S. to Adulthood: A Jockey Club Youth Enhancement Scheme was initiated by The Hong Kong Jockey Club Charities Trust, with an earmarked grant of HK$400 million. "P.A.T.H.S." is an acronym for Positive Adolescent Training through Holistic Social Programmes. With the aim of developing a multiyear universal positive youth development program to promote holistic adolescent development in Hong Kong, a research team comprising researchers from five local universities was formed, with the first author as the principal investigator [21,22].
In the Project P.A.T.H.S., there are two tiers of programs (Tier 1 and Tier 2). The Tier 1 Program is a universal positive youth development program where students in Secondary 1 to 3 participate in the program, normally with 20 h of training in the school year at each grade, involving 40 teaching units. In the Tier 1 Program, 15 positive youth development constructs are embedded in the 40 teaching units. These constructs include promotion of bonding, cultivation of resilience, promotion of social competence, promotion of emotional competence, promotion of cognitive competence, promotion of behavioral competence, promotion of moral competence, cultivation of self-determination, promotion of spirituality, development of self-efficacy, development of a clear and positive identity, promotion of beliefs in the future, provision of recognition for positive behavior, provision of opportunities for prosocial involvement, and fostering prosocial norms [20].
Since the Project P.A.T.H.S. is a huge project in terms of financial and manpower resources, as well as the number of participating schools in the territory, program evaluation is of paramount importance for several reasons. First, it is imperative to demonstrate to the program funder and the Government that the project is of great benefit to students. Second, program implementers, particularly teachers and social workers, are motivated to teach a program only if it is found to be effective. Finally, reviews of the literature indicate a pressing need to accumulate research findings on the effectiveness of psychosocial intervention programs. For instance, in the Western context, only approximately one-third of the 77 programs under review were found to be effective [23], whereas in the Chinese context, Shek et al. [24] highlighted that evidence-based social work practice was very weak in Hong Kong.
To provide a comprehensive picture of the effectiveness of the project, numerous evaluation strategies have been employed, including objective outcome evaluation utilizing a randomized group trial [25]; subjective outcome evaluation based on quantitative and qualitative data collected from the program participants and instructors [26,27]; qualitative evaluation based on focus groups involving students and instructors [28,29]; in-depth interviews with program implementers, student logs, and student products [30]; process evaluation involving systematic observations of program delivery [31]; and interim evaluation [32]. These mechanisms consistently provide strong evidence that the Project P.A.T.H.S. has a beneficial influence on students [33,34,35].
With specific reference to subjective outcome evaluation, quantitative findings based on the Secondary 2 Program of the Experimental Implementation Phase [36] demonstrated that program participants (n = 7,406 students from 49 schools) in the academic year 2006/07 perceived the program positively, including the clear objectives of the curriculum (79.4%), systematic planning of activities (77.9%), peer interaction among students (77.5%), and active involvement of students during class (76.6%). Moreover, a high proportion of the students (84.9%) evaluated the instructors positively. About 86% of the respondents reported that the instructor was very involved, and approximately 85% indicated that the instructor was well prepared for the lessons, encouraged students to participate in the activities, and was ready to provide assistance to students in need. Further, about four-fifths of the respondents perceived that the program enhanced their overall development, including the ability to resist harmful influences (79.4%), the ability to distinguish between the good and the bad (81.4%), competence in making sensible and wise choices (79.7%), and compassion and caring for others (79.3%). Overall, about 80% of the participants were satisfied with the course and more than 70% would recommend the program to friends with similar needs.
As the preceding sections illustrate, evidence from different evaluation strategies supported the effectiveness of the Tier 1 Program of the Project P.A.T.H.S. It is imperative to examine whether such findings can be replicated across time and participants. In the current study, the data collected from the 196 schools that joined the Secondary 2 Program of the Full Implementation Phase were examined. As the Project P.A.T.H.S. was financially supported by The Hong Kong Jockey Club Charities Trust, each participating school had to submit an evaluation report with the consolidated subjective outcome evaluation profile of the school to the funding body. Such reports were then used by the research team to "reconstruct" the overall profile of the subjective outcome evaluation data. The major advantage of this strategy is that it promotes evaluation in the field while allowing secondary analyses of the submitted reports.

After the completion of the Tier 1 Program, the participants were invited to respond to the subjective outcome evaluation questionnaire (Form A), which was developed by the research team. A total of 30,731 students (mean = 156.80 students per school, range: 12-243) responded to Form A. The data collection was normally conducted at the last session of the program. On the day the evaluation data were collected, the purpose of the evaluation was explained and the confidentiality of the data was repeatedly emphasized to all students. The students were asked to indicate if they did not wish to respond to the evaluation questionnaire (i.e., "passive" informed consent was obtained from the students). All participants responded to all scales in the evaluation form in a self-administered format, and adequate time was provided for them to complete the questionnaire.
To facilitate the program evaluation, the research team developed an evaluation manual with standardized instructions for collecting the subjective outcome evaluation data [37]. In addition, adequate training was provided to the workers during the 20-h training workshops on how to collect and analyze the data collected by Form A.

Instruments
The Subjective Outcome Evaluation Form (Form A) designed by Shek and Siu [37] consists of the following parts:

1. Participants' perceptions of the program, such as program objectives, design, classroom atmosphere, interaction among the students, and the respondents' participation during class (10 items)
2. Participants' perceptions of the workers, such as the preparation of the instructor, professional attitude, involvement, and interaction with the students (10 items)
3. Participants' perceptions of the effectiveness of the program, such as promotion of different psychosocial competencies, resilience, and overall personal development (16 items)
4. The extent to which the participants would recommend the program to other people with similar needs (1 item)
5. The extent to which the participants would join similar programs in the future (1 item)

For the closed-ended questions, the workers who collected the data were requested to input the data into an EXCEL file developed by the research team, which automatically computed the frequencies and percentages associated with the different ratings for each item. When the schools submitted their reports, they were also requested to submit a soft copy of the consolidated data sheets. In the reports prepared by the schools, the workers were also required to estimate the degree of adherence to the program manuals (i.e., the extent to which the program was implemented in accordance with the program manuals). After receiving the consolidated data from the funding body, the research team aggregated the data to "reconstruct" the overall profile based on the subjective outcome evaluation data.

RESULTS
Based on the closed-ended questions, several salient findings can be highlighted. First, roughly three-quarters of the respondents perceived the program in a positive manner (Table 1). For instance, 82.26% of the students indicated that the program objectives were very clear and 81.24% felt that there was much peer interaction in the program.
Second, a high proportion of the students evaluated the instructors positively (Table 2): 87.68% of the respondents indicated that the instructors were ready to provide assistance to them when needed, and 87.58% found that the instructors were very involved. Third, as shown in Table 3, roughly four-fifths of the respondents perceived that the program promoted their development, including social competence (80.92%), the ability to resist harmful influences (81.16%), moral competence (82.72%), the ability to make sensible and wise choices (81.89%), and overall development (81.74%). Fourth, while roughly three-quarters of the participants would recommend the program to their friends with similar needs, only a simple majority of them (62.79%) would join similar programs in the future (Table 4). Finally, roughly four-fifths of the respondents indicated that they were satisfied with the program (Table 4). Regarding the degree of program adherence estimated by the workers, the mean level of adherence was 86.63%, ranging from 34% to 100%.
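The "reconstruction" of the overall profile described above combines each school's consolidated percentages into territory-wide figures. The sketch below illustrates one plausible way to do this; the field names and the weighting of each school by its number of respondents are assumptions for illustration, not the research team's actual procedure.

```python
# Illustrative sketch of aggregating per-school consolidated Form A
# percentages into an overall profile. Field names and the weighting
# scheme are assumptions, not taken from the Project P.A.T.H.S. manuals.

def aggregate_item(schools):
    """Combine per-school percentages for one item into an overall
    percentage, weighting each school by its number of respondents.

    `schools` is a list of dicts like {"n": 157, "pct_positive": 82.3}.
    """
    total_n = sum(s["n"] for s in schools)
    if total_n == 0:
        return 0.0
    weighted = sum(s["n"] * s["pct_positive"] for s in schools)
    return weighted / total_n

# Example with three schools of different sizes (made-up numbers)
profile = aggregate_item([
    {"n": 100, "pct_positive": 80.0},
    {"n": 200, "pct_positive": 85.0},
    {"n": 50,  "pct_positive": 70.0},
])
print(round(profile, 2))  # → 81.43, the respondent-weighted mean
```

Weighting by school size ensures that a small school's percentages do not count as heavily as a large school's, which matters here given the wide range of respondents per school (12 to 243).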

Reliability analyses with the schools as the unit of analysis demonstrated that Form A was internally consistent.
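Internal consistency of this kind is conventionally indexed by Cronbach's alpha. The following is a minimal illustrative computation, not the research team's actual analysis code:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a set of scale items.

    `item_scores` is a list of k lists, each holding one item's scores
    across the same cases (individuals, or schools when the school is
    the unit of analysis, as in the reliability analyses reported here).
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        # Sample variance with (n - 1) denominator
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Example with three made-up items measured on four cases
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 3, 5]])
```

Values approaching 1 indicate that the items covary strongly, i.e., that the scale is internally consistent.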
The total scores based on items on perceived program characteristics were positively correlated with different measures of perceived benefits of the program. Similarly, the total scores based on items on perceived instructor characteristics were positively correlated with different measures of perceived benefits of the program. Table 5 shows the related correlation coefficients.

However, although most of the participants had positive perceptions of the program and the workers, perceived the program to be effective, and (in about three-quarters of cases) claimed that they would recommend the program to peers with similar needs, only 62.79% of the participants, contrary to our expectations, responded that they would join similar programs in the future. To unravel this puzzle, further analyses of the qualitative data are needed.

As expected, there was a substantial correlation between perceived program characteristics and items on perceived benefits of the program. The results provide clear insights into how participants' perceptions of the program are associated with its perceived benefits. Clear program objectives, carefully planned activities, meticulous design of the curriculum, a pleasant classroom atmosphere, and high peer interaction are all related to the quality of program implementation, and programs with these features are more likely to produce positive outcomes and have beneficial effects on participants. Further, participants' perceptions of the instructors were significantly associated with different measures of perceived benefits of the program.
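The correlation analyses reported above can be reproduced in outline with a Pearson correlation over school-level total scores. This is an illustrative sketch with made-up data; the variable names are assumptions, not the study's actual variables.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. school-level total scores on perceived program characteristics
    (xs) and on perceived benefits of the program (ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear illustrative data, so r ≈ 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```

A positive r, as reported in Table 5, indicates that schools whose students rated the program (or instructor) characteristics more highly also tended to report greater perceived benefits.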
Evidently, the characteristics of effective or highly rated workers were reflected in (1) their professionalism, in terms of knowledge of the curriculum, good teaching skills, and preparation for the lessons; and (2) their personal qualities, such as being caring, encouraging, and helpful, as captured by the items on the characteristics of instructors. These findings offer a few glimpses of what the important features of the program and the characteristics of effective instructors may be. Of course, one alternative explanation for the high correlations among the different domains is common method variance.

Also of note is the finding concerning the degree of program adherence. The mean estimated degree of adherence in the present study (86.63%) was relatively high and highly comparable to previous findings on program adherence in the Project P.A.T.H.S. [38]. In contrast, the limited Western studies on the quality of program implementation generally demonstrated that the degree of program adherence was not high. For instance, Ringwalt et al. [39] reported that one-fifth of the workers implementing the program did not utilize the curriculum guide and only 15% of them followed it very closely. It is probable that the training provided to the workers, as well as their commitment (as reflected by the perceptions of the students), contributed to this high level of program adherence. This finding lends further support to the notion that it is not essential to make much modification to the Tier 1 Program for different adolescent populations. Furthermore, despite the fidelity and adaptation debate, the finding points to the importance of maintaining a high degree of program fidelity [40] so as to ensure the effectiveness of the program.

One caveat of the present study is that using schools rather than individual participants as the unit of analysis would substantially reduce the power of statistical analyses.
A second caveat is that, although the present findings are interpreted in terms of positive program effects and experiences of the program participants, caution is needed because there are several alternative explanations. The first alternative explanation is that the students were afraid that they would be punished by the workers if they did not respond in a favorable direction. Nevertheless, this explanation can be partially dismissed because the students responded anonymously. Another possible explanation of the positive findings is that the students consciously acted in a "nice" manner so as to help the workers to illustrate positive program effects. Yet, this explanation can also be partially dismissed because negative ratings were recorded (e.g., on whether the participants would join the program again) and the students responded anonymously. The third explanation is that the high proportion of positive responses in fact reflects random responding (i.e., the students did not respond seriously). This explanation can also be dismissed because, as shown by the reliability analyses, the entire scale was internally consistent. In sum, despite these limitations, the findings of this study lend further support to the effectiveness of the Tier 1 Program and, most importantly, show that program participants perceived the program to be beneficial to their development. These positive findings are crucial because they suggest that the program can successfully engage students in the program implementation process. From a program evaluation perspective, as systematic evaluation of social services is still in its infancy in different Chinese contexts, this paper constitutes a model on which future subjective outcome evaluation studies can be based [24,41].
Since client satisfaction is crucial in program implementation, we are continuing to carry out research on program quality and to work with program participants to use the results productively to maintain and improve the high quality of the program.