Abstract

An integration of the qualitative evaluation findings collected in different cohorts of students who participated in Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) (𝑛=252 students in 29 focus groups) was carried out. With specific focus on how the informants described the program, results showed that the descriptions were mainly positive in nature, suggesting that the program was well received by the program participants. When the informants were invited to name three metaphors that could stand for the program, positive metaphors were commonly used. Beneficial effects of the program in different psychosocial domains were also voiced by the program participants. The qualitative findings integrated in this paper provide further support for the effectiveness of the Tier 1 Program of Project P.A.T.H.S. in promoting holistic development in Chinese adolescents in Hong Kong.

1. Introduction

There are two contrasting research approaches in the social sciences [1]. Having its roots in positivism, the quantitative approach to research design has several characteristics. First, it relies on empirical methods with clear rules and procedures, deductive methods, and hypothesis testing. Second, value neutrality (i.e., suspension of judgment by the researchers) is strongly emphasized. Third, representativeness and generalization of the findings to explain social phenomena and predict outcomes are upheld. Fourth, quantification of the results is emphasized through mathematical models, statistical analyses, and presentations. Fifth, validity, reliability, and objectivity are hallmarks of positivistic research [2, 3]. While the quantitative research approach has been the “mainstream” approach in past decades, and its strengths are appreciated by many disciplines, particularly those in the biomedical field, it has been criticized for its ontological and methodological assumptions. Ontologically, the assumption that reality is “objective” and “out there” is questioned. For example, Patton [4] criticized the quantitative-experimental approach for its oversimplification of the real world, for missing major factors of importance that are not easily quantified, and for its failure to examine the holistic impact of a program. In addition, quantitative research is criticized as being unable to examine the essence of human life. Finally, given its artificial nature, quantitative research is criticized for neglecting the subjective experiences and interpreted meanings of the “actors.”

Because of the limitations of positivistic research, there is a growing emphasis on qualitative research in the social sciences [5]. Qualitative research is defined as “an umbrella term for an array of attitudes toward and strategies for conducting inquiry that are aimed at discerning how human beings understand, experience, interpret, and produce the social world” ([6, page 893]). Unlike quantitative research, which has a homogeneous philosophical base, the qualitative approach includes a variety of philosophical positions and methodological approaches arising from different foundations. There are several attributes of qualitative research. First, a wide range of research methods (e.g., interviews, focus groups, observations, documentation) are commonly used. Second, the impossibility of value neutrality is acknowledged and usually addressed in a disciplined manner. Third, the idiographic nature and uniqueness of individual cases, rather than representativeness and generalization of the findings, are emphasized. Fourth, there is weak reliance on “numbers,” while real-life data, such as narratives and lived experiences, are focused upon. Fifth, reliance on the credibility, authenticity, and worldviews of the informants is a hallmark of qualitative research [7]. Of course, there are criticisms that qualitative research may lack methodological rigor and that it is a relatively “softer” form of research.

These two main approaches to research are also seen in the field of evaluation. In the biomedical fields, the experimental and quantitative evaluation method is commonly regarded as the “gold” standard in assessing the outcomes of a program. In contrast, in social service settings, such as the fields of social work and education, the nonexperimental and qualitative evaluation method is commonly used to understand the process of implementing a program and the lived experiences of the program participants. As pointed out by Patton [8], there is a general consensus in the field of evaluation that sole reliance on either quantitative or qualitative methods may not be adequate for understanding the effect of a program.

Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) is a youth enhancement program that attempts to promote holistic youth development in Hong Kong [9]. There are two tiers of programs (Tier 1 and Tier 2 Programs) in this project. The Tier 1 Program is a universal positive youth development program based on 15 positive youth development constructs [10] in which students in Secondary 1 to 3 take part. To date, many evaluation studies have been conducted in order to examine the effectiveness of the program. For example, adopting a randomized group trial based on an experimental design, research findings showed that participants in the experimental group had better development and less problem behavior than did the control group participants [11–13]. Similarly, subjective outcome evaluation utilizing quantitative rating scales has been used in order to understand the perceptions of the program participants and implementers [14–16]. The findings generally revealed that program participants and implementers had positive perceptions of the program and its implementers, and that they regarded the program as beneficial to the development of the program participants. While the above evaluation findings based on quantitative methods are valuable, it is equally important to understand the views of the program participants via a qualitative approach. As such, qualitative methods, such as focus groups, are valuable tools for understanding the views of the program participants.

In a pioneering focus group study conducted by Shek et al. [17], five focus groups based on 43 students recruited from four schools were conducted in order to generate qualitative data to evaluate the program. With specific focus on how the informants described the program, results showed that the descriptors used were mainly positive in nature. When the informants were invited to name three metaphors that could stand for the program, the related metaphors were basically positive in nature. Finally, the program participants perceived many beneficial effects of the program in different psychosocial domains. Similarly, Shek and Lee [18] conducted 10 focus groups comprising 88 students recruited from 10 schools in order to understand the perceptions of students participating in the Tier 1 Program of Project P.A.T.H.S. Results showed that a majority of the participants described the program positively and that they perceived beneficial effects of the program in several aspects of adolescent life. Similar findings were obtained based on other cohorts of students [19].

As a research methodology, focus groups have emerged as a popular tool for generating qualitative data and are used across a wide variety of disciplines and applied research areas [20]. Since the 1980s, there has been a growing use of focus groups, particularly in health research [21]. In his review of online databases, Morgan [22] reported that focus groups appeared in 100 academic journal articles per year throughout the decade, and he also observed that focus groups were always used in conjunction with other research methods. According to Morgan and Spanish [23], “as a qualitative method for gathering data, focus groups bring together several participants to discuss a topic of mutual interest to themselves and the researcher” (p. 253). Similarly, Basch [24] defined focus groups as “a qualitative research technique used to obtain data about feelings and opinions of small group of participants about a given problem, experience, service or other phenomenon” (p. 414).

There are several advantages of focus groups [25]. Primarily, the dynamic group process and interaction of group members can generate useful information for the researchers [26]. Likewise, Twinn [27] stated that the synergism created by the interaction of group members is important to the generation of ideas that could be difficult to obtain through individual interviews. Focus groups are also advantageous in handling complicated topics in a relatively short period of time, particularly when the objective of focus groups is to collect nonconsensual data [28], and they can gather data at a lower cost than any other qualitative research method [29].

Interestingly, in spite of its current popularity in different fields of social sciences, little has been documented about the use of the focus group methodology in program evaluation. Ansay et al. [30] highlighted that “although focus groups continue to gain popularity in marketing and social science research, their use in program evaluation has been limited” (p. 310). To date, there is sparse scientific evidence on the use of focus groups within the Chinese adolescent population in program evaluation, despite the fact that focus groups are considered to be an effective qualitative data collection technique that is readily understood by program funders [31]. This paper therefore attempts to fill this gap in the literature with specific focus on Project P.A.T.H.S. Based on several cohorts of data collected via focus groups in the project, the present study attempts to integrate the findings in the existing cohorts and produce an integrated picture of the views of the program participants.

Although the focus group as a qualitative method is widely used, it has been criticized as lacking rigor [32]. Therefore, some guidelines for enhancing the quality of qualitative research should be maintained. In their review of the common problems intrinsic to qualitative evaluation studies in the social work literature, Shek et al. [33] suggested that 12 principles should be maintained in a qualitative evaluation study. These include the following: explicit statement of the philosophical base of the study (Principle 1); justifications for the number and nature of the participants of the study (Principle 2); detailed description of the data collection procedures (Principle 3); discussion of the biases and preoccupations of the researchers (Principle 4); description of the steps taken to guard against biases or arguments that biases should and/or could not be eliminated (Principle 5); inclusion of measures of reliability, such as inter- and intrarater reliability (Principle 6); inclusion of measures of triangulation in terms of researchers and data types (Principle 7); inclusion of peer and member checking procedures (Principle 8); consciousness of the importance and development of audit trails (Principle 9); consideration of alternative explanations for the observed findings (Principle 10); inclusion of explanations for negative evidence (Principle 11); clear statement of the limitations of the study (Principle 12). It was argued that the above principles should be upheld as far as possible in focus group studies. In the focus group studies integrated in this paper, these principles were adopted as far as possible.

The purpose of this paper is to present an integrated picture of the qualitative findings collected in a series of focus group studies with students participating in the Tier 1 Program of P.A.T.H.S. Project. In each focus group study, a general qualitative research approach [34] was adopted, where general strategies of qualitative research were employed (e.g., collection of qualitative data, respecting the views of the informants, data analysis without preset coding scheme), but a specific qualitative approach was not adhered to. The exposition of the nature of this qualitative study is consistent with the view of Shek et al. [33] that there should be an explicit statement of the philosophical base of the study (Principle 1).

2. Methods

2.1. Participants and Procedures

From 2005 to 2009, in the Experimental and Full Implementation Phases, the total number of schools that participated in Project P.A.T.H.S. was 244; counted across the years of the project, there were 669 school participations at the Secondary 1 level, 443 at the Secondary 2 level, and 215 at the Secondary 3 level. Among them, 46.27% of the respondent schools adopted the full program (i.e., the 20 h program involving 40 units), whereas 53.73% of the respondent schools adopted the core program (i.e., the 10 h program involving 20 units).

A total of 28 schools were randomly selected for the study of student focus group evaluation (14 schools for the Secondary 1 program, 10 for the Secondary 2 program, and 4 for the Secondary 3 program), in which 23 schools joined the full program (20 h) and 5 schools joined the core program (10 h) of the Tier 1 Program of Project P.A.T.H.S. Among the schools that joined this study, 67.9% (𝑛=19) incorporated the Tier 1 Program into the formal curriculum (e.g., Liberal Studies, Life Education, and Religious Studies) and 32.1% (𝑛=9) used the class teacher’s period or other modes to implement the program. For the consenting schools, the respective workers randomly selected students to join the focus groups. In all, 252 students joined 29 focus groups of approximately 1 h each, with the number of informants in each focus group ranging from 3 to 12 students. The characteristics of the schools that joined this qualitative evaluation study can be seen in Table 1.

Table 1: Description of data characteristics from 2005 to 2009.

Because data collection and analyses in qualitative research are very labor intensive, it is the usual practice that small samples are used. In the present context, the number of focus groups and student participants could be regarded as respectable. In addition, the strategy of randomly selecting informants and schools that joined the project could help to enhance the generalizability of the findings. These arguments can satisfy Principle 2 (i.e., justifications for the number and nature of the participants of the study) proposed by Shek et al. [33].

2.2. Instruments

An interview guide was used for conducting focus group interviews with students (Table 2). In the focus group studies under review, the qualitative data were analyzed mainly in three areas: (1) descriptors that were used by the informants to describe the program, (2) metaphors (i.e., incidents, objects, or feelings) that were used by the informants to stand for the program, and (3) informants’ perceptions of the benefits of the program to themselves. To enhance the credibility of the findings, the data were analyzed by two trained research assistants and crosschecked by another trained research assistant. Furthermore, to enhance the reliability of the coding of the positivity of the raw codes, both intra- and interrater reliability checks were carried out. Results in the focus group studies reviewed in this study showed that the intra- and interrater reliability were high [17–19]. The raw data and categorized data were kept in a systematic filing system in order to ensure that the findings were auditable.

Table 2: Interview guide for the student focus group.

3. Results

There were 390 raw descriptors used by the informants to describe the program, and they could be further categorized into 78 categories (see Table 3). Among these descriptors, 234 (60%) were coded as positive descriptors, whereas 120 (30.8%) could be classified as negative descriptors. In order to examine the reliability of the coding, the two research assistants who did the coding of the raw data recoded 20 randomly selected raw descriptors at the end of the scoring process, and the average intrarater agreement percentage calculated on the positivity of the coding from these descriptors was 96.3% (range 90–100%). Finally, these 20 randomly selected descriptors were coded by another two research staff members who did not know the original codes given, and the average interrater agreement percentage calculated on the positivity of the coding was 94.4% (range 90–100%).
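For transparency, the agreement percentages reported above can be reproduced with a simple calculation. Assuming, as an illustration, that agreement was computed as simple percent agreement over the 20 recoded descriptors (the exact index used in the studies reviewed may differ):

\[
\text{agreement \%} \;=\; \frac{\text{number of matching codes}}{\text{number of recoded items}} \times 100\%,
\qquad \text{e.g., } \frac{19}{20} \times 100\% = 95\%.
\]

With 20 items per rater, each individual percentage is a multiple of 5%, which is consistent with the reported ranges (e.g., 90–100%); the averages reported (e.g., 96.3%) would then result from averaging these per-rater percentages.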

Table 3: Categorization of the descriptors used by the students to describe the program.

For the metaphors that were used by the informants to stand for the program, there were 188 raw objects involving 242 related attributes (Table 4). Results showed that 109 metaphors (58%) and 158 attributes (65.3%) could be classified as positive in nature, and 43 metaphors (22.9%) and 41 related attributes (16.9%) were regarded as neutral responses. Reliability tests showed that the average intrarater agreement percentage calculated on the positivity of the coding from these metaphors was 96.3% (range 92.5–100%), whereas the average interrater agreement percentage calculated on the positivity of the coding was 88.8% (range 85–95%).

Table 4: Metaphors used by participants to describe the program.

The perceived benefits of the program to the program participants are shown in Table 5. There were 754 meaningful responses decoded from the raw data, which were categorized into several levels: benefits at the societal, familial, interpersonal, and personal levels, as well as general benefits. Most of the perceived benefits to program participants fell at the personal level (𝑛=305), followed by benefits at the interpersonal level (𝑛=152). The findings showed that 597 responses (79.2%) were coded as positive responses and 35 responses (4.6%) were counted as neutral responses. In order to examine the reliability of the coding, the research assistants recoded 20 randomly selected responses at the end of the scoring process. The average intrarater agreement percentage calculated from these responses was 98.1% (range 95–100%). The raw benefit categories were coded again by another two research staff members who did not know the original codes given. The average interrater agreement percentage calculated from these responses was 95% (range 90–97.5%).

Table 5: Categorization of responses on the perceived benefits of and things learned in the program.

4. Discussion

The purpose of this study was to evaluate the Tier 1 Program of Project P.A.T.H.S. using findings based on focus groups involving program participants in the Experimental and Full Implementation Phases (2005–2009) of the project. There are several characteristics of this study. First, a large sample of participants (𝑛=252 students in 28 secondary schools) was involved. Second, different datasets collected at different points in time were analyzed. Third, views of students in different grades were collected. Fourth, this is the first known scientific study of focus group evaluation of a positive youth development program based on different cohorts in China. Finally, this is also the first focus group evaluation study based on such a large sample of participants in the global context.

Based on the integrative analyses, two salient observations can be highlighted from the findings collected from different cohorts of students. First, the program was generally perceived positively by the program participants (Table 3), who typically used positive descriptors and metaphors to describe it (Table 4). Although some negative responses were recorded, these were not the dominant view.

Second, results in Table 5 show that the program had beneficial effects on the participants, with roughly 80% of the responses coded as positive. Generally speaking, benefits at both the personal and interpersonal levels were observed. The above observations are generally consistent with the objective outcome evaluation findings [11–13] that the students changed in the positive direction in various developmental domains. With reference to the principle of triangulation, the present study and the previous findings suggest that both quantitative and qualitative evaluation findings converge in supporting the positive effects of the Tier 1 Program on holistic youth development among the program participants.

As suggested by Shek et al. [33], it is imperative to consider alternative explanations in the interpretation of qualitative evaluation findings (Principle 10). There are several plausible alternative explanations for the findings based on the focus group methods. The first alternative explanation is demand characteristics. However, this explanation is not likely because the participants were encouraged to express their views freely and negative voices were, in fact, heard. In addition, since the teachers were not present, there was no need for the students to respond in a socially desirable manner. Another explanation is that the findings were due to selection bias. However, this argument is not strong because the schools and students were randomly selected. The third explanation is that the positive findings were due to ideological biases (e.g., self-fulfilling prophecies) of the researchers. However, because several safeguards were used to reduce biases in the data collection and analysis processes, this possibility is low. Finally, it may be argued that the perceived benefits were due to other youth enhancement programs. Nonetheless, this argument can be partially dismissed because none of the schools in the present study joined the major youth enhancement programs in Hong Kong, including the Adolescent Health Project and the Understanding the Adolescent Project. Most importantly, participants in the focus group interviews were specifically asked about the program effects of Project P.A.T.H.S. only.

There are several contributions of the present study. First, in view of the lack of positive youth development programs and related evaluation findings in Chinese contexts, the present study is pioneering. Besides showing that Project P.A.T.H.S. is effective, it also demonstrates how focus group evaluation based on a large sample can be carried out. Second, the present integrative study demonstrates how the principles of qualitative evaluation studies proposed by Shek et al. [33] can be applied in focus group studies. Finally, the findings demonstrate the utility of using “descriptors” and “metaphors” in generating qualitative data. Indeed, a review of the literature shows that there is an increasing effort to conduct qualitative evaluation studies. Bowey and McGlaughlin [35] studied the views of 11 young persons with the objectives of improving attitudes toward crime and the police, reducing exclusion, and developing self-esteem in at-risk young people. De Anda [36] collected qualitative data to evaluate the first year of a mentor program for at-risk high school youth in a low-income urban setting. Nicholas et al. [37] collected qualitative data from 24 adolescents with chronic kidney disease to evaluate an online social support network. The present study further illustrates the utility of collecting qualitative data in evaluation contexts.

On the other hand, there are several limitations of the study that should be addressed in qualitative research (Principle 12). Primarily, several general limitations involved in focus groups are worth noting. First, focus groups provide descriptions of the perceptions of the program, and they are not useful for testing hypotheses in the traditional experimental design. Second, although group interaction is generally seen as an advantage of focus groups, there is always the possibility that intimidation within the group setting may inhibit interaction. Third, caution must also be exercised because the quality of the findings is tied to the skills of the moderator. Regarding the second and third limitations, the use of experienced moderators in this study could minimize the problems. In addition, the inclusion of other qualitative evaluation strategies, such as in-depth individual interviews, would be helpful to further understand the subjective experiences of the program participants. Despite these limitations, the present study provides pioneering qualitative evaluation findings supporting the positive nature of Project P.A.T.H.S. and its effectiveness in promoting holistic youth development among Chinese adolescents in Hong Kong.

Acknowledgments

The preparation for this paper and Project P.A.T.H.S. were financially supported by the Hong Kong Jockey Club Charities Trust. The authorship is equally shared between the first and second authors.

References

P. Manicas, “The social sciences since World War II: the rise and fall of Scientism,” in The SAGE Handbook of Social Science Methodology, W. Outhwaite and S. P. Turner, Eds., pp. 7–31, Sage, London, UK, 2007.

D. T. L. Shek and R. C. F. Sun, “Effectiveness of the tier 1 program of project P.A.T.H.S.: findings based on three years of program implementation,” TheScientificWorldJournal, vol. 10, pp. 1509–1519, 2010.