Qualitative Evaluation



Introduction
Qualitative research, which explores how or why something occurs, can contribute new knowledge to the understanding of home visiting. While qualitative research designs are sometimes viewed as less rigorous add-ons to quantitative research designs, studies utilizing qualitative research methods, whether part of a mixed-methods approach or as a standalone design, can be rigorously designed to provide reliable and trustworthy information.
Use qualitative methods when:
- The goal of the research is to understand participants' values, beliefs, behaviors, feelings, expectations, motivations, and expressions.
- The evaluators have developed trusting relationships with study participants, such as local implementing agency (LIA) staff.
- Outcomes cannot be easily quantified.
- There is a need to contextualize or validate quantitative data.

Avoid using qualitative methods when:
- Quantitative methods, such as surveys, may be more time efficient and cost effective for answering the research questions.
- The analyses needed to answer the research questions require a large participant sample.

Planning Qualitative Data Collection
Qualitative data can be collected using multiple methods and tools designed to elicit the participant's perspective on a program, experience, or scenario. The evaluation team should consider issues related to data collection methods, as well as sampling and recruitment strategies, to find participants with relevant and diverse perspectives.

Data Collection Methods
Qualitative evaluations may use a combination of data collection methods to examine key issues in depth, triangulate data from multiple sources, or complement quantitative data.5 Exhibit 1 describes advantages and disadvantages of common data collection methods. For example, interviews might elicit meaningful responses, but they are time consuming. Focus groups allow for more rapid and less costly data collection, but it may be difficult to collect sensitive information in group settings.

Exhibit 1. Advantages and Disadvantages of Common Data Collection Methods

Qualitative research often involves developing tools for specific situations. Pilot testing addresses the following questions:
- Are questions clearly worded and readily interpreted as intended?
- How long does it take to conduct an interview or focus group?
- What procedural or methodological challenges may arise?

To pilot test data collection tools:
- Conduct pilot tests with a small sample of respondents similar to the target population. The sample composition is particularly important when addressing cultural context, ability to answer questions, or sensitive topics that may be perceived as offensive.
- Use cognitive testing methods to assess question wording, flow, and timing. Determine whether questions capture the intended concept, use the appropriate words, and appear in a logical order.
- Use the pilot test process to inform the development of tool administration protocols that minimize the respondent's burden (e.g., drop or rework questions that are confusing, offensive, too long, or unnecessary).

Sampling and Recruitment Strategies
A qualitative evaluation sampling plan should include the sampling technique, inclusion and exclusion criteria, and number of participants. Use the plan to identify the following:
- Who is included in the sample?
- What is the composition of the sample (e.g., application of inclusion and exclusion criteria)?
- Where and how will the sample be recruited?
- How and when will data be gathered?
- Why are the data important to the evaluation?
Qualitative sampling strategies rely on establishing homogeneity in sample groups to generate deep rather than broad findings,6 though it is important to include participants who represent diverse perspectives. To develop a sampling strategy:
- Determine the sample size in the context of the evaluation design and data collection methods. The number of participants or cases should allow for multiple perspectives to be represented and for oversampling to ensure an adequate range of information is collected.
- Select a sampling technique (e.g., purposive, quota, snowball). See exhibit 2 for a description of common qualitative sampling techniques. Most qualitative research uses a form of purposive sampling and matches the specific strategy to the evaluation context and requirements.7 Bias can skew the evaluation. Consider the potential sources of bias, including the researcher's own biases, and ways to mitigate the risk of selection bias in the sample. Biases should be discussed in the final report or presentation of findings.

Exhibit 2. Common Qualitative Evaluation Sampling Techniques

Quota: Participants are identified and recruited according to characteristics: general (e.g., demographics), related to the study (e.g., use of services), or related to insights into the research topic (e.g., prior home visiting experience). Sample sizes or relative proportions are specified before sampling begins. Potential selection bias: due to ease of recruitment access or desire to include/emphasize specific groups.

Snowball or chain referral: New/additional participants are identified and recruited from current participants. This method may be used to recruit hard-to-find or hard-to-recruit populations not identified through other sampling strategies. Potential selection bias: due to sampling that overrepresents specific characteristics and similar perspectives.

Convenience: Participants are identified and recruited based on easy identification, contact, proximity, and willingness to participate. For example, they may be at a particular LIA or share the same home visitor. Potential selection bias: due to ease of access or approachability of participants who share characteristics or patterns of behaviors or activities.

A combination of small sample size and focused sampling techniques may unintentionally introduce bias. For example, for a study describing home visiting experiences, recruiting 10 parents from the same event or home visitor may limit the variety of information collected. Make sampling decisions in the context of the evaluation design and establish processes to:
- Track enrollment in the study. Taking time to reflect on sampling and recruitment strategies can help the team identify bias in recruitment and assess sample diversity. Assess sample characteristics to ensure all relevant perspectives are represented (e.g., different caregivers, age, ethnicity, religion, health status). For example, if prenatal and postnatal mothers are to be enrolled in equal numbers, record the number of children for each interviewee and periodically review enrollment to confirm the recruitment strategy is on track.
- Adjust the sampling plan. During the data collection process, tracking the information gathered can reveal flaws in the sampling plan. Adjustments can then be made as needed to increase rigor. For example, it may become clear that interviewing home visiting supervisors, in addition to home visitors, would offer an important, unique perspective on the study questions. Documentation of team discussions and sampling decisions will be useful for institutional review board amendments, justifications to funders for design changes, and dissemination efforts.
- Assess saturation. Saturation refers to the point in data collection when new data no longer bring additional insights. Reaching saturation is the gold standard of purposive sampling and lends credibility and validity to the data. Given that achieving saturation depends on the sample, research questions, and study design, evaluation plans should include criteria for what constitutes saturation. Strategies for identifying saturation include creating a saturation table by theme and respondent; documenting when the same themes, explanations, or interpretations recur; or conducting a few final interviews or a focus group.8,9

Analyzing Qualitative Data

Qualitative data analysis is an iterative set of processes. It is often described as a loop-like pattern of multiple reviews of the data as new questions emerge, new links are identified, and more theories develop with an increased understanding of the data.10 Because of the iterative nature of qualitative data analysis, bias is likely to occur during data analysis. The following strategies should be considered to help minimize bias: researcher reflexivity, data reduction, data triangulation, member checking, and consideration of alternative explanations or contextual factors.
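One saturation strategy mentioned here, tracking which themes each successive interview contributes, can be sketched in a few lines of code. The interview labels and themes below are hypothetical examples, not data from any study.

```python
# Sketch of a saturation check: record which themes each successive
# interview adds, and flag interviews that add nothing new.
# All interview labels and theme names are hypothetical.
themes_by_interview = [
    ("interview_1", {"scheduling", "trust", "transportation"}),
    ("interview_2", {"trust", "cost"}),
    ("interview_3", {"scheduling", "trust"}),
    ("interview_4", {"trust", "cost"}),
]

seen: set[str] = set()
for label, themes in themes_by_interview:
    new = themes - seen          # themes not observed in any earlier interview
    seen |= themes               # running set of all observed themes
    status = ", ".join(sorted(new)) if new else "none (approaching saturation)"
    print(f"{label}: new themes -> {status}")
```

A table like this, kept alongside coding, makes it easier to document when the same themes recur and to justify the decision to stop data collection.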

Researcher Reflexivity
The researcher's experiences, emotions, and patterns of interpretation shape all aspects of the research process.11 Qualitative and quantitative research should be reflexive and dynamic because the practitioner is part of the research, not separate from it.12,13 In qualitative research, the researcher is the primary instrument for data collection and analysis. Reflexivity requires researchers to consider how their worldview may influence their work. They must continuously monitor how their background, biases, assumptions, perceptions, and interests affect the research process. Researchers can keep reflexive journals to document their experiences and reflections at each stage of the evaluation.14 This allows them to examine the human factors that may influence interpretation. Reflexive journals should include:
- Daily schedule and logistics of the evaluation
- Methodology log
- Description and interpretation of researcher behavior and experiences
- Thoughts, feelings, ideas, and hypotheses generated by interacting with participants
- Questions, problems, and frustrations concerning the research process

Journaling encourages researchers to be aware of biases and assumptions so they can factor them into their analytical approach to improve the credibility of the findings.

Data Reduction
Data reduction techniques allow researchers to rigorously and meaningfully categorize data, which strengthens the validity of evaluation findings. Data reduction involves (1) selecting, focusing, simplifying, and abstracting raw data; (2) transforming and analyzing the condensed data set to identify significant patterns and answer evaluation questions; and (3) drawing conclusions from the data and building a logical chain of evidence.15,16 For example, coding is a key data reduction method. To minimize bias and strengthen the credibility of findings, evaluators should create systematic coding procedures, such as:
- Documenting each step to ensure the process can be replicated
- Developing and using codebooks that include code descriptions, inclusion and exclusion criteria, and example text
- Establishing interrater reliability (i.e., having multiple coders code the same data)
- Using a code-recode process (e.g., recoding the same data at least 2 weeks after the initial coding)

Several qualitative data analysis software packages, in addition to Excel, are available to organize and code data (e.g., Dedoose, NVivo, ATLAS.ti).17 When selecting software, consider the type, amount, and sources of data to analyze and the type of analysis to conduct. While researchers cannot use qualitative analysis software to determine meaningful coding categories or identify and define themes, it can be used to efficiently reduce data.
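One common way to quantify interrater reliability between two coders is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. The sketch below assumes each coder assigns exactly one code per transcript segment; the code labels are hypothetical examples.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders who each assign one code per segment."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: probability both coders pick the same code at random,
    # given each coder's marginal code frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes applied by two coders to the same six segments
coder_1 = ["barrier", "support", "support", "barrier", "logistics", "support"]
coder_2 = ["barrier", "support", "barrier", "barrier", "logistics", "support"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # -> kappa = 0.74
```

Disagreements surfaced this way can feed back into the codebook, for example by tightening inclusion and exclusion criteria for codes the coders apply inconsistently.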

Data Triangulation
Evaluators should verify results by triangulating across multiple data sources and methods.18,19 Triangulation involves comparing data from multiple sources to cross-check the information and its interpretation, increasing confidence in the findings. Generating similar results from two or more methods strengthens the credibility and validity of findings. Analyzing data from multiple sources can also provide a holistic, balanced picture of the phenomenon examined.

Member Checking
Member checking, also known as participant feedback or validation, is the process of sharing data and themes with study participants to confirm they have been interpreted correctly.20 This technique increases the credibility of qualitative findings by ensuring that participant voices and beliefs are accurately represented.21 Researchers can incorporate member checking throughout the data analysis process to verify interim findings and then adjust analyses as necessary.22 Consider asking participants the following questions:
- Is the description of the phenomenon complete and realistic?
- Are your stories portrayed accurately?
- Are the themes accurate to include? If not, what should be modified?
- Are the interpretations fair and representative?
- Do you have any objections to the interpretations?
- Is there anything else you would like to share?

Alternative Explanations and Contextual Factors
When preparing findings, consider all possible explanations or contexts that may contribute to the conclusions.23 For example, if a study reveals that a large number of families withdrew from a home visiting program within a short period of time, all possible explanations should be explored to help understand why. Accounting for all plausible explanations can strengthen the interpretation of findings. Incorporating relevant contextual information (e.g., participant characteristics, location of the study) also promotes a comprehensive understanding of themes emerging from the data.

Reporting Qualitative Findings
Accurate and targeted reporting is essential to valid and reliable qualitative findings. Reports of qualitative data analysis findings should be grounded in the data (e.g., examples, quotes, excerpts, descriptions of the evaluator's engagement in data collection). Also, evaluators should provide a description of the sample and important contextual information that may help the target audience assess the transferability of findings to other groups or settings. Evaluators should consider the following strategies to strengthen the trustworthiness of findings:
- Review findings with peers. Allowing peers to examine key findings and recommendations, referred to as peer debriefing, can help minimize bias. Evaluators should discuss the research process and results with peers experienced in qualitative methods who are not involved in the project.24 Debriefing can include an examination of transcripts, documents, recorded interviews, or field notes. Peers can identify potential issues in the research, such as vague descriptions, underemphasized or overemphasized points, errors in the data, or researcher bias. Peers can also strengthen credibility by reviewing data categories and identifying disconfirming evidence (e.g., cases that do not fit patterns or refute conclusions). Evaluators should provide evidence of this process and explain how they modified the report using peer feedback.
- Tailor and disseminate findings based on the target audience. Consider the primary audience when presenting and disseminating qualitative findings. Key factors include length, level of detail, and complexity of the data.25 For example, funders may be interested in a full-length evaluation report with rich quotes and detailed tables of findings, while a one-page summary with engaging visuals may be more appropriate for sharing outcomes with the community. Use qualitative data visualization to engage readers across audiences in meaningful ways. For additional guidance, see the Resources section.