Theorizing from secondary qualitative data: A comparison of two data analysis methods

This study aims to compare the analytical processes involved in two theorizing approaches applied to secondary qualitative data. To this end, the two authors individually analyzed the same raw material, one using the grounded theory approach and the other using the general inductive approach. Our comparison of these processes brought out the strengths and weaknesses of each approach. More specifically, this study found that data analysis using the grounded theory approach makes it possible to go beyond the analysis and interpretation resulting from the general inductive approach. Recommendations are made regarding the importance of the conceptual framework when theorizing from qualitative data. Finally, this study highlights relevant ways to use secondary qualitative data.

Subjects: Qualitative and Mixed Methods; International & Comparative Education; Research Methods in Education


Introduction
The analysis of qualitative data is often the most complex stage of the research process given that there is no universal procedure that applies to every situation (Turcotte, Dufour, & Saint-Jacques, 2009). Depending on the context, the purpose of the study, or the researcher's epistemological stance, it can be complex to select the most appropriate approach for analyzing the data. However, it is this wide diversity of analytical methods that contributes, among other things, to the richness and depth of qualitative research. A problem arises when the researcher must choose between two approaches assumed to be equivalent in terms of the anticipated knowledge production. For example, a researcher who wishes to theorize from qualitative data could choose between the grounded theory (GT) approach and the general inductive approach (GIA). The first is known for its complexity (Stern, 2010), which puts off some researchers. The second is seen as a more accessible approach. Moreover, as argued by Thomas, "the outcomes of analysis [using GIA] may be indistinguishable from those derived from a grounded theory approach" (Thomas, 2003: 9). It is this statement, in fact, that led us to conduct the present study. Our goals were twofold: 1) to compare the processes involved in these two theorizing approaches to qualitative analysis, deemed to be "equivalent"; and 2) to analyze the limitations and advantages of each approach. To this end, we needed qualitative data. For reasons discussed further in this article, we used secondary qualitative data, a choice that brought to light a number of highly pertinent issues. Consequently, a third objective arose: 3) to identify the theoretical, ethical, and epistemological issues associated with the use of secondary data with respect to each analytical approach. Considering its importance in light of our results, we will first address this third goal.

ABOUT THE AUTHORS
Isabelle F. Dufour has a Ph.D. in Social Work. She is an associate professor in Psychoeducation at Laval University (Quebec City, QC, Canada). Her research interests include the evaluation of programs aimed at reducing offender recidivism and the analysis of the processes leading to desistance from crime in both adolescent and adult populations. The validity of qualitative research methods, which she teaches to graduate students in her faculty, is of particular importance to her.
Marie-Claude Richard has a Ph.D. in Social Work. She is a tenured professor in the School of Psychology at Laval University (Quebec City, QC, Canada). She is interested in community psychology, more specifically in the transition to adulthood among vulnerable youths. She is also concerned with improving the ways of effectively supervising graduate students in psychology using qualitative research methods in their research projects.

PUBLIC INTEREST STATEMENT
There are several ways to study social phenomena. It is possible to attempt to identify the variables explaining the development of a social phenomenon using sophisticated tools (e.g. questionnaires, surveys, etc.) or by asking people to relate their own experience. However, since personal experiences tend to be diverse, it can be complex to identify their commonalities, or the "essence" of social experience. To this end, researchers have some tools for analyzing what is referred to here as "qualitative" data. Nevertheless, it is difficult to say whether these tools all have the same explanatory power. We thus analyzed the reported experiences of members of blended families using two methods of qualitative data analysis and then compared our results. We also identified some advantages and drawbacks of each of these methods. We hope that this study will facilitate the choice between these methods of qualitative data analysis, based on the goals pursued.

Using secondary data
The analysis of secondary quantitative data is strongly encouraged (Bishop, 2007; Corti, 2007; Dunn, Arslanian-Engoren, DeKoekkoek, Jadack, & Scott, 2015; Irwin & Winterton, 2011). However, the analysis of secondary qualitative data remains uncommon in the social sciences (Whiteside, Mills, & McCalman, 2012). Indeed, some researchers are very reluctant to use secondary data (for example, Hammersley, 1997; Mauthner & Doucet, 2008; Mauthner & Parry, 2009; Mauthner, Parry, & Backett-Milburn, 1998). On the other hand, the changing landscape of qualitative research in recent years has made it increasingly difficult for lone researchers to collect their own data. The push by funding agencies to create larger research teams has translated into a situation where researchers end up analyzing data collected by post-doctoral students or research assistants (Camfield & Palmer-Jones, 2013; James, 2013). It is thus becoming increasingly common to use pre-existing qualitative data, whether by design or otherwise.
The (re)analysis of secondary qualitative data offers many advantages. In a context where professors must comply with increasingly higher standards (publishing, teaching, grants, etc.), the analysis of previously collected data provides significant financial benefits (Irwin & Winterton, 2011). It also makes it possible to determine the validity, credibility, or generalizability of previous studies (Corti, 2007; Irwin & Winterton, 2011), support primary data collection (Irwin & Winterton, 2011), access rich descriptive data on another historical era or on the context in which the primary data was collected (Bishop, 2007; Corti, 2007; Irwin & Winterton, 2011), conduct new analyses using an emerging theoretical framework or new analytical tools (Camfield & Palmer-Jones, 2013; Gläser & Laudel, 2008; Johannessen, Möller, Haugen, & Biong, 2014), and conduct research on vulnerable or hard-to-reach populations without further intrusion (Whiteside et al., 2012). The use of "unsolicited" data produced spontaneously by participants can also provide crucial information on populations that are very hard to reach using traditional methods (Owens, Hansford, Sharkey, & Ford, 2016).
On the other hand, using secondary qualitative data raises many ethical issues in terms of consent, opportunity and risks, data sharing, transparency, clarity and anonymity, and permission and responsibility (see Yardley, Watts, Pearson, & Richardson, 2014, for a more complete review of ethical issues). The quality of data can also be problematic. Data can be incomplete or outdated or may have been collected improperly, and it can be difficult to assess the quality of the original data collection procedure (Rubin & Babbie, 2008). The breadth and richness of primary data can also be crucial when addressing new research questions (Corti & Thompson, 2004; Whiteside et al., 2012).
The main obstacle observed in the literature is the risk of decontextualizing the data (Andrews, Higgins, Waring Andrews, & Lalor, 2012; Bishop, 2007; Corti & Thompson, 2004; Yardley et al., 2014). This refers to the nature of the relationship between "the researcher and the researched," since being too close to the primary data set might lead to premature conclusions whereas being too distanced might present an obstacle to a more nuanced interpretation (Whiteside et al., 2012). As found by Yardley et al. (2014), the proximity between the researcher and the secondary data set remains a strong subject of debate. Indeed, the loss of context is not always seen as detrimental to interpretation, and it has been argued that secondary analysis can add to the plausibility or generalizability of qualitative findings. It has even been said that secondary researchers "produce more convincing accounts due to access to wider contextual data; greater resources, including time; the 'wisdom of hindsight'; and more sophisticated theoretical frameworks and methods of analysis" (Walters, 2009, as cited by Camfield & Palmer-Jones, 2013: 330). Despite a lack of consensus in the literature regarding the merits of secondary analysis, its goals are the same for qualitative data as for quantitative data (replication, comparison, verification or validation of previous results, development of new data collection or analytical tools, and training or research education), and the issues raised are also comparable (Corti, 2007). However, the proximity between the researcher and the data appears to be a more crucial problem when it comes to qualitative secondary data.

Comparing analytical approaches
Although the use of secondary qualitative data was justified for this methodological exercise, both in terms of the feasibility of this study and to guarantee a comparable "proximity" to the data for the two researchers involved, it appears important to highlight the challenges inherent in such an analysis. The question of the expertise needed to analyze and interpret data, while implicit for primary data, appears even more acute when using secondary qualitative data. Indeed, the interpretation of data belonging to a peripheral field of knowledge requires a certain restraint on the part of researchers in the formulation of results. The issues concern not only the ability to make relevant connections with the knowledge already produced, but also the ability to understand the impact of these results for both research and practice. It is therefore worth noting that the "blended family experience" discussed in this article is merely the pretext for this comparison and should be considered with caution.
Obviously, this type of comparison is open to potential bias. We thus decided to analyze the data separately and then compare our results. Although we have similar profiles in terms of knowledge, experience, and research interests, the fact that we each intentionally selected a method is not without significance. To avoid selection bias, the alternative would have been for one person to use both methods of analysis, but this would have presented the challenge of keeping each analytical process separate. While it is virtually impossible to compare two methods of analysis perfectly, it can nevertheless be fruitful to analyze the same data using two different approaches in order to compare findings (Johannessen et al., 2014), build theoretical models (Whiteside et al., 2012) or, in our case, compare two analytical approaches, namely grounded theory and the general inductive approach.

Grounded theory approach
Grounded theory has two major goals: 1) to build social theories grounded in reality using a process of continuous benchmarking involving a constant back-and-forth between the empirical data and the emerging theory; and 2) to support the scientific credibility of qualitative research by establishing rules for rigorous qualitative data analysis (Glaser & Strauss, 1967). In this article, the term "grounded theory approach" is used to refer to a qualitative method of analysis derived from the more general research method, but whose scope is less ambitious (Méliani, 2013). Indeed, according to Timonen, Foley, and Conlon (2018), the requirement of developing a new theory from GT is misleading. The purpose of grounded theory is to achieve a new understanding of phenomena. It thus involves a stage of theorizing, which is always grounded in empirical data. This approach combines two fundamental and seemingly antagonistic rules: not to apply preconceived concepts to empirical data, and to demonstrate theoretical sensitivity in the theorizing process. The concept of theoretical sensitivity (Glaser, 1978) was proposed to avoid a naive empiricism that would require eliminating all theoretical premises to prevent the contamination of conceptual categories. However, the epistemological position holding that researchers can understand a reality "as it is," that is to say, without a priori, has received a fair amount of criticism (Timonen et al., 2018). The construction of a theory, grounded or otherwise, is necessarily based on the researcher's knowledge (Kelle, 2005), and the development of the categories cannot be achieved without using this knowledge. In fact, a researcher cannot be considered to be a "blank slate" (Charmaz, 2014) and may have a theoretical orientation informed by the literature when using GT (Timonen et al., 2018).
From this perspective, researchers must demonstrate reflexivity in explaining the filter with which they collect data from the field (Méliani, 2013).
The GT approach involves an iterative, or back-and-forth, process between the raw data and coding, in three phases referred to as open, axial, and selective coding (Strauss & Corbin, 1990), entailing six separate operations (Méliani, 2013). Open coding is a form of data splitting that involves identifying key words (coding), which will later become interim codes and formal categories (categorization). The axial coding phase involves organizing the data to develop the main categories (making connections between and integrating the codes). Lastly, selective coding involves developing an explanatory model of the phenomenon based on the links between the categories and in relation to a core category (modeling and theorizing).
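The progressive reduction performed across these three phases can be pictured schematically. The following toy sketch is purely illustrative and is not part of the original study: the excerpts are invented, while the category names echo examples given later in this article.

```python
# Illustrative sketch of the three GT coding phases.
# All excerpts and codes below are invented for demonstration.

# Open coding: each meaningful unit is summarized by a keyword
# kept as close as possible to the experience described.
open_codes = {
    "We argue about the kids' bedtimes": "discipline",
    "My stepson finally calls this place home": "belonging",
    "Her ex still decides everything": "former spouse",
}

# Axial coding: related open codes are grouped into main categories.
axial_categories = {
    "factors related to roles": ["discipline"],
    "factors related to relationships": ["belonging", "former spouse"],
}

# Selective coding: every category is re-examined in relation to a
# single core category that anchors the emerging explanatory model.
core_category = "relationships"

def codes_in(category):
    """Return the open codes grouped under a given axial category."""
    return axial_categories[category]
```

The point of the sketch is simply the direction of travel: many grounded keywords, fewer organizing categories, and finally one core category around which the theorization is built.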

General inductive approach
A significant number of qualitative studies vaguely refer to "inductive analysis" in describing the analytical strategy used. In an effort to synthesize the knowledge available on this analytical strategy, Thomas (2003, 2006) established the parameters of what he calls the general inductive approach (GIA) for qualitative data analysis. The goal of this method is to condense the raw data, make connections between the research objectives and the categories emerging from the raw data analysis, and provide a theory based on these emerging categories (Blais & Martineau, 2006). This strategy is characterized by the use of systematic procedures for identifying themes in the data that are common, recurring, dominant, or significant. It is based on five principles, namely: 1) data analysis is guided by both the research objectives (deductive) and the interpretation of the raw data (inductive); 2) the main function of the analysis is the development of categories that encompass the process (or phenomenon) being studied; 3) the results of the research are inevitably linked to the knowledge and experience of the researcher; 4) two researchers can achieve results that are not perfectly identical using the same raw data; and 5) the authenticity of the results should be systematically evaluated by the researcher (Thomas, 2003).
This method generally involves three phases, namely consolidating, ranking, and establishing connections. Thus, the researcher begins with a careful reading of the data to identify themes or codes that are common, dominant, or significant. This generally results in the identification of 30 to 40 emerging codes. Then, based on the research questions and the literature available on the subject, the researcher reviews these codes and groups them under 15 to 20 broader headings. Finally, the researcher refines this categorization, highlighting categories and analyzing the relationships between them, or bringing out a temporal sequence linking them together. After identifying key aspects of the process or phenomenon under study, the researcher is usually left with no more than 8 to 10 categories viewed as being the most important.
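This reduction funnel can be sketched in miniature. The toy example below is illustrative only; the code and heading names are borrowed from, or invented in the spirit of, the examples given later in this article, and only a handful of codes stand in for the 30 to 40 that a real analysis would produce.

```python
# Illustrative sketch of the GIA reduction funnel (invented/adapted codes).

# Phase 1 (consolidating): careful reading yields emerging codes
# (typically 30-40; only six are shown here).
emerging_codes = [
    "mother's jealousy", "protecting the children",
    "children with anger towards their stepmother",
    "children refusing affection from their stepmother",
    "creating family traditions", "rearranging the house",
]

# Phase 2 (ranking): codes are grouped under broader headings
# (typically 15-20), guided by the research questions and literature.
headings = {
    "search for legitimacy": [
        "mother's jealousy", "protecting the children"],
    "conflicting loyalties": [
        "children with anger towards their stepmother",
        "children refusing affection from their stepmother"],
    "building a family identity": [
        "creating family traditions", "rearranging the house"],
}

# Phase 3 (establishing connections): headings are refined into the
# 8-10 categories judged most important for the final theorization.
final_categories = ["disrupting the homeostasis", "a new family system"]

# Sanity check: every emerging code has been placed under a heading.
placed = [code for codes in headings.values() for code in codes]
assert sorted(placed) == sorted(emerging_codes)
```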
An extended review of studies using these two analytical approaches is well beyond the scope of this paper. Readers interested in specific examples are invited to consult the work of the main historical and contemporary authors, including Barney Glaser, Anselm Strauss, Kathy Charmaz and Adele Clarke (Apramian, Cristancho, Watling, & Lingard, 2017).

The researchers
Because qualitative analyses are inevitably influenced by the researcher's knowledge, skills, and experiences (Guba & Lincoln, 1981; Patton, 2002), it is important to briefly present our profiles. First, both authors completed a doctorate in social work at the same university during the same period. In addition, we have the same number of years of experience as professors and share common fields of interest. Perhaps the most significant difference between us is our experience with the methods discussed here. In fact, in our own previous studies, one of us (MCR) has used grounded theory while the other (IFD) has used the general inductive approach. This is how the idea for this study originated. In order to "fairly" compare the advantages and pitfalls of our own preferred analytical approach, we chose to reuse data that neither of us had collected, or what we refer to as "neutral data."

The data
The analysis is based on secondary data collected by other researchers as part of a study on the factors that facilitate or interfere with the stability of blended families (Saint-Jacques et al., 2011). We chose this topic because it was sufficiently close to our research interests to allow us to contextualize it, while being far enough from our primary area of expertise for us to have comparable knowledge on the subject. The questions that guided the initial research were: 1) What is the nature and intensity of the difficulties these families face?; 2) What are the strategies adopted to mitigate these difficulties?; and 3) What are the contextual elements that facilitate or complicate the mitigation of these difficulties? The sample for this study was constructed using the contrast-saturation technique (Pires, 1997), which involves selecting respondents with varied demographics so as to ensure greater validity of the results and maximize their transferability (Bertaux, 1997). Data were collected using semi-structured interviews lasting an average of 95 minutes. The analysis was conducted on the raw data, that is, the content of the full transcripts of the recorded interviews.

The experimental method
The two authors avoided contact during the analysis of the secondary data, which took place between November 2013 and March 2014, in order to isolate our respective analyses. We used QDA Miner software (version 3.0) to perform our analyses. This tool helped us keep track of each of the steps we took by using the logbook (e.g. code numbers for each step, merging/dividing codes, transforming the codes into categories, etc.). The analyses presented here do not aim to highlight the experience of blended families, but rather to illustrate how two researchers coded and interpreted secondary qualitative data using different methods of analysis. Our focus is therefore on the process that led to the results.

Analysis using the grounded theory approach
Using GT to analyze secondary data is not ideal (Paillé, 1994: 152) but remains possible with rich data (Coghlan & Filo, 2013; Timonen et al., 2018; Whiteside et al., 2012). The first phase, open coding, involved summarizing the meaningful units using keywords that were as close as possible to the experience described. Open coding was applied to the transcript of the first interview and identified 39 keywords, including "winning conditions" and "expectations." Subsequently, all the excerpts containing each keyword were examined in a reduction exercise to determine whether they were indeed related to one another, whether the choice of keyword could be improved, and whether it was possible to group any keywords together. For example, the keywords "relationships," "siblings' names," and "new child" were merged into a temporary code named "sibling relationships." The revision of the provisional categories involved reviewing all the interview transcripts to verify the appropriateness of the reduction undertaken and, where appropriate, identify new categories (Duchesne & Savoie-Zajc, 2005). For example, the "experience of single parenthood" code emerged at this time. A set of questions regarding the possibility of merging, adding, or changing codes to better reflect the idea represented guided the revision. Reading the verbatim transcripts made it possible to classify each code under one of the interim categories, namely: individual characteristics, individual experiences, retention factors, obstacles, family characteristics, and values. All temporary codes and categories were then re-examined, and improved or renamed as appropriate, thus enabling their transformation into formal categories and subcategories. This step involved reviewing the excerpts relating to each category, subcategory, and provisional code to determine which seemed the most promising for theorizing. Table 1 presents some codes and classifications generated during this first coding phase.
The second phase of analysis was axial coding. At this stage, the data were organized to develop the main categories (Strauss & Corbin, 1990). Thus, the subcategories were reviewed and those whose content did not appear relevant were eliminated. Likely because of her training and repeated involvement in projects based on this approach, the researcher using GT in this study (MCR) was spontaneously inspired by an ecological framework that led her to go beyond the family and consider external influences. By organizing the various factors according to an ecological perspective, the values category appeared more meaningful when it was included under factors related to roles and to the blending of families. Also, some subcategories appeared to be more distal, acting as "support factors." The main categories were thus modified to: factors related to self, to relationships, to roles, to the blending of families, and to support. Using the axes borrowed from the ecological perspective, some categories corresponded to "proximal" factors (factors related to self, to roles, to the blending of families, and to relationships), while others referred to "distal" factors (support factors). By organizing the categories according to this perspective, it became clear that relationships were at the heart of the respondents' discourse: "proximal" relationships involving spouses, children, stepchildren, and the respondent him or herself, and "distal" relationships involving the surrounding society as a whole. The analysis was thus further pursued around a core category entitled "relationships."

The last stage of codification, selective coding, generated an explanatory model of the phenomenon based on the connections between the categories and in reference to the core category of analysis (Strauss & Corbin, 1990). First, organizing the categories around a core category revealed that one of the major challenges facing blended families is relational in nature.
The second stage of the selective coding process was to examine three aspects of the core category: its properties, or the elements that distinguished it as the main category; its dimensions, or the significant aspects of its properties; and the conditions that must exist for the category to be applicable. This analysis involved a return to the excerpts classified under the major categories. Thus, the categories "factors related to self," "factors related to roles," "factors related to the blending of families" and "factors related to relationships" helped define the "proximal" nature of some relationships and bring out a particular dimension, that of the parent and step-parent roles. The category "supporting factors," for its part, brought out the "distal" nature of some relationships, and provided information on the dimension of social support and the social perceptions of blended families. Table 2 summarizes the three components of the core category of analysis chosen, namely "relationships." The respondents' discourse revealed that blended families represent a relational phenomenon. On the one hand, they involve close relationships wherein the person has to be positioned in his or her role as a step-parent, and on the other hand, they involve distal relationships referring to the notions of support and social perception. The respondents played one of three different roles in their relationship with their stepchildren: the role of emotional supporter (friend, confidant, equals), a supervising role (neutral, intermediary, values manager), or a disengaged role (a "nonparent"). Respondents whose discourse revealed the distal sphere to be less supportive denounced the negative social perceptions of step-parents and the burden of proof of good faith that weighed on them in order to "earn" the consideration of their entourage.
The final stage of the selective coding analysis involved developing a theorization of the phenomenon. Based on this analysis, it could be concluded that the success of a blended family depends on the possibility of adopting a role that is coherent with the internal dynamics of the family (the proximal aspect) and the pressure or support coming from the environment (the distal aspect). The better able a person is to adopt a role that is appropriate to the circumstances, the more likely the blended family will succeed. The researcher (MCR) identified three roles based on the respondents' discourse.

The role of "supporter"
The step-parent considers him or herself to be a friend or "equal." The blended families in question were those in which the relationship with the stepchildren was strongest, albeit different from the relationship with the respondent's own children (Interviews #104 and #120). This is not a relationship characterized by what is different, but rather by being "chosen."

The role of "mediator"
In these families, the step-parent occupied the more neutral role of intermediary, a values manager (Interviews #105, #107, and #119). In this role, the relationship with the stepchildren is less close emotionally, and the step-parent manages values, reactions, and living spaces. For example, Respondent #105 talked at length about the reconciliation of values: those of her spouse as a parent, those of the former spouse as the other parent, and her own, all of which also had to be consistent with the values proposed to her own children. Respondent #107 referred to a "status quo" with regard to her relationship with her stepdaughter, while also managing different approaches to child rearing. Neutrality was an important aspect, especially present in the discourse of Respondent #119, who presented the idea of a neutral living space as a blank page on which everyone can invent a new role, outside of a normal structure. The following excerpt illustrates this concept: "Neutral spaces were not charged emotionally, which allowed us to have different rules for the father and for me that were more central" (Respondent #119).

A role "to be defined"
The step-parents in this case viewed themselves as "non-parents" (Interviews #117 and #128). They found themselves in a role that was defined by what they were not: a parent. This new reality is illustrated in the following excerpts: "It's not clear. There isn't a long history of blended families; they haven't been around for centuries and centuries. This means that sometimes it's hard to tell what's normal, what should I expect?" (Respondent #117). "I found it a bit difficult knowing how to position myself as a parent, but at the same time not a parent, not a mother" (Respondent #128).

Analysis using the general inductive approach
The first phase of analysis of the general inductive approach is open coding/categorization. The researcher identifies broad themes discussed by the participants and codes them accordingly. Code names representing specific emotions or actions (e.g. "mother's jealousy" or "protecting the children") are easier to use than very broad codes (e.g. "feeling bad"). Codes should also give "meaning" to what the participants express and thus already constitute a first step of the analysis. In our study, 46 codes emerged during this phase (see Table 3 for examples).
In the second phase, the researcher (IFD) turned to the literature dealing with the themes that emerged during the open coding/categorization phase in order to develop sensitizing concepts. Some of the literature showed that blended families are associated with negative myths (Määttä & Uusiautti, 2012) and are subject to negative stereotypes (Saint-Jacques et al., 2011). There are few positive role models for stepmothers, which complicates the adoption of this role and the normalization of the difficulties involved (Felker, Fromme, Arnault, & Stoll, 2002). Issues relating to discipline appear to be universal among blended families (Felker et al., 2002;Hetherington & Clingempeel, 1992) and spousal support appears to be essential in order to positively resolve this difficulty (Gosselin, Doyon, Laflamme, & David, 2007). The unity of the couple also contributes to the development of a strong family identity, which contributes to successful blending (Felker et al., 2002;Gosselin et al., 2007;Micheals, 2006).
Step-parents make many gestures to facilitate entry into this new family identity, for example by rearranging the house, creating family traditions, or engaging in sports and recreational activities (Micheals, 2006). In turn, this strong family identity encourages children to develop closer ties to their step-siblings (Gosselin et al., 2007). Comparing the emerging codes to the literature made it possible to group them under "broader" headings, which were not yet categories. For example, the codes "children with anger towards their stepmother" and "children refusing affection from their stepmother" were grouped together under the heading "conflicting loyalties." The headings thus conveyed abstract concepts encompassing many emotions, actions, reactions, etc. They were not as "visible" as the codes. This second phase is very important, as the scientific literature is used to triangulate the findings from the first coding and grasp the "bigger picture," making it possible to theorize.
In the third phase of GIA, the emerging categories are combined/compared with the sensitizing concepts to become the new theory. Coming from a social work background, when the researcher (IFD) sought to understand the process of becoming a "successful blended family," it appeared to her that this process was "systemic." Three main categories were found at this stage: "A hostile environment fed by negative myths"; "The redefinition of boundaries (a) among the subsystems and (b) between the family and the environment"; and "The system's capacity to adapt: openness and flexibility" (see Table 3). Each code/heading and category was then reanalyzed and compared again with the existing literature in order to regroup, reorder, or reorganize the sequence of events. For example, the literature showed that "A hostile environment fed by negative myths" and "Presenting a united front" (a heading under "The redefinition of boundaries (a) among the subsystems") did not actually belong to separate categories, since presenting a united front in fact represented a reaction to the hostile environment. The categories thus had to be redefined accordingly.

Final result: Comparing the emerging categories to the sensitizing concepts led to the identification of the first theoretical category, Disrupting the homeostasis. The first two years of stepfamily life were generally difficult. Some stepmothers experienced an antagonistic relationship with their partner's former spouse. Others perceived an intrusion by the former partner in a teenager's oppositional behaviour. Some had to encourage their spouse to be more demanding in terms of discipline, while others felt excluded from the very close relationship between the father and his child(ren) (Subcategory "Search for legitimacy").
To overcome these difficulties, "successful" blended families were built around a strong bond of love between the parents and a common desire to create a new family identity. As the architects of this new family identity, they had to be patient, accommodating, and flexible in adopting their new roles. These early years had to be devoted to the recognition of their respective legitimacy among the children, but also among those around them. The stepparents had to come to be recognized as a "benefit" in the lives of their stepchildren. They had to contribute significantly to the welfare of the spouse in addition to meeting the (step)children's needs for security, care, and affection (Subcategory "Strong bond between the parents").
The second theoretical category to emerge was A new family system. The emergence of a family identity allowed the members of the blended family to solidify their ties. Spouses gradually took on similar and complementary roles within the family. The step-parent was no longer perceived as an intruder and could also enforce family standards and rules. Traditions were established. The distinctions between "child" and "stepchild" dwindled and, in some cases, the stepmothers talked about "the children," without differentiating between them. The fathers showed themselves to be more open to their partner's suggestions and interventions regarding their children and thereby gradually abandoned their position as the "protector." As for the children, they came to tolerate their stepmother's disciplinary interventions more easily, having developed a more intense emotional bond with her. They became warmer towards her, which, in turn, facilitated the consolidation of the stepmother's parental identity (Subcategory "Involving the children"). Ultimately, the blended family redefined its own sense of family which, while not being seen as the opposite of an intact family, was not seen as "the" desired model either. Some blended families were barely distinguishable from intact families, while others did not always reside under the same roof and could have asymmetric rules (each parent setting specific rules for their own children) and tolerate wide variations between the values advocated by its members (Subcategory "Choosing oneself"). In summary, blended families are structured around a quest for family identity that requires recognition of its legitimacy, a strong alliance between the spouses and openness to the children's feelings and desires, as well as a redefinition of roles allowing each family member to "choose oneself" through new traditions and a new definition of family boundaries. Figure 1 illustrates this theorization of successful blended families.
Comparison of the results obtained using each method
From our point of view, the results are comparable in terms of understanding the phenomenon. Indeed, both methods led to the identification of the importance of roles in blended families. Our results nevertheless showed that these two analytical approaches can lead to different outcomes, as the GT approach led to a typology. This difference cannot be attributed solely to the analytical approach used, however, since the theoretical framework adopted may also have contributed to a more detailed analysis of the distal social roles in the family. Nevertheless, GT appears to have provided an additional tool to make sense of the data. As for the analysis using GIA, in the absence of the core "relationships" category, which pushed the GT analysis to a deeper level of complexity, it was difficult to see these links "appear." Based on the results of our study, it appears inappropriate to claim that "the outcomes of analysis [using GIA] may be indistinguishable from those derived from a grounded theory approach" (Thomas, 2003, p. 9). However, since it is almost impossible to establish beyond doubt that the two methods are not equivalent, we propose choosing between them on the basis of their advantages and drawbacks, rather than on the comparability of their theorizations.

Advantages and drawbacks of each approach
The GT approach amounts to a "leap of faith," given the great importance placed on the researcher's reflection, which is essentially shaped by the data and by the requirement to constantly return to them to avoid missteps. It appears that while the GT approach can be used on a body of secondary qualitative data, its use in this particular context restricts the scope of the analysis. Since it is not possible to deepen the avenues of analysis that emerge during data collection by reorienting it based, for example, on new research questions, the researcher's analysis is confined to the data already collected. This is akin to exploring a closed room rather than having the ability to open certain doors. In our case, this process was less restrictive because the researcher who carried out the data collection had been able to outline and test some avenues of analysis while immersed in the data. Nevertheless, some factors played a role here in achieving the development of a more complex theory using secondary qualitative data (Whiteside et al., 2012). First, we ensured the quality of the data by approaching a researcher renowned for her professionalism and thoroughness. Second, the data fit our research objectives, as they were sufficiently rich to permit theorizing.
Conversely, it is relatively easy to use the GIA because of its flexibility (see Table 4 for a comparison of the two analytical methods). Indeed, the main themes that emerge from a body of data are relatively obvious, even if one does not know the phenomenon under study, as the respondents' statements provide a fairly good idea of "what is happening in the data." Being able to then compare the results of this inductive analysis to the scientific literature solidifies this first step and is reassuring for the researcher. The interpretation appears to emerge quite naturally from the combination of the inductive and deductive phases. In summary, although the GIA appears less well-suited to developing a complex theory, it proved to be an efficient, relatively quick, and less demanding method by which to meet the research objectives.

Conclusion
Our experimentation was as airtight as possible. We were each able to approach "neutral data," in isolation from one another, using one of the two approaches. In doing so, we were also able to challenge the idea that two analytical approaches would lead to exactly the same results: the selective coding of GT appears to have allowed for a higher level of analysis that had no equivalent in GIA. Our study also provides a new perspective on the quality of analysis produced by each approach and highlights their strengths and limitations. It is hoped that these findings will facilitate the choice between them according to the research objectives pursued and the researchers' skills, experience, and preferences. That being said, it is important to keep in mind that there may be a subjective aspect in our approach to coding the data that contributes to explaining our divergent conclusions.
Another approach would have been to collaborate on a GT analysis and then compare the results with those of a collaboration using the GIA approach. This might have facilitated the identification of the particular features of each analytical approach without being confounded by the individual differences of the researchers. This idea did not occur to us prior to our experimentation, but could be considered in the future.
Our study demonstrates the importance of the conceptual framework used in the crafting of theorized explanations of social facts. This is a very important finding, since the "level" of theoretical sensitivity needed to make sense of qualitative data is still strongly debated among grounded theory specialists (see Charmaz, 2006). Indeed, since grounded theory builds from the ground up, a prior conceptual framework should, in principle, be less necessary. From our point of view, however, it was virtually impossible to analyze the data in an entirely "a-theoretical" and purely inductive manner. In some way, researchers must understand whatever subject they study, regardless of the methods they use. We spontaneously referred to the ecological model and systems theories to make sense of the data, since these frameworks are common and useful for the analysis of families in the social work field.
Ultimately, the question remaining is whether it is possible, or even desirable, to analyze secondary qualitative data without knowing the conceptual framework(s) relevant to the subject studied. One way to answer this question would be to select secondary qualitative data on subjects totally unknown to the researchers, but this would open a series of ethical questions regarding the risks for participants (see Irwin, 2013). Nevertheless, our results highlight the importance of reflexivity and transparency in the choice of the framework selected to analyze data, since its influence on the results appears to be greater than that of the data analysis approach used.
To conclude, the main contribution of this study was to compare two analytical approaches to theorizing qualitative data. This kind of scientific exercise is rare (Andrews et al., 2012) and to our knowledge, no previous study has compared these two approaches. Nevertheless, this exercise is relevant to social science researchers in at least two ways. First, it highlights the importance of being transparent about the choice of the conceptual framework used, given its importance in theorizing from qualitative data. Second, it illustrates a fruitful way to use secondary qualitative data to inform methodological questions, rather than empirical or theoretical ones.