Feasibility and Acceptability of Ecological Momentary Assessment With Young Adults Who Are Currently or Were Formerly Homeless: Mixed Methods Study

Background Ecological momentary assessment (EMA) has been used with young people experiencing homelessness to gather information on contexts associated with homelessness and risk behavior in real time and has proven feasible in this population. However, the extent to which EMA may affect the attitudes or behaviors of young adults who are currently or were formerly homeless and are residing in supportive housing has not been well investigated. Objective This study aims to describe feedback regarding EMA study participation from young adults who are currently or were formerly homeless and to examine reactivity to EMA participation and compliance. Methods This mixed methods study used cross-sectional data collected before and after EMA, intensive longitudinal data from a 7-day EMA prompting period, and focus groups of young adults who are currently or were formerly homeless in Los Angeles, California, between 2017 and 2019. Results Qualitative data confirmed the quantitative findings. Differences in the experience of EMA between currently and formerly homeless young adults related to stress or anxiety, interference with daily life, difficulty charging, behavior change, and honesty in responses. Anxiety and depression symptomatology decreased from before to after EMA; however, compliance was not significantly associated with this decrease. Conclusions The results point to special considerations when administering EMA to young adults who are currently or were formerly homeless. EMA appears to be slightly more burdensome for young adults who are currently homeless than for those residing in supportive housing, a nuance to consider in study design. The lack of a relationship between study compliance and symptomatology suggests low levels of reactivity.


Background
Ecological momentary assessment (EMA), also known as the experience sampling method, is an intensive, longitudinal, real-time sampling strategy with widespread adoption in public health research and social sciences [1][2][3][4]. EMA, which leverages advancements in mobile computing to deliver repeated surveys throughout a specified measurement period, often by using cell phone technology [5], can reduce recall biases and improve the ecological validity and environmental representativeness of the collected data [6]. Given that young adults experiencing homelessness, of which the prevalence is estimated to be 1 in 10 annually in the United States [7], live relatively unstable lives that are highly affected by their immediate environment [8,9] and have high rates of mobile phone technology adoption [10], EMA may be well suited for use with young adults experiencing homelessness [11,12]. In addition, this method may overcome existing limitations in homelessness research that has relied on methods using retrospective reporting, including cross-sectional designs (eg, single point-in-time measurement) [13] and longitudinal study designs with infrequent measurement points (eg, monthly follow-ups) that include issues with attrition [14]. However, although EMA is likely to be a useful method in homelessness research, it is important to better understand the feasibility and acceptability of EMA in this population.
To date, few studies have used EMA to investigate the daily life experiences of young adults who have experienced homelessness, although those that have demonstrate the feasibility of EMA in this population. Santa Maria et al [15] provided smartphones to 66 young adults aged 18 to 25 years who were homeless to collect EMAs over 21 days and found daily drug use to be predicted by discrimination, pornography use, alcohol use, and urges for substance use and stealing behaviors. In a different study, Tyler et al [16] distributed mobile phones to implement EMA via SMS text messaging with 150 youths aged 16 to 22 years who were homeless over a 30-day period and found that experiencing physical or sexual victimization on a specific day was positively associated with drinking alcohol later that day. Both studies reported high compliance with completing EMA prompts, and the latter study also reported that participants perceived the study to be of low burden [17].
Another important aspect of EMA feasibility is reactivity. Reactivity is understood to be the extent to which the frequency or quality of behavior changes as a result of being monitored [18]. Understanding the potential reactivity is critically important as it suggests that EMA could serve as a possible intervention or manipulation. The literature has generally found low reactivity when using EMA [19][20][21][22][23], including in college and clinical samples [24,25]. Acorda et al [26] found that young people experiencing homelessness were highly receptive to EMA but may have experienced limitations regarding the use of technology and that the repetition of EMA prompts may have affected some behaviors of those participating. However, the extent to which EMA may affect the attitudes or behaviors of young adults experiencing homelessness during a time of identity formation and instability, especially when examining risk behaviors, including sex risk and substance use, has not been well investigated.

Objective
More research is needed to understand the daily experiences of young people who have experienced homelessness, including those who have transitioned into supportive housing, which is a primary intervention being applied to homelessness. Previous work has identified ways in which housed and unhoused young adults differ, including abuse at home [27], which affects mental health [28,29] and substance use [30]. To understand the environmental influences on young adults who have experienced homelessness, it is imperative to examine both those who are currently experiencing homelessness and those who have transitioned from homelessness to supportive housing environments. This study seeks to address this gap by using a mixed methods approach to examine whether young adults who are currently homeless (ie, unhoused) and those who were formerly homeless (ie, housed in supportive housing) differ in terms of acceptability, compliance, and reactivity to EMA.

Study Design
This mixed methods study examines the experiences of EMA in a sample of young adults currently experiencing homelessness and young adults who were formerly homeless and had been placed into supportive housing programs. Specifically, as described in a previously published research protocol paper, young adults participated in a study on health risk behaviors using geographic EMA through a smartphone app that allowed for the collection of time-stamped geographic location data along with EMA behavioral data. Consistent with previous literature [15][16][17], high compliance with completing EMA prompts (80.2% across the entire sample) over a 1-week study period has already been reported for the combined sample [31]. For this study, we first compared the feedback of housed and unhoused participants regarding their experiences of participating in the EMA week. Responses were then used to examine rates and predictors of EMA compliance and survey responses regarding acceptability and feasibility, comparing those in housing with those who were currently homeless (ie, unhoused). Reactivity was examined using reported anxiety and depression symptomatology before and after the EMA week. Next, we used a qualitative approach to analyze focus group data from participants who are currently or were formerly homeless to better understand their experiences with EMA, which may help explain our quantitative findings.

Participants
Participants (N=231) in transitional living programs or permanent supportive housing (ie, housed sample; n=122, 52.8%) and participants not in housing programs (ie, unhoused sample; n=109, 47.2%) were enrolled in the study in Greater Los Angeles using stratified convenience sampling. Unhoused participants were recruited via drop-in centers and emergency shelters, including individuals who were literally homeless or unstably housed with temporary living situations that were not reliable beyond 30 days (eg, temporarily crashing with a friend or family member or couch surfing). Participants who consented to the EMA component of the study received up to US $90 in scaled compensation based on response rates and were given the choice of using a study phone with an unlimited data plan or their own smartphone, with an additional US $10 compensation for using their own data plan.

Ethics Approval
All protocols and procedures were approved by the institutional review board at the University of Southern California (review number: UP-16-00046) [31].

Overview
Participants were enrolled in a 7-day EMA study comprising questions about their current experiences and those of the previous 2 hours, delivered approximately every 2 hours during waking hours via a custom-built app for smartphones running the Android operating system (Google). This study used custom EMA software written by the investigative team. Phones were programmed to only deliver prompts during the waking day, which was determined using the participants' individual estimated sleep and wake times. Prompted surveys asked about physical and social environments, as well as affect and substance use. Participants received an average of 5 EMA prompts per day.
In addition to the EMA prompts, participants completed a daily survey for each EMA day. Daily diaries captured the risk behaviors of the previous day and infrequent behaviors that may be missed by EMAs. Daily survey prompts were scheduled to be delivered at a participant's preferred time but were also available to access via the app at any time during the day to report on the previous day. Daily surveys inquired about participants' social environments, sex behaviors, and substance use.
Before the EMA week, participants completed a baseline interview that took an average of 60 minutes to gather demographic information and data on their histories of homelessness, mental health, and other behaviors. Following the EMA week, participants completed an exit survey, in which their thoughts and feelings regarding their participation in the EMA study were gathered. The exit surveys lasted approximately 30 minutes. Participants returned the phones and were paid for study participation at the conclusion of the exit survey. The Patient Health Questionnaire-9 was used to assess depression symptomatology [32], and the General Anxiety Disorder-7 was used to assess anxiety symptomatology [33] at both the baseline and exit surveys.
To reduce missed surveys, there were multiple push notifications for both the EMA and daily survey prompts. EMA prompts required a response within 10 minutes after the first prompt, which comprised a chime and vibration. During this 10-minute window, push notifications were sent every 3 minutes. After 10 minutes, the EMA prompt became inaccessible to ensure momentary reporting of the current time and day. Daily surveys were programmed to send push notifications at 3 time points during the day; however, they could be answered at any point within the day; when answering the daily surveys, participants reported on the prior waking day. The complete study methods are available for further review elsewhere [31] (see Multimedia Appendix 1 for the complete EMA questionnaire and Multimedia Appendix 2 for the daily survey questions).
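The prompting rules described above (prompts delivered only during a participant's self-reported waking window, a 10-minute response deadline, and reminder notifications every 3 minutes within that window) can be sketched as follows. This is an illustrative reconstruction, not the study's actual software; all function and constant names are our own assumptions.

```python
from datetime import datetime, time, timedelta

RESPONSE_WINDOW = timedelta(minutes=10)   # prompt becomes inaccessible after 10 min
REMINDER_INTERVAL = timedelta(minutes=3)  # push notifications every 3 min in window

def in_waking_window(t: time, wake: time, sleep: time) -> bool:
    """True if clock time t falls within the participant's reported waking day."""
    if wake <= sleep:
        return wake <= t < sleep
    return t >= wake or t < sleep  # waking window crosses midnight

def reminder_times(prompt_at: datetime) -> list[datetime]:
    """Times of reminder notifications sent while the prompt is still answerable."""
    times, t = [], prompt_at + REMINDER_INTERVAL
    while t < prompt_at + RESPONSE_WINDOW:
        times.append(t)
        t += REMINDER_INTERVAL
    return times

def is_answerable(prompt_at: datetime, now: datetime) -> bool:
    """The EMA survey locks 10 minutes after the chime to keep reports momentary."""
    return prompt_at <= now < prompt_at + RESPONSE_WINDOW
```

Under these rules, a prompt chiming at noon would trigger reminders at 12:03, 12:06, and 12:09 and then lock at 12:10.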

Analyses
Quantitative analyses in this study included chi-square analysis to compare results by housing status and bivariate linear regressions to predict EMA and daily compliance. Compliance measures the total number of prompts answered out of those received. Because the aim of this study was to examine personal factors, such as housing status, rather than artifacts of the EMA technology that also affect compliance, we calculated compliance based on the number of prompts received rather than the number of prompts possible (ie, scheduled). Furthermore, mixed effects regressions assessed reactivity using the Patient Health Questionnaire-9 [32] and General Anxiety Disorder-7 [33], with random intercepts for each participant.
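The denominator choice above can be made concrete with a minimal sketch (the function name and structure are ours, not the study software's): dividing answered prompts by prompts received, rather than by prompts scheduled, removes losses caused by hardware and software artifacts and isolates person-level factors.

```python
def compliance(answered: int, received: int) -> float:
    """Fraction of received prompts that were answered.

    Using prompts *received* (not scheduled) as the denominator excludes
    delivery failures from hardware/software artifacts, so the rate reflects
    participant behavior rather than technology behavior.
    """
    if received == 0:
        return float("nan")  # no prompts received; compliance undefined
    return answered / received

# With the EMA counts reported in this study, compliance(8001, 9980) gives
# roughly 0.80, whereas using scheduled prompts as the denominator
# (8001 / 17944) would give roughly 0.45.
```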

Qualitative Component
Overview

A total of 4 separate focus groups were conducted to better understand participants' experiences with EMA; each occurred within 2 weeks of the participants' last day in the study and lasted approximately 1 hour. Focus groups were chosen as an efficient way of capturing wide-ranging reactions to EMA. Two focus groups included participants recruited from housing programs (one with n=12 transitional living program residents and one with n=6 permanent supportive housing program residents), and 2 focus groups (n=6 and n=7) included participants recruited from youth drop-in centers. Focus group facilitators began by asking participants about their general experiences in the study (eg, "What did you like? What did you not like?"). Additional probing questions covered specific aspects of study participation (eg, whether EMA interfered with their daily lives), perceived reactivity to EMA surveys, how accurate participants thought their reporting was, comfort reporting on sensitive topics such as drugs and alcohol, the timing and density of survey prompts, and suggestions for similar studies in the future.

Analyses
Focus group recordings were transcribed and evaluated by 2 independent reviewers, one of whom was the focus group facilitator. Coding and case summaries took a deductive approach, using the exit survey questions on experiences of study participation as a guide (see Multimedia Appendix 3 for the focus group interview guide). Co-coder consensus was achieved through the codevelopment of 4 case summaries, 1 for each focus group, that summarized each item and included quotations. Case summaries were then analyzed, first considering the housed and unhoused groups separately and then together, to see what might account for and expand upon the differences found.

Table 1 describes the sample characteristics by housing status, and Table 2 describes the exit survey responses, also by housing status. Approximately two-thirds of the participants had an overall positive experience with the study, >90% reported that they would be willing to participate in the study again, and 67.5% (156/231) reported that the study took place during a typical week. In addition, 69.7% (161/231) did not feel judged about their sex or drug use, and approximately half of the participants would prefer to use their own phone in a new study, provided they owned a phone compatible with the EMA technology. As a result of personal choice and incompatibility, only 9% (21/231) used a personal phone in this study, and significantly more unhoused individuals opted for personal phone use. Housed and unhoused participants also reported statistically significant differences in their experiences of charging the phones, behavior change because of EMA content, being open and honest about EMA survey questions, and whether EMA interfered with their daily life.
Compared with those in housing, unhoused participants reported greater difficulty charging their phones (P=.007 to overall P=.02), greater self-perceived behavior changes in response to EMA (P=.001 to overall P<.001), that EMA interfered more with their daily life (specific and overall P<.001), and more stress or anxiety because of EMA surveys (P=.008 to overall P=.02), whereas housed participants reported that they were more comfortable answering EMA survey questions openly and honestly (overall P=.03).

Quantitative: Delivery and Compliance
Out of a theoretical maximum of 20,076 prompts, 17,944 (89.38%) were scheduled for delivery by the custom software. The discrepancy of 10.62% (2132/20,076) may be explained by hardware issues (eg, low battery), software issues (eg, app crashing), or schedule timing (eg, a participant enrolled at midday would not have received earlier prompts). Of the scheduled prompts, 39.57% (7101/17,944) were not delivered because the app detected that the prompt time fell within the sleep parameters specified by the participant. Of the 17,944 prompts, 679 (3.78%) were not delivered because of Android system features designed to conserve battery life and memory, and 138 (0.77%) were not delivered because the phone was intentionally turned off. The remaining undelivered scheduled prompts, which totaled 46, were missing because of unknown software or hardware errors. In all, of the 17,944 prompts, participants received 9980 (55.62%) and completed 8001 surveys upon answering the prompts (8001/9980, 80.17% EMA completion). Participants failed to answer 18.24% (1820/9980) of prompts and answered but did not complete 1.59% (159/9980) of surveys. Participants took, on average, 97 (SD 70, range 19-599) seconds to complete the EMA surveys and completed 13.8 (SD 6.4, range 0-25) questions per EMA survey (see Multimedia Appendix 1 for the EMA survey questions and Figure 1 for an example screenshot of the app). For the daily surveys, out of a theoretical maximum of 1673 prompts, 7 participants (3%, 7/231) did not receive their prompts, accounting for 2.93% (49/1673), and an additional 6.69% (112/1673) of daily survey prompts were not delivered because of possible hardware, software, or schedule timing issues. Of the 1512 scheduled daily survey prompts, 5 (0.33%) were not delivered because of Android system battery conservation issues, and 5 (0.33%) were not delivered because the phone was turned off by the participant.
Out of the 1502 daily survey prompts that were received, participants answered and completed 1376 (91.61% daily compliance); participants answered but did not fully complete 6 (0.4%) surveys. Participants took, on average, 89 (SD 61, range 14-570) seconds to complete daily surveys and completed 12.2 (SD 5.1, range 0-45) questions per daily survey (see Multimedia Appendix 2 for the daily survey questions).
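The EMA delivery funnel reported above can be tallied end to end. The short script below simply re-derives the headline figures from the counts in the text; it is a bookkeeping check, not study code, and the variable names are ours.

```python
# Counts reported in the text for the EMA prompt delivery funnel
scheduled = 17_944          # prompts scheduled by the custom software
suppressed_sleep = 7_101    # fell within participant-specified sleep hours
android_battery = 679       # withheld by Android battery/memory management
phone_off = 138             # phone intentionally powered off
unknown_errors = 46         # unexplained software/hardware losses

# Prompts that actually reached participants
received = scheduled - suppressed_sleep - android_battery - phone_off - unknown_errors
completed = 8_001           # surveys answered and completed

print(received)                               # prints 9980
print(round(100 * completed / received, 2))   # prints 80.17 (EMA completion %)
```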
The results of the analyses of compliance are presented in Table 3. Neither daily compliance nor EMA compliance was associated with housing status (t1=−0.38, P=.71 and t1=−0.86, P=.39, respectively). Only 9.1% (21/231) of participants chose to complete the study on their own phone, with no difference in compliance between those who used a personal versus a study phone (t1=1.15, P=.25 and t1=0.74, P=.46, respectively). Compared with those reporting negative or neutral experiences, participants who reported a very positive or somewhat positive experience with the study had 6% (SE 0.03%) greater EMA compliance (t1=2.49; P=.01). Furthermore, those reporting not at all or a little bit of difficulty charging their device had 6% (SE 0.03%) greater EMA compliance than those reporting more difficulty charging their device (t1=2.41, P=.02).
Daily survey compliance was not associated with general experience with the study (t1=0.97; P=.33), nor was participants' self-report of honesty regarding survey responses associated with EMA or daily compliance (t1=0.92, P=.36 and t1=1.23, P=.22, respectively). Compared with those reporting somewhat or greater interference of the study protocol in their lives, participants who reported not at all to a little bit of interference had 8% (SE 0.02%) greater EMA compliance (t1=−3.83; P<.001) and 6% (SE 0.02%) greater daily compliance (t1=−3.42; P=.001). Similarly, participants experiencing little or no stress or anxiety from surveys had 7% (SE 0.03%) greater EMA compliance (t1=2.62; P=.009) and 5% (SE 0.03%) greater daily compliance (t1=2.05; P=.04) than those experiencing greater stress or anxiety from surveys. Compared with those who indicated feeling judged by surveys, participants who did not endorse feeling judged had 6% (SE 0.03%) greater EMA compliance (t1=2.38; P=.02) and 9% (SE 0.02%) greater daily compliance (t1=4.02; P<.001). Participants who reported having a typical week had 8% (SE 0.02%) greater EMA compliance than those who reported having an atypical week (t1=3.58; P<.001); however, those with typical weeks had only marginally greater daily compliance (t1=1.79; P=.08). Willingness to participate again was not associated with EMA or daily compliance (t1=0.34, P=.74 and t1=−0.05, P=.96, respectively).

Table 4 displays the results of the mixed effects models examining reactivity to EMA participation, specifically anxiety and depression symptomatology. Both anxiety and depression scores decreased from baseline to follow-up (β=−1.77, P<.001 and β=−1.10, P=.03, respectively). However, no significant effects of EMA compliance were detected for either anxiety or depression; thus, the decrease in symptomatology was not associated with compliance.
Both models controlled for age, gender, sexual orientation, race and ethnicity, and housing status.

Overview
The qualitative findings, which were generated independently of the quantitative findings, were categorized under 5 main emergent themes. The first theme explains how participants felt the study design increased mindfulness and reflection, whereas the second captures instances in which study participation caused stress and anxiety. The third theme discusses the ways in which study participation incited behavior change, whereas the fourth addresses participants responding honestly to questions. The final theme captures participant suggestions for future study designs. A comparative analysis of the housed and unhoused samples indicated that these themes apply to both groups, as shown in Table 5 (SP: study participant).

Increased Self-reflection and Self-awareness
Self-reflection and awareness while participating in EMA were common points of discussion in the focus groups. In fact, this idea came up in all 4 focus groups. Many participants noted increased self-awareness regarding how their own thoughts, feelings, and social or physical contexts influenced their engagement in protective or risky behaviors.

Self-reflection Negative Case Analysis: Causing Stress and Anxiety
Although self-reflection and self-awareness were common in focus group discussions and seemed to promote a sense of calmness among participants, aspects of the EMA protocol also seemed to trigger stress, anxiety, and paranoia, particularly in response to location tracking. Approximately 16% (37/231) of participants reported in the exit survey that EMA caused somewhat or quite a bit of stress or anxiety. Fears regarding the personal nature of the questions and location tracking, such as those discussed by participants SP301 and SP403, who were unhoused and both enrolled via drop-in centers, were more common among young adult participants who were actively homeless (ie, focus groups 3 and 4; see Table 5 for more quotes by focus group and housing status).
Further stress appeared to arise from the repetitive nature of the prompting schedule, with EMA prompts approximately 2 hours apart posing the same array of questions; one participant commented directly on this repetition. In addition, the subject matter, combined with the annoyance of the prompting schedule, brought on an increased desire to use:

Especially since it was that ringing and a little bit of an irritant, and it's like, "Ooh, a drink would be really nice to kind of just not deal with this right now" [SP206, housed]

Responding Honestly
Across the 4 focus groups, participants discussed honesty in their survey responses. Although 81.8% (189/231) of participants agreed with being open and honest, others expressed greater distrust and, therefore, less honesty. Some participants oscillated between providing an honest picture of their day and, at times, lying; for one such participant, fluctuations in fatigue and social engagement influenced honesty in survey responses.
The occasions on which participants reported dishonesty, which occurred more often among unhoused participants, were largely related to the personal nature of the questions. One participant noted that the more specific and personal the questions got, the harder they were to answer honestly.

Suggestions for Future Studies: Technology
For many participants in the study without a phone of their own, the largest perk was receiving a phone with a full data plan for the whole week, which most participants chose to do; only 9.1% (21/231) of participants used their own phones for the study. Participants from focus group 4 (unhoused) discussed this benefit. A related discussion point arose in friend groups in which multiple people were enrolled in the study at the same time: one participant mentioned the difficulty of keeping track of their own phone when interacting with others who were also in the study.

Mixed Methods
In comparing the quantitative and qualitative arms of this study via a convergent parallel design, we found the qualitative responses from both housed and unhoused participants to confirm the quantitative findings (see Table 6 for an integration of the quantitative and qualitative findings). The findings regarding the previously discussed difficulties with charging devices were convergent. Qualitative findings regarding study-induced stress and anxiety, as well as interference with daily life, provided additional contextual information to the quantitative findings, offering an expansion in the interpretation of the results. As previously stated, unhoused participants self-reported significantly greater study-induced stress and anxiety (P=.02). However, the qualitative findings highlight that housed individuals, at times, also noted stress and anxiety related to study participation; this stress and anxiety seemed contextually different from that experienced by unhoused participants, which was often connected to paranoia, fears of snitching, and being watched by "the Feds" and could be directly related to street culture and economy. Similarly, quantitative findings showed increased reporting of study interference in daily life among unhoused individuals, whereas qualitative findings showed that both unhoused and housed participants noted interference. Housed participants most often talked about interference in terms of school or work responsibilities, whereas unhoused participants discussed needing to answer prompts to earn the compliance-based compensation, which made the study more likely to interfere with what they were doing.

Table 6 pairs each quantitative finding with the corresponding qualitative finding and an integration label. Stress or anxiety (confirmatory; expansion): unhoused participants were more likely to report that the study caused them stress or anxiety (P=.02); qualitatively, although surveys appeared at inopportune times for both housed (eg, while at work or school) and unhoused (eg, while visiting with a case manager) participants, unhoused participants discussed more stress regarding needing to answer prompts to earn the compensation tied to survey compliance. Interference with daily life (confirmatory; expansion): the study was reported to have interfered with daily life more among those unhoused than those in housing (P<.001). Difficulty charging (confirmatory; convergent): housed participants were more likely to be in locations with outlets, whereas charging had to be sought out by unhoused participants. Behavior change (confirmatory; complementarity): more unhoused participants than housed participants reported that the study caused changes in their behavior (P<.001); although both groups noted increased awareness of substance use related to the substance use questions, unhoused participants more often reported increased substance use, whereas housed participants trended toward a reduction in their use. Honesty (confirmatory; convergent): compared with those in housing, unhoused participants reported being less likely to be open or honest when answering survey items (P=.03); housed participants seemed to feel more comfortable with honest responses, whereas unhoused participants again noted paranoia and fear of snitching.
Additional statistically significant differences between housed and unhoused participants occurred regarding reported behavior changes and honesty in the survey responses. Although both housed and unhoused participants noted increased awareness of substance use related to substance use questions, quantitative findings showed that more unhoused participants reported that the study affected their behavior during the week. Qualitatively, we obtained complementary findings in that those who were unhoused more often reported increased substance use, whereas housed participants seemed to mention the awareness of substance use and trended toward a reduction in their use. Finally, convergent findings emerged regarding honesty in the responses. It was clear that housed participants seemed to feel more comfortable answering openly and honestly, perhaps as a result of the newfound freedom associated with transitioning from homelessness to housing [34]. This is contrasted by unhoused participants once again noting paranoia and fear of outing peers on the street.

Principal Findings
The results of this mixed methods study illuminate the experiences of housed and unhoused young adults enrolled in an EMA study for a 1-week period. Although no statistically significant differences in compliance by housing status were found, statistically significant differences emerged in the impact of study participation. Housing status affected young adults' engagement with EMA, with differences found in the ability to keep the device charged, interference with daily life, stress and anxiety associated with participation, behavior change as a result of EMA, and the ability to respond openly and honestly to prompts.
In terms of compliance, those who had difficulty charging also had lower survey compliance when the phone was charged, and unhoused participants reported greater difficulty charging. Perhaps unhoused participants who had difficulty charging nonetheless found ways to charge the phone, possibly because of the importance of the incentive. Similarly, less interference with daily life was associated with greater study compliance for both the daily and EMA surveys, and unhoused individuals noted greater interference. One possible explanation is a link between difficulty charging the device and interference with daily life: research apps may drain battery life more quickly than other apps and disrupt participants' typical charging patterns. If unhoused participants struggled to find a power source and spent much of their time finding ways to charge the phone [35] to maintain compliance (ie, get paid), this could interfere greatly with their day. This could also explain why unhoused participants reported greater stress and anxiety associated with study participation. Again, housing status was not associated with compliance rates; however, greater stress and anxiety resulting from study participation was associated with worse compliance rates.
Overall, study compliance was approximately 80%, with no significant differences detected explicitly related to housing status. However, this does not imply that housing status does not need to be considered in the study design. The findings clearly reveal greater impacts, and perhaps burden, for unhoused participants, which was confirmed in the qualitative interviews. Future use of EMA, particularly with unhoused individuals, should consider barriers to using technology in research. Although such designs are entirely feasible, a study that relies on technology such as a mobile device should consider issues related to access and how these may create increased burden. We observed how increased burden could produce additional stress in an already stressful environment, particularly with regard to being tracked. In some cases, the awareness of being tracked has the potential to exacerbate underlying mental health or substance use issues. Greater efforts to ensure comfort with protocols, including an enhanced focus on confidentiality, could be beneficial, especially given the histories of marginalization that have led to a deep-seated distrust of systems, including social service systems. Taking more time at the outset of EMA studies to ensure understanding and consent could promote greater trust and decrease the associated stress and anxiety. This could also increase honesty in responses, particularly among unhoused individuals, who reported being less honest in their responses compared with those in supportive housing, potentially because of perceived impacts on securing housing or other needed services and resources. Another method to potentially increase data quality is a lead-in period in which participants get a day or two of practice with the app and EMA questions before their responses are recorded for analysis.
In testing for reactivity using anxiety and depression symptomatology before and after the EMA week, the results showed that compliance was not significantly related to the degree of anxiety and depression symptomatology reported. However, reported symptoms decreased from before to after the EMA period. This is particularly interesting because some participants, especially unhoused participants, discussed increased stress and anxiety due to study participation. It indicates that the momentary stress of being prompted did not affect symptomatology and was perhaps fleeting. Although participants may have frequently thought about the bother of the study and become momentarily overwhelmed, it does not appear to have been a lasting experience with long-term impacts, despite 22.9% (53/231) of participants feeling that the study affected their behaviors. The fact that compliance was not significantly related to symptomatology (ie, completing more prompts was not associated with increases in symptomatology) indicates low reactivity to EMA prompting and participation. More work is needed to examine the overall decreases in symptomatology that may be associated with EMA, as reactivity cannot be definitively ruled out as an explanation for these decreases, echoing previous literature [36].
The study's implications include support for using intensive longitudinal methods such as EMA with both housed and unhoused young adults. Acorda et al [26] explored the impact and acceptability of EMA among 18 youths experiencing homelessness and made recommendations for its use with young people who are actively homeless. Their results reinforce the findings of this study, most notably the effects of increased self-awareness and the potential for behavior change as a result of EMA. Given the discussion regarding perceived behavioral change and behavioral intentions, EMA also presents possible opportunities for intervention work. It is clear from both their work and the additional support offered by these analyses that EMA is highly acceptable to young adults who have experienced homelessness, both housed and unhoused, with several special considerations, particularly around housing status and confidentiality.

Limitations
Despite the strengths of analyzing this innovative method for vulnerable young adults, several limitations must be acknowledged. The results suggest that some participants did not always answer honestly, and we have no way of knowing which responses to trust when using this method with this population. Future work may want to test the validity of this method with this population, perhaps adapting the study design to include self-checks within survey protocols and questions about honesty at the conclusion of each EMA survey. In addition, relying on focus groups as the qualitative methodology could lead to groupthink more readily than individual interviews would, particularly regarding sensitive topics where group interactions could be detrimental to the discussion. Inquiring about sensitive topics is necessary for epidemiological studies; however, a focus solely on what may be perceived as negative aspects, without the inclusion of a strengths-based perspective, lacks an equity lens, and this was felt by some SPs. Finally, we considered the difference between completion and compliance within the EMA context and ultimately defined compliance as the number of prompts completed out of those received, based on the study aims. Care should be taken when interpreting the findings, particularly if the interest is in the technological aspects of EMA methods, as this study did not focus its analysis on technical issues, although they are briefly reported.

Conclusions
Although with caveats, this study produced evidence in favor of using intensive longitudinal designs with young adults who are currently or were formerly homeless. Intensive longitudinal methods are well suited to capturing experiences associated with the chaotic and unstable environments of homelessness, addressing the limitations of cross-sectional and traditional longitudinal designs. However, the findings of this study show that these chaotic and unstable environments must be considered at every step of the research process. These findings have implications for research development and design, data collection, and analysis.