DOI: 10.1145/3613904.3642630

Promoting Engagement in Remote Patient Monitoring Using Asynchronous Messaging

Published: 11 May 2024

Abstract

Remote patient monitoring is becoming increasingly instrumental to healthcare delivery but can substantially hamper the interpersonal communication that underlies standard clinical practice. In this work, we explore the benefits imparted to patients, clinicians, and researchers by an asynchronous messaging feature within a platform called COVIDFree@Home. We created COVIDFree@Home to assist the healthcare system in a large metropolitan city in North America during the COVID-19 pandemic. Clinicians used COVIDFree@Home to monitor the self-reported symptoms and vital signs of over 350 COVID-19 patients post-infection. Using thematic analysis of user-initiated messages, we found the messaging feature helped maintain protocol adherence while allowing patients to ask questions about their health and clinicians to convey empathetic care. This feedback cycle also led to higher quality data for hospitalization prediction, as the revisions significantly improved the AUROC of a machine learning model trained on demographic variables, vital signs data, and self-reported symptoms from 0.53 to 0.59.


1 INTRODUCTION

The proliferation of consumer-grade medical devices and wearables with physiological sensors has made it easier than ever for healthcare systems to support populations with chronic conditions or low-risk illness remotely [31]. For the purposes of our work, we define remote patient monitoring as a subset of telehealth that involves the streamlined flow of health data (e.g., vital signs, self-reported symptoms) from patients to clinicians beyond traditional healthcare settings [52]. Allowing patients to remain at home until in-person care becomes necessary can significantly reduce temporal and financial burdens for both patients and clinicians [59]. Remote monitoring also improves data-driven clinical decision-making, which can ultimately reduce acute care use [30, 72, 76].

The aforementioned benefits of remote patient monitoring come with their own set of challenges. Among them is the fact that healthcare delivery is typically built upon interpersonal communication and face-to-face interactions between clinicians and patients. Removing these feedback mechanisms can leave patients feeling unsure about their health status or unheard by their healthcare providers [10, 66, 89, 91]. Patients may not feel comfortable with the interfaces or technologies associated with the remote monitoring platform [22], which can lead them to unintentionally provide low-quality data to clinicians. When presented with such data, clinicians often have to spend extra time validating the information in their records to avoid potentially wasteful decisions [5]. The efficacy of a remote monitoring platform is also contingent upon patient adherence to the data collection regimen; over time, patients may lose the motivation to continue providing data, leading to gaps in the monitoring or withdrawal altogether [60, 82].

In this work, we argue that incorporating an asynchronous messaging feature into a remote patient monitoring platform can alleviate these concerns and yield tangible benefits for three key stakeholder groups: patients, clinicians, and researchers. Electronic health record (EHR) systems like Epic's MyChart already allow patients and clinicians to exchange messages asynchronously inside web portals and smartphone apps. However, the topics of these messages and the patterns of these discussions have not been deeply analyzed in the context of remote patient monitoring. Diving deeper into these interactions would not only reveal the benefits of asynchronous messaging but also inform the design of future messaging interfaces between patients and clinicians.

To generate these insights, we studied the role of the asynchronous messaging feature in the COVIDFree@Home platform. COVIDFree@Home was deployed in response to the COVID-19 pandemic in Toronto to better understand the symptomatology of the virus while supporting the local healthcare system. Patients who tested positive for COVID-19 but were suitable for outpatient care were recruited onto the platform. They downloaded a mobile app that prompted them to fill out a symptom questionnaire twice daily, and they were given a pulse oximeter and a thermometer to use twice daily so that their vital signs could be monitored. The data was made available to clinicians on a web-based platform that could be cross-referenced with the EHR systems at participating hospitals. Both patients and clinicians could also use the mobile application and the web-based platform, respectively, to send messages to one another regarding any concerns or questions. Over the project's lifespan of 1.5 years, COVIDFree@Home supported 359 patients and identified 16 patients who required escalation of care.

After we provide a brief overview of patients’ engagement with COVIDFree@Home overall, we examine the usage of the messaging feature in particular to demonstrate that it did not add significant overhead to clinicians’ workflows. We then present the results of thematic analysis applied to the content of the messages, which generated evidence that our messaging feature led to the benefits we anticipated for patients and clinicians. We further illustrate these benefits by presenting two case studies that highlight interesting patient-clinician interactions that facilitated the discovery of data-related issues.

These interactions improved not only patient care but also the quality of the dataset that was available to researchers. Having written documentation of data entry mistakes, sensor errors, and emergent issues made it easier for researchers to add, remove, and revise records in the dataset as necessary. As such, we investigate the performance boost these revisions yield on a few-shot machine learning model trained to predict hospitalization according to demographic variables, vital signs data, and self-reported symptoms. We find that applying dataset corrections documented through the messaging feature improved the model's area under the ROC curve (AUROC) from 0.53 to 0.59.

To summarize, we examine how asynchronous messaging alleviates the lack of interpersonal communication in existing remote health monitoring platforms and the tangible benefits it renders for patients, clinicians, and digital health researchers. We convey this contribution through the following components of our work:

A quantitative summary of how the messaging feature was used within the COVIDFree@Home platform,

A qualitative analysis of the messages that were sent through the platform to highlight the messaging feature's benefits for patients and clinicians,

A quantitative analysis showing that corrections captured through a messaging feature can improve the performance of a machine learning model to the benefit of researchers, and

A series of implications for front-end designers based on our findings.


2 RELATED WORK

To provide context for our research, we first examine prior research on the opportunities and challenges associated with remote patient monitoring. We then discuss the trade-offs between different digital communication channels for clinical encounters.

2.1 Opportunities in Remote Patient Monitoring

Multiple survey papers have documented the degree to which remote patient monitoring has improved patient care and enhanced the ability of clinicians to track patients in nontraditional healthcare settings [23, 70, 83]. Nearly 95% of the world's population has access to mobile networks [79], so remote patient monitoring can support patient care within rural and developing communities where healthcare systems are not easily accessible. Active work is also being done to improve connectivity and network speeds in such communities so that remote health monitoring solutions can be efficiently delivered in these contexts [34, 58].

The health data that underlies remote monitoring platforms can take multiple forms, the most prominent being self-reported data and vital signs. While health data has been self-reported over web portals and phone services for the past few decades, the prevalence of vital signs data has grown due to the proliferation of wearables like fitness trackers [93]. Such devices are able to passively and continuously capture vital signs data, leading to the generation of objective and fine-grained health data. On top of this, machine learning models can be deployed on the cloud or on the devices themselves to extract advanced metrics relevant to the disease domain. Past work has slowly progressed towards such clinically relevant endpoints [14, 37, 46, 47, 77], but widespread clinical adoption remains to be seen.

Although remote patient monitoring has been used to support people with medical concerns ranging from weight management [4, 29, 85, 88] and mental disorders [24] to respiratory illness [63, 65, 81] and sleep disorders [25, 56], the onset of the COVID-19 pandemic accelerated the need for scalable patient care [70]. An increasing number of publications are now emerging that describe various efforts to monitor COVID-19 patients in outpatient environments. For example, the GetWell Loop platform by Annis et al. [3] offered patients educational materials about COVID-19 while asking them to fill out daily questionnaires related to their symptoms. Their platform leveraged a virtual workforce of providers and medical students who were tasked with overseeing patient responses and triaging their concerns. The authors focused on quantitative metrics of engagement and therefore did not closely examine the nature of various interactions stakeholders had on their platform.

Another example of a remote patient monitoring system for COVID-19 is demonstrated by Gordon et al. [27]. Their platform required patients to track their symptoms and vital signs using daily questionnaires and medical devices. The authors claimed that enrollment in their system decreased the likelihood of hospital readmission, but they also did not comment on communication between patients and clinicians. While the findings of our work are grounded in a platform similar to the one developed by Gordon et al. [27], our work extends upon this literature by investigating the role that asynchronous messaging can serve in remote patient monitoring platforms.

2.2 Challenges in Remote Patient Monitoring

Despite the benefits that remote patient monitoring has to offer, it also presents numerous challenges for both clinicians and patients. Many clinical workflows already involve a number of complex software-based platforms, most notably EHR systems. Introducing yet another platform to such workflows can incur significant burden and inefficiencies, leading such platforms to be abandoned [12]. Conversely, patient attrition can be high when the burden of generating data outweighs the benefits patients experience from the platform [50, 51]. Additionally, high patient engagement can result in positive patient outcomes [57]. Ensuring that this trade-off is beneficial for both stakeholders is important to the deployment of any remote monitoring platform.

Another set of challenges revolves around data quality, particularly with respect to vital signs data. Improper device usage, sensor failure, and irregular data entry can lead to data quality concerns [75]. In the case of body-worn sensors, for instance, Baig et al. [6] note that body movement, electromagnetic interference, and sensor drift can all lead to erroneous measurements. Vital signs data can lack context as well [2]. For example, an elevated heart rate can be attributed not only to a health problem but also to physical activity or caffeine consumption. This overall disconnect between what information clinicians see and what patients are experiencing creates uncertainty [62], forcing clinicians to expend additional effort validating the data they receive before interpreting it [5]. Researchers must also ask similar questions about the data they receive from these platforms if they decide to pursue their own offline analyses or data-driven models.

A third set of challenges entails reduced face-to-face interaction between patients and clinicians [49]. Many platforms have automated processes in place to flag abnormalities in patient data, but clinicians are still tasked with assessing patient data for themselves [1, 48]. Patients may not always be consciously aware of these processes, especially when communication is limited to moments when there are noteworthy health concerns. This can lead to patients feeling unheard or uncertain about their health status [84].

This work aims to demonstrate that an intentionally designed messaging feature built into a remote patient monitoring platform can mitigate all three of the aforementioned challenges. After showing that the messaging feature does not impose significant burden on any of the platform's stakeholders according to platform usage statistics, we present the findings from a thematic analysis to show that our messaging feature engendered increased engagement and empathy. We then show that the dataset modifications logged through the messaging feature improved data quality in a way that translated to a performance boost in a machine learning model.

2.3 Digital Communication Between Patients and Clinicians

In recent years, advancements in digital communication technologies have paved the way for various digital means of facilitating communication between patients and clinicians outside of clinical settings. Two of the most popular telehealth communication channels are phone and video calls [8, 92]. These modalities support many of the same verbal cues as face-to-face conversations, such as tone and intonation. Video calls also support non-verbal cues that are available during face-to-face conversations, including body language and facial expressions [36, 39, 71]. Clinicians rely on these cues to convey empathy, build rapport, and make additional judgments about their patients; meanwhile, patients rely on these cues to better convey their concerns and enhance their understanding of clinicians’ determinations [36, 71]. Although phone and video calls circumvent the need for patients and clinicians to be co-located, they still require both conversational partners to coordinate and be available at the same time. In the context of remote patient monitoring, asynchronous communication eliminates the need for immediate responses, enabling healthcare providers to review and assess patients’ data at their own pace.

Text-based communication lacks the aforementioned verbal and non-verbal cues, but it pervades many digital health systems because of its convenience and low technology requirements [32]. Even though there are synchronous text-based platforms like live messaging services and mobile chat applications [28], the majority of patient-clinician conversations occur over asynchronous platforms such as email, online forums, and web portals [9, 20]. This includes the messaging features included in many prominent EHR systems, including the previously cited MyChart by Epic. Even though there has been significant discussion about the design and usage of synchronous and asynchronous messaging features in the context of patient-clinician interactions [42, 45, 67, 73], these investigations have stopped short of examining the conversations that occur in the context of remote patient monitoring.


3 The COVIDFree@Home Study

The COVID-19 pandemic began with substantial uncertainty surrounding the virus, including both its short- and long-term health effects. As the number of cases grew worldwide, doctors quickly became overburdened triaging and managing patients. Additionally, social distancing measures and hospital capacity limits made it challenging to keep patients under the supervision of trained staff and clinical equipment.

It was these issues that led to the introduction of the COVIDFree@Home platform. In the rest of this section, we describe the patient eligibility requirements for being enrolled in our study, our process for designing the platform, and the final design of our platform. This study was conducted with approval from the ethics boards for the participating hospitals. During the analysis of our data, patient privacy was maintained through de-identification, and access to the EHR system was limited to the clinicians and hospital staff.

3.1 Patient Enrollment

The COVIDFree@Home platform was designed to support patients who tested positive for COVID-19 but were feeling well enough to not require hospitalization. Given the onus on hospitals to prioritize healthcare delivery to severe cases, these patients were originally being sent home and instructed to seek medical attention if their health deteriorated. Doing so left patients' recovery unmonitored, forcing them to make decisions about their health on their own. This also meant that useful data during cases of deterioration was not being collected to further researchers' understanding of virus symptomatology.

To remedy these shortcomings, individuals who had recently tested positive for COVID-19 were contacted by a research coordinator to participate in the COVIDFree@Home study. On average, patients were recruited 4–5 days after their positive test result. This lag was suboptimal for both patient care and our study but was largely a consequence of the propagation of positive test results from testing facilities to the research team. Although it was possible to miss relevant health events during this time, patients were generally healthy and exhibited few symptoms at the time of recruitment.

Patients who consented to participate in the study were informed about the expectations of the study and sent instructions on installing the COVIDFree@Home app. Patients were also sent a thermometer for measuring core body temperature and a pulse oximeter for measuring oxygen saturation and heart rate. Due to supply chain challenges during the pandemic, the oximeters were sourced from multiple companies and the thermometers were phased from multi-use to single-use models. The medical devices usually arrived after patients installed the mobile app, but patients were encouraged to upload any data they could provide when possible.

3.2 Design Process

Our team was composed of partners with complementary roles: engineers who built the platform, designers who helped optimize the user experience, clinicians who provided insights into patient care, and people who had lived experience with COVID-19 to shed light on patient needs. We utilized an iterative design process with collaborative input from all team members over the course of 6 months to produce the first working version of COVIDFree@Home; new features were added beyond that point, but they are beyond the scope of this work as they do not pertain to the platform’s messaging system or the kinds of data that were collected.

The design process began with an initial set of meetings to define the platform's goals. Within the scope of this paper, key objectives included providing patients with peace of mind while they recovered and keeping doctors from being overburdened by the system. After creating wireframes of interfaces that would be available to patients and physicians, we went through iterations of prototyping, application demonstrations, and testing to refine our designs. Because the pandemic precluded in-person interaction, our entire design process was coordinated through virtual meetings. It was through this process that we arrived at the final design presented in the rest of this section.

3.3 System and UI Design

The COVIDFree@Home platform was designed as a patient care tool first and as a research tool second. In other words, platform adjustments and research components were kept limited in order to maximize the care patients received. For the purposes of this work, we describe the mobile app that patients used to report their health data and the interfaces that both patients and clinicians utilized for exchanging messages below.

3.3.1 Mobile App for Reporting Data.

Patients were asked to record their symptoms and vital signs through the app twice per day: once in the morning and once in the evening. They were sent notifications at 10 AM and 8 PM if they had not submitted information by those times. Patients were asked to report the progression of a core set of seven symptoms: fever, sore throat, runny nose, cough, shortness of breath, body ache, and severe fatigue. Using a supplementary window, they were also able to add other symptoms (e.g., loss of taste) to the main symptom page. Symptom progression was rated according to the following scale: new, better, same, worse.

Since the devices for gathering vital signs measurements could not interface with the smartphone app directly, patients were asked to manually enter the readings using numeric text boxes accompanied by bounded sliders to mitigate potential entry mistakes. Beyond supporting the transmission of health data, the app also had pages for reviewing the study protocol, entering information about hospital visits, and troubleshooting. More importantly, the app included functionality for exchanging messages with the clinicians responsible for monitoring the platform.
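As an illustration of this design decision, the sketch below shows bounded manual entry in Python. The plausibility ranges are hypothetical placeholders; the actual slider limits enforced by the app are not reported here.

```python
# A minimal sketch of bounded vital-sign entry. The bounds below are
# hypothetical; the exact slider limits used by the COVIDFree@Home app
# are not specified in this paper.
VITAL_BOUNDS = {
    "oxygen_saturation": (70.0, 100.0),  # percent
    "heart_rate": (30.0, 220.0),         # beats per minute
    "temperature": (34.0, 42.0),         # degrees Celsius
}

def validate_reading(vital: str, value: float) -> float:
    """Reject manually entered readings that fall outside the slider bounds."""
    low, high = VITAL_BOUNDS[vital]
    if not low <= value <= high:
        raise ValueError(f"{vital}={value} is outside the plausible range [{low}, {high}]")
    return value
```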

3.3.2 Messaging Feature.

Figure 1: The two screens within the patient-facing COVIDFree@Home app that enabled patients to exchange messages with clinicians: (left) the screen for sending messages, and (right) the screen for receiving messages.

Figure 2: The two pages within the clinician-facing COVIDFree@Home dashboard that were relevant to the messaging feature: (top) a profile page showing the patient’s contact information, clinical notes, and the messaging feature; and (bottom) a chart summarizing the patient’s health data and messaging history. Identifying information has been blurred for privacy.

Clinicians reviewed patient data through the dashboard on roughly a daily basis, prioritizing patients with worsening symptoms or anomalous data. Patients received an automatically generated message from COVIDFree@Home whenever their file had been reviewed. Beyond receiving automated messages, patients were encouraged to use the messaging feature within their app to contact their assigned clinician(s) if they had any questions or concerns about their health. They were also encouraged to use the messaging feature for any questions they had regarding the study protocol and logistics. Patients were given the explicit caveat that using the COVIDFree@Home app was not a replacement for emergency services, and this warning was reiterated throughout the app as well. Clinicians were able to view any messages sent by their patients through a dashboard created specifically for COVIDFree@Home.

The messaging interfaces available to patients are shown in Figure 1, while the corresponding interfaces available to clinicians are shown in Figure 2. Most existing platforms for remote patient monitoring include a messaging feature similar to popular chat interfaces like WhatsApp or standard SMS [35, 86]. However, the clinicians on our team noted that this style of messaging might create an expectation for multiple rounds of synchronous discussion, which would impose added burden on those who would be remotely monitoring patients [15]. To avoid this behavior, we designed the messaging feature within the patient-facing app so that it had separate interfaces for sending and receiving messages. The goal of this design was to encourage patients to send self-contained messages, thereby minimizing the number of back-and-forth exchanges. This intention was also implied by the size of the text box on the interface for sending messages, suggesting that patients should be able to fit everything they needed to say in that area. Meanwhile, the clinician-facing dashboard enabled clinicians to examine patient messages in tandem with relevant demographic and health-related information. Doing so provided important background information about the patient alongside the messages as clinicians reviewed patient files.


4 Engagement with the COVIDFree@Home Platform

Before examining the benefits stakeholders obtained from the messaging feature, we first provide an overview of the engagement that patients and clinicians had with the COVIDFree@Home platform. Since the focus of this work is on the messaging feature, we only briefly summarize overall engagement with the platform to provide context and leave further examination of these trends as future work.

4.1 Study Overview

Table 1:

Outcome | Count / Percentage
Number of participating hospitals | 4
Number of clinicians who used the dashboard | 18
Number of recruited patients | 422
Fraction of patients who used the app at least once | 359/422 = 85.1%
Fraction of hospitalizations | 16/359 = 4.5%
Fraction of patients withdrawn from the study | 3/359 = 0.8%
Average number of patients enrolled at one time | 7.6
Median duration of enrollment | 7 days
Average fraction of days with at least one report per patient | 88.7%
Number of symptom reports* | 2,769
Number of symptom reports per patient (mean ± std)* | 7.7 ± 5.1
Number of oximeter readings* | 2,766
Number of oximeter readings per patient (mean ± std)* | 7.8 ± 5.0
Number of thermometer readings* | 2,417
Number of thermometer readings per patient (mean ± std)* | 6.8 ± 5.1
  • *Excluding data provided after patients' enrollment in the study.

Table 1: Summary statistics for patient enrollment and engagement in the COVIDFree@Home study.

4.1.1 Enrollment.

Our analysis covers data that was collected between November 2020 and May 2022. Eighteen clinicians across 4 hospitals utilized COVIDFree@Home to monitor patients. During that time, 422 patients were recruited to enroll, of whom 356 (84.4%) signed into the smartphone app and submitted data at least once without withdrawing. Of those patients, 259 (72.8%) used Apple products, among whom 13 utilized tablets like the iPad. The remaining 97 (27.2%) patients had Android mobile devices, with the most popular models being Samsung and Google phones. Table 1 enumerates key statistics related to how patients interacted with the platform.

The median duration of enrollment for patients was 7 days. Including the lag between diagnostic testing and recruitment, this period typically covered the duration of patients’ COVID-19 infections. There were five patients who continued to provide data past their infection. When our patient coordinator reached out to these patients to inform them that their infection had passed and they could stop providing the data, they stated that they preferred to use the system as it provided them peace of mind. For the summary statistics in Table 1, we exclude these patients as outliers.

4.1.2 Patient Engagement with COVIDFree@Home.

Across the entire cohort, we obtained roughly 3.1k symptom reports, 3.3k oxygen saturation readings, and 2.9k temperature readings. In other words, we received 8.8, 9.1, and 7.9 submissions on average from each patient, respectively. The difference between these numbers may be partly attributed to patients’ access to the relevant data collection instruments, the perceived burden of collecting data, the perceived importance of the data, and occasional device malfunctions.

4.1.3 Clinician Engagement with COVIDFree@Home.

Clinicians often found time to go over the dashboard during their shifts. Excluding long-tailed outliers beyond an hour, clinicians took a median of 5 seconds and a mean of 44 seconds to review each patient profile. Three-quarters of patient reviews took under 25 seconds, and the maximum time spent reviewing a profile was roughly 25 minutes. The long tail in the distribution can be attributed to cases when a patient's recent health data raised concern, in which case clinicians would take the time to type notes, send a message to the patient, or perform other actions. These statistics show that clinicians were generally able to review patient profiles quickly.

4.2 Engagement with the Messaging Feature

Table 2 describes the number of messages that were sent through the COVIDFree@Home platform. Roughly 3.3k automated messages were delivered to patients as a result of a clinician reviewing their profile. Excluding those messages, 324 messages were exchanged on the platform, with 102 (31.5%) being sent by patients and 222 (68.5%) being sent by clinicians. Spread across the study duration, these messages imposed little burden on clinicians: on average, clinicians received 0.24 messages and sent 0.70 messages per day.

Table 2:

Outcome | Value
Number of automated messages sent | 3,299
Number of user-initiated messages sent | 324
Fraction of messages sent by patients | 102/324 = 31.5%
Fraction of messages sent by clinicians | 222/324 = 68.5%
Average number of messages sent by patients per day | 0.23 ± 0.52
Average number of messages sent by clinicians per day | 0.67 ± 1.39

Table 2: Summary statistics for usage of the messaging feature in the COVIDFree@Home platform.

Figure 3: (top) The distribution of messages sent by patients and clinicians according to the time of day. (bottom) A graph showing the proportion of messages that were given a response by the other conversational partner within a given time delay. This graph only involves the message pairs related to the same topic.

As shown at the top of Figure 3, patients most often sent messages during mornings and evenings, with a dip in the afternoon. Since patients were sent notifications if they had not uploaded data by 10 AM and 8 PM, this trend suggests that patients often decided to read and send messages around these times as well. Clinicians demonstrated similar patterns, although they often sent their messages earlier than the automatic notifications to maximize the chance that they would be viewed by patients. The times at which clinicians sent messages were also more confined to their work hours. The bottom of Figure 3 shows the time it took for clinicians and patients to reply to one another's messages. To generate this distribution, the message logs were first divided into a series of contiguous conversations according to the topic of the message content. Any time the message sender switched within a given conversation, the elapsed time between the two messages was calculated and added to the graph. The average time it took for all platform users to reply was 7 hours; patients took 4 hours to reply on average, whereas clinicians took 9 hours.
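To make this procedure concrete, the following is a minimal sketch of how reply delays could be computed from topic-segmented message logs. The data layout and function names are hypothetical, not the study's actual analysis code.

```python
from datetime import datetime

def reply_delays(conversation):
    """Compute reply delays within one topic-contiguous conversation.

    `conversation` is a chronologically sorted list of (timestamp, sender)
    tuples, where sender is "patient" or "clinician". A delay is recorded
    each time the sender switches, mirroring the procedure described above.
    """
    delays = []  # list of (responder, elapsed_hours)
    for (t_prev, s_prev), (t_next, s_next) in zip(conversation, conversation[1:]):
        if s_next != s_prev:
            delays.append((s_next, (t_next - t_prev).total_seconds() / 3600))
    return delays

# Hypothetical example: a patient question answered the next morning.
conv = [
    (datetime(2021, 3, 1, 21, 15), "patient"),
    (datetime(2021, 3, 2, 8, 40), "clinician"),
]
print(reply_delays(conv))  # [('clinician', 11.416...)]
```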

We neither draw explicit comparisons between these averages nor report the response rates of the two groups to avoid mischaracterizing their responsiveness. Patients were told during enrollment that they would receive replies to their messages within 24 hours, and Figure 3 shows that this goal was achieved. Nevertheless, clinicians often leveraged this flexibility to balance their COVIDFree@Home duties with other priorities. Patients also occasionally sent messages at night; although a response the following morning would be reasonable in those cases, they still produced longer response times. Regarding the response rates themselves, many messages did not require a response, and this determination was not as simple as annotating whether or not the message contained a question. There were also situations when the conversational partner responded to a message in a way that was not captured by the messaging feature. For example, when clinicians requested additional data from patients, the latter would upload data without confirming in the messaging feature that they had done so. Conversations were also sometimes continued over the phone or a videoconferencing platform, in which case the final resolution of the topic was never captured in the messaging feature.

Despite these caveats, our observations show that the messaging feature was leveraged by both patients and clinicians in a way that did not impose substantial overhead. Both stakeholder groups often sent messages near the times at which the automated notifications were initiated, so this design decision may have influenced asynchronous messaging behavior.


5 MESSAGING FEATURE ANALYSIS

In this section, we delve deeper into the content of the messages that were exchanged by both patients and clinicians. We then describe two specific examples of times when the messaging feature allowed clinicians to intervene and correct inaccurate data.

Figure 4: The breakdowns of the message types that were sent by (left) clinicians and (right) patients.

Figure 5: The topic of the messages that were sent by (top) clinicians and (bottom) patients.

5.1 Categorization of Message Topics

The content of the messages exchanged over the COVIDFree@Home platform was annotated using thematic analysis, more specifically open coding [80]. Using an inductive approach, two researchers independently generated codebooks on a subset of messages, after which they convened to agree on a shared codebook. The researchers then independently coded all messages according to that shared codebook. Although multiple codes could have applied to the same message, the researchers were instructed to select the single code they felt was most relevant to each message in order to facilitate the calculation of an inter-rater agreement score. The coding was performed in a spreadsheet configured to facilitate the process. This process led to an inter-rater agreement of 81.5% for patient messages and 83.1% for clinician messages according to Krippendorff's alpha. After the first round of coding, the researchers deliberated over conflicting codes and reached 100% agreement in order to generate a final label for each message. A breakdown of the types of messages that were sent is shown in Figure 4, and the distribution of the message topics is shown in Figure 5. Example messages for each topic can be found in Appendix A.
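As a concrete sketch of the agreement computation, the snippet below applies the open-source krippendorff Python package to hypothetical nominal codes; it illustrates the calculation rather than reproducing the authors' spreadsheet-based workflow.

```python
# A minimal sketch of computing Krippendorff's alpha, assuming each
# message's code has been mapped to an integer ID. The codes below are
# hypothetical placeholders.
import numpy as np
import krippendorff

# Rows = raters, columns = messages; np.nan would mark a skipped message.
reliability_data = np.array([
    [0, 1, 1, 2, 3, 0, 2],   # researcher A's codes
    [0, 1, 2, 2, 3, 0, 2],   # researcher B's codes
])
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```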

5.1.1 Clinician-initiated Messages.

The categories listed at the top of Figure 5 are described below:

Request for More Data: The most common set of messages involved requests for patients to provide additional vital signs data through the COVIDFree@Home smartphone app, particularly when clinicians wanted to verify a recent measurement that appeared concerning. Two examples of such exchanges are described in Section 5.3, where data was identified as inaccurate through messaging and outside communication.

Absence Check-in: When patients stopped uploading data for a period of time, clinicians inquired into whether they were experiencing difficulties with their health or the COVIDFree@Home platform. Patients occasionally responded by noting that they were either too sick or too busy to upload data. At other times, patients confirmed that they were having technical problems with the app that were later remedied.

Study Logistics: Clinicians often responded to patient-initiated questions related to the study protocol, such as the expected duration of their enrollment in the study. Clinicians also helped patients navigate the app or address concerns about their equipment.

Wellness Checks: The second-most common message topic involved inquiries about patients’ general health and wellbeing without any specific mention of their health data. Given the uncertainty surrounding COVID-19 at the time, these messages met patients’ emotional needs and provided empathetic care by giving them assurances that an expert was looking after their health [74]. In some cases, patients responded to these messages by reporting their health status, making them similar to Requests for More Data. More often than not, however, these messages were considered to be in their own category since clinicians did not ask follow-up questions when patients did not provide new information about their health.

Health Data Interpretation: When patients uploaded data, clinicians sometimes acknowledged viewing the new information and provided them with feedback about their health. For example, when a patient uploaded an oxygen saturation that was higher than their previous readings, their clinician sent a message noting that their health seemed to be improving.

Health Information and Recommendations: As expected, patients occasionally sought professional advice regarding their health or medications they could take to mitigate their symptoms. Clinicians answered these questions through the COVIDFree@Home messaging feature when they could, but in more complicated situations, they sometimes scheduled a phone call or a videoconferencing session to have a deeper conversation.

Expressions of Gratitude: To further demonstrate empathetic care, clinicians sent notes of appreciation to patients who promptly replied to messages or maintained high adherence to the study protocol.

5.1.2 Patient-initiated Messages.

The categories listed at the bottom of Figure 5 are described below:

Health Questions: Unlike with mobile health platforms involving wearables in which users only have access to self-reported data, patients using COVIDFree@Home had direct access to all of their health data. Therefore, patients frequently asked about the significance of their symptoms and the meaning of their vital signs measurements. This topic of communication may have also promoted engagement with the platform since patients may have derived direct benefit from providing data to the clinical team.

Study Questions: As mentioned earlier, patients often contacted the clinicians to ask for clarifications about the study protocol — when they would receive the medical devices, when they could stop providing data, etc.

Prompted Health Updates: These messages were sent in response to clinician-initiated questions regarding patients’ health.

Prompted Study Logistic Updates: Likewise, these messages were sent in response to clinician-initiated questions regarding patients’ enrollment in the study.

Unprompted Health Updates: Patients frequently provided updates about their health without being requested for more information by the clinicians. It should be noted that 14 of the 29 messages in this category were sent by one patient who was particularly active with the COVIDFree@Home platform.

Unprompted Study Logistic Updates: One patient took the initiative to inform clinicians about changing their enrollment status. This individual noted that they felt fully recovered and no longer needed their health to be monitored (contrary to the study protocol), so they informed the clinicians that they would not be uploading any more data.

Health Data Corrections: In some instances, patients uploaded data that they later realized was not accurate, so they sent follow-up messages notifying clinicians of the error.

Study Troubleshooting: On occasion, patients asked clinicians for help on how to use the COVIDFree@Home application. These problems often involved issues related to in-app navigation, particularly for logging symptoms outside of the core set listed on the symptom page. Although COVIDFree@Home never experienced any downtime, some patients reported visual glitches that impeded their ability to contribute data. Issues that could not be solved by clinicians were forwarded to the research team and addressed accordingly.

Expressions of Gratitude: These messages often involved patients thanking clinicians for responding to their messages or for monitoring their health in general. There were two instances when this was the sole content of the message; however, there were many more instances when messages with other codes included some form of gratitude before continuing the main topic of the conversation.

5.2 Conversation Patterns

Figure 6: The conversation patterns that were observed (top) from clinicians to patients and (bottom) from patients to clinicians.

Figure 6 illustrates common sequences across the various message categories. These figures were generated by identifying pairs of connected messages; this pairing was performed by the same researchers who coded the messages. It is worth noting that not all messages from Figure 5 are represented in Figure 6, as messages that did not receive a response were excluded. The most common exchanges between patients and clinicians often dealt with either the study protocol or patients' concerns about their health. As expected, most Study Questions and Health Questions sent by patients were answered by clinicians with messages about Study Logistics and Health Information and Recommendations, respectively. Many messages extended beyond information exchange, including elements of gratitude, empathy, and rapport building. Clinicians often received Expressions of Gratitude when they provided patients with information about Study Logistics, and patients received Expressions of Gratitude themselves when they provided clinicians with updates on their status.

Messages from both stakeholders eventually led to new information that allowed patients to receive more personalized and timely care. When clinicians sent Requests for More Data and Wellness Checks, responsive patients either replied with a Prompted Health Update or simply uploaded new data directly. Patients even provided health data outside of the primary monitoring mechanisms through Unprompted Health Updates. These exchanges also helped generate a more reliable and complete dataset of patient health that can be used by researchers to model symptom progression.

5.3 Case Studies

To further illustrate the benefit of the messaging feature to clinicians and patients, we provide examples of when conversations through COVIDFree@Home directly led to data corrections.

5.3.1 Patient #996653.

This patient reported an oxygen saturation of 82% on their first day in the study. Since this reading was well below the internal threshold for concern of 92%, the attending clinician sent the patient a message that day asking for another reading. The patient did not respond or upload another oxygen saturation measurement, so the clinician called the patient. According to the clinical notes taken in the COVIDFree@Home dashboard, the patient reported that they were dizzy but otherwise felt fine. The clinician recommended that the patient go to the emergency department due to their low oxygen saturation reading, but the patient declined. The clinician then insisted that the patient update the clinical team about their health the following day. On that day, the clinical team and the patient had a videoconferencing session where they identified that the oximeter was faulty. The patient was advised to use a spare oximeter that another family member was using. For the remainder of the patient's enrollment, their oximeter readings were over 96% and no other issues were encountered during their recovery.

5.3.2 Patient #358687.

This patient reported a heart rate of 99 beats per minute and an oxygen saturation of 85%. As before, the oxygen saturation reading fell below the internal threshold for concern, so the attending clinician later messaged them to request another reading. It was eventually discovered that the patient had confused the sliders for heart rate and oxygen saturation; they had meant to upload an oxygen saturation of 99% and a heart rate of 85 beats per minute. The patient's remaining measurements were submitted correctly.

 

These two cases highlight situations when abnormal sensor readings were later identified to be incorrect due to either faulty equipment or patient misunderstandings. Because clinicians could view patient data and send messages directly through the same interface, it was convenient for them to validate anomalous data. This validation was later documented through both clinical notes and follow-up messages with patients, making it easier for researchers to curate the dataset within the COVIDFree@Home platform so that it accurately reflects patients' trajectories.


6 MACHINE LEARNING FOR HOSPITALIZATION PREDICTION

One of the goals of the COVIDFree@Home study was to curate a dataset that could help researchers and clinicians predict when a patient may require hospitalization. The messaging feature led to heightened data availability and corrections that were recorded in the COVIDFree@Home platform, hypothetically improving the quality of the dataset. Therefore, we demonstrate how these corrections result in an improvement in machine learning model accuracy.

6.1 Dataset Modifications

Table 3:

Outcome | Count / Percentage
Number of hospitalization labels added | 9/16 = 56.3%
Number of symptom records added | 61/3,567 = 1.7%
Number of oxygen saturation records added | 15/3,725 = 0.4%
Number of heart rate records added | 1/3,725 = 0.02%
Number of temperature records added | 5/3,252 = 0.2%
Number of oxygen saturation records removed | 8/3,725 = 0.2%
Number of temperature records removed | 1/3,252 = 0.03%

Table 3: Summary statistics of the modifications that were made to the COVIDFree@Home dataset.

The dataset initially only had seven labeled cases of patient hospitalizations, but thanks to follow-up by the clinicians, nine cases that had not been logged as hospitalizations were eventually updated in the system. The clinicians also played a pivotal role in correcting symptom and vital signs data, adding 61 symptom records using a combination of messages through the platform and clinical notes left after virtual consultations. The clinicians also identified nine vital signs measurements that were later determined to be incorrect due to data entry mistakes, malfunctioning devices, or incorrect use of the app. In total, 91 modifications were made to patient-reported data, and a full description of modifications is shown in Table 3.

6.2 Model Design

Figure 7: (left) The architecture of the few-shot learning model used to predict patient hospitalizations. The inputs to the model are feature vectors from two distinct patients, and the output is the likelihood the patients shared the same outcome. (right) The experimental setup used to illustrate the model lift, highlighting the higher quality of the data obtained through data corrections from the messaging feature.

Our dataset included 16 hospitalization cases out of 356 patients (4.5%), comparable to hospitalization rates observed in other studies involving low-risk COVID-19 patients [3]. Nevertheless, this rate led to an exceedingly imbalanced dataset for traditional machine learning. We instead used few-shot learning [87] to leverage the few positive labels we had to train a model. We reframed our prediction task such that the goal was to identify whether two patients belonged to the same class — in this case, those in need or not in need of hospitalization. During testing, individual patients were compared against known cases of hospitalization or non-hospitalization set aside as a support set from the training data; test patients were assigned to the class corresponding to the most similar patient from the support set. Transforming the positive labels from patient hospitalization status to whether or not two patients belong to the same class increased the number of positives available during model training. Using few-shot learning also multiplied the yield of each hospitalization case since the model used pairs of patients as input.

We used the following dimensions of data as input into the model: age, gender, oxygen saturation measurements, heart rate measurements, temperature measurements, and the presence of the seven main symptoms recorded by the app, totaling 12 features to be used for classification. Demographic information was included in our model since age and gender are known to be covariates of heart rate [78] and temperature [26]. To address the fact that patients enrolled in the study at different times after self-reported onset of COVID-19, we trained the model on data from the patient’s first seven days relative to onset rather than enrollment. For example, if a patient was enrolled on their third day after testing positive for COVID-19, we only used data from their first five days in the study — their third, fourth, fifth, sixth, and seventh days after COVID-19 onset. Each day was split at 2 PM to create two half-days. Multiple data uploads within a half-day were merged such that only the highest heart rate, the highest temperature, and the lowest oxygen saturation readings were kept. Meanwhile, multiple symptom reports within a half-day were merged such that a symptom was recorded as present if it was marked as such in any of the reports.
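The half-day aggregation described above could be implemented as in the following sketch, assuming a pandas DataFrame of raw uploads with hypothetical column names (patient_id, a datetime timestamp column, and the three vitals).

```python
# A minimal sketch of the half-day aggregation; column names are
# hypothetical. Symptom reports would be merged analogously, marking a
# symptom present if any report within the half-day marked it as such.
import pandas as pd

def aggregate_half_days(uploads: pd.DataFrame) -> pd.DataFrame:
    df = uploads.copy()
    # Split each day at 2 PM: 0 = before 2 PM, 1 = after.
    df["half_day"] = (df["timestamp"].dt.hour >= 14).astype(int)
    df["day"] = df["timestamp"].dt.date
    # Keep the worst-case reading per half-day, as described above.
    return df.groupby(["patient_id", "day", "half_day"]).agg(
        heart_rate=("heart_rate", "max"),
        temperature=("temperature", "max"),
        oxygen_saturation=("oxygen_saturation", "min"),
    ).reset_index()
```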

Figure 7 illustrates our model architecture, which was a Siamese neural network consisting of fully connected, batch normalization, and dropout layers [40]. The model was trained to predict the probability that two input patients would have the same outcome. The input vector for each patient included (12 features) × (14 half-days) = 168 features in total. Instances of missing days and reports were imputed using mean imputation within each feature.
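Below is a minimal PyTorch sketch of such a Siamese network. The paper specifies fully connected, batch normalization, and dropout layers with independent sub-network weights, but the layer widths and dropout rate shown here are hypothetical.

```python
# A minimal sketch of the Siamese architecture; widths and dropout rate
# are hypothetical, as they are not specified in the paper.
import torch
import torch.nn as nn

class SiameseHospitalizationModel(nn.Module):
    def __init__(self, n_features: int = 168):  # 12 features x 14 half-days
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Linear(n_features, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(0.3),
                nn.Linear(64, 32), nn.BatchNorm1d(32), nn.ReLU(), nn.Dropout(0.3),
            )
        # Sub-network weights are independent, so pair order matters.
        self.branch_a, self.branch_b = branch(), branch()
        self.head = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, patient1, patient2):
        z = torch.cat([self.branch_a(patient1), self.branch_b(patient2)], dim=1)
        return self.head(z)  # probability the two patients share an outcome
```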

6.3 Model Evaluation

We used 70-30% train-test splits in our experiments, meaning that 250 patients were used for training and 106 patients were used for testing. The ordering of the input patient pairs mattered since the weights of the model sub-networks were independent; in other words, the input pair (patient1, patient2) could have led to a different result than the input pair (patient2, patient1). Therefore, we generated 250 × 249 = 62.3k patient pairs for model training. The support set contained 3 hospitalized patients and 3 non-hospitalized patients who were held out from the training set. We repeated this splitting procedure 100 times with random splits to establish the robustness of our results.
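A minimal sketch of the ordered-pair construction is shown below; the array names are hypothetical.

```python
# Ordered pair generation for training, assuming `X` is an
# (n_patients, 168) feature array and `y` holds hospitalization labels.
import itertools
import numpy as np

def make_training_pairs(X: np.ndarray, y: np.ndarray):
    pairs, same_outcome = [], []
    # permutations() yields ordered pairs, i.e., n * (n - 1) examples;
    # for the 250-patient training split, 250 * 249 = 62,250 pairs.
    for i, j in itertools.permutations(range(len(X)), 2):
        pairs.append((X[i], X[j]))
        same_outcome.append(int(y[i] == y[j]))
    return pairs, np.array(same_outcome)
```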

We compared the model performance according to two datasets: one with the original features in the dataset, and another taking into account the feature additions and corrections obtained through the messaging feature. Although dataset corrections also increased the number of hospitalization cases available in the dataset from 7 to 16, both experiments used the final set of 16 labeled hospitalization cases to prevent the potential improvement from being attributable to the redistribution of labels. The dataset modifications impacted 28 (8.0%) patients in the training and test sets, and 2 (33%) patients in the support set. Additionally, 4 (25%) hospitalized patients had their data modified.

Figure 8: The ROC curves for the hospitalization prediction model trained on two different datasets: one without clinician modifications and one with clinician modifications.

The model performance with the two datasets is shown in Figure 8. Although the revisions impacted fewer than 100 entries in our dataset of over 10k reports, we saw that they had a noticeable effect on the models. The model that was evaluated on the dataset without modifications achieved an average AUROC score of 0.53 ± 0.18, while the model evaluated on the other dataset achieved an average AUROC score of 0.59 ± 0.20. The difference between these distributions was statistically significant according to a paired-samples t-test (t(99) = 2.18, p < .05).
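For reference, the statistical comparison could be reproduced with a sketch like the following, assuming two arrays of 100 per-split AUROC scores. The placeholder values below are synthetic, not the study's results.

```python
# A minimal sketch of the comparison between the two dataset conditions.
# Per split, AUROC would come from roc_auc_score(y_true, y_score) on the
# held-out test patients; placeholder arrays stand in for the 100
# per-split scores, so the printed numbers are illustrative only.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.metrics import roc_auc_score  # used once per split in practice

rng = np.random.default_rng(0)
auroc_without = rng.normal(0.53, 0.18, size=100)  # placeholder scores
auroc_with = rng.normal(0.59, 0.20, size=100)     # placeholder scores

t_stat, p_value = ttest_rel(auroc_with, auroc_without)
print(f"t({len(auroc_with) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```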


7 DISCUSSION

COVIDFree@Home was originally conceived as a research study to better our understanding of patient trajectories after COVID-19 infection using remote patient monitoring. Through the integration of a messaging feature, we were able to provide tangible benefits for patients, clinicians, and researchers. In the rest of this section, we first elaborate upon the benefits experienced by each group. We then provide recommendations for future digital health studies beyond remote patient monitoring. Finally, we describe the limitations of our findings and opportunities for future work.

7.1 Benefits for Patients

Although remote monitoring provides an additional channel for patient care, the lack of interpersonal communication and face-to-face interactions can leave patients feeling uncertain about their health concerns or unheard by their providers [13]. This was particularly true during the COVID-19 pandemic, when there was significant uncertainty around the symptoms and outcomes associated with the virus. Including a messaging feature in the COVIDFree@Home platform mitigated some of these worries for patients, conveying that there was a team of clinicians closely reviewing their data. Patients received a notification whenever their profile was reviewed by a clinician, and they also received messages from clinicians asking about their health. Similar feedback mechanisms in prior work have been shown to improve adherence in remote patient monitoring [7]. We also saw patients express gratitude to their clinicians for reviewing their data, further supporting the notion that they appreciated the care they felt they were receiving.

Beyond increased engagement, we found that the messaging feature led to collaborative reflection [53]. Some patients who struggled to make sense of their health data utilized our messaging feature to leverage clinicians' domain expertise and seek advice. The fact that patients were taking the time to contemplate the progression of their health had its own benefits, as prior work has shown that such reflection can lead patients to make more informed decisions about their own health [33, 44, 54]. Collaborative reflection through the messaging feature also made it easier for clinicians to investigate potentially anomalous data and ensure that they were not being misled into incorrect patient recommendations. As highlighted in Section 5.3, conversations over patients' data occasionally led to the identification of faulty equipment or incorrect data reporting. Without asynchronous messaging, diligent clinicians would still have been able to identify and remedy such issues over the phone or in a videoconferencing session. Even so, the messaging feature made this process more convenient while automatically documenting the conversation for all stakeholders.

7.2 Benefits for Clinicians

One of the major benefits experienced by clinicians was the fact that the messaging feature fostered consistent patient engagement with respect to reporting symptoms and uploading vital signs data. The COVIDFree@Home platform included a few automated messages for this purpose: the reminders that were sent whenever patients did not upload data by the prescribed windows and the notifications that were sent whenever a clinician was actively reviewing the patient's data. We initially considered providing patients with more information about which data was being reviewed or why; however, concerns were raised during the development of COVIDFree@Home that doing so would often create additional worry for patients. Even though these automated messages were primarily designed to give patients a sense of comfort, we hypothesize that such notifications also added a level of accountability for them to upload data. On top of that, the messaging feature gave clinicians a channel through which they could communicate with patients who were lagging behind in their protocol adherence. In some cases, it was discovered that patients were feeling too sick to upload data; conversely, there were situations when patients felt that they had recovered and no longer saw a reason to upload new data. These explanations led to important changes in care delivery that otherwise would not have been considered.

Although not a benefit of the messaging feature per se, we closely monitored the number of messages exchanged through the COVIDFree@Home platform to ensure that clinicians were not overwhelmed with patient communication. This was important to us since inefficient clinical workflows are quickly abandoned for simpler or more effective solutions [61]. With asynchronous text messaging, patients were inherently rate-limited in how they communicated with their clinicians. We showed in Section 4.2 that clinicians sent and received fewer than one message per day on average, suggesting minimal overhead on their workflow.

7.3 Benefits for Digital Health Researchers

The messaging feature in the COVIDFree@Home platform imbued the researchers involved in the study with additional confidence in the data that was being collected. As part of their responsibilities for using COVIDFree@Home, clinicians reviewed new uploads by patients within a day. They were also able to follow up with patients whenever data seemed anomalous, which led to either confirmation of interesting cases or revisions of incorrect entries. In other words, clinicians’ interactions with the COVIDFree@Home platform generated more reliable labels for key events in patients’ trajectories and important context to the observed data. We demonstrated through the evaluation in Section 6 that these modifications led to non-trivial improvements in the performance of a machine learning model for predicting hospitalizations.

The fact that patients experienced added benefit from the messaging feature may have also led to greater adherence to the protocol, resulting in a more complete dataset with fewer missing values. The COVIDFree@Home study allowed us to curate a dataset from over 350 patients describing their road to recovery or hospitalization. Since our analyses span 18 months, our dataset encapsulates patient experiences with different COVID-19 variants and vaccination levels. This breadth and depth of data will further the research community’s understanding of COVID-19 and perhaps other influenza-like illnesses.

7.4 Design Implications for Front-End Designers

Beyond the stakeholders noted throughout our paper, our work has notable design implications for front-end designers involved in remote patient monitoring efforts or studies involving passive data collection. These kinds of studies are useful for recruiting beyond homogeneous and easily accessible populations like university students [16, 19], yet many of them experience high attrition in the absence of strong financial, social, or individual incentives for participants [41, 60]. Whereas many studies only consider the flow of data from participants to researchers, we hope that our COVIDFree@Home platform demonstrates a different model that provides other forms of incentive for all stakeholders.

One user-centered challenge in adopting a remote monitoring platform is the complexity of the data being collected. In our case, we incorporated only a handful of simple vital sign measurements into the COVIDFree@Home platform: temperature, heart rate, and oxygen saturation. The research community has explored a host of potential biomarkers for detecting cases of COVID-19, such as cough frequency [43], cough quality [17], and speech quality [64]. Although these biomarkers have shown great promise, they are still in active development and are known to be prone to bias across different environments, devices, and languages [18]. Because we focused on data that could be readily interpreted, clinicians were immediately able to have conversations with patients about their health. Patients were also able to apply their own knowledge and intuition to their health data, which can have varied outcomes depending on how they consider measurement uncertainty and extrapolate trends [38].
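
To show how readily these simple vitals support immediate conversation, the sketch below flags a single set of self-reported readings. The oxygen-saturation thresholds echo the clinician messages in Appendix A (“normal oxygen saturation would be over 94”; hospitalization typically considered below 92), but the rule set as a whole is an illustrative assumption, not clinical guidance or the platform’s actual triage logic.

```python
def flag_vitals(temp_c: float, heart_rate: int, spo2: int) -> list:
    """Return human-readable flags for one set of self-reported vitals."""
    flags = []
    if spo2 < 92:
        flags.append("oxygen saturation critically low; consider escalation")
    elif spo2 < 94:
        flags.append("oxygen saturation below normal; ask patient to re-measure")
    if temp_c >= 38.0:
        flags.append("fever")
    if not 40 <= heart_rate <= 120:
        flags.append("heart rate outside expected resting range")
    return flags

# A borderline oxygen saturation triggers a follow-up prompt.
print(flag_vitals(temp_c=37.2, heart_rate=88, spo2=93))
```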

When designing interfaces for studies intended to develop or validate new biomarkers, front-end designers should consider whether all relevant stakeholders are able to derive benefit from at least a subset of the data being collected. In some cases, that may mean vouching for the inclusion of validated vital signs that can be instantly appreciated, like heart rate, body temperature, activity levels, or even sleep and stress scores. These validated vital signs are likely to be collected anyway as potential confounds or correlates, but giving study participants access to this data can keep them actively engaged in the data collection process.

Our messaging feature also proved invaluable in sustaining engagement, and the way it was designed helped us address the varied needs of our stakeholders. Had we focused strictly on patients’ priorities, we would likely have created a messaging feature that overwhelmed clinicians. While clinicians were eager to better serve their patients and to address their health concerns in a timely manner, they wanted to do so in a scalable way that did not place unreasonable expectations on their responsiveness [90]. With these considerations in mind, both patients and clinicians benefited from the design of our asynchronous messaging feature. The flexibility it afforded was particularly useful for smaller inquiries that did not warrant scheduling a formal meeting yet were too important to leave until routine check-ins. For example, clinicians often utilized asynchronous messaging to send Requests for More Data after reviewing patient data on the dashboard. Conversely, participants frequently used asynchronous messaging to send both Prompted and Unprompted Health Updates. The asynchronous nature of our platform served as an outlet for these "spur-of-the-moment" inquiries that might have otherwise been delayed or forgotten.
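
A minimal sketch of how such messages might be recorded and tagged follows; only the topic labels “Request for More Data” and “Prompted”/“Unprompted Health Update” come from our thematic analysis, while the record fields and the example exchange (adapted from Appendix A) are illustrative assumptions about a possible data model, not the platform’s actual one.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class Topic(Enum):
    REQUEST_FOR_MORE_DATA = "Request for More Data"
    PROMPTED_HEALTH_UPDATE = "Prompted Health Update"
    UNPROMPTED_HEALTH_UPDATE = "Unprompted Health Update"

@dataclass
class Message:
    sender: str                  # "clinician" or "patient"
    body: str
    topic: Topic
    sent_at: datetime = field(default_factory=datetime.utcnow)

# A short exchange mirroring the examples in Appendix A.
thread = [
    Message("clinician", "Could you enter your oxygen saturation again?",
            Topic.REQUEST_FOR_MORE_DATA),
    Message("patient", "My oxygen saturation is now at 91. Breathing is okay.",
            Topic.PROMPTED_HEALTH_UPDATE),
]
for m in thread:
    print(f"[{m.topic.value}] {m.sender}: {m.body}")
```

Tagging each message with a topic in this way also preserves the automatic documentation benefit noted earlier: every exchange remains auditable by patients, clinicians, and researchers alike.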

As we were designing our messaging feature, another important consideration was how it was advertised to patients. The messaging interface on the COVIDFree@Home app was designed to subtly influence patients to send a single comprehensive message rather than a long chain of shorter messages, similar to the affordances of email versus text messaging [21]. This was achieved by separating the functionality for sending and receiving messages and including a large text box for message entry rather than the standard single-line field used in most chat interfaces. To further curb overreliance on the messaging feature, patients were informed at multiple stages of the protocol that COVIDFree@Home was not a replacement for emergency services. Establishing these expectations in concert with clinicians, hospitals, and other governing bodies was critical to reducing hospital liability and mitigating the chance that patients would not seek appropriate treatment.

7.5 Limitations

Our work was not without limitations. First and foremost, our dataset contained a limited number of patients who required hospitalization after enrollment (16/359 = 4.5%). Since the COVIDFree@Home study was designed to support patients who were suitable for outpatient care, we did not recruit those who initially presented with severe cases of infection. It was therefore expected that many of these individuals would not require care escalation, which may have introduced a bias into our dataset that limits its generalizability to other healthcare scenarios. However, the focus of our machine learning experiment was not to demonstrate the performance of a deployable model, but rather the improvement we observed by applying data corrections obtained through the messaging logs. Furthermore, insights from the messaging content shed light on the experiences of a patient cohort that still imposes significant financial burdens on healthcare systems [11].

We were also able to report only on data that appeared within the COVIDFree@Home platform. Patients were free to measure their vital signs or track their symptoms without entering the data into our smartphone app, and patients and clinicians were allowed to communicate with one another through other channels. We were informed of a few such cases during our study, such as instances when clinicians sent a message to notify patients of an impending phone conversation.

Finally, the COVIDFree@Home study was conducted in a major metropolitan city in North America known for its multiculturalism, yet our findings are limited in that they were only derived from a single population. Patient-clinician relationships and communication practices in general are tied to cultural and socioeconomic norms [55, 68, 69]. Furthermore, differences in how healthcare is structured across countries can impact the expectations of both patients and clinicians. Future work could investigate whether the proposed benefits of asynchronous messaging generalize to other population groups.


8 CONCLUSION

As researchers scale their digital health studies to reach diverse populations, maintaining engagement and high data quality are important challenges that must be addressed. Our COVIDFree@Home study demonstrates that the inclusion of an asynchronous messaging feature within a remote patient monitoring platform can benefit patients, clinicians, and researchers. Patients were able to use the messaging feature to ask questions about the study protocol and get personalized feedback on their health. Meanwhile, clinicians were able to use the messaging feature to encourage patients to upload more data, convey empathetic care, and more efficiently investigate anomalous health data. As researchers, we benefited from knowing that the data had been inspected by domain experts as it was being collected, which led to improved accuracy in a machine learning model designed to predict patient hospitalizations. We hope that our work inspires other HCI researchers to incorporate remote monitoring and messaging as components in their data collection protocols.


ACKNOWLEDGMENTS

The COVIDFree@Home project is supported by the Canadian Institutes of Health Research (CIHR) FRN VR4-172746, the Coronavirus Variants Rapid Response Network (CoVaRR-Net) grant (#175622), the PSI Foundation, and the Data Sciences Institute at the University of Toronto. Additionally, we acknowledge Nisha Patel’s contributions in supporting the COVIDFree@Home study.

A Examples of Messages Sent Through COVIDFree@Home

Table 4:
Message Topic | Examples
“Hello, could you enter your oxygen saturation again?”
“Please check your sats a few more times today if you can”
“Hello, can you enter your heart rate in again?”
“Is everything ok? We noticed you haven’t entered anything in a while.”
“Hello, we noticed you have not entered information since Nov 27, have they all resolved?”
“Hi, Just checking in - are you able to input your symptoms or vitals? If you’re having any issues, please do let me know.”
“Hello, has public health told you that your COVID has resolved?”
“Thank you for your message and question. If you are out of quarantine then there is no need for you to continue to enter symptoms. Thank you for your participation!”
“I asked for one of our team to call you to check in.”
“Hello, How are you doing today?”
“How are you feeling today?”
“doing ok?”
“sats are looking better. seems like your breathing is better”
“Did you really have an oxygen saturation of 83 earlier today?”
“I see your O2 level is better today, hopefully you have been checking frequently and have found this number stabilize? Breathing still doing well?”

Health Information and Advice

“a mild cough could be normal. if otherwise feeling ok, may resolve with time”
“Normally we would not need to hospitalize for covid unless less than 92.”
“normal oxygen saturation would be over 94”
“Hi, Thanks for participating in the study. Hope you feel better soon.”
“Thank you for entering.”
“Thanks for entering your information and participating.”

Table 4: Examples of messages sent by clinicians who used the COVIDFree@Home platform. Personally identifiable information is redacted.

Table 5:
Message Topic | Examples
“I have pain in my middle back both side, what can I do to feel better”
“how low should be the oxygen reading. I have 92 or less and keep having fever. should I be concerned of pneumonia?”
“I have a new persisting headache. should I be concerned?”
“When is my appointment today?”
“hi, feeling pretty much normal again. :) Do you want me to continue reporting?”
“hello, what address should I send the Oximeter back to? thanks!”
“Hello - trying this as a way to respond. My oxygen Saturation is now at 91. I feel exhausted but breathing is okay.”
“I wouldn’t say my symptoms have fully resolved, although they are much improved. I still have some nasal congestion and occasional cough.”
“Actually sore throat is far better than 1st day loss of sense of smell and taste and it’s getting better as well”
“Only contact with public health was when I first tested positive. No contact since then.”
“Yes cleared by Public health and Occ health and can return to work. Thank you.”
“Answer to your question. I have been off isolation since January 25”
“Can not Smell or Taste anything”
“note slept most of the day but feeling quite a bit better.”
“note last night asthma was triggered. controlled with 2puffs ventolin.”
“Hi-I am assuming I should stop submitting since my isolation ended yesterday but let me know if I should continue. was informed by <City> public health yesterday tat I had the UK variant.”
“Hello, I was having some issues with the device so I resubmitted my results. The 37/98/88 can replace the readings sent prior to that. Thanks!”
“hello, my blood oxygen levels from March 31st were not accurately measured as my oximeter never arrived from the hospital. all other data is accurate.”
“I’m unable to enter measurements. says device not registered”
“Hi - it’s not letting me register the oximeter, can someone help?”
“Thermometers are defective. Blue circles are highlighted before using it.”
“I will stop submitting then. Thanks for reviewing my reports. I really appreciated the advice of Dr. <Redacted> of the COVID group. He was very responsive and helpful as I went through this.”
“Awesome!Thank you.”

Table 5: Examples of messages sent by patients who used the COVIDFree@Home platform. Personally identifiable information is redacted.

Supplemental Material

Video Presentation (mp4, 9.8 MB)

References

  1. Ahmad A Aalam, Colton Hood, Crystal Donelan, Adam Rutenberg, Erin M Kane, and Neal Sikka. 2021. Remote patient monitoring for ED discharges in the COVID-19 pandemic. Emergency Medicine Journal 38, 3 (2021), 229–231. https://doi.org/10.1136/emermed-2020-210022
  2. Robab Abdolkhani, Kathleen Gray, Ann Borda, and Ruth DeSouza. 2019. Patient-generated health data management and quality challenges in remote patient monitoring. Journal of the American Medical Informatics Association Open 2, 4 (2019), 471–478. https://doi.org/10.1093/jamiaopen/ooz036
  3. Tucker Annis, Susan Pleasants, Gretchen Hultman, Elizabeth Lindemann, Joshua A Thompson, Stephanie Billecke, Sameer Badlani, and Genevieve B Melton. 2020. Rapid implementation of a COVID-19 remote patient monitoring program. Journal of the American Medical Informatics Association 27, 8 (2020), 1326–1330. https://doi.org/10.1093/jamia/ocaa097
  4. Stephen D Anton, Eric LeBlanc, H Raymond Allen, Christy Karabetian, Frank Sacks, George Bray, and Donald A Williamson. 2012. Use of a computerized tracking system to monitor and provide feedback on dietary goals for calorie-restricted diets: the POUNDS LOST study. Journal of Diabetes Science and Technology 6, 5 (2012), 1216–1225. https://doi.org/10.1177/193229681200600527
  5. Ashish Atreja, Sandesh Francis, Sravya Kurra, and Rajesh Kabra. 2019. Digital medicine and evolution of remote patient monitoring in cardiac electrophysiology: A state-of-the-art perspective. Current Treatment Options in Cardiovascular Medicine 21, 12 (2019), 1–10. https://doi.org/10.1007/s11936-019-0787-3
  6. Mirza Mansoor Baig, Hamid GholamHosseini, Aasia A Moqeem, Farhaan Mirza, and Maria Lindén. 2017. A systematic review of wearable patient monitoring systems: current challenges and opportunities for clinical adoption. Journal of Medical Systems 41, 7 (2017), 1–9. https://doi.org/10.1007/s10916-017-0760-1
  7. Marion Ball, Sasha Ballen, Catalina Danis, Alexis Concordia, and Martha Jean Marty Minniti. 2015. No patient engagement, no chance for adherence. Journal of Healthcare Information Management 29 (2015), 24–27.
  8. Karthik S Bhat, Mohit Jain, and Neha Kumar. 2021. Infrastructuring Telehealth in (In)Formal Patient-Doctor Contexts. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2, Article 323 (Oct. 2021), 28 pages. https://doi.org/10.1145/3476064
  9. Anna Bell Björk, Helene Hillborg, Marika Augutis, and Göran Umefjord. 2017. Evolving techniques in text-based medical consultation: Physicians’ long-term experiences at an Ask the Doctor service. International Journal of Medical Informatics 105 (2017), 83–88.
 10. Jessica Campbell, Deborah Theodoros, Trevor Russell, Nicole Gillespie, and Nicole Hartley. 2019. Client, provider and community referrer perceptions of telehealth for the delivery of rural paediatric allied health services. Australian Journal of Rural Health 27, 5 (2019), 419–426. https://doi.org/10.1111/ajr.12519
 11. Kathleen Carey and Theodore Stefos. 2016. The cost of hospital readmissions: evidence from the VA. Health Care Management Science 19, 3 (2016), 241–248. https://doi.org/10.1007/s10729-014-9316-9
 12. Barbara Cherry, Michael Carter, Donna Owen, and Carol Lockhart. 2008. Factors affecting electronic health record adoption in long-term care facilities. Journal for Healthcare Quality 30, 2 (2008), 37–47. https://doi.org/10.1111/j.1945-1474.2008.tb01133.x
 13. CM Chichirez and VL Purcărea. 2018. Interpersonal communication in healthcare. Journal of Medicine and Life 11, 2 (2018), 119.
 14. Chia-Fang Chung, Qiaosi Wang, Jessica Schroeder, Allison Cole, Jasmine Zia, James Fogarty, and Sean A Munson. 2019. Identifying and planning for individualized change: Patient-provider collaboration using lightweight food diaries in healthy eating and irritable bowel syndrome. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 1 (2019), 1–27. https://doi.org/10.1145/3314394
 15. Karen Church and Rodrigo De Oliveira. 2013. What’s up with WhatsApp? Comparing mobile instant messaging behaviors with traditional SMS. In Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, New York, NY, USA, 352–361.
 16. Megan Civitello, Alexander Hogan, Sigrid Almeida, Michael Brimacombe, Glenn Flores, Jessica Hollenbach, et al. 2022. Remote Participant Recruitment for Pediatric Research During the COVID-19 Pandemic. Iproceedings 8, 1 (2022), e39272. https://doi.org/10.2196/39272
 17. Harry Coppock, Alex Gaskell, Panagiotis Tzirakis, Alice Baird, Lyn Jones, and Björn Schuller. 2021. End-to-end convolutional neural network enables COVID-19 detection from breath and cough audio: a pilot study. BMJ Innovations 7, 2 (2021), 356–362. https://doi.org/10.1136/bmjinnov-2021-000668
 18. Harry Coppock, Lyn Jones, Ivan Kiskin, and Björn Schuller. 2021. COVID-19 detection from audio: seven grains of salt. The Lancet Digital Health 3, 9 (2021), e537–e538. https://doi.org/10.1016/S2589-7500(21)00141-2
 19. Timothy M Daly and Rajan Nataraajan. 2015. Swapping bricks for clicks: Crowdsourcing longitudinal data on Amazon Turk. Journal of Business Research 68, 12 (2015), 2603–2609. https://doi.org/10.1016/j.jbusres.2015.05.001
 20. Amol Deshpande, Shariq Khoja, Julio Lorca, Ann McKibbon, Carlos Rizo, Donald Husereau, and Alejandro R Jadad. 2009. Asynchronous telehealth: a scoping review of analytic studies. Open Medicine 3, 2 (2009), e69.
 21. Jill P Dimond, Casey Fiesler, Betsy DiSalvo, Jon Pelc, and Amy S Bruckman. 2012. Qualitative data collection technologies: A comparison of instant messaging, email, and phone. In Proceedings of the 2012 ACM International Conference on Supporting Group Work. ACM, New York, NY, USA, 277–280.
 22. Louisa Edwards, Clare Thomas, Alison Gregory, Lucy Yardley, Alicia O’Cathain, Alan A Montgomery, Chris Salisbury, et al. 2014. Are people with chronic diseases interested in using telehealth? A cross-sectional postal survey. Journal of Medical Internet Research 16, 5 (2014), e3257. https://doi.org/10.2196/jmir.3257
 23. Frederico Arriaga Criscuoli de Farias, Carolina Matté Dagostini, Yan de Assuncao Bicca, Vincenzo Fin Falavigna, and Asdrubal Falavigna. 2020. Remote patient monitoring: a systematic review. Telemedicine and e-Health 26, 5 (2020), 576–583. https://doi.org/10.1089/tmj.2019.0066
 24. Maria Faurholt-Jepsen, Maj Vinberg, Mads Frost, Ellen Margrethe Christensen, Jakob E Bardram, and Lars Vedel Kessing. 2015. Smartphone data as an electronic biomarker of illness activity in bipolar disorder. Bipolar Disorders 17, 7 (2015), 715–728. https://doi.org/10.1111/bdi.12332
 25. Nurit Fox, AJ Hirsch-Allen, Elizabeth Goodfellow, Joshua Wenner, John Fleetham, C Frank Ryan, Mila Kwiatkowska, and Najib T Ayas. 2012. The impact of a telemedicine monitoring system on positive airway pressure adherence in patients with obstructive sleep apnea: a randomized controlled trial. Sleep 35, 4 (2012), 477–481. https://doi.org/10.5665/sleep.1728
 26. Ivayla I Geneva, Brian Cuzzo, Tasaduq Fazili, and Waleed Javaid. 2019. Normal body temperature: a systematic review. In Open Forum Infectious Diseases. Oxford University Press, 032.
 27. William J Gordon, Daniel Henderson, Avital DeSharone, Herrick N Fisher, Jessica Judge, David M Levine, Laura MacLean, Diane Sousa, Mack Y Su, and Robert Boxer. 2020. Remote patient monitoring program for hospital discharged COVID-19 patients. Applied Clinical Informatics 11, 05 (2020), 792–801. https://doi.org/10.1055/s-0040-1721039
 28. Rebecca Grainger, Bonnie White, Catherine Morton, Karen Day, et al. 2017. A Health Professional–Led Synchronous Discussion on Facebook: Descriptive Analysis of Users and Activities. JMIR Formative Research 1, 1 (2017), e7257.
 29. Jessica Greene, Rebecca Sacks, Brigitte Piniewski, David Kil, and Jin S Hahn. 2013. The impact of an online social network with wireless monitoring devices on physical activity and weight loss. Journal of Primary Care & Community Health 4, 3 (2013), 189–194. https://doi.org/10.1177/2150131912469546
 30. Frances Griffiths, Antje Lindenmeyer, John Powell, Pam Lowe, Margaret Thorogood, et al. 2006. Why are health care interventions delivered over the internet? A systematic review of the published literature. Journal of Medical Internet Research 8, 2 (2006), e498. https://doi.org/10.2196/jmir.8.2.e10
 31. Donald M Hilty, Christina M Armstrong, Amanda Edwards-Stewart, Melanie T Gentry, David D Luxton, and Elizabeth A Krupinski. 2021. Sensor, wearable, and remote patient monitoring competencies for clinical care and training: scoping review. Journal of Technology in Behavioral Science 6, 2 (2021), 252–277. https://doi.org/10.1007/s41347-020-00190-3
 32. Donald M Hilty, John Torous, Michelle Burke Parish, Steven R Chan, Glen Xiong, Lorin Scher, and Peter M Yellowlees. 2021. A literature review comparing clinicians’ approaches and skills to in-person, synchronous, and asynchronous care: moving toward competencies to ensure quality care. Telemedicine and e-Health 27, 4 (2021), 356–373.
 33. Margriet IJzerman-Korevaar, Alexander de Graeff, Steffie Heijckmann, Daniëlle Zweers, Bernard H Vos, Marloes Hirdes, Petronella O Witteveen, and Saskia CCM Teunissen. 2021. Use of a symptom diary on oncology wards: effect on symptom management and recommendations for implementation. Cancer Nursing 44, 4 (2021), E209–E220. https://doi.org/10.1097/NCC.0000000000000792
 34. Mohammad Jabirullah, Rakesh Ranjan, Mirza Nemath Ali Baig, and Anish Kumar Vishwakarma. 2020. Development of e-health monitoring system for remote rural community of India. In 2020 7th International Conference on Signal Processing and Integrated Networks (SPIN). IEEE, 767–771. https://doi.org/10.1109/SPIN48934.2020.9071209
 35. John M. Jakicic, Kelliann K. Davis, Renee J. Rogers, Wendy C. King, Marsha D. Marcus, Diane Helsel, Amy D. Rickman, Abdus S. Wahed, and Steven H. Belle. 2016. Effect of Wearable Technology Combined With a Lifestyle Intervention on Long-term Weight Loss: The IDEA Randomized Clinical Trial. JAMA 316, 11 (2016), 1161–1171. https://doi.org/10.1001/jama.2016.12858
 36. Brandy M Jenner and Kit C Myers. 2019. Intimacy, rapport, and exceptional disclosure: a comparison of in-person and mediated interview contexts. International Journal of Social Research Methodology 22, 2 (2019), 165–177.
 37. Maulik R Kamdar and Michelle J Wu. 2016. PRISM: a data-driven platform for monitoring mental health. In Biocomputing 2016: Proceedings of the Pacific Symposium. World Scientific, 333–344. https://doi.org/10.1142/9789814749411_0031
 38. Matthew Kay, Dan Morris, MC Schraefel, and Julie A Kientz. 2013. There’s no such thing as gaining a pound: Reconsidering the bathroom scale user interface. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, New York, NY, USA, 401–410. https://doi.org/10.1145/2493432.2493456
 39. Leah T Kelley, Michelle Phung, Vess Stamenova, Jamie Fujioka, Payal Agarwal, Nike Onabajo, Ivy Wong, Megan Nguyen, R Sacha Bhatia, and Onil Bhattacharyya. 2020. Exploring how virtual primary care visits affect patient burden of treatment. International Journal of Medical Informatics 141 (2020), 104228. https://doi.org/10.1016/j.ijmedinf.2020.104228
 40. Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. 2015. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, Vol. 2.
 41. Samantha Kolovson, Abhishek Pratap, Jaden Duffy, Ryan Allred, Sean A Munson, and Patricia A Areán. 2020. Understanding participant needs for engagement and attitudes towards passive sensing in remote digital health studies. In Proceedings of the 14th EAI International Conference on Pervasive Computing Technologies for Healthcare. ACM, New York, NY, USA, 347–362. https://doi.org/10.1145/3421937.3422025
 42. Ha-Kyung Kong and Karrie Karahalios. 2020. Addressing Cognitive and Emotional Barriers in Parent-Clinician Communication through Behavioral Visualization Webtools. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). ACM, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376181
 43. Jordi Laguarta, Ferran Hueto, and Brian Subirana. 2020. COVID-19 artificial intelligence diagnosis using only cough recordings. IEEE Open Journal of Engineering in Medicine and Biology 1 (2020), 275–281. https://doi.org/10.1109/OJEMB.2020.3026928
 44. Kyoung Suk Lee, Terry A Lennie, Sherry Warden, Joy M Jacobs-Lawson, and Debra K Moser. 2013. A comprehensive symptom diary intervention to improve outcomes in patients with HF: a pilot study. Journal of Cardiac Failure 19, 9 (2013), 647–654. https://doi.org/10.1016/j.cardfail.2013.07.001
 45. Brenna Li, Tetyana Skoropad, Puneet Seth, Mohit Jain, Khai Truong, and Alex Mariakakis. 2023. Constraints and Workarounds to Support Clinical Consultations in Synchronous Text-based Platforms. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–17.
 46. Daniyal Liaqat, Mohamed Abdalla, Pegah Abed-Esfahani, Moshe Gabel, Tatiana Son, Robert Wu, Andrea Gershon, Frank Rudzicz, and Eyal De Lara. 2019. WearBreathing: Real world respiratory rate monitoring using smartwatches. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 2 (2019), 1–22. https://doi.org/10.1145/3328927
 47. Daniyal Liaqat, Salaar Liaqat, Jun Lin Chen, Tina Sedaghat, Moshe Gabel, Frank Rudzicz, and Eyal de Lara. 2021. CoughWatch: Real-world cough detection using smartwatches. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 8333–8337. https://doi.org/10.1109/ICASSP39728.2021.9414881
 48. Salaar Liaqat, Daniyal Liaqat, Tatiana Son, Andrea Gershon, Moshe Gabel, Robert Wu, and Eyal de Lara. 2022. Hindsight is 20/20: Retrospective lessons for conducting longitudinal wearable sensing studies. In 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). IEEE, 545–550.
 49. Maria Lluch. 2011. Healthcare professionals’ organisational barriers to health information technologies—A literature review. International Journal of Medical Informatics 80, 12 (2011), 849–862. https://doi.org/10.1016/j.ijmedinf.2011.09.005
 50. Xi Lu, Edison Thomaz, and Daniel A Epstein. 2022. Understanding People’s Perceptions of Approaches to Semi-Automated Dietary Monitoring. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 3 (2022), 1–27. https://doi.org/10.1145/3550288
 51. Deborah Lupton. 2013. The digitally engaged patient: Self-monitoring and self-care in the digital health era. Social Theory & Health 11, 3 (2013), 256–270. https://doi.org/10.1057/sth.2013.10
 52. Lakmini P Malasinghe, Naeem Ramzan, and Keshav Dahal. 2019. Remote patient monitoring: a comprehensive study. Journal of Ambient Intelligence and Humanized Computing 10, 1 (2019), 57–76. https://doi.org/10.1007/s12652-017-0598-x
 53. Gabriela Marcu, Anind K Dey, and Sara Kiesler. 2014. Designing for collaborative reflection. In Proceedings of the 8th International Conference on Pervasive Computing Technologies for Healthcare. ACM, New York, NY, USA, 9–16. https://doi.org/10.4108/icst.pervasivehealth.2014.254987
 54. Susan Mayor. 2017. Electronic self reporting of symptoms may improve survival in patients with metastatic cancer. BMJ 357 (2017), 2413–2422. https://doi.org/10.1136/bmj.j2724
 55. James C McCroskey and Virginia P Richmond. 1990. Willingness to communicate: Differing cultural perspectives. Southern Journal of Communication 56, 1 (1990), 72–77. https://doi.org/10.1080/10417949009372817
 56. Monique Mendelson, Isabelle Vivodtzev, Renaud Tamisier, David Laplaud, Sonia Dias-Domingos, Jean-Philippe Baguet, Laurent Moreau, Christian Koltes, Léonidas Chavez, Gilles De Lamberterie, et al. 2014. CPAP treatment supported by telemedicine does not improve blood pressure in high cardiovascular risk OSA patients: a randomized, controlled trial. Sleep 37, 11 (2014), 1863–1870. https://doi.org/10.5665/sleep.4186
 57. Sonali R. Mishra, Shefali Haldar, Ari H. Pollack, Logan Kendall, Andrew D. Miller, Maher Khelifi, and Wanda Pratt. 2016. "Not Just a Receiver": Understanding Patient Behavior in the Hospital Environment. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 3103–3114. https://doi.org/10.1145/2858036.2858167
 58. Damilola D Olatinwo, Adnan Abu-Mahfouz, and Gerhard Hancke. 2019. A survey on LPWAN technologies in WBAN for remote health-care monitoring. Sensors 19, 23 (2019), 5268. https://doi.org/10.3390/s19235268
 59. S Perl, P Stiegler, B Rotman, G Prenner, P Lercher, M Anelli-Monti, M Sereinigg, V Riegelnik, E Kvas, C Kos, et al. 2013. Socio-economic effects and cost saving potential of remote patient monitoring (SAVE-HM trial). International Journal of Cardiology 169, 6 (2013), 402–407. https://doi.org/10.1016/j.ijcard.2013.10.019
 60. Abhishek Pratap, Elias Chaibub Neto, Phil Snyder, Carl Stepnowsky, Noémie Elhadad, Daniel Grant, Matthew H Mohebbi, Sean Mooney, Christine Suver, John Wilbanks, et al. 2020. Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants. NPJ Digital Medicine 3, 1 (2020), 1–10. https://doi.org/10.1038/s41746-020-0224-8
 61. Alistair D Quinn, Dorian Dixon, and Brian J Meenan. 2016. Barriers to hospital-based clinical adoption of point-of-care testing (POCT): a systematic narrative review. Critical Reviews in Clinical Laboratory Sciences 53, 1 (2016), 1–12. https://doi.org/10.3109/10408363.2015.1054984
 62. Shriti Raj, Joyce M Lee, Ashley Garrity, and Mark W Newman. 2019. Clinical data in context: towards sensemaking tools for interpreting personal health data. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 1 (2019), 1–20. https://doi.org/10.1145/3314409
 63. Linda M Rasmussen, Klaus Phanareth, Hendrik Nolte, and Vibeke Backer. 2005. Internet-based monitoring of asthma: a long-term, randomized clinical study of 300 asthmatic subjects. Journal of Allergy and Clinical Immunology 115, 6 (2005), 1137–1142. https://doi.org/10.1016/j.jaci.2005.03.030
 64. Kotra Venkata Sai Ritwik, Shareef Babu Kalluri, and Deepu Vijayasenan. 2020. COVID-19 patient detection from telephone quality speech data. arXiv preprint arXiv:2011.04299 (2020), 1–11. https://doi.org/10.48550/arXiv.2011.04299
 65. Dermot Ryan, David Price, Stan D Musgrave, Shweta Malhotra, Amanda J Lee, Dolapo Ayansina, Aziz Sheikh, Lionel Tarassenko, Claudia Pagliari, and Hilary Pinnock. 2012. Clinical and cost effectiveness of mobile phone supported self monitoring of asthma: multicentre randomised controlled trial. BMJ 344 (2012), 1–15. https://doi.org/10.1136/bmj.e1756
 66. Hyeyoung Ryu, Andrew B.L. Berry, Catherine Y Lim, Andrea Hartzler, Tad Hirsch, Juanita I Trejo, Zoë Abigail Bermet, Brandi Crawford-Gallagher, Vi Tran, Dawn Ferguson, David J Cronkite, Brooks Tiffany, John Weeks, and James Ralston. 2023. “You Can See the Connections”: Facilitating Visualization of Care Priorities in People Living with Multiple Chronic Health Conditions. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). ACM, New York, NY, USA, Article 473, 17 pages. https://doi.org/10.1145/3544548.3580908
 67. Perihan Savas. 2011. A case study of contextual and individual factors that shape linguistic variation in synchronous text-based computer-mediated communication. Journal of Pragmatics 43, 1 (2011), 298–313.
 68. Barbara C Schouten and Ludwien Meeuwesen. 2006. Cultural differences in medical communication: a review of the literature. Patient Education and Counseling 64, 1-3 (2006), 21–34. https://doi.org/10.1016/j.pec.2005.11.014
 69. Susan M Labuda Schrop. 2011. The relationship between patient socioeconomic status and patient satisfaction: Does patient-physician communication matter? Kent State University.
 70. Dhruv R Seshadri, Evan V Davies, Ethan R Harlow, Jeffrey J Hsu, Shanina C Knighton, Timothy A Walker, James E Voos, and Colin K Drummond. 2020. Wearable sensors for COVID-19: a call to action to harness our digital infrastructure for remote patient monitoring and virtual assessments. Frontiers in Digital Health 2 (2020), 8. https://doi.org/10.3389/fdgth.2020.00008
 71. Lauren E Sherman, Minas Michikyan, and Patricia M Greenfield. 2013. The effects of text, audio, video, and in-person communication on bonding between friends. Cyberpsychology: Journal of Psychosocial Research on Cyberspace 7, 2 (2013), 1–13.
 72. David Simons, Tadashi Egami, and Jeff Perry. 2006. Remote patient monitoring solutions. In Advances in Health Care Technology: Shaping the Future of Medical Care. Springer, 505–516. https://doi.org/10.1007/1-4020-4384-8
 73. Vess Stamenova, Payal Agarwal, Leah Kelley, Jamie Fujioka, Megan Nguyen, Michelle Phung, Ivy Wong, Nike Onabajo, R Sacha Bhatia, and Onil Bhattacharyya. 2020. Uptake and patient and provider communication modality preferences of virtual visits in primary care: a retrospective cohort study in Canada. BMJ Open 10, 7 (2020), e037064.
 74. Sheila K Stevens, Rebecca Brustad, Lena Gilbert, Benjamin Houge, Timothy Milbrandt, Karee Munson, Jennifer Packard, Brooke Werneburg, and Mustaqeem A Siddiqui. 2020. The use of empathic communication during the COVID-19 outbreak. Journal of Patient Experience 7, 5 (2020), 648–652.
 75. Laura Tabacof, Christopher Kellner, Erica Breyman, Sophie Dewil, Stephen Braren, Leila Nasr, Jenna Tosto, Mar Cortes, and David Putrino. 2021. Remote patient monitoring for home management of coronavirus disease 2019 in New York: a cross-sectional observational study. Telemedicine and e-Health 27, 6 (2021), 641–648. https://doi.org/10.1089/tmj.2020.0339
 76. Monica L Taylor, Emma E Thomas, Centaine L Snoswell, Anthony C Smith, and Liam J Caffery. 2021. Does remote patient monitoring reduce acute care use? A systematic review. BMJ Open 11, 3 (2021), e040232. https://doi.org/10.1136/bmjopen-2020-040232
 77. Mintu P Turakhia, Manisha Desai, Haley Hedlin, Amol Rajmane, Nisha Talati, Todd Ferris, Sumbul Desai, Divya Nag, Mithun Patel, Peter Kowey, et al. 2019. Rationale and design of a large-scale, app-based study to identify cardiac arrhythmias using a smartwatch: The Apple Heart Study. American Heart Journal 207 (2019), 66–75. https://doi.org/10.1016/j.ahj.2018.09.002
 78. Ken Umetani, Donald H Singer, Rollin McCraty, and Mike Atkinson. 1998. Twenty-four hour time domain heart rate variability and heart rate: relations to age and gender over nine decades. Journal of the American College of Cardiology 31, 3 (1998), 593–601.
 79. International Telecommunication Union. 2022. Most of the world population is covered by a mobile-broadband signal, but blind spots remain. https://www.itu.int/highlights-report-activities/highlights-report-activities/agenda_section/most-of-the-world-population-is-covered-by-a-mobile-broadband-signal-but-blind-spots-remain/
 80. Mojtaba Vaismoradi, Hannele Turunen, and Terese Bondas. 2013. Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & Health Sciences 15, 3 (2013), 398–405. https://doi.org/10.1111/nhs.12048
 81. Victor Van der Meer, Moira J Bakker, Wilbert B van den Hout, Klaus F Rabe, Peter J Sterk, Job Kievit, Willem JJ Assendelft, Jacob K Sont, and the SMASHING (Self-Management in Asthma Supported by Hospitals, ICT, Nurses and General Practitioners) Study Group. 2009. Internet-based self-management plus education compared with usual care in asthma: a randomized trial. Annals of Internal Medicine 151, 2 (2009), 110–120. https://doi.org/10.7326/0003-4819-151-2-200907210-00008
 82. Thijs Vandenberk, Dorien Lanssens, Valerie Storms, Inge M Thijs, Lotte Bamelis, Lars Grieten, Wilfried Gyselaers, Eileen Tang, Patrick Luyten, et al. 2019. Relationship between adherence to remote monitoring and patient characteristics: observational study in women with pregnancy-induced hypertension. JMIR mHealth and uHealth 7, 8 (2019), e12574. https://doi.org/10.2196/12574
 83. Ashok Vegesna, Melody Tran, Michele Angelaccio, and Steve Arcona. 2017. Remote patient monitoring via non-invasive digital technologies: a systematic review. Telemedicine and e-Health 23, 1 (2017), 3–17. https://doi.org/10.1089/tmj.2016.0051
 84. Holly Walton, Cecilia Vindrola-Padros, Nadia E Crellin, Manbinder S Sidhu, Lauren Herlitz, Ian Litchfield, Jo Ellins, Pei Li Ng, Efthalia Massou, Sonila M Tomini, et al. 2022. Patients’ experiences of, and engagement with, remote home monitoring services for COVID-19 patients: A rapid mixed-methods study. Health Expectations 25, 5 (2022), 2386–2404. https://doi.org/10.1111/hex.13548
 85. Jing Wang, Susan M Sereika, Eileen R Chasens, Linda J Ewing, Judith T Matthews, and Lora E Burke. 2012. Effect of adherence to self-monitoring of diet and physical activity on weight loss in a technology-supported behavioral intervention. Patient Preference and Adherence 6 (2012), 221. https://doi.org/10.2147/PPA.S28889
 86. Julie B. Wang, Lisa A. Cadmus-Bertram, Loki Natarajan, Martha M. White, Hala Madanat, Jeanne F. Nichols, Guadalupe X. Ayala, and John P. Pierce. 2015. Wearable Sensor/Device (Fitbit One) and SMS Text-Messaging Prompts to Increase Physical Activity in Overweight and Obese Adults: A Randomized Controlled Trial. Telemedicine and e-Health 21, 10 (2015), 782–792. https://doi.org/10.1089/tmj.2014.0176
 87. Yaqing Wang, Quanming Yao, James T Kwok, and Lionel M Ni. 2020. Generalizing from a few examples: A survey on few-shot learning. ACM Computing Surveys 53, 3 (2020), 1–34. https://doi.org/10.48550/arXiv.1904.05046
 88. Carolien A Wijsman, Rudi GJ Westendorp, Evert ALM Verhagen, Michael Catt, P Eline Slagboom, Anton JM de Craen, Karen Broekhuizen, Willem van Mechelen, Diana van Heemst, Frans van der Ouderaa, et al. 2013. Effects of a web-based intervention on physical activity and metabolism in older adults: randomized controlled trial. Journal of Medical Internet Research 15, 11 (2013), e2843. https://doi.org/10.2196/jmir.2843
 89. Lauren Wilcox, Rupa Patel, Anthony Back, Mary Czerwinski, Paul Gorman, Eric Horvitz, and Wanda Pratt. 2013. Patient-Clinician Communication: The Roadmap for HCI. In CHI ’13 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’13). ACM, New York, NY, USA, 3291–3294. https://doi.org/10.1145/2468356.2479669
 90. Lauren Wilcox, Rupa Patel, Yunan Chen, and Aviv Shachak. 2013. Human factors in computing systems: focus on patient-centered health communication at the ACM SIGCHI conference. Patient Education and Counseling 93, 3 (2013), 532–534.
 91. Sarah M Wood, Julia Pickel, Alexis W Phillips, Kari Baber, John Chuo, Pegah Maleki, Haley L Faust, Danielle Petsis, Danielle E Apple, Nadia Dowshen, et al. 2021. Acceptability, feasibility, and quality of telehealth for adolescent health care delivery during the COVID-19 pandemic: Cross-sectional study of patient and family experiences. JMIR Pediatrics and Parenting 4, 4 (2021), e32708. https://doi.org/10.2196/32708
 92. Jedrek Wosik, Marat Fudim, Blake Cameron, Ziad F Gellad, Alex Cho, Donna Phinney, Simon Curtis, Matthew Roman, Eric G Poon, Jeffrey Ferranti, et al. 2020. Telehealth transformation: COVID-19 and the rise of virtual care. Journal of the American Medical Informatics Association 27, 6 (2020), 957–962.
 93. Robert Wu, Daniyal Liaqat, Eyal de Lara, Tatiana Son, Frank Rudzicz, Hisham Alshaer, Pegah Abed-Esfahani, Andrea S Gershon, et al. 2018. Feasibility of using a smartwatch to intensively monitor patients with chronic obstructive pulmonary disease: prospective cohort study. JMIR mHealth and uHealth 6, 6 (2018), e10046. https://doi.org/10.2196/10046
