
Feasibility of three times weekly symptom screening in pediatric cancer patients

Abstract

Objective

The primary objective was to determine the feasibility of three times weekly symptom reporting by pediatric cancer patients for eight weeks.

Methods

We included English-speaking patients 8–18 years of age with cancer. Patients were sent reminders by text or email to complete the Symptom Screening in Pediatrics Tool (SSPedi) three times weekly for eight weeks. When patients reported at least one severely bothersome symptom, the symptom report was emailed to the primary healthcare team. Patient-reported outcomes were obtained at baseline, week 4 ± 1 and week 8 ± 1. Symptom documentation, intervention provision for symptoms and unplanned healthcare encounters were determined by chart review at weeks 4 and 8. The primary endpoint was feasibility, defined as at least 75% of patients achieving adherence with at least 60% of SSPedi evaluations. We planned to enroll successive cohorts until this threshold was met.

Results

Two cohorts consisting of 30 patients (cohort 1 (n = 20) and cohort 2 (n = 10)) were required to meet the feasibility threshold. In cohort 1, 11/20 (55%) met the SSPedi completion threshold. Interventions applied after cohort 1 included engaging parents to facilitate pediatric patient self-report, offering mechanisms to remember username and password and highlighting potential benefits of symptom feedback to clinicians. In cohort 2, 9/10 (90%) met the SSPedi completion threshold and thus feasibility was met. Patient-reported outcomes and chart review outcomes were obtained for all participants in cohort 2.

Conclusions

Three times weekly symptom reporting by pediatric patients with cancer for eight weeks was feasible. Mechanisms to enhance three times weekly symptom reporting were identified and implemented. Future studies of longitudinal symptom screening can now be planned.


Background

Substantial gains in survival among pediatric patients with cancer have led to increasing attention focused on improving quality of life and controlling symptoms [1]. Pediatric oncology patients experience a high prevalence of severely bothersome symptoms while receiving cancer treatments [2]. We know from studies in adult cancer patients that routine collection of patient-reported outcomes improves patient-clinician communication [3], reduces distress [4] and improves quality of life [5, 6]. Consequently, in adult oncology practice, screening and assessment of symptoms are important priorities [7,8,9,10]. In contrast to these advances in adult cancer care, efforts in children are limited [11, 12].

In order to address this gap, we created the Symptom Screening in Pediatrics Tool (SSPedi), which is a self-report symptom screening and assessment tool for pediatric patients 8–18 years of age receiving cancer treatments. Building on SSPedi, we then developed Supportive care Prioritization, Assessment and Recommendations for Kids (SPARK), which is a web-based platform that consists of a symptom screening component centered on SSPedi and a supportive care clinical practice guideline component (Fig. 1) [13]. SPARK provides reminders for pediatric patients to complete symptom screening by text or email. When the patient reports at least one severely bothersome symptom, SPARK sends an email to the primary healthcare team with the patient’s symptom report.

Fig. 1 SPARK Landing Page and Patient Portal

Creation of the symptom screening tool and web application is a necessary step but is not sufficient in itself to enable routine utilization. Identifying approaches to facilitate symptom screening in clinical practice is a required step toward improving symptom control. We previously established the feasibility of daily completion of symptom screening for five days among pediatric cancer patients who were either admitted to hospital or seen in clinic for five consecutive days [14, 15]. We next planned to address longitudinal completion of symptom screening among pediatric cancer patients over a longer period of time, including when patients were at home. Consequently, the primary objective was to determine the feasibility of three times weekly symptom reporting by pediatric patients using the SPARK platform for eight weeks. The feasibility threshold was defined as at least 75% of patients achieving adherence with at least 60% of SSPedi evaluations. Secondary objectives were to describe patient-reported outcomes, symptom documentation, intervention provision for symptoms and unplanned healthcare encounters.

Methods

This was an open label, single center feasibility study enrolling pediatric cancer patients at The Hospital for Sick Children in Toronto, Canada. This study was approved by the Research Ethics Board at The Hospital for Sick Children and all participants provided informed consent or assent (as appropriate). This study was registered with clinicaltrials.gov on 19/02/2019 (NCT04275102).

Subjects

We included children and adolescents with cancer who were 8–18 years of age at enrollment, who had received or were planned to receive any cancer treatment, and who were English-speaking. Exclusion criteria were cognitive disability or visual impairment (even with corrective lenses) that precluded use of SPARK.

Procedures

Potential participants were identified by research staff and recruited from the inpatient wards and outpatient clinics. Patients required a device to access SPARK to complete SSPedi; the device could be a smart phone, tablet or computer. If the patient did not have access to a device, tablets were available for loan. For consenting patients, demographic information was obtained from the patient or the patient’s health records. Information included sex, age at enrollment, race, diagnosis, metastatic disease, treatments received (chemotherapy, surgery, radiotherapy or hematopoietic stem cell transplantation), inpatient status at enrollment, time from diagnosis and patient’s native/spoken language(s).

Consenting participants were added to the SPARK platform by research team members. Information recorded in SPARK included whether the patient preferred to receive reminders by email, text or both, preferred days and times for the three times weekly reminders, and the names and email addresses of the primary healthcare team who would receive SPARK reports. SPARK reports were sent if the patient reported at least one severely bothersome symptom (SSPedi score of 3 or 4 on the 5-point Likert scale ranging from 0 to 4). The SPARK reports included the patient’s SSPedi symptoms depicted graphically and links to pediatric cancer supportive care guidelines. As one approach to protecting patient privacy, healthcare professionals receiving SPARK reports had to have an email address matching the institutional email domain. In addition, SPARK underwent a security and privacy evaluation, and no one outside the enrolling institution had access to personal health information.
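To make the report-triggering rule concrete, the following is a minimal sketch in Python; the function and item names are illustrative assumptions rather than the SPARK implementation:

```python
from typing import Dict

SEVERE_SCORE = 3  # "a lot" (3) or "extremely" (4) bothersome on the 0-4 Likert scale


def should_send_spark_report(item_scores: Dict[str, int]) -> bool:
    """A SPARK report is emailed to the primary healthcare team only when at least
    one symptom is rated as severely bothersome (score of 3 or 4)."""
    return any(score >= SEVERE_SCORE for score in item_scores.values())


# Example: a single item rated 4 triggers a report; all-mild scores do not.
assert should_send_spark_report({"feeling tired": 4, "headache": 1})
assert not should_send_spark_report({"feeling tired": 2, "headache": 1})
```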

At enrollment, a clinical research associate taught the patient to expect reminders to complete SSPedi via their preferred mechanism (email or text) and how to log in to SPARK to complete SSPedi upon receiving these reminders. To log in to SPARK, the patient had to choose a username and password. Patients were given a reminder information sheet that included the days and times of their reminders as well as their username and password. In contrast to the teaching provided to patients, healthcare team recipients of SPARK reports did not receive formal training in interpreting the report, as our previous research showed these reports were easy to understand [13].

A clinical research associate monitored adherence with SSPedi assessments. If a participant missed two SSPedi assessments in a row, they were contacted in person or by email to ensure they were receiving their reminders and asked if they wanted to change their reminder schedule or reset their SPARK password. Active intervention lasted for eight weeks starting from the date of enrollment.

Patient-reported outcomes (SSPedi, the PROMIS fatigue scale, and the PedsQL 3.0 Acute Cancer Module) were obtained by a clinical research associate at baseline, week 4 ± 1 and week 8 ± 1. They were collected either in person during a clinic visit or hospital admission, or remotely by telephone or web conferencing platform. Symptom documentation, intervention provision for symptoms and unplanned healthcare encounters were determined by chart review at weeks 4 and 8. Intervention provision included pharmacological interventions, non-pharmacological interventions (such as physical activity) and consultation services.

Outcomes

The primary endpoint was feasibility, defined as at least 75% of patients achieving adherence with at least 60% of SSPedi evaluations (specifically, at least 15 of 24 SSPedi assessments).
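As a worked illustration of this definition (not the study software), a short Python sketch that evaluates per-patient adherence and the feasibility criterion from completion counts:

```python
from typing import List

SCHEDULED_ASSESSMENTS = 24   # 3 assessments per week x 8 weeks
ADHERENCE_FRACTION = 0.60    # adherence: >= 60% of assessments, i.e. >= 15 of 24
FEASIBILITY_FRACTION = 0.75  # feasibility: >= 75% of patients adherent


def is_adherent(completed: int) -> bool:
    return completed >= ADHERENCE_FRACTION * SCHEDULED_ASSESSMENTS


def feasibility_met(completed_per_patient: List[int]) -> bool:
    adherent = sum(is_adherent(c) for c in completed_per_patient)
    return adherent >= FEASIBILITY_FRACTION * len(completed_per_patient)


# Cohort 1 illustration: 11 of 20 adherent patients (55%) falls short of the 75% threshold.
cohort1 = [24] * 11 + [10] * 9
assert not feasibility_met(cohort1)
```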

Secondary endpoints were potential efficacy endpoints for future randomized trials. These included SSPedi scores, fatigue, quality of life, symptom documentation and intervention provision, and unplanned healthcare encounters (emergency department visits, unplanned clinic visits or unplanned hospital admissions).

The total SSPedi score is the sum of the Likert scores of the 15 SSPedi items, resulting in a total score that ranges from 0 (no bothersome symptoms) to 60 (worst bothersome symptoms). The recall period is yesterday or today. The total SSPedi score is reliable, valid and responsive to change in pediatric patients 8–18 years of age with cancer or in hematopoietic stem cell transplant recipients [2]. We also reported the number of patients with severely bothersome symptoms, defined as a report that a symptom was “a lot” or “extremely” bothersome (score of 3 or 4 on the 5-point Likert scale ranging from 0 to 4).
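The scoring and the summary statistics reported in the Results can be sketched as follows; this is an illustration under the stated 15-item, 0–4 scoring rules, not the study's analysis code:

```python
import statistics
from typing import Dict, List


def total_sspedi_score(item_scores: Dict[str, int]) -> int:
    """Sum the 15 SSPedi item scores (each on a 0-4 Likert scale) into a 0-60 total."""
    if len(item_scores) != 15:
        raise ValueError("SSPedi has 15 items")
    if any(not 0 <= score <= 4 for score in item_scores.values()):
        raise ValueError("each item is scored from 0 to 4")
    return sum(item_scores.values())


def summarize_totals(totals: List[int]) -> str:
    """Median and interquartile range, the summary used for total scores in the Results."""
    q1, median, q3 = statistics.quantiles(totals, n=4)
    return f"median {median:g} (IQR {q1:g}-{q3:g})"
```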

Fatigue was measured using the PROMIS fatigue scale. The PROMIS fatigue item bank measures the experience of fatigue and the impact of fatigue on activities. The recall period is the last 7 days. A standardized score is provided, where 50 ± 10 represents the mean and standard deviation of a United States general population [16]. A higher PROMIS score represents more of the concept being measured and consequently reflects worse fatigue. The scale is reliable and valid in pediatric patients 5–18 years of age with cancer [17]. Quality of life was measured using the PedsQL 3.0 Acute Cancer Module [18]. The 7-day recall version was used. This multidimensional instrument is reliable and valid in pediatric patients with cancer [18]. It assesses pain and hurt, nausea, procedural anxiety, treatment anxiety, worry, cognitive problems, perceived physical appearance and communication. The total score is the sum of the item scores divided by the number of items answered. Scores are transformed to a 0 to 100 scale where higher scores indicate better health.
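A short sketch of the PedsQL scoring just described, assuming the standard PedsQL convention that 0–4 item responses are reverse-scored linearly onto a 0–100 scale (0→100, 1→75, 2→50, 3→25, 4→0) before averaging over answered items; the PROMIS raw-score-to-T-score conversion relies on published lookup tables and is not reproduced here:

```python
from typing import Optional, Sequence


def pedsql_total(responses: Sequence[Optional[int]]) -> Optional[float]:
    """Average PedsQL items after transforming 0-4 responses to 0-100 (higher = better).

    None marks an unanswered item; the total is computed over answered items only.
    """
    transformed = [100 - 25 * r for r in responses if r is not None]
    if not transformed:
        return None  # no items answered
    return sum(transformed) / len(transformed)


# Example: a response of 0 ("never a problem") maps to 100; a missing item is skipped.
print(pedsql_total([0, 1, 2, None, 4]))  # (100 + 75 + 50 + 0) / 4 = 56.25
```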

Symptom documentation and intervention provision were abstracted at weeks 4 and 8. The health records were examined over a three-day window spanning the day before to the day after the assessment day, where the assessment day was the date on which the PROMIS fatigue scale was obtained. All documentation, including notes, orders (such as medications) and flowsheets, was included in the review process. We abstracted whether each SSPedi symptom was documented within each of the two abstraction windows (weeks 4 and 8). We also abstracted whether an intervention was provided for each SSPedi symptom within each of the two abstraction windows. Clinical research associates were trained using a standard procedure to identify documentation of symptoms (including synonyms) and interventions, as previously described [19]. Two trained clinical research associates independently abstracted symptom documentation and intervention provision. Discrepancies were resolved by consensus; if consensus could not be achieved, a third trained clinical research associate adjudicated.

Finally, we identified the number of unplanned healthcare encounters defined as emergency department visits, unplanned clinic visits or unplanned hospital admissions between enrollment (excluding enrollment day) and day 56. Planned clinic visits and admissions were defined as those predetermined at the time of treatment plan initiation. All other healthcare encounters were considered unplanned. We reviewed the health records to determine whether any of the 15 SSPedi symptoms were documented during emergency department visits, unplanned clinic visits or at presentation for an unplanned admission.

Sample size and statistics

We planned to initially enroll 20 participants and if feasibility metrics were not met at that time, to enroll successive cohorts of 10 participants until feasibility metrics were met or a maximum of 60 participants had been enrolled. After each cohort, the study team met to discuss the results and decide whether modifications to the approach were required and whether feasibility metrics were met. All statistics were descriptive.
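As an illustration of this adaptive enrollment plan (an initial cohort of 20, then cohorts of 10 up to a maximum of 60 participants, stopping once the feasibility criterion is met), a sketch in Python; `enroll_cohort` and `feasibility_met` are hypothetical stand-ins for the study procedures, and we assume the criterion is evaluated on the most recent cohort, as in the reported results:

```python
from typing import Callable, List


def run_feasibility_study(
    enroll_cohort: Callable[[int], List[int]],     # returns completed-assessment counts per patient
    feasibility_met: Callable[[List[int]], bool],  # the primary-endpoint rule
    max_participants: int = 60,
) -> bool:
    """Enroll an initial cohort of 20, then cohorts of 10, stopping when feasibility is met."""
    enrolled: List[int] = []
    cohort_size = 20
    while len(enrolled) < max_participants:
        cohort = enroll_cohort(cohort_size)
        enrolled.extend(cohort)
        # After each cohort, the study team reviews results and adjusts the approach if needed.
        if feasibility_met(cohort):
            return True
        cohort_size = 10
    return False
```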

Results

Fig. 2 shows the flow diagram of patient identification, and reasons for exclusion and declining participation. Two cohorts consisting of the initial 20 patients (cohort 1) and one subsequent cohort of 10 patients (cohort 2) were required to meet the feasibility metrics. Consequently, we enrolled 30 patients in total between March 5 and November 25, 2021. Of the 30 patients, one came off study prior to the week 4 assessment and withdrew permission for chart review for the weeks 4 and 8 endpoints. Week 4 endpoints were obtained for all remaining 29 patients and week 8 patient-reported outcomes were obtained for 28 patients (one missed assessment). Table 1 shows patient characteristics by patient cohort. Overall, 10 (33.3%) were 8–12 years of age and 20 (66.7%) were 13–18 years of age. The most common diagnosis type was leukemia.

Fig. 2 Flowchart of Participant Identification and Selection

Table 1 Participant Demographic Characteristics

Additional file 1: Appendix 1 provides more detailed information on the feasibility metrics. While the median number of SSPedi assessments completed was similar in cohort 1 (21 completed) and cohort 2 (22 completed), only 11/20 (55%) patients in cohort 1 met the 60% completion threshold. Thus, the pre-determined 75% threshold was not met.

Table 2 summarizes the challenges identified and the interventions instituted to address them. The challenges were: (1) patients being unwilling to complete SSPedi on their own; (2) forgetting the SPARK username and password; (3) being unaware of the potential benefits of symptom feedback to the primary healthcare team; and (4) being unclear on how to use SPARK on their own device. Interventions to address these challenges included: (1) engaging parents to enable pediatric patient self-reporting of symptoms; (2) suggesting strategies to help patients remember their username and password; (3) highlighting that the primary healthcare team would receive a SPARK report if the patient reported at least one severely bothersome symptom; and (4) training patients and parents to use SPARK on their own device. After instituting these approaches, 9/10 patients in cohort 2 met the 60% threshold and thus, feasibility was established.

Table 2 Challenges and Interventions to Improve Adherence with Symptom Screening

Table 3 describes SSPedi total scores and the number reporting severely bothersome symptoms at baseline, week 4 and week 8. Median SSPedi scores (interquartile range (IQR)) at baseline, week 4 and week 8 were 10 (6–12), 5 (3–12) and 6 (2–11) (Fig. 3). The number of patients reporting at least one severely bothersome symptom at baseline, week 4 and week 8 were 14 (46.7%), 7 (24.1%) and 4 (14.3%). Table 3 also illustrates median PROMIS fatigue scale scores and PedsQL 3.0 Acute Cancer Module scores by time point. The most common severely bothersome symptoms reported at baseline were “feeling tired” (5, 16.7%) and “feeling more or less hungry than you usually do” (5, 16.7%).

Table 3 SSPedi and Patient-reported Outcomes by Assessment Time Point (N = 30)
Fig. 3 Total SSPedi Score by Assessment Time Point

Table 4 summarizes symptom documentation and intervention provision by time point. The most commonly documented symptoms at week 8 were “hurt or pain (other than headache)” (6, 20.7%), “feeling tired” (5, 17.2%) and “throwing up or feeling like you may throw up” (5, 17.2%). There was no documentation at either week 4 or week 8 for “changes in how your body or face look”, “mouth sores” and “changes in taste”. The most commonly treated symptoms were “throwing up or feeling like you may throw up”, “hurt or pain (other than headache)” and “headache”. The following symptoms were never treated at either week 4 or week 8: “changes in how your body or face look”, “feeling tired”, “feeling more or less hungry than you usually do”, “changes in taste” and “diarrhea”.

Table 4 Symptom Documentation and Intervention Provision by Time Point Independent of SSPedi Score

Additional file 1: Appendix 2 describes symptom documentation and intervention provision stratified by whether the patient reported being “not at all bothered” by the symptom (SSPedi score of 0), “a little” or “medium” bothered (SSPedi score of 1 or 2), or “a lot” or “extremely” bothered (SSPedi score of 3 or 4). In general, symptom documentation and intervention provision were not more common in those who reported more bothersome symptoms. Of note, among the 4 patients who reported they were severely bothered by feeling tired, symptom documentation was noted for one patient, and none received an intervention. Among the 4 patients who reported they were severely bothered by “throwing up or feeling like you may throw up”, symptom documentation was present for none and an intervention was provided for two.

Additional file 1: Appendix 3 shows the number of unplanned encounters per patient. Seventeen patients had at least one unplanned healthcare encounter during the eight-week study period, with these encounters being evenly divided across emergency department visits, unplanned clinic visits and unplanned hospital admissions.

Unsolicited qualitative feedback from patients and parents noted that SSPedi completion was easy and simple. Parents in cohort 2 noted that logging into SPARK on behalf of their child helped facilitate symptom reporting. One patient commented that completing SSPedi allowed her to reflect more on how she was feeling in terms of specific symptoms compared to when doctors asked her how she was feeling overall. One parent noted that participation in the study gave them a better understanding of how their child was feeling.

Discussion

We found that after implementing interventions to enhance adherence with symptom reporting, three times weekly administration of SSPedi for eight weeks was feasible for pediatric cancer patients who were 8–18 years of age. It was also feasible to collect patient-reported outcomes at weeks 4 and 8. The main approaches identified to improve symptom screening were enabling pediatric patients to self-report symptoms by engaging with parents, providing approaches to remember the SPARK username and password, highlighting the potential benefits of clinicians receiving symptom reports and teaching patients to log-in to SPARK using their own device.

Our study is important because few pediatric cancer trials have evaluated longitudinal symptom reporting [20]. An important example is the PediQUEST study, which included children with advanced cancer. In that study, patient-reported outcomes were completed weekly for those in clinic or on the ward, and by phone monthly for those not attending clinic [21]. Parents provided proxy-response if the pediatric patient refused to self-report. Another important study administered the PROMIS instruments longitudinally at three time points over one course of chemotherapy for pediatric cancer patients [22]. Assessments were obtained either in person or by telephone. A key distinction is that our approach uses an electronic platform to provide reminders to complete symptom reporting and thus more closely mirrors clinical implementation. The shift from obtaining symptom reports through clinical research associates to using electronic platforms and more automated approaches will be a key consideration as we move from research to practice.

We chose a three times weekly symptom screening frequency based upon the preferences of pediatric oncology clinicians participating in a cluster randomized trial of symptom screening [23]. The ideal frequency of routine symptom screening is not known. It is interesting that in Canada, among adult cancer programs, symptom screening typically either occurs infrequently or only with clinic visits [24]. Consequently, the concept of asking pediatric cancer patients to report symptoms three times weekly regardless of setting (home, clinic or inpatient) using an automated platform is novel. There are advantages to measuring symptoms at home as this is likely a better assessment of ongoing symptoms that require intervention.

Despite providing symptom reports to clinicians, the rates of symptom documentation and intervention provision were relatively low in this study. However, without a control group, the impact of symptom feedback to clinicians is not known. In our study, SPARK reports sent to clinicians included links to clinical practice guidelines to address the reported symptoms. It is possible that access to guidelines alone will not be sufficient to achieve practice change. We have hypothesized that adaptation of care pathways based on clinical practice guidelines may be an effective way to improve clinical practice guideline-concordant care [23, 25]. While describing symptoms was not a primary objective of this study, we also found that most patients had at least one severely bothersome symptom. This finding is concordant with other research, which found that symptoms including pain, fatigue, nausea and vomiting are common in pediatric oncology patients [22, 26,27,28,29].

A strength of our study was the use of standardized processes and procedures to measure chart review endpoints and the use of two reviewers to abstract symptom documentation and intervention provision. However, our study is limited by its conduct at a single center and its single group design. Feasibility of a single group trial does not guarantee feasibility of a randomized trial since patients and families may refuse randomization. One approach to overcome this issue could be a cluster randomized trial, in which all patients at a given site would be in either the intervention or the control group.

In conclusion, three times weekly symptom reporting by pediatric patients with cancer for eight weeks was feasible. Mechanisms to enhance three times weekly symptom reporting were identified and implemented. Future studies of longitudinal symptom screening can now be planned.

Availability of data and materials

The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.

References

1. Canadian Cancer Society’s Steering Committee on Cancer Statistics. Canadian Cancer Statistics. Toronto: Canadian Cancer Society; 2011.
2. Dupuis LL, Johnston DL, Baggott C, Hyslop S, Tomlinson D, Gibson P, Orsey A, Dix D, Price V, Vanan M, Portwine C, Kuczynski S, Spiegler B, Tomlinson GA, Sung L. Validation of the Symptom Screening in Pediatrics Tool in children receiving cancer treatments. J Natl Cancer Inst. 2018;110(6):661–8. https://doi.org/10.1093/jnci/djx250.
3. Yang LY, Manhas DS, Howard AF, Olson RA. Patient-reported outcome use in oncology: a systematic review of the impact on patient-clinician communication. Support Care Cancer. 2018;26(1):41–60.
4. Berry DL, Hong F, Halpenny B, Partridge AH, Fann JR, Wolpin S, et al. Electronic self-report assessment for cancer and self-care support: results of a multicenter randomized trial. J Clin Oncol. 2014;32(3):199–205.
5. Mooney K, Berry DL, Whisenant M, Sjoberg D. Improving cancer care through the patient experience: how to use patient-reported outcomes in clinical practice. Am Soc Clin Oncol Educ Book. 2017;37:695–704.
6. Basch E, Deal AM, Kris MG, Scher HI, Hudis CA, Sabbatini P, et al. Symptom monitoring with patient-reported outcomes during routine cancer treatment: a randomized controlled trial. J Clin Oncol. 2016;34(6):557–65.
7. Carelle N, Piotto E, Bellanger A, Germanaud J, Thuillier A, Khayat D. Changing patient perceptions of the side effects of cancer chemotherapy. Cancer. 2002;95(1):155–63.
8. Coates A, Abraham S, Kaye SB, Sowerbutts T, Frewin C, Fox RM, et al. On the receiving end--patient perception of the side-effects of cancer chemotherapy. Eur J Cancer Clin Oncol. 1983;19(2):203–8.
9. De Boer-Dennert M, De Wit R, Schmitz PI, Djontono J, Beurden V, Stoter G, et al. Patient perceptions of the side-effects of chemotherapy: the influence of 5HT3 antagonists. Br J Cancer. 1997;76(8):1055–61.
10. Griffin AM, Butow PN, Coates AS, Childs AM, Ellis PM, Dunn SM, et al. On the receiving end. V: patient perceptions of the side effects of cancer chemotherapy in 1993. Ann Oncol. 1996;7(2):189–95.
11. Wolfe J, Orellana L, Cook EF, Ullrich C, Kang T, Geyer JR, et al. Improving the care of children with advanced cancer by using an electronic patient-reported feedback intervention: results from the PediQUEST randomized controlled trial. J Clin Oncol. 2014;32(11):1119–26.
12. Dueck AC, Mendoza TR, Mitchell SA, Reeve BB, Castro KM, Rogak LJ, et al. Validity and reliability of the US National Cancer Institute's Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE). JAMA Oncol. 2015;1(8):1051–9.
13. Cook S, Vettese E, Soman D, Hyslop S, Kuczynski S, Spiegler B, et al. Initial development of Supportive care Assessment, Prioritization and Recommendations for Kids (SPARK), a symptom screening and management application. BMC Med Inform Decis Mak. 2019;19(1):9.
14. Vettese E, Cook S, Soman D, Kuczynski S, Spiegler B, Davis H, et al. Longitudinal evaluation of Supportive care Prioritization, Assessment and Recommendations for Kids (SPARK), a symptom screening and management application. BMC Cancer. 2019;19(1):458.
15. Cook S, Vettese E, Tomlinson GA, Soman D, Schechter T, Kuczynski S, et al. Feasibility of a randomized controlled trial of symptom screening and feedback to healthcare providers compared with standard of care using the SPARK platform. Support Care Cancer. 2020;28(6):2729–34.
16. Northwestern University. PROMIS [webpage]. Northwestern University; 2017 [updated 2021; cited 2021 Feb 4]. Available from: https://www.healthmeasures.net/score-and-interpret/interpret-scores/promis.
17. Hinds PS, Nuss SL, Ruccione KS, Withycombe JS, Jacobs S, Deluca H, et al. PROMIS pediatric measures in pediatric oncology: valid and clinically feasible indicators of patient-reported outcomes. Pediatr Blood Cancer. 2013;60(3):402–8.
18. Varni JW, Burwinkle TM, Katz ER, Meeske K, Dickinson P. The PedsQL in pediatric cancer: reliability and validity of the Pediatric Quality of Life Inventory Generic Core Scales, Multidimensional Fatigue Scale, and Cancer Module. Cancer. 2002;94(7):2090–106.
19. Hyslop S, Davis H, Duong N, Loves R, Schechter T, Tomlinson D, et al. Symptom documentation and intervention provision for symptom control in children receiving cancer treatments. Eur J Cancer. 2019;109:120–8.
20. O'Sullivan C, Dupuis LL, Sung L. A review of symptom screening tools in pediatric cancer patients. Curr Opin Oncol. 2015;27(4):285–90.
21. Wolfe J, Orellana L, Ullrich C, Cook EF, Kang TI, Rosenberg A, et al. Symptoms and distress in children with advanced cancer: prospective patient-reported outcomes from the PediQUEST study. J Clin Oncol. 2015;33(17):1928–35.
22. Hinds PS, Wang J, Cheng YI, Stern E, Waldron M, Gross H, et al. PROMIS pediatric measures validated in a longitudinal study design in pediatric oncology. Pediatr Blood Cancer. 2019;66(5):e27606.
23. Dupuis LL, Grimes A, Vettese E, Klesges LM, Sung L. Barriers to symptom management care pathway implementation in pediatric cancer. BMC Health Serv Res. 2021;21(1):1068.
24. Tran K, Zomer S, Chadder J, Earle C, Fung S, Liu J, et al. Measuring patient-reported outcomes to improve cancer care in Canada: an analysis of provincial survey data. Curr Oncol. 2018;25(2):176–9.
25. Dupuis LL, Grimes A, Vettese E, Klesges LM, Sung L. Readiness to implement symptom management care pathways in pediatric cancer. Res Sq [preprint]. 2020. https://doi.org/10.21203/rs.3.rs-136225/v1.
26. Montgomery KE, Raybin JL, Ward J, Balian C, Gilger E, Murray P, et al. Using patient-reported outcomes to measure symptoms in children with advanced cancer. Cancer Nurs. 2020;43(4):281–9.
27. Linder LA, Newman A, Bernier Carney KM, Wawrzynski S, Stegenga K, Chiu YS, et al. Symptoms and daily experiences reported by children with cancer using a game-based app. J Pediatr Nurs. 2022;65:33–43.
28. Raybin JL, Hendricks-Ferguson V, Cook P, Jankowski C. Associations between demographics and quality of life in children in the first year of cancer treatment. Pediatr Blood Cancer. 2021;68(12):e29388.
29. Reeve BB, McFatrich M, Mack JW, Maurer SH, Jacobs SS, Freyer DR, et al. Validity and reliability of the pediatric Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events. J Natl Cancer Inst. 2020;112(11):1143–52.


Acknowledgements

We would like to thank the children who participated in our study.

Funding

LS is supported by the Canada Research Chair in Pediatric Oncology Supportive Care.

Author information

Authors and Affiliations

Authors

Contributions

MC and LS drafted the manuscript. MC, LC, GD, CTT, and SC were involved in data collection. All authors contributed to the study design and interpretation, revised and approved the manuscript, and agree to be accountable to all aspects of the work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Lillian Sung.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the SickKids research ethics board. All research was performed in accordance with the Declaration of Helsinki. Written informed consent was obtained from study participants or their parents/legal guardians.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Appendix 1: Feasibility Metrics by Cohort. Appendix 2: Symptom Documentation and Intervention Overall and by SSPedi Scores. Appendix 3: Number of Unplanned Healthcare Encounters (N=29). Appendix 4: Symptoms Associated with Unplanned Healthcare Encounters Documentation and Intervention Overall and by SSPedi Scores.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Calligan, M., Chakkalackal, L., Dadzie, G. et al. Feasibility of three times weekly symptom screening in pediatric cancer patients. BMC Cancer 23, 4 (2023). https://doi.org/10.1186/s12885-022-10400-1
