
Original research
How digital health translational research is prioritised: a qualitative stakeholder-driven approach to decision support evaluation
  1. Adeola Bamgboje-Ayodele (1)
  2. Steven M McPhail (2)
  3. David Brain (2)
  4. Richard Taggart (3)
  5. Mitchell Burger (3)
  6. Lenert Bruce (4)
  7. Caroline Holtby (4)
  8. Malcolm Pradhan (5)
  9. Mark Simpson (6)
  10. Tim J Shaw (1)
  11. Melissa T Baysari (1)

Affiliations:
  1. Biomedical Informatics and Digital Health, School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, Camperdown, New South Wales, Australia
  2. Australian Centre for Health Service Innovation and Centre for Healthcare Transformation, Queensland University of Technology, Brisbane, Queensland, Australia
  3. Sydney Local Health District, NSW Health, Camperdown, New South Wales, Australia
  4. Murrumbidgee Local Health District, NSW Health, Wagga Wagga, New South Wales, Australia
  5. Alcidion Pty Inc, Sydney, New South Wales, Australia
  6. eHealth NSW, Chatswood, New South Wales, Australia

Correspondence to Dr Adeola Bamgboje-Ayodele; adeola.ba@sydney.edu.au

Abstract

Objectives Digital health is now routinely being applied in clinical care, and with a variety of clinician-facing systems available, healthcare organisations are increasingly required to make decisions about technology implementation and evaluation. However, few studies have examined how digital health research is prioritised, particularly research focused on clinician-facing decision support systems. This study aimed to identify criteria for prioritising digital health research, examine how these differ from criteria for prioritising traditional health research and determine priority decision support use cases for a collaborative implementation research programme.

Methods Drawing on an interpretive listening model for priority setting and a stakeholder-driven approach, our prioritisation process involved stakeholder identification, eliciting decision support use case priorities from stakeholders, generating initial use case priorities and finalising preferred use cases based on consultations. In this qualitative study, online focus group sessions were held with stakeholders, audio-recorded, transcribed and analysed thematically.

Results Fifteen participants attended the online priority setting sessions. Criteria for prioritising digital health research fell into three themes: public health benefit, health system-level factors, and research process and feasibility. We identified criteria unique to digital health research: the availability of suitable governance frameworks, a candidate technology’s alignment with other technologies in use, and the possibility of deriving data-driven insights from health technology data. The final selected use cases were remote monitoring of patients with pulmonary conditions, sepsis detection and automated breast screening.

Conclusion The criteria for determining digital health research priority areas are more nuanced than those for traditional health condition-focused research and cannot be viewed solely through either a clinical or a technological lens. As digital health research relies heavily on health technology implementation, the digital health prioritisation criteria comprised enablers of successful technology implementation. Our prioritisation process could be applied to other settings and collaborative projects where research institutions partner with healthcare delivery organisations.

  • Health informatics
  • QUALITATIVE RESEARCH
  • Quality in health care



This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


STRENGTHS AND LIMITATIONS OF THIS STUDY

  • This study used an implementation science lens to explore the differences between criteria for prioritising traditional and digital health research in the context of clinician-facing decision support systems.

  • We used an interpretive listening model for priority setting, which allowed us to understand the rationale behind the choice of each criterion for research prioritisation.

  • A follow-up consultation with health service representatives several weeks after the prioritisation exercise strengthened our approach and confirmed the ongoing value and relevance of the use cases for research.

  • Our qualitative approach demonstrates scientific rigour by adhering to principles of credibility, confirmability, dependability and transferability, but the study was limited by its focus on decision support rather than digital health in a broader sense.

Introduction

Digital health encompasses a broad set of scientific concepts and technologies, including artificial intelligence (AI), analytics, mobile applications, telemedicine and wearables, which are increasingly being applied in healthcare for diagnosis, treatment and care management.1 A key use of digital health technology is decision support (DS), which includes systems designed to enhance health-related decisions and actions by providing users with pertinent, organised clinical and patient information.2 Benefits of DS systems, such as improved patient outcomes, quality of clinical documentation and cost-effectiveness, have been documented in the literature.3 4 However, research has also shown that DS implementation is often hampered by poor user acceptance, uptake and workflow integration.5

In acknowledging the challenges associated with integrating DS systems, and clinician-facing digital health technologies more broadly, into practice, a collaborative translational research project was initiated between university, health service and industry partners to investigate how DS systems can be successfully implemented and used to optimise healthcare delivery from both a clinician (end-user) and governance perspective. This was particularly topical as the COVID-19 pandemic had accelerated uptake of digital health technologies, and DS systems were being applied in ways not planned for before the pandemic. For example, a DS system developed during the pandemic used machine learning to forecast key parameters, such as emergency room attendance and regional medical supplies, to facilitate monitoring, management and prediction of medical equipment logistic needs.6 With multiple candidate DS systems available for implementation, the first phase of this project involved prioritising and selecting use cases for our research that would enable a focus on embedding DS into practice.

A systematic literature review of good practices in research priority setting highlighted that a range of factors should be considered before (eg, planning, inclusiveness, information gathering), during (eg, developing the criteria for prioritisation) and following prioritisation exercises (eg, transparency).7 Of the good practices identified, ‘the criteria for prioritisation’ was a key dimension of successful prioritisation studies. However, the criteria used to prioritise research varied widely across studies. While some studies did not use or report criteria to identify research priorities,8 9 in those that did, the criteria spanned five domains: public health benefits (burden of disease, patient-centredness, potential risk from inaction, addresses inequities), stakeholder benefits (alignment with organisation’s mission, importance to stakeholders), health economic benefits (cost, potential value, sustainability), scientific value (effectiveness, novelty, current knowledge) and feasibility (current resource flows including data and participant cohorts, ethical considerations, potential for translation).10–15

In a recent scoping review of 167 health research priority setting studies linked to funders, the criterion most widely applied to establish priorities was importance to stakeholders (72%), while translational value (29%), feasibility (18%) and other criteria (ranging between 2% and 9%) were adopted to a lesser extent across the studies.16 It is interesting to note that ‘importance to stakeholders’ may not necessarily represent a unique criterion, but rather encapsulates a range of criteria for prioritisation.

While significant research exists on how traditional health research is prioritised, there is limited research on how digital health research is prioritised and what criteria are used to do so,7 17–21 particularly following the exponential increase in the use of digital health technology during COVID-19.22 23 In a recent study investigating the alignment of key stakeholders’ priorities for patient-facing digital health technologies, the authors found a mismatch between the prioritisation criteria of the stakeholder groups, noting that external policy, regulatory demands, and internal organisational workflow and integration needs were often prioritised over patient needs and preferences.24 Another study, focused on the use of digital technology in mental health, found that the questions generated by stakeholders revolved around six broad themes: access, audience, rights, delivery, risks and outcomes.25

These studies provide us with a starting point for understanding how digital health research is prioritised but to date, no research has examined the criteria used for prioritising clinician-facing technologies, such as DS systems, for research. The aim of this study was to (a) identify the criteria for prioritising digital health research and examine how these differ from criteria for prioritising traditional health research and (b) determine priority DS system use cases for a collaborative research programme.

Methods

This study is reported in accordance with the Standards for Reporting Qualitative Research guideline (see online supplemental file 1). In this qualitative study, we used an interpretive listening model for priority setting as our primary approach to data collection. The model involves linking and exchanging views between research funders, researchers and end users to reach consensus on relevant, feasible and implementable research priorities.26 It comprises five steps spanning three phases26 and draws on learnings from a systematic literature review of good practices in research priority setting7: preprioritisation (identify stakeholders and determine requirements for the consultation), prioritisation (determine the criteria for prioritisation in a workshop) and postprioritisation (validate priority research themes via transparency with stakeholders). This model was selected because it is helpful for forecasting future research priorities, is not limited to a single topic, allows verbal clarification and is flexible.

Of the good practices for priority setting identified from the literature, we also chose to incorporate the ‘identification of criteria for prioritisation’7 into the prioritisation phase of the listening model, giving stakeholders an opportunity to explain how and why priority research themes were chosen. This novel approach differs from that used in previous priority setting exercises for traditional health condition-focused research, which have focused on identifying priorities using predefined criteria.27 Our approach adopts an interpretive lens: we seek to understand the rationale behind the choice of each criterion, rather than dictating the criteria to be used for prioritisation without rich insight into the motivation for each.

Procedure

In our preprioritisation phase, we purposively selected 11 key stakeholders from across the organisations involved in the collaborative research project, including researchers (n=5), health service representatives advocating for DS implementation (n=4), a government representative (n=1) and a DS commercial partner representative (n=1). The health service representatives were from one regional hospital and one metropolitan hospital in New South Wales, Australia. The purpose of the stakeholder consultation was predefined by the primary aim of the collaborative translational digital health project, which was focused on optimising the implementation of DS systems into practice.

During prioritisation, a 1-hour online workshop was held with 10 of the stakeholders, and an additional interview was held with the remaining stakeholder in the following week (October 2021). During the prioritisation exercise, an experienced facilitator coordinated the workshop, steering the conversation, listening and taking notes to elicit (a) key criteria for prioritisation and (b) candidate use cases for DS. In the interview held with the final stakeholder after the workshop, the items discussed at the workshop were raised and the participant’s thoughts on each of the key points were elicited. The workshop and interview were audio-recorded. In the postprioritisation phase, a summary of the use cases the stakeholders agreed on, and the criteria they used in making those decisions, was emailed to the stakeholders to validate the priority research themes. All stakeholders agreed with the content of the summary. The interview guide is provided in online supplemental appendix A.

To allow for the fast pace at which technology implementation was moving during the pandemic, a follow-up consultation was held several weeks after the prioritisation exercise (December 2021) with additional health service representatives (n=4) from the same services originally involved, to reflect on the ongoing value and relevance of the DS use cases. In total, 15 stakeholders were engaged in the process, comprising clinicians (n=4), clinician researchers (n=2), managers (n=2), a policy-maker (n=1) and researchers (n=6), with expertise spanning one or more areas across medicine, nursing, pharmacy, psychology, health economics and information technology.

Patient and public involvement

Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Data analysis

All qualitative data were transcribed verbatim. Non-identifiable transcripts were then analysed by AB-A and MB using inductive thematic analysis.28 First, transcripts were read to gain familiarity with the data, and emergent themes were identified and then coded. This was followed by inductive coding of all data to create codes and themes. An iterative process was used to revisit codes and themes as data analysis progressed, with the analysis refined and consolidated as required. Discrepancies in the assignment of codes were discussed and resolved between MB and AB-A until agreement was achieved on themes related to the highest priority areas for DS solutions.

Scientific rigour

In qualitative studies, four criteria are primarily used to improve research quality and increase trustworthiness29: credibility, confirmability, dependability and transferability. To demonstrate credibility, the online sessions were facilitated and coordinated by an experienced senior researcher with expertise in digital health and DS system implementation. We also conducted respondent validation in the postprioritisation phase,30 sharing the research results with participating stakeholders to check for accuracy. To demonstrate confirmability, we used investigator triangulation, reaching consensus through collaboration, discussion and engagement within the team while considering varying individual perspectives. In addition, we validated the data collected by using the investigators’ field notes as a form of triangulation.

To demonstrate the dependability of our research process, a detailed draft of the study protocol was developed by members of the research team in consultation with key stakeholders. To ensure data coding accuracy, we recorded, transcribed, proofread and carefully checked all audio recordings. Following an existing precedent,31 we used a purposive sampling technique (see Methods) to ensure that participants were representative of the various stakeholder groups with specific interests in the project, demonstrating transferability. To further increase transferability, we aimed to provide a rich and detailed description of the stakeholders. However, to protect anonymity, all identifiable information was removed from the data to ensure that selected quotes cannot be directly linked to stakeholders.

Results

Criteria for prioritising traditional health and digital health research

The results of this study are categorised into three main themes that describe the criteria for prioritising DS use cases12: (1) public health benefit, (2) health system-level factors and (3) research process and feasibility. Within each category, a number of criteria appeared to be unique to digital health research, and these are highlighted in table 1.

Table 1

Criteria for prioritising decision support (DS) research

Public health benefit

A key driver for DS use case selection was the value that the DS system would offer to patients, clinicians and the community at large. In particular, participants discussed five components of public health benefit: whether the DS addresses a meaningful problem for the health service; the time and effort currently being expended on the problem; health economic benefits; efficiency and productivity gains; and scalability to other conditions.

Whether a use case addresses a meaningful problem was an important criterion raised by the stakeholders as they stated the need to resolve a significant problem with a feasible solution for the health service, particularly for the patients, clinicians and other staff involved. This was stressed by one of the stakeholders who said, ‘I think that it’s most important that we get a use case that’s meaningful to the health services involved. …. And so if we can address high value, high, you know, sort of yield sort of problems in those spaces, I'm confident the research, output and impact will be excellent, as well (S002)’.

Furthermore, some stakeholders highlighted the significant time and effort both clinicians and organisations are expending on addressing key challenges, and suggested that this criterion should steer use case selection. For instance, one stakeholder mentioned that ‘The other bit of information that I think would be really good is to have an electronic antimicrobial stewardship decision support tool, because we are wasting time in calling infectious diseases doctors, we may have the information in electronic medical record, we know what the current guidelines are, what’s the most appropriate antibiotic…(S003)’, while another stated that ‘I would have picked, you know, risk, like deterioration of COVID positive patients in home monitoring… we could use this, this opportunity, this research project as an opportunity for, given we're putting so much time and effort into, COVID response (S006)’. Interestingly, during the follow-up consultation, health service participants reported that considerably less time and effort was being expended on remote monitoring of COVID-19 patients, as fewer hospital presentations were being seen for this cohort.

Another key issue raised by stakeholders was the potential economic benefit to the health service and the community that could be derived from a DS use case. Some stakeholders focused on the short-term financial impact of a DS use case, suggesting that it is important to answer questions such as, ‘how much does it cost? Or does it save us money? Or, or is it just going to be an extra thing we pay for and doesn’t do anything? (S002)’. Other stakeholders considered the medium- to long-term benefits, with comments such as ‘…from a health economic point of view, that’s a really big issue, because, if you look at … community based programs, that they’ve often failed, because they haven’t been able to get the right cohorts… that are going to make the biggest impact to readmission rates. So the diverticulitis and the NICU sort of scenarios around candidate detection, are really good examples (S007)’.

In addition, other stakeholders thought it important to choose DS use cases that leverage the potential efficiency gains DS provides to optimise clinical processes and workflow, noting that clinicians are humans who are prone to errors: ‘we rely on super humans, people that will think of everything all the time and sadly that doesn’t happen’ (S003). For instance, one stakeholder argued for automating treatment procedures: ‘I think that the benefit of proving your efficiency is only doing it once.… People make drug errors because they rely on their memory when its everywhere, you know, it’s on your phone now. So I, and I think I like the idea of antimicrobial stewardship (S003)’, while another argued for automation in breast cancer screening: ‘…automatic breast screening, you know, for breast cancer, …, I think that’s where we can really use these sorts of automations to have a big impact from an efficiency standpoint …Where you want to leverage the efficiency, the repeatability, the automation, to drive the productivity gain (S006)’.

Similarly, the ability of a DS system to generate learnings that could be applied to other conditions was also raised as an important criterion, with one stakeholder saying ‘And the COVID, anything you use for COVID, could be used for similar could be used for COPD, could be used for other respiratory conditions or other chronic conditions. So I think it would have definitely, scalability in the future (S004)’. During the follow-up consultation, stakeholders reiterated that selecting a use case where learnings from technology implementation for COVID-19 could be applied would be preferable.

Health system-level factors

Stakeholders also raised the importance of taking into account factors that could impact the implementation of the selected DS use case, such as the timeframe for implementation of the DS system and change management processes.

Important considerations for the selection of DS use cases included: the time required to implement the tool and to conduct a subsequent evaluation, given the limited time for the translational research project; the resources required to deliver the project within the timeframe, including the availability of the required expertise and of the clinical algorithms underlying the DS; clinical engagement to validate clinical algorithms; and alignment with other projects underway. Time constraints were highlighted by a stakeholder who stated, ‘I think it’s important that the three years sounds like a long time, but it isn't. We need to get data pretty quickly, if we're going to get the deadlines met… I love the use cases that [S006] has mentioned, around medical imaging, I just don't think they are going to get there in time, from a feasibility point of view, because we haven't got that expertise on the project or that software deployed in the project yet (S005)’. Another stakeholder highlighted one of the available clinical algorithms, which would make it easier to implement a COVID-19 DS use case within the timeframe: ‘So I think what [other stakeholder] was saying, I think is the, that those areas are just really good ones to attack. And we've got, you know, collaborators and customers in the UK who are willing to share their COVID algorithms, because they've got a lot more experience with COVID over there (S007)’.

Furthermore, stakeholders raised two key change management issues that could impact the outcome of DS system implementation: workflow integration and organisational culture to influence change. A stakeholder commented that, ‘… it’s all those things that [other stakeholder] was already talking about, you know, what’s the response around it? How’s it changed workflow? Like, should there be a change in workflow? Who’s actually reacting to those alerts, does it go to the to the charge nurse? Does it go to a separate team? You know, there’s a whole wide array, and it seems to be that rather than the algorithm, it’s actually the workflow around that that actually impacts whether it’s effective or not (S002)’. Another stakeholder added that, ‘we can have an accurate algorithm, but we don't have the right workflow integration and culture in the organization to influence change’ (S007).

Criteria unique to digital health

Two criteria unique to digital health were raised: the availability of suitable governance structures to facilitate system integration into production, and alignment with broader technologies.

Stakeholders noted that the governance structures in place in the health service environment, where the DS system would be implemented, should be considered as this may impact the ease of implementation and evaluation. One stakeholder stated, ‘The other one is around the governance structures within the organizations like, what is the sign off to, to allow that to go into production? And what is the phasing of that?’(S007). Another stakeholder noted that the effort of gaining governance approvals must be considered in relation to the effectiveness of the intervention. ‘…getting the governance in place for something we know where the tech, the technology works, and the clinical models are reasonably well known’ (S006).

Additionally, some stakeholders highlighted the need to select DS use cases that are aligned not only to health service needs but also to state-wide technologies and priorities. One stakeholder stated, ‘I think it’s really important that we do get that piece around understanding what’s going to be the right political move to both support each of our participating health services and the broader [state-wide] participation as well (S002)’.

Research process and feasibility

The feasibility of data collection within the health service environment was a criterion raised by most stakeholders. In particular, the ease of data extraction, the availability of a patient cohort for recruitment and learnings from other sites were considered important factors in selecting DS use cases.

Stakeholders considered the availability of data, the ease of data extraction and whether outcomes could be evident from the data when selecting the DS use cases. One stakeholder raised the importance of data availability, ‘Now, the just the other only point I will make, though, is that just to consider our feasibility around, if there’s some candidates for case, use cases, where we can easily get the data, easily implement, evaluate, whatever, that we just consider that in this discussion’ (S002), and another commented on the issue of data access, ‘The other thing which may be interesting for people is so there’s the DS intervention, which again, as [other stakeholder] said, relies on data access.’ (S007).

Similarly, some stakeholders mentioned the need to consider whether patients relevant to a specific use case will be available for recruitment to the study when needed, as an important criterion for selecting a DS use case. One stakeholder stated that, ‘… looking at the specific cohorts that we've already applied in [health service named], which will be COVID community patients, and which there will be for the next two, three years, plenty of data coming in, because we're going to have COVID in community for a while’ (S005). During the follow-up consultation, stakeholders noted that fewer COVID-19 patients would be available for research than anticipated several months before.

Stakeholders also highlighted the importance of learning from the experiences of other sites who have implemented the DS system, and suggested that the availability of preliminary data or lessons learnt should inform DS system selection. For example, a stakeholder stated that ‘there is already a lot of data that was gathered from the [other health service] experience. So we could get started on evaluating that, perfect clinician experience of using that data, there was a pre and post implementation survey … So that’s a good source of data to go and look at that again’(S005).

Criterion unique to digital health

Unique to digital health, the possibility of generating data-driven insights from a use case was another criterion for prioritisation. The feasibility of generating evaluation outcomes from existing data was of high importance, with one stakeholder stating that ‘sometimes the criteria might be around what is easy to evaluate, or what is easier to evaluate in terms of data driven evaluation.… if we retrospectively go over the data and say, okay, we've picked up sepsis at this point or some between–the-flags, you know, was it accurate or not, or whatever, could [it] have been done earlier. But what can we tell from the data?’ (S007).

Initial DS use cases identified by stakeholders

Overall, a total of 12 DS use cases that could benefit from implementation evaluation and optimisation were raised by the stakeholders in the prioritisation session, as shown in figure 1.

Figure 1

Initial use cases identified by stakeholders.

Final DS use cases for research

Using the criteria highlighted above, our prioritisation exercise resulted in the identification of three DS use cases for implementation: remote monitoring of patients with pulmonary conditions, detection of sepsis and automated breast screening. The three selected use cases generally fulfilled the majority of the 13 criteria highlighted in table 1. However, the prioritisation decision was largely driven by practical elements such as time to implementation, time and effort currently expended on the problem, alignment to broader technologies and the learnings from other sites.

Discussion

This qualitative study used a stakeholder-driven approach to identify the criteria for prioritising digital health research and examine how these differ from criteria for prioritising traditional health research, and to determine priority DS system use cases for a collaborative research programme. Our findings reveal that the criteria used to prioritise digital health research are, on the whole, consistent with criteria used for prioritising traditional health condition-focused research, and relate to public health benefit, health system-level factors, and research process and feasibility. However, we also identified some factors that are unique to digital health research, including the possibility of data-driven insights (research process and feasibility), and the availability of suitable governance frameworks to facilitate system integration and alignment with state-wide digital health technologies (health system-level factors).

The possibility of generating data-driven insights from a DS system was a key determinant for the selection of priority DS use cases in this study. As data-driven digital health technologies have the potential to inform more precise diagnoses, identify at-risk patients for intervention, develop more personalised treatment plans and provide better understanding of medical outcomes within complex patient populations,32 stakeholders recognised the importance of prioritising digital health projects that have the capability of delivering these big data resources. In a recent systematic review of 254 articles that used data-driven approaches to gain insight into care processes and care pathways, the results revealed that data-driven approaches can provide empirical information relevant to healthcare planning, management and practice.33 Thus, data-driven evaluations provide significant immediate and future benefits to the health ecosystem, making this an important criterion when prioritising digital health technologies for research.

Unlike traditional health condition-focused research, digital health implementation research is particularly dependent on the successful implementation of a system into clinical settings, which is complex, multifactorial and does not always go as planned.34 As a result, many of the criteria raised by participants to prioritise research constitute conditions necessary for successful digital health implementation. Suitable governance frameworks, which guide system integration through a standardised language that clearly defines the parameters for accomplishing a health system process or digital health strategy35 and which evolve from legislative requirements predating the digital era, have been recognised as a requirement for successful digital health implementation.36 Similarly, a qualitative stakeholder-based study in the UK investigated the barriers and facilitators to large-scale digital health implementation and found that uncertainty around information governance hindered implementation.37 It is not surprising, then, that governance was raised by participants as a health system factor critical for facilitating successful implementation. The absence of a suitable governance framework may negatively impact the implementation of clinician-facing technologies, such as DS systems and AI, which have raised a number of complex social, ethical, legal and liability issues that can contribute to poor clinician uptake of digital health technologies.

Alignment with broader state-wide public healthcare system digital health technologies and strategy was important to stakeholders in our study, as government-driven digital health priorities shape the context in which the DS system will be implemented. This aligns with the ‘wider system’ domain of the non-adoption, abandonment, scale-up, spread and sustainability (NASSS) framework, one of the determinants of successful and sustainable digital health implementation.38 In a qualitative study using the NASSS framework to evaluate a DS system for the treatment of cardiovascular risk in primary care, it was shown that the wider institutional environment was not sufficiently aligned, resulting in suboptimal uptake of the technology.39 While that study considered the wider institutional environment from professional, financial and regulatory perspectives, other studies have considered alignment from political and digital health strategy perspectives.40 41 For example, a digital health evaluation report from Norway noted that the national digital health strategy prioritised certain technologies over others, such as messaging services over videoconferencing.41 This suggests that government-driven digital health priorities do influence the choice of digital health technologies and the potential for implementation, scale-up and spread.

Our study was conducted during the COVID-19 pandemic (October to December 2021) and so reveals some interesting insights about technology implementation and priorities during this time. Although COVID-19 had accelerated the uptake of digital health technologies, including DS systems, and this was discussed as a priority area by stakeholders in our initial prioritisation session, by the time of our follow-up consultation COVID-19 was no longer viewed as a priority. Instead, participants were keen to apply their learnings from the implementation of technology during COVID-19 to other use cases. This result highlights the need for flexibility when prioritising digital health research, to accommodate the rapid pace at which health technologies are implemented and changed, which differs from the long lead times needed to establish traditional collaborative research projects.

Limitations

The prioritisation sessions were focused on DS systems rather than digital health in general; therefore, our results should be interpreted with caution when drawing conclusions about technologies more broadly. We also acknowledge the limited number of participants and the make-up of the group, as there was only one industry representative and no patients or members of the public were involved.

Conclusion

This study identified three priority DS use cases for our collaborative research programme and found that the importance stakeholders assign to a research priority is determined by multiple criteria related to public health benefit, health system-level factors, and research process and feasibility. Our study revealed that the criteria used for determining digital health research priority areas are more nuanced than those for traditional body system or health condition-focused research, and cannot be viewed through a purely technological lens. The availability of suitable governance frameworks, alignment with broader technologies and data-driven insights are important to stakeholders when prioritising digital health research. Interestingly, as digital health research relies on a health technology being implemented into practice, the majority of criteria used to prioritise research comprised enablers of successful health technology implementation and uptake by end users. Given the breadth of our translational digital health research, our stakeholder-driven approach and criteria for prioritisation could be applied to other settings and collaborative projects where research institutions partner with implementing organisations.

Data availability statement

Data are available on reasonable request. The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

Ethics statements

Patient consent for publication

Ethics approval

This study involves human participants and ethical approval was received from the Sydney Local Health District Research Ethics and Governance Office (Reference number: X21-0362 and 2021/ETH11708). Participants gave informed consent to participate in the study before taking part.

References

Supplementary materials

  • Supplementary Data


Footnotes

  • Contributors MTB and SMM conceptualised the research design. MTB collected and analysed the data with the assistance of AB-A. AB-A drafted the manuscript. AB-A, SMM, DB, RT, MB, LB, CH, MP, MS, TJS and MTB contributed to data interpretation and writing of the manuscript. AB-A, SMM, DB, RT, MB, LB, CH, MP, MS, TJS and MTB read and approved the final manuscript. The corresponding author (AB-A) is the guarantor.

  • Funding This research was supported by Digital Health CRC Limited (DHCRC)—project number: 0085. DHCRC is funded under the Commonwealth's Cooperative Research Centres (CRC) Program. SMM is supported by an NHMRC administered fellowship (#1161138).

  • Competing interests Malcolm Pradhan was an employee of Alcidion Corporation Pty Ltd at the time the study was conducted and has a financial interest in the company.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.