Usability and Acceptability of Clinical Dashboards in Aged Care: Systematic Review

Background: The use of clinical dashboards in aged care systems to support performance review and improve outcomes for older adults receiving care is increasing.

Objective: Our aim was to explore evidence from studies of the acceptability and usability of clinical dashboards, including their visual features and functionalities, in aged care settings.

Methods: A systematic review was conducted using 5 databases (MEDLINE, Embase, PsycINFO, Cochrane Library, and CINAHL) from inception to April 2022. Studies were included if they were conducted in aged care environments (home-based community care, retirement villages, and long-term care) and reported a usability or acceptability evaluation of a clinical dashboard, including specific dashboard visual features (eg, a qualitative summary of individual user experience or metrics from a usability scale). Two researchers independently reviewed the articles and extracted the data. Data synthesis was performed via narrative review, and risk of bias was assessed using the Mixed Methods Appraisal Tool.

Results: In total, 14 articles reporting on 12 dashboards were included. The quality of the articles varied. There was considerable heterogeneity in implementation setting (home care: 8/14, 57%), dashboard user groups (health professionals: 9/14, 64%), and sample size (range 3-292). Dashboard features included visual representations of information (eg, medical condition prevalence), analytic capability (eg, predictive), and others (eg, stakeholder communication). Dashboard usability was mixed (4 dashboards rated as high), whereas acceptability was high for 9 dashboards. Most users considered the dashboards informative, relevant, and functional, highlighting current use and the intention to use this resource in the future. Dashboards with one or more of the following features were found to be highly acceptable: bar charts, radio buttons, checkboxes or other symbols, interactive displays, and reporting capabilities.

Conclusions: A comprehensive summary of clinical dashboards used in aged care is provided to inform future dashboard development, testing, and implementation. Further research is required to optimize the visualization features, usability, and acceptability of dashboards in aged care.


Introduction
Health information technologies are increasingly being used in the health care sector, including in aged care, due to their capacity to improve workflow efficiencies and quality of care [1,2]. One technology rapidly gaining momentum in health care is the electronic clinical dashboard. Dashboards typically provide a summary of vital clinical data relating to individual patients to increase users' understanding of their health care needs and care, display trends in patient-reported clinical outcomes, and support decision-making [3,4]. Limited examples of clinical dashboards currently exist within aged care [5,6].
Aged care has a diverse workforce with varying levels of health and digital literacy. In order to address the needs of older adults (defined as individuals aged 65 years and older) in care, their families, and the workforce, dashboards ideally should be designed to support the perspectives and requirements of all relevant stakeholders in aged care. However, there is limited research on how best to present data to support the interpretation of resident outcomes [7]. Furthermore, while the use of visual information can help reduce information overload and improve understanding of data for users in general [4], it is unclear how different types of visual displays used in dashboards may affect comprehension and decision-making for aged care users.
It has been shown that the way in which information is presented (eg, icon displays vs tables, pie charts, and bar graphs) can impact the accuracy of decisions taken by health professionals [4], but limited work has examined whether interpretation of the visual information is dependent upon the expertise, knowledge, and experience of various dashboard users. Aged care organizations are being encouraged to adopt dashboards to improve the quality of care and resident safety [8]; however, dashboards can be used to communicate information to different users, including patients, clinicians, or others.
The aim of this review was thus to identify the visual features of clinical dashboards that are usable and acceptable to the varied users in aged care settings, in order to help guide the future development, design, and implementation of dashboards in aged care.

Search Strategy
Adhering to recommended procedures for systematic reviews (ie, PRISMA [Preferred Reporting Items for Systematic Reviews and Meta-Analyses] guidelines) [9], we conducted a literature search for peer-reviewed empirical studies until April 27, 2022, using a predefined search strategy in the following databases: MEDLINE, Embase, Scopus, PsycINFO, and CINAHL. Primary search terms were dashboard, aged population, aged care, acceptability, and usability; papers were limited to 2000 to April 2022, human subjects, and in English (see search strategy in Table S1 in Multimedia Appendix 1). To increase the comprehensiveness of the search, we scanned the reference lists and cited documents of included peer-reviewed articles (ie, snowballing) to identify any relevant articles missed by the searches.

Inclusion and Exclusion Criteria
We included peer-reviewed primary studies reporting a usability or acceptability evaluation of a clinical dashboard for use in aged care environments, including home-based community care, retirement villages, and long-term care (Table S2 in Multimedia Appendix 1). All study designs were included.

Study Selection
All potential studies were exported into a reference citation manager, and duplicates were removed. The primary author (JS) removed additional duplicates. A random selection of 10% of the abstracts (n=200) was then screened by 2 authors (JS and FS). Interreviewer agreement was high (100%), with no disagreement on which papers should proceed to full-text screening. FS conducted the remainder of the abstract review. Full-text articles were then obtained for screening by JS and FS, with AN moderating the final list of articles. See the PRISMA diagram (Figure 1) for a detailed summary.

Overview
Data extraction was completed independently by 2 reviewers (JS and LD) and checked by an additional reviewer (AN). The data extraction tool was piloted to ensure complete documentation of the qualitative and quantitative components of the included studies. Once finalized, data were extracted on study general characteristics (eg, year, country, type of dashboards, participants, and study design), sample characteristics (eg, age and gender), dashboard visual features (eg, charts), acceptability and usability ratings, study findings, and recommendations.

Acceptability
Acceptability was defined as the users' judgement on the appropriateness of the dashboard and its design features, which included sensitivity to their needs as well as usage levels and utility [10]. Adopting the theoretical framework of acceptability [11], perceived user acceptability was explored for the overall dashboard as well as specific design features as described by the study (eg, bar charts). Detailed examples of acceptability scoring are shown in Table 1.
Briefly, acceptability was categorized according to technology acceptability statements in validated technology usability tools or through other in-house developed surveys focused on users' responses to acceptability. For example, statements such as "I found the system unnecessarily complex" in the System Usability Scale [12]; "I think the visual perception of the dashboard is rich" in the Questionnaire for User Interaction Satisfaction [13-15]; and "Using this dashboard would enable me to accomplish tasks more quickly" in the Technology Acceptance Model (TAM) [16] were used to rate the acceptability of the dashboards or their features. Acceptability was scored according to the confirmed metrics of these tools and was classified as low, medium, or high for each scale. For example, with the TAM, acceptability was defined as low (<50% agreement), medium (50%-70% agreement), or high (>70% agreement) [16].
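The threshold logic above reduces to a simple banding rule. The following is an illustrative sketch only: the function name is ours, and treating exactly 70% as "medium" is an assumption, since the review reports the bands simply as <50%, 50%-70%, and >70%.

```python
def classify_acceptability(percent_agreement: float) -> str:
    """Map a scale's percent agreement to the review's acceptability bands.

    Bands follow the TAM-based scoring described in the text:
    low (<50%), medium (50%-70%), high (>70%).
    Treating exactly 70% as "medium" is our assumption.
    """
    if percent_agreement > 70:
        return "high"
    if percent_agreement >= 50:
        return "medium"
    return "low"
```

For example, a dashboard with 94.4% agreement on willingness-to-use items would be banded as high, while one with 44% agreement would be banded as low.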
For qualitative articles, general and specific dashboard features that were perceived positively by all stakeholders in a single study were coded as high acceptability, features that received a mix of both positive and negative stakeholder feedback were coded as medium acceptability, and features that were perceived to provide minimal to no added value for stakeholders (eg, low staff engagement [18] or requiring significant improvements [19]) were categorized as having low acceptability.

Table 1. Examples of acceptability and usability scoring criteria.

Quantitative scales:
- Questionnaire for User Interaction Satisfaction [15]: high, >70% agreement; medium, 50%-70% agreement; low, <50% agreement
- Technology Acceptance Model [16]: high, 70%-100% agreement; medium, 50%-70% agreement; low, <50% agreement
- In-house surveys on the overall dashboard (eg, "the anticoagulation dashboard is necessary for high-quality home health patient care" [17]) and on specific dashboard features (eg, "The graph combining edema status and weight is useful" [17])

Qualitative participant feedback:
- High: positive appraisals of overall dashboard use (eg, "I find this to be a very helpful tool in a team approach working together with the physician and other team members for the best possible outcomes for our patients" [17]; "Oh, I love it. I have a sense of being cared for!" [21]; "The electronic form flows nicely. It is set up just like the paper form, is easy to follow and is one less thing on my desk." [23]) or of specific dashboard features (eg, "I have just received one alert, a yellow one, I contacted the older adult the day after...she was happy that it works, and it really works." [22])
- Medium: a mix of negative and positive comments (eg, "On the right track but not quite there." [20]; "Whether the system really works remains to be seen. At least it is [better] than nothing." [21]; "We had difficulty logging into the system in the beginning." [18]; "The system has a learning curve, so training is necessary" but "we can identify fixable usability challenges using scenario based training" [23])
- Low: negative appraisals (eg, "The tablet is extra work, and for people with dementia, it's very important for me to give them extra time." [19]; "there are no options that we might like to have clicked, that the clients are, for example, chronically or acutely confused." [19]; "The staff struggled with the challenge of responding to acute events versus detecting trends and patterns of behavioural decline and determining how to integrate such monitoring into their daily schedules" [18])

a: Acceptability subscores of the quantitative scales were used to compute the overall acceptability of the dashboards.

Usability
Usability was defined as the extent to which the dashboard could be used by the specified users to achieve their goals effectively, efficiently, and with satisfaction [24,25]. Usability was also rated for overall dashboard use and for specific dashboard features using the previously described methods, focused on the usability items in the tools (eg, System Usability Scale, Questionnaire for User Interaction Satisfaction, and TAM) for assessing low, medium, and high usability (eg, Dowding et al [26] and Lanzarone et al [27]). These items typically focused on metrics of the dashboard's effectiveness (ie, whether stakeholders can achieve their goals) and efficiency (ie, the amount of effort and resources required to achieve those goals). For further information, refer to our scoring system described in Table 1.

Qualitative Data
For qualitative studies, acceptability and usability were synthesized using thematic analysis [28], whereby main themes regarding the acceptability or usability of the dashboard (including its individual visual features) were first identified independently by JS and LD. Any discrepancies that arose were resolved through discussion with the third member of the review team (AN). Themes were reviewed and amended by the review team and were subsequently organized into overarching topics for clarity and conciseness. A similar process was also adopted to identify recommendations for improving acceptability and usability. Where possible, findings were synthesized according to different dashboard user types (eg, resident, caregiver, and health care professional).

Quantitative Data
A narrative synthesis of quantitative articles was used to specify whether clinical dashboards and their features were considered acceptable and usable. Interreviewer disagreement on data extracted was resolved through discussion among the research team. The review team included academics with backgrounds in psychology (JS), aged care (LD and KS), public health (FS and MR), epidemiology (JW, MR, and KS), digital health (JW, AN, MR, and MB), pharmacy (KS, MR, and NW), human factors (MB), and data science (NW). The results were synthesized as a narrative review.

Quality Assessment
Study quality was assessed using the Mixed Methods Appraisal Tool (MMAT) [29] by 3 authors (JS, KS, and MR). This tool allows the appraisal of the methodological quality of 5 categories of studies: qualitative research, randomized controlled trials, nonrandomized studies, quantitative descriptive studies, and mixed methods studies. Each study category has 5 assessment criteria, which are scored as "yes-criterion met," "no-criterion not met," or "unclear/can't tell whether criterion met" [29]. Mixed methods studies are assessed against the relevant study categories, as well as the mixed methods studies category.
Two reviewers independently scored each study, and disagreements were discussed with a third reviewer to reach a consensus on the rating. An overall quality score was assigned to each study following the method described by the MMAT [29]. The score was the overall percentage of quality criteria met for an individual study. For mixed methods studies, the overall quality score was the score of the lowest-scoring component.
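The scoring described above reduces to simple arithmetic, sketched below for illustration only (the function names are ours, and counting "unclear/can't tell" as an unmet criterion is an assumption not stated in the text):

```python
def mmat_quality_score(criteria: list) -> float:
    """Overall quality score: percentage of the 5 MMAT criteria rated "yes".

    "no" and "unclear" both count as unmet here (the latter is our assumption).
    """
    met = sum(1 for c in criteria if c == "yes")
    return 100.0 * met / len(criteria)


def mixed_methods_overall(component_scores: list) -> float:
    """For mixed methods studies, the overall score is the score of the
    lowest-scoring component, per the method described in the text."""
    return min(component_scores)
```

For example, a study meeting 3 of 5 criteria scores 60%, and a mixed methods study with component scores of 80%, 60%, and 100% receives 60% overall.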

Overview
After excluding duplicates, our search strategy identified 2575 potentially relevant articles (Figure 1). After excluding articles that did not meet our inclusion criteria, a total of 14 peer-reviewed articles were included, although 2 articles reported on the same dashboard [26,30,31] and were described collectively. Articles were most frequently excluded because they did not report an evaluation of a clinical dashboard.
A summary of the methodological frameworks and theories used to develop or evaluate the dashboards is provided in Table S4 in Multimedia Appendix 1 [16,17,30,32,35-46]. Most dashboards (8/12) used a developmental framework [17,20,22,23,26,27,30,34,35], including feedback intervention theory [47], and most also used an evaluation framework (7/12) [19,22,23,26,27,30,34,35], with the most common being the TAM [16] and the UK's Medical Research Council complex intervention evaluation framework [48].

Table footnotes:
- ...medication discrepancies, appropriate prescribing practices; administrative includes care pathways and changes to services/care an older adult is receiving; falls refers to the incidence of older adult falls.
- c HC: home or community care. Refers to in-home care, domiciliary care, community care, and social care provided within the home in which the older adult is living (as opposed to care provided in group accommodation, clinics, and nursing homes), as well as 3 independent living retirement communities.
- d R: respite care. Refers to planned or unplanned short-term care for older adults to provide a temporary break for caregivers.
- e LTC: long-term care. Refers to residential aged care, nursing homes, or long-term care facilities that provide permanent accommodation for those who require consistent and ongoing services to assist with activities of daily living.
Other functionalities included interactive forms dedicated to client assessment and service planning (11/12) [17,19,20,22,23,26,27,30,32-35], which included initial assessments, transitions in client care, client-level monitoring (eg, vital signs), as well as the management and coordination of aged care service operations to suit clients' needs. The ability for stakeholders to communicate and interact was also described (6/12) [17,18,20,23,27,32].

JMIR Aging 2023 | vol. 6 | e42274 | https://aging.jmir.org/2023/1/e42274

Table footnotes:
- c Refers to any graphical representation of data (eg, charts, graphs, and maps).
- d Includes initial assessment and transitions in older adult care, monitoring (eg, vital signs), and the management and coordination of aged care service operations to suit older adult needs.
- e Includes the capability of communicating between users of the dashboard and data sharing.
- f Refers to whether the dashboard/tool provided prevalence or incidence data or indicated the potential to compute these data for reporting purposes.
- g Refers to visual applications that directly or indirectly provide geographical area or location (eg, of staff and clients).
- h Refers to whether the dashboard/application provided links to additional physical resources or complementary information, guidelines, and recommendations outside the information within the dashboard/application (eg, through links to external websites/files).
- i Refers to whether the dashboard/application had the capability to display changes in events over time.
- j Physical resource was a pharmacist to prescribe or deprescribe based on evidence-based guidelines.
- k Advised the pharmacist of "actionable older adults receiving care" and recommended appropriate prescribing with the provider.
- l Involved reorganization and allocation of staff and dispatch of emergency vehicles.

Overall Acceptability and Usability of Dashboards
A summary of the users' overall perceived acceptability and usability of the dashboards is presented in Table 4. Using the criteria described in the methods, perceived usability was mixed, with 4 studies reporting low [18,19,22,32], 5 medium [20,23,26,27], and 4 high usability [17,30,34,35]. Discrepancies between studies related to whether the dashboard was easy to learn, operate, and navigate, with some stakeholders feeling very confident using the dashboard [34] and others reporting difficulties with dashboard functionalities [17,23,27,33].
In terms of acceptability, most studies reported medium to high acceptance (10/11), with only 1 study revealing low acceptance [19]. While most respondents were willing to use the dashboard in their workplace (eg, 94.4% agreement [34]), uptake was low (eg, across 3 years, more than half of staff members logged in less than once [18]) and initial enthusiasm declined over time (eg, [18]).
There was no distinct pattern of association between dashboard type (eg, clinical and administrative), platform (eg, ICT application and computer), or focus area (eg, health status, administration, and medication) and reported dashboard usability or acceptability.
Table 4 excerpt (Wild et al [18], 2021; older adults in home care, n=95; staff, n=25; clinical, ambient dashboard):
- Interviews: users reported the absence of core assessment scales in the records, systems not being interoperable, and frustration with organizational support for system access and training.
- Interviews: users reported some enthusiasm about interest areas (eg, sleep and medication adherence) and appreciated real-time metrics (eg, sleep duration) being captured.
- Overall rating: medium.
- Survey c: a low proportion of users (44%) logged into the dashboard. Interviews: users reported technical difficulties and continued unfamiliarity with the system.

Table 4 footnotes: a Usability refers to the extent to which the dashboard could be used by the specified users to achieve their goals effectively and efficiently. b Acceptability refers to satisfaction with the dashboard and future adoption by the specified users.
The ability to update, alert, and generate reports for primary stakeholders was the most frequently used feature and was reported to be highly acceptable across all dashboard types. In general, features with high acceptability were bar charts, tables, icons, symbols, images, and color coding to organize and display information, as well as the use of radio buttons, the ability to expand and collapse information, and multiple displays to facilitate easy customization of the dashboard for different users. A small number of studies also described positional coding, checkboxes, and a completeness bar, which had high acceptability. One study of 195 nurses used a dashboard with spider and radar graphs, and these were reported as too complex [31].
Only 1 study, conducted in home care, explored older adults' acceptability of line graphs, icons, and displays, all of which were rated as medium. Nurses tended to rate communication features (eg, the ability to converse with other users in the system) as low to medium [27,32], whereas older adults rated them as high [22]. Compared with other user groups, older adults' acceptability of alert features was variable, ranging from low to high.

Problems Identified With Dashboard Acceptability and Usability
Thirteen studies described problems hindering user acceptability and usability of dashboards. The main issues that decreased the overall acceptability and usability of the clinical dashboards included hardware problems, display options, and training. For older adults in home, respite, and long-term care, accessibility of a smart tablet was hindered by locking the tablet, having the incorrect PIN code, and forgetting to charge the device [22]. Older adults within each care setting also appreciated a larger text display size and found the 3-step question design difficult when inputting information for a dashboard (eg, yes/no and subsequent questions, as they had to recall the previous answer) [22]. For registered nurses, the existing workload prevented daily log-ins despite instructions [18,22]. Similarly, reliance on agency or outsourced workers meant that many staff did not have log-ins, which prevented the use of the dashboard [33].
Training on how to use and navigate the dashboard was provided for most dashboard users; however, participant feedback on training ranged from low [32,33] to high satisfaction [19,23,27,34,35] across studies. In some papers, 3 classroom training sessions were sufficient [23], and in others, "on-the-job" training was preferred as an alternative to classroom-based learning [19]. In 1 study, more training was requested by new staff, with suggestions for a designated nursing staff member to lead the training session, which could be recorded to enable easy dissemination [35].
Suggested areas for improvement across papers mostly related to reducing user workloads, ensuring the security and privacy of resident data, and strengthening decision support and communication features. Ensuring that data remain private, particularly data on medication and prescribing patterns, was an emerging area for improvement, with a focus on having data available only to the relevant user [20,32]. Furthermore, inputting reasons for medication use would support nurses' and clinicians' decision-making on medication administration, identification of discrepancies, and reconciliation of errors.
Although dashboards could be used to support interactions between different users (eg, staff, providers, and older adults), in 1 study, it was shown that users valued traditional methods of communication, particularly in relation to medication practices (eg, receiving pharmacist notifications separately) rather than logging into the dashboard [32]. This was because users reported spending more time searching for appropriate medication-related information on the dashboard compared to routine practice (ie, predashboard) [20,32] and thus preferred alternative mediums (eg, sourced from electronic notes [32], phone calls [20], and face-to-face conversations [32]) to clarify discrepancies. Suggestions for dashboard functionalities to improve communication and reduce workload included (1) easy-to-navigate workflows [22,27,30]; (2) visual features that allow for better interpretability and usefulness (ie, simple graphs, customizable alerts, and appropriately positioned icons) [19,20,22,31,33]; and (3) timely responses between users to facilitate efficiency and confidence in medication reconciliation and management [20,23,32,35].

Principal Findings
The aim of this review was to assess current evidence about the acceptability and usability of clinical dashboard features and functionalities in aged care environments. In general, users reported high acceptability but mixed opinions on usability, with dashboards focused on administrative activities having high acceptability. Dashboards that featured update, alert, and reporting functionalities and those with simple visual elements (eg, bar charts, tables, and symbols) were considered highly acceptable, while those with complex features (eg, spider and radar graphs) had low acceptability.
Clinical dashboards are relatively new in aged care settings, despite these applications being used widely within population health and health services [49]. In our review, dashboards were developed to support a wide range of clinical and administrative purposes, and there was no distinct pattern of usability or acceptability by dashboard type or platform. Rather, our results suggest that the capabilities of the dashboards and how information is displayed to end users are more likely to influence the acceptability and usability of dashboards.
Previous studies reporting on the usefulness of other dashboard visualization features in health care settings may inform future dashboard design in aged care. For instance, clinicians prefer data tables as they perceive numbers as less "biased" than data presented in graphics [50-53]. Although not explored in the studies included in this review, visual aids such as league charts, caterpillar plots, or funnel plots can offer substantial benefits, particularly if the purpose of the dashboard includes institutional performance comparisons (eg, comparing several aged care facilities on certain adverse health events). League charts are often desired because of their familiarity and simplicity [50,51]. Caterpillar plots and funnel plots, types of statistical process control techniques, are widely used visual aids for comparing the performance of institutions on a certain performance indicator against a benchmark value [54]. Research shows that health care providers prefer caterpillar and funnel plots once they are taught how to read them [52]. A dashboard that includes specific values, as well as organizational comparisons in certain performance indicators, may improve service processes and the delivery of quality aged care [53]. Thus, when designing dashboards, data visualization approaches need to consider the target audience as well as the dashboard's purpose.
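As a concrete illustration of the funnel plot approach mentioned above (a sketch under standard statistical process control assumptions, not drawn from any included study), approximate 95% control limits for a proportion-type indicator can be computed from the binomial standard error around the benchmark rate:

```python
import math


def funnel_limits(benchmark: float, n: int, z: float = 1.96):
    """Approximate 95% funnel plot control limits for a proportion indicator.

    Uses the normal approximation to the binomial: benchmark +/- z * SE,
    where SE = sqrt(p * (1 - p) / n). Exact limits would use binomial
    quantiles; this approximation is the common textbook form.
    """
    se = math.sqrt(benchmark * (1 - benchmark) / n)
    lower = max(0.0, benchmark - z * se)
    upper = min(1.0, benchmark + z * se)
    return lower, upper
```

A facility whose observed rate (eg, proportion of residents experiencing a fall) lies outside the limits for its denominator n would be flagged as a potential outlier relative to the benchmark; larger facilities receive narrower limits, producing the characteristic funnel shape.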
The perceived usefulness and acceptability of dashboards and their features may differ between end users. For instance, in this review, there were differences between older adults and other end users in the perceived usefulness of dashboards, with older adults likely to report usability as low, while other users reported it as medium to high. Such variability in the perceived usefulness of dashboards across end users can be minimized through customizable design [55], that is, engaging and considering the needs of end users (eg, clients, staff members, and family) in the dashboard development process. A user-centered design approach would enable designers to gain an in-depth understanding of end user experiences, expectations, and needs for clinical dashboards, which are critical to addressing usability and acceptability issues and enhancing the likelihood of having an impactful and sustainable dashboard [56,57].

Implications and Recommendations for Future Dashboard Development
The findings of this study have important implications to guide future dashboard development. Dashboards often focused on 1 aspect of care (eg, clinical or administrative). While clinical outcomes are an important aspect of aged care quality, there is increasing understanding that a holistic resident or client trajectory should be key to understanding quality [58]. Future dashboards thus need to consider and construct an inclusive picture of resident or client needs to support the care continuum from entry into the system.
Our results found that dashboards typically used in-house collected data, with some using real-time reporting of information [18,27,30,35]. As reporting of quality indicators becomes mandatory in aged care sectors in many countries, the use of a dashboard makes it potentially possible to streamline and automate this process. This may relieve aged care staff of the significant time burden in collating and reporting these data [59]. It could also mean that reported data are more accurate as it removes some opportunities for human error and reports in real time.
Given that dashboards present data visually and aim to support users' decision-making, the use of in-built decision support within a dashboard provides another opportunity for improved quality of care. Recommendations in response to information presented in the dashboard could prompt end users to take appropriate actions to improve clinical care [17,26,30,32,35,60]. This review suggests that certain dashboard features are associated with increased usability and acceptability: for example, reducing user workload through customizability and interoperability of the dashboard, providing visual features to support timely interpretation and response, and including links to complementary information to strengthen confidence in clinical decision-making. Extending such decision support to enhance quality care could include alerts for allergies or special care needs, links to published guidelines to make users aware of appropriate care pathways, and flagging of medication errors such as duplications and interactions. Implementing evidence-based decision support to inform better care could be seen as highly beneficial.

Limitations
There are several limitations to our review. The exclusion of gray literature, the small number of studies fulfilling the inclusion criteria, and the poor quality of the included studies are current drawbacks. Furthermore, most of the studies included in the current review did not explore the potential effect of their dashboards on outcomes and care processes (eg, documentation of care processes and better health outcomes). Due to the nature of reporting of each study's findings and the variation in the type and size of end user groups, it was not feasible to determine differences in usability and acceptability between individual groups; thus, our findings are a summary of all respondents. Future research should focus on how the introduction of different types of clinical dashboards could support adherence to quality guidelines and on understanding dashboard design and usability in terms of mixed versus specific user groups. Identification of areas where dashboards could most appropriately be introduced to target specific initiatives should also be considered (eg, older adults with dementia and home care) to help improve the quality of care. Further work is needed to explore how users understand and interpret dashboard features, their preferences for information presentation, and how the information is used to support care or service planning, decision support, and user behavior.

Conclusions
Users found dashboards in aged care generally highly acceptable, particularly those with simple visual elements and with update, alert, and reporting functionalities. This review highlights the variability in the usability of dashboards and identifies certain design features associated with increased usability and acceptability. Four potentially advantageous features and functionalities for future dashboard development within aged care are emphasized: customizability and interoperability to account for different end user preferences; incorporating numerical (tables) and graphical (league and caterpillar charts) presentations of data to facilitate accurate individual assessment and comparison (benchmarking), respectively; integrating changes to client care preferences with real-time clinical outcomes for a holistic representation of the care journey; and building in recommendations and alerts for best practice clinical decision-making to reduce error and support appropriate care pathways. However, further research on the development, testing, and implementation of dashboard visualization solutions to support outcome improvement for older adults is required.