
Beyond the threshold: real-time use of evidence in practice

Abstract

In two landmark reports on Quality and Information Technology, the Institute of Medicine described a 21st century healthcare delivery system that would improve the quality of care while reducing its costs. To achieve the improvements envisioned in these reports, it is necessary to increase the efficiency and effectiveness of the clinical decision support that is delivered to clinicians through electronic health records at the point of care. To make these dramatic improvements will require significant changes to the way in which clinical practice guidelines are developed, incorporated into existing electronic health records (EHR), and integrated into clinicians’ workflow at the point of care. In this paper, we: 1) discuss the challenges associated with translating evidence to practice; 2) consider what it will take to bridge the gap between the current limits to use of CPGs and expectations for their meaningful use at the point of care in practices with EHRs; 3) describe a framework that underlies CDS systems which, if incorporated in the development of CPGs, can be a means to bridge this gap; 4) review the general types and adoption of current CDS systems; and 5) describe how the adoption of EHRs and related technologies will directly influence the content and form of CPGs. Achieving these objectives should result in improvements in the quality and reductions in the cost of healthcare, both of which are necessary to ensure a 21st century delivery system that consistently provides safe and effective care to all patients.

Introduction

The creation and codification of medical knowledge have grown at a pace that exceeds the ability of health care providers or patients to make effective use of it. Methods for summarizing evidence have advanced and are increasingly standardized, and the infrastructure (e.g., journals, societies, organized teams of experts) for promoting these endeavors continues to expand. Clinical practice guidelines (CPGs) distill evidence as a means to promote adoption of state-of-the-art care. While translation of evidence to CPGs has accelerated, especially in the past decade [1], use of guidelines in clinical practice has not kept pace. One of the many reasons for this adoption gap is that, other than learning by traditional means (e.g., CME, which has proven to be minimally effective) [2], there are few effective non-technological approaches for disseminating evidence to routine practice.

Health information technology can enable routine and automatic adoption of CPGs through tools such as computerized decision support (CDS), but current CPGs are not expressed or formatted for ready use by these tools. There are high hopes that the adoption gap will be addressed by recent legislation [3, 4] intended to foster the meaningful use of electronic health records (EHRs) and related technologies such as CDS. In this paper, we: 1) discuss the challenges associated with translating evidence to practice; 2) consider what it will take to bridge the gap between the current limits to use of CPGs and expectations for their meaningful use at the point of care in practices with EHRs; 3) describe a framework that underlies CDS systems which, if incorporated in the development of CPGs, can be a means to bridge this gap; 4) review the general types and adoption of current CDS systems; and 5) describe how the adoption of EHRs and related technologies will directly influence the content and form of CPGs.

The challenge of translating CPGs to practice

There are over 2,000 guidelines in the National Guideline Clearinghouse [5]. It is widely recognized that adoption (i.e., application of the guideline to a specific patient at the point of care) of these guidelines is limited [6]. The volume of knowledge even for a single clinical area is daunting. For example, the guideline for asthma care [7] is over 400 pages long and its recommendations are based on the need for data (e.g., pulmonary function testing, symptoms) that physicians do not routinely collect in codified form from patients, even in an EHR-based setting. The gap between the creation of evidence and its use in practice has contributed substantially to a dramatic growth in the volume of best practices “parked” at the threshold of the clinical practice setting [8, 9]. The evidence is clear that the century-old health care education and delivery model will not keep pace [10].

There are inherent barriers to translating CPGs to practice. CPGs are typically promulgated as lengthy documents of written prose or as graphical displays (e.g., decision trees or flow charts); they are often ambiguous and non-committal, use highly variable, non-standard names for medications, laboratory tests, and procedures, and are largely inaccessible for practical purposes. In the absence of the ability to translate CPGs to a structured form of data, they are often entirely inaccessible to computer applications (except as free-text displays) [11]. Often the prose or graphical images within a CPG cannot be easily or reliably translated to logical, operational rules. Even if well-specified operational rules were available, it would be practically impossible for most providers to learn and apply CPGs in routine practice.

Learning a CPG represents only one step in the effective use of knowledge [12]. The time available in clinical practice to review, test, internalize, and accurately apply CPGs is limited; additional time must be invested by a physician to process, adopt, and eventually use a CPG in practice. Continuing education, the dominant method by which physicians formally augment their knowledge, is largely a peripheral activity, and its mode of learning is divorced from clinical practice. Evidence consistently demonstrates that current methods of education have limited impact, at best, on quality of care [13]. The growing adoption of EHRs and other forms of HIT affords a unique opportunity to explore how continuing education can be seamlessly integrated with the daily routine of care delivery to address fundamental challenges with effective use of knowledge [13]. However, even if one were able to keep pace with advances in knowledge, applying this knowledge at the point of care would still be extremely difficult without some form of cognitive aid [14]. For example, guidelines must be applied to patient-specific data to be useful. Often, the data required to assess a particular patient’s eligibility for a given guideline, or to determine which of the many different treatment options applies to a specific patient, are either not available at the point of care or would require too much time to ascertain in a useful form during an encounter.

The challenges of translating knowledge into practice parallel those that have plagued other information-rich service sectors; the ways in which other sectors have overcome these challenges have implications for health care. Information-rich service sectors are those in which the volume of knowledge and data required to deliver state of the art services requires a systematic and integrated translation process and automated and machine-enabled human interactions. For example, financial planning was once dominated by a paternalistic service model. High-quality advice and information were available through “knowledgeable experts”, primarily to those who could pay. With the transition from defined-benefit to defined-contribution pension plans, the consumer-focused market emerged, supported by the availability of sophisticated web-based tools and widespread access to data and information. Consumers began to assume a more active role in their own financial planning. While consumers can and do make irrational decisions in this role, they have the option of being guided by sophisticated, easy-to-use programs (e.g., risk profiling tools that map to fund allocations and automated age-based asset rebalancing, etc.), that narrow the knowledge gap between the consumer and an investment professional. Consumers now have access to high quality information, online tools and back-up human support; while the business seeks to influence the selection of an “optimal” choice, it doesn’t feel “responsible” for the consumers’ ultimate choice. The fundamental shift in the financial planning sector has been motivated by the systematic application of knowledge to data combined with tools that allow consumers to access such information in a manner that is tailored to their individual needs.

Health care information and service is considerably more complex than financial planning. Yet, there are general parallels with regard to patient and consumer information needs (e.g., access to knowledge, evidence on risks and benefits, personal data including preferences for risks and benefits, rules applied to data), and important, well-understood differences (e.g., nature of the markets, complexity, lexicon, regulations, legal risks, operational aspects of services, role of human compassion and understanding) between health and financial management. Notably, health knowledge and the data required to use such knowledge are inordinately complex, making it difficult, if not impossible, to completely separate consumer use of information from the need for a trusted relationship with a provider; consumers can manage their own investment portfolios, but they are unlikely to become their own doctors. However, similar to the changes wrought in the investment service sector, a broad-based solution to making health care knowledge translatable begins with the process by which knowledge is assembled for use in health care.

In health care, there are countless independent groups and entities involved in the creation of evidence and the translation of evidence to practice, albeit in a non-systematic manner, with conflicting recommendations and little integration or harmonization within the same clinical domain. The lack of a conceptual model for a systematic and integrated translation “process” will continue to ensure that each domain of activity relevant to bringing knowledge to practice will function somewhat independently and continue the ongoing “warehousing”, rather than the effective use, of CPGs. The medical knowledge translation enterprise is unique in comparison to other industries, in which seemingly independent groups naturally work together to bring products and services to customers. In medicine, by contrast, groups of individuals work independently of each other without the mission and vision of a larger purpose to ensure that providers and patients can routinely access the knowledge that they need and want. Transformation of the current “process” will largely depend on the virtual integration of many different independent activities, where the notion of integration is guided by how to effectively bring knowledge to, and use it at, the point of care (Figure 1). The implications of virtual integration for knowledge translation are as important for what will be required in codifying knowledge as they are for what will be required in the clinical practice setting to make effective use of such knowledge. The last step in the translation process involves the adoption and meaningful use of information technology. This step is fraught with complexity and the influence of, and socio-technical interactions among, physician factors, practice culture, structural factors, and patient factors [15]. Indeed, because each patient is unique, this last step will always require physician judgment about the applicability of the evidence to an individual patient and his/her clinical scenario. While we acknowledge the importance of these factors, we confine our discussion to how CPGs may be developed so that they can be more easily used in clinical settings with EHRs.

Figure 1

Virtual integration of major steps in translation of clinical evidence to use at the point of encounter.

In part, virtual integration will be motivated by the accelerated adoption and meaningful use of EHRs and other forms of information technology in clinical practice. This is not to say that the use of the EHR itself will lead to integration. Rather, there have been and will continue to be upstream effects on codifying knowledge that are influenced by those who develop clinical decision support and quality measurement protocols. For example, the desire to perform drug-allergy interaction checking has prompted the need for providers to accurately enter their patients’ allergies, reactions, and severities using a standard clinical vocabulary and to accurately maintain active/current medication lists.
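
To make this upstream dependency concrete, the short sketch below (in Python, illustrative only; the drug class map, field names, and example entries are assumptions rather than any vendor's implementation) shows why automated drug-allergy checking depends on allergies being recorded as coded entries rather than free text.

```python
# Illustrative sketch: drug-allergy interaction checking only works when the
# allergy list and the ordered drug are coded against a shared vocabulary.
# The class map and codes below are invented placeholders, not RxNorm content.
from typing import List, Optional

DRUG_CLASS = {"amoxicillin": "penicillins", "cephalexin": "cephalosporins"}  # drug -> class


def allergy_check(ordered_drug: str, coded_allergies: List[dict],
                  free_text_allergies: Optional[str] = None) -> List[str]:
    warnings = []
    drug_class = DRUG_CLASS.get(ordered_drug.lower())
    for allergy in coded_allergies:
        # Coded entries can be matched by substance name or by drug class.
        if allergy["substance"].lower() == ordered_drug.lower() or allergy.get("class") == drug_class:
            warnings.append(f"Allergy on file: {allergy['substance']} ({allergy.get('reaction', 'reaction unknown')})")
    if free_text_allergies and not coded_allergies:
        # Free-text allergies cannot be reliably matched; only a generic warning is possible.
        warnings.append("Allergy list is free text only; automated checking not possible.")
    return warnings


print(allergy_check("amoxicillin",
                    [{"substance": "penicillin", "class": "penicillins", "reaction": "hives"}]))
```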

The translation of CPGs to clinical practices with EHRs will strongly depend on the use of CDS. In fact, current CDSs are often the product of translating guidelines to computer code and operationalizing the process, including integration with clinical workflow. The initial implementation of an EHR is often rapidly followed by naïve attempts to implement and use rudimentary forms of CDS (e.g., a hard stop for drug-drug interaction alerts). More robust forms of CDS, however, require the translation of “knowledge” (e.g., as embodied by guidelines) to a structured form before it can be used in an EHR CDS protocol. Despite numerous attempts, to date, there is no universally accepted format for translating guidelines into CDS-related protocols to facilitate adoption. CDS interventions are usually idiosyncratic to a given health care setting with an EHR, are rudimentary, and are often interruptive, unhelpful, and unsatisfying to providers. The lack of well-accepted standards for clinical vocabularies, CDS formats, and clinical workflow integration, together with the absence of clinical and patient-reported data in accessible, codified fields, currently limits the ready use of CPGs [16, 17]. In this paper, we consider what virtual standardization and integration of knowledge into practice will mean in an era in which EHRs are widely used. We first describe a functional taxonomic framework that characterizes how the CDS process works in routine care and consider the implications of this framework for the creation of actionable CPGs.

General CDS framework

CDS systems have been developed in a variety of forms to serve a diversity of functions [18]. A common framework can be used to characterize those forms of CDS that involve interactive, point-of-care interventions and may be helpful in revealing one method by which CPGs can be structured to be more actionable. This type of interactive CDS relies on structured patient data as the key input, which are processed by knowledge-based rules or statistical algorithms to generate an output [19, 20]. While the desired CDS process will vary depending on the objective, context, and available patient data, the depth, form, and quality of the data and the CDS rules will dictate the limits of what is possible with the output (i.e., from simple generic alerts to intuitive and tailored visual displays of information). While the above framework is standard, there is no universally accepted format for translating guidelines into this framework for widespread adoption and use, although several commercial vendors have adopted an HL7 standard called the Arden Syntax for medical logic modules [21, 22].

Wright et al. offer a taxonomy for interactive, point-of-care CDS comprised of four functional features: 1) triggers, or the events that cause decision support rules to be invoked (for example, prescribing a drug); 2) input data elements used by a rule to make patient inferences; 3) interventions, or the possible actions a decision support module can take; and 4) offered choices, or the options available to a decision support user when a rule is invoked (for example, change a medication order) [23]. We describe these functional features and their implications for an optimally-designed CDS process for integrating with CPGs.
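
As an illustration of this taxonomy (not a representation of Wright et al.'s actual content), the following sketch shows how the four functional features might be expressed as a simple data structure; the rule, codes, and thresholds are hypothetical.

```python
# A minimal, illustrative sketch of the four functional features of
# interactive, point-of-care CDS represented as a data structure.
# All names, codes, and thresholds are hypothetical examples.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class CDSRule:
    trigger: str                       # event that invokes the rule, e.g. "medication_order_signed"
    input_data: List[str]              # coded data elements the rule inspects (illustrative codes)
    condition: Callable[[Dict], bool]  # inference over the input data for a specific patient
    intervention: str                  # action taken when the condition holds, e.g. "passive_reminder"
    offered_choices: List[str] = field(default_factory=list)  # options presented to the user


# Hypothetical rule: remind the prescriber to recheck potassium when an
# ACE inhibitor order is signed and no recent result is on file.
ace_k_rule = CDSRule(
    trigger="medication_order_signed",
    input_data=["rxnorm:ACE_inhibitor_class", "loinc:2823-3"],  # codes shown are illustrative
    condition=lambda patient: patient.get("days_since_last_potassium", 999) > 180,
    intervention="passive_reminder",
    offered_choices=["order_basic_metabolic_panel", "defer", "document_reason"],
)
```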

The trigger is the initiating step in the CDS process. The patient data that are available in real time will vary by clinical setting and other factors, such as the type of EHR in use or the decision to use free text versus discrete data fields. The utility of CDS systems can be optimized by recognizing this variability, defining standards for minimal and optimal data inputs, and offering meaningful functionality at both ends of the data-availability spectrum. Moreover, data that are used in the triggering process should be based on a reference standard for a given data domain, such as RxNorm for drug names [24], SNOMED-CT for clinical problems [25], and LOINC for laboratory tests [26].

The input data are fundamental to deploying CDS that is relevant to the right person, with the right information, and output in the right format, features deemed critical to optimizing CDS [20]. Medication orders, laboratory data, problem list and encounter diagnosis codes, and administrative data, for example, can all serve as inputs to a rule process used to generate decision support outputs. In some cases, input data will be poorly represented in an EHR system. For example, the USPSTF guideline for gonorrhea screening requires an assessment of sexual activity, input data that may not be routinely recorded in a structured format within an EHR (although virtually all EHRs have the capability of recording this information in a coded form, e.g., via the social history tab in the EHR). Theoretically, input data can be obtained directly from patients. Historically, however, the collection of patient-reported data (PRD) in routine practice has been limited by the associated operational and logistical challenges [27]. Without actionable patient data, the key steps in the translation process (Table 1) are unlikely to occur in a seamless and automated manner. The emergence of web-based technologies will allow for the capture and real-time use of structured PRD. However, PRD will not be useful in facilitating translation of CPGs to practice unless the data are captured in a reliable, accurate, and actionable form that represents patient experience and can be mapped to existing clinical vocabularies.
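
The following sketch illustrates that final point about structured patient-reported data: a questionnaire answer is mapped to a coded observation a rule can consume, while unmappable answers remain free text. The question identifier, answer set, and codes are hypothetical.

```python
# Illustrative sketch: turning a structured patient-reported answer into a
# coded observation that a CDS rule can consume. The question ID, answer set,
# and target codes are hypothetical placeholders, not a real terminology map.
from datetime import date

PRD_MAP = {
    ("sexual_activity_past_year", "yes"): {"system": "local", "code": "SEXUALLY_ACTIVE", "value": True},
    ("sexual_activity_past_year", "no"):  {"system": "local", "code": "SEXUALLY_ACTIVE", "value": False},
}


def code_patient_response(question_id: str, answer: str) -> dict:
    try:
        coded = dict(PRD_MAP[(question_id, answer.strip().lower())])
    except KeyError:
        # Unmappable answers stay as free text and cannot drive automated CDS.
        return {"system": None, "code": None, "free_text": answer}
    coded["effective_date"] = date.today().isoformat()
    return coded


print(code_patient_response("sexual_activity_past_year", "Yes"))
```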

Table 1 Translating knowledge for use at the point of encounter: stakeholders and challenges to integration

The intervention refers to the possible actions, such as activating a passive or active physician alert, displaying relevant information, or displaying a relevant guideline with supporting patient data. An optimal CDS model should allow knowledge engineers to specify the criteria that govern which interventions are available and the rules that govern the interplay among a trigger, input data, and the intervention [28]. For example, the previously mentioned gonorrhea rule might be triggered for all male patients within a certain age range, but the intervention may be a passive reminder when the patient’s sexual activity is not known versus an active alert when all patient data are readily available. This flexibility in specifying interventions based on input data and triggers is critical; physicians may be less likely to use an alert if it does not specify an action, specifies a generic action, or specifies one that is incongruent with the input data [29–31]. CDS will be perceived as more useful if it reduces work demand and less useful if it creates unnecessary demands (e.g., more clicks to order the optimal medication). However, designing CDS protocols in this manner is challenging because of the diversity of treatment management scenarios for a given clinical domain. We have explored such challenges at Geisinger in developing an EHR-based CDS model (“eDiabetes”) for expert treatment guidance and management of HbA1c in diabetes. Four input variables are used to identify relevant treatment advice among 93 distinct possible messages. Notably, each additional input variable increases the specificity of the advice that can be offered, but exponentially increases the size of the CDS database and the challenges in maintaining the knowledge base, rule set, and veracity of the output [32].
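
A minimal sketch of the gonorrhea example above, showing how the form of the intervention can depend on whether the required input datum is available; the target age range and field names are illustrative assumptions, not the USPSTF criteria.

```python
# Sketch of intervention logic: the rule fires for a target group, but the
# intervention escalates from a passive reminder to an active alert only when
# the required input datum is available. Field names and the age range are
# hypothetical.
from typing import Optional


def gonorrhea_screening_intervention(patient: dict) -> Optional[dict]:
    in_target_group = patient.get("sex") == "male" and 15 <= patient.get("age", 0) <= 24
    if not in_target_group:
        return None  # rule does not fire outside the target group

    sexually_active = patient.get("sexually_active")  # may be None if never recorded
    if sexually_active is None:
        # Input datum missing: fall back to a passive reminder asking for the data.
        return {"intervention": "passive_reminder",
                "message": "Sexual activity not documented; consider updating social history."}
    if sexually_active:
        # All required inputs present: escalate to an active alert with actionable choices.
        return {"intervention": "active_alert",
                "message": "Patient may be due for gonorrhea screening.",
                "offered_choices": ["order_NAAT_test", "defer", "not_clinically_indicated"]}
    return None
```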

Offered choices are the options that can follow the result of a notification intervention. For example, at each office visit a rule may be triggered to evaluate a patient’s low-density lipoprotein level; if the level is elevated, an alert may be invoked to notify the physician to prescribe a statin. The offered choices may then include the option to write a medication order, defer the alert, schedule a re-test, or add a new diagnosis to the problem list, among others [33]. Alternatively, the rule might also check if a statin has been ordered in the past to increase the specificity of the offered choices. The range of options will vary based on the clinical setting, extant workflows, the end user (nurse or physician), available technologies, and the availability of input data. An optimal CDS model must be sufficiently flexible to accommodate a diversity of offered choices given the diversity in the other three functions.
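
The sketch below illustrates how offered choices for the LDL/statin example might be tailored by also checking whether a statin is already on the medication list; the threshold and choice labels are illustrative, not clinical guidance.

```python
# Sketch of "offered choices": the same elevated-LDL alert presents different
# options depending on whether a statin has already been ordered.
# The threshold and field names are illustrative only, not clinical guidance.
from typing import List, Optional


def ldl_alert_choices(ldl_mg_dl: Optional[float], statin_on_med_list: bool) -> List[str]:
    if ldl_mg_dl is None or ldl_mg_dl < 130:        # illustrative threshold, not a recommendation
        return []                                   # no alert fires, so no choices are offered
    if statin_on_med_list:
        # Statin already ordered: offer adherence/intensification-oriented choices.
        return ["assess_adherence", "adjust_statin_dose", "schedule_repeat_lipid_panel", "defer"]
    # No statin on the medication list: offer prescribing-oriented choices.
    return ["write_statin_order", "add_hyperlipidemia_to_problem_list",
            "schedule_repeat_lipid_panel", "defer"]
```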

CDS is likely to evolve rapidly over the next decade, but various forms of CDS are likely to involve the above-described features regardless of the level of sophistication. For CPGs to be more actionable in a digital environment, their structure will, to a significant degree, need to mirror this CDS structure. Specifically, CPGs will be more useful if they are structured to define the relevant patient subgroup and/or data (i.e., the triggers and input data), the intervention options, and the offered choices that will guide both the physician and patient in making optimal, evidence-based decisions.

Types and forms of current CDS tools

CDS tools, which are largely based on the translation of CPGs, can be used for diagnostic decision support, preventive care reminders, disease management or protocols for bundles of reminders, and drug dosing/prescribing protocols, among other less common applications [34].

In describing evidence on the effectiveness of forms of CDS, we also consider the likely evolution of CDS protocols. A common view is that future CDS protocols should facilitate the delivery of “the right care to the right person at the right time” [35, 36], representing a more personalized and timely form of guideline-based care. Ultimately, the utility of a CDS protocol will be first judged by how often it is actually used when intended and, if used, whether the protocol improves processes of care and patient outcomes. Given the lack of knowledge about CDS protocols, the lack of standards, and the current state of the art, evaluating the comparative effectiveness of CDS protocols will be confounded by numerous factors, including the extent of integration with extant workflows, the demands placed on physicians in making use of the CDS (e.g., choosing and ordering offered choices), the quality of the advice delivery mechanism (e.g., a reminder versus patient-tailored treatment guidance), and the face validity of the process itself. With regard to these factors, Table 2 summarizes representative CDS applications.

Table 2 Examples of currently deployed CDS tools and their incorporation of CDS factors

Diagnostic decision support

Diagnostic decision support systems (DDSS) [44] represent one of the earliest forms of CDS innovation [45, 46]. A DDSS relies on clinician or patient input of relevant data (e.g., signs, symptoms, laboratory values) that are processed by a knowledge base to return potential diagnoses. DDSSs are often as accurate as clinical experts in making a diagnosis, but are not used in practice [47] and have not been successful in consistently improving outcomes [34]. One reason for the apparent lack of use may be the workflow constraints on obtaining the volume of data, in the right format, required for an accurate diagnosis [48]. In addition, few of these systems provide guidance on treatment once the diagnosis is determined (no offered choices). From a functional standpoint, DDSSs will have limited utility in routine primary care settings unless they are integrated with other CDS protocols (e.g., recommendations for medication orders).
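
A toy sketch of the DDSS pattern described above, in which clinician-entered findings are matched against a knowledge base to rank candidate diagnoses; the knowledge base content is invented for illustration and is far simpler than a real DDSS.

```python
# Toy sketch of the DDSS pattern: entered findings are matched against a small
# knowledge base to rank candidate diagnoses. The content is invented for
# illustration and is not clinical knowledge.
from collections import Counter
from typing import Dict, List, Set, Tuple

KNOWLEDGE_BASE: Dict[str, Set[str]] = {
    "community-acquired pneumonia": {"fever", "cough", "dyspnea", "focal crackles"},
    "pulmonary embolism": {"dyspnea", "pleuritic chest pain", "tachycardia"},
    "heart failure exacerbation": {"dyspnea", "orthopnea", "peripheral edema"},
}


def rank_diagnoses(findings: Set[str]) -> List[Tuple[str, int]]:
    """Return candidate diagnoses ordered by the count of matching findings."""
    scores = Counter({dx: len(features & findings) for dx, features in KNOWLEDGE_BASE.items()})
    return [(dx, n) for dx, n in scores.most_common() if n > 0]


print(rank_diagnoses({"dyspnea", "fever", "cough"}))
# [('community-acquired pneumonia', 3), ('pulmonary embolism', 1), ('heart failure exacerbation', 1)]
```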

Reminder/alert systems

Reminders and alerts represent another of the earliest and most common forms of decision support [49]; reminder functionality is available in most EHRs and is particularly focused on preventive care [16]. Alerting protocols are highly heterogeneous and evidence on effectiveness is mixed; they have been shown to improve preventive care [50], but multiple studies have also found high rates of overriding of alerts and reminders in physician order entry and decision support systems [29].

Point-of-care computer reminders, a rudimentary form of decision support, can improve the effective use of care processes (e.g., alerts for prescription orders, recommended vaccines, test orders, and clinical documentation) and the avoidance of unnecessary care. However, the median effect of tested forms of alerting compared with usual care (<10%) is well below a clinically meaningful threshold even for process measures, let alone patient outcomes [51]. Poorly designed alerts (e.g., too frequent, insufficiently specific, workflow-impeding) can lead to “alert fatigue”, whereby physicians ignore both important and unimportant alerts [52]. Alert fatigue (leading to ignored alerts, as well as an increased propensity to ignore future alerts) is a side effect of the rapid growth in the deployment of alerts, especially protocols that are non-specific, poorly targeted, direct the provider to take action, and, more generally, low in clinical content [29].

Diagnostic imaging

Diagnostic imaging CDS (DI-CDS) offers guidance on the appropriate use of imaging procedures for diagnostic purposes. Notably, there is relatively little observational or RCT evidence on the effectiveness of imaging; aside from identifying redundant orders, almost all guidance is based on expert opinion. Diagnostic imaging CDS systems make use of a utility score for a given care scenario, based on the American College of Radiology Appropriateness Criteria [53]. A low score does not necessarily prevent an image order; physician overrides require documentation that can be used to refine future CDS algorithms. Diagnostic imaging CDS systems interface with EHRs and computerized physician order entry (CPOE) systems to enable physicians to place diagnostic imaging requests as usual, but with the presentation of advice if and when alternative tests should be considered. When a request receives a low score, relevant decision support is provided and additional physician-entered data may be required. These CDS tools represent an important advance in solving data standard and technical interface challenges, and their adoption is likely to accelerate [54]. For example, state legislatures (e.g., Minnesota, Washington) have mandated or are considering mandating the use of imaging CDS [55, 56]. While diagnostic imaging CDS is sometimes viewed as a cost-reduction substitute for insurance-mandated prior authorization, management of inappropriate use of diagnostic imaging has potentially significant safety implications (e.g., reducing unnecessary surgery, reducing radiation exposure from CT).
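
The following sketch illustrates the DI-CDS flow described above (score the request, surface decision support or request more data for low scores, and document overrides); the score table and threshold are placeholders, not the ACR Appropriateness Criteria.

```python
# Sketch of a DI-CDS flow: an imaging request is scored, low scores surface
# decision support or a request for more data, and overrides are documented.
# The score table and threshold are placeholders, not ACR content.
from typing import Optional

APPROPRIATENESS = {
    ("ct_head", "uncomplicated_headache"): 2,   # placeholder scores on a 1-9 style scale
    ("ct_head", "new_focal_neuro_deficit"): 8,
    ("mri_lumbar_spine", "low_back_pain_lt_6wks_no_red_flags"): 2,
}


def review_imaging_order(procedure: str, indication: str,
                         override_reason: Optional[str] = None) -> dict:
    score = APPROPRIATENESS.get((procedure, indication))
    if score is None:
        return {"action": "request_more_data",
                "message": "Indication not recognized; please supply a structured indication."}
    if score >= 7:
        return {"action": "proceed", "score": score}
    if override_reason:
        # Override allowed, but the documented reason is retained to refine future rules.
        return {"action": "proceed_with_override", "score": score,
                "override_reason": override_reason}
    return {"action": "show_decision_support", "score": score,
            "message": "Low appropriateness score; consider alternative tests or document a reason."}
```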

Drug dosing and prescribing

Drug dosing and prescribing systems are designed to guide the selection of a therapeutic agent for a given clinical scenario and to select an optimal dose [57]. Drug-based CDS may also provide warnings about potentially dangerous drug combinations (i.e., drug-drug interactions) [58]. In a review by Garg et al., drug-based CDS improved provider prescribing performance in the majority of evaluated studies, but with minimal impact on patient outcomes [34]. Many of the studies of drug-based CDS focused on a narrow set of medications or conditions (e.g., anticoagulation). Electronic ambulatory care prescribing (eRx) systems are becoming increasingly common tools for automating the medication ordering process across a wide variety of conditions and medications. eRx systems may be integrated into an EHR or be stand-alone applications. A systematic review of 27 studies of electronic prescribing found that half of the included systems had advanced decision support capabilities (e.g., contraindications, allergy checking, checking medication against laboratory results, etc.) in addition to the medication ordering function [59]. Although there is evidence that electronic prescribing can reduce medication errors and adverse drug events, the evidence is limited, particularly in outpatient settings [59]. The inclusion of eRx in the meaningful use requirements set forth by the Office of the National Coordinator for Health Information Technology (which will impact Medicare reimbursement rates) is likely to accelerate research in this area [60].
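
A minimal sketch of the drug-dosing/prescribing checks described above, combining an interaction lookup with a renal-dosing flag; the interaction pairs, drug list, and eGFR cutoff are illustrative assumptions, not dosing guidance.

```python
# Sketch of drug-dosing CDS: a proposed order is checked against an interaction
# table and a renal-dosing rule before signing. The tables and cutoff are
# invented placeholders, not dosing guidance.
from typing import List

INTERACTION_PAIRS = {frozenset({"warfarin", "trimethoprim-sulfamethoxazole"}),
                     frozenset({"simvastatin", "clarithromycin"})}
RENAL_DOSE_ADJUST = {"gabapentin", "enoxaparin"}   # drugs to review when eGFR is low


def check_order(drug: str, active_meds: List[str], egfr: float) -> List[str]:
    warnings = []
    for current in active_meds:
        if frozenset({drug.lower(), current.lower()}) in INTERACTION_PAIRS:
            warnings.append(f"Potential interaction: {drug} + {current}")
    if drug.lower() in RENAL_DOSE_ADJUST and egfr < 30:
        warnings.append(f"{drug}: consider renal dose adjustment (eGFR {egfr:.0f})")
    return warnings


print(check_order("warfarin", ["Trimethoprim-Sulfamethoxazole", "metformin"], egfr=55))
```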

Condition- or task-specific clinical documentation and order entry forms

Structured data entry is essential for all forms of interactive clinical decision support and is also among the most difficult aspects of EHRs for clinicians to adopt. In an attempt to capture accurate, coded clinical data, EHR designers and developers have developed interface terminologies [61], condition- or task-specific clinical documentation [62], and order entry templates or forms [63]. These forms have been shown to improve the quality of patient care documentation, as well as outcomes, in a limited number of studies [64].

Finally, there are a variety of CDS resources that present high-quality, evidence-based content. For example, UpToDate, Micromedex, CliniConsult, and ClineGuide are common forms of clinical decision support that provide clinical knowledge at the point of care [65]. In general, while these examples offer the dominant means by which most providers access clinical guidelines at the point of care, most of these implementations lack the essential patient- or condition-specific features of the previously described CDS framework and are substantially less actionable. Work is underway to develop the “Infobutton”, a standard method of accessing context-specific information that exists in a computer system external to the main clinical information system, an approach that offers great promise [66].

Current state of CPG and CDS adoption

CPG adoption

CPG adoption, like clinical care, is not a simple binary process (i.e., adoption occurred or did not). Rather, CPG adoption encompasses implicit and explicit elements including awareness (clinicians must know that a guideline exists), evaluation (clinicians must assess the applicability of a guideline to a specific patient), obtaining and reviewing data, interpreting data, and adherence (the physician actually follows the guideline). Studies that do not measure the implicit steps (e.g., awareness and evaluation) may falsely conclude that a CPG was not adopted, despite the fact that a physician may consult a guideline (awareness and evaluation) and ultimately decide that it does not apply, so that the recommendation is not followed. An evaluation that focuses only on overt adherence will fail to acknowledge that the guideline was used appropriately.

Awareness of a guideline is one step in the process towards guideline adoption [6]. McGlynn and colleagues found that patients receive guideline-recommended care only about 50% of the time [8]. Improvements in outcome measures have been noted in condition-specific studies [6]. It is unclear whether these findings generalize across a diversity of conditions.

CDS adoption

The relatively poor adoption of CPGs in practice is partly due to the lack of an effective workflow model that would enable efficient use. Over the past 30 years, a variety of CDS systems have been developed and evaluated. Although benefits have been demonstrated, many of these implementations are unique to a single system, confined to use in an academic medical center that has had a long-standing working relationship with the developers, or rely on increasingly outdated approaches to delivering recommendations (e.g., paper printouts that are attached to the chart) [67].

The increasing adoption of electronic health records (EHRs) will result in increased physician exposure to rudimentary forms of CDS and possibly more advanced forms. The ONC’s recently released final rule on EHR certification requires decision support functionality, as do the meaningful use criteria that are designed to drive EHR adoption [4, 68]. Less than ten percent of U.S. hospitals have a basic EHR system and less than two percent have a comprehensive EHR system [69]. Recent evidence indicates that only about 17% of outpatient practices use at least a basic EHR system [70].

Chaudhry et al. found that approximately 25% of all English-language peer-reviewed studies that have been used to demonstrate increases in the quality and safety of patient care originated from four institutions [67], each of which had internally developed EHRs with advanced CDS features and functions and long-standing collaborations between the users and developers of the EHR and CDS. More recently, a growing number of large integrated delivery systems (e.g., Geisinger, Group Health Cooperative of Puget Sound, Kaiser Permanente) have adopted commercially developed multifunctional health information systems. Despite burgeoning adoption, there is little evidence regarding the effectiveness of these commercially developed systems [67, 71].

CDS and CPGs in the future

The adoption and meaningful use of EHRs and HIT has the potential to transform the pace, specificity, quality, and utility of how knowledge is translated into practice. To this end, it will be important to translate evidence into a reliable, valid, and structured form so that it can be more readily used with HIT in a manner that is useful to providers and patients. Recent legislation will foster and compel adoption and “meaningful” use of EHRs to prime the change process. However, one of the dominant concerns in motivating adoption of EHRs among physicians, let alone meaningful use, is the cost and utility of technology [70]. Thus, it is likely that sustainable transformation will depend on how effectively and efficiently new technologies help providers achieve outcomes they each deem to be a priority and that improve efficiency of care in ways that also improve the quality of care. We consider how more structured CPGs will be important to adoption and meaningful use of EHRs.

The growing use of HIT in clinical care represents a fundamental shift. The notion of “meaningful use,” a recent addition to the HIT lexicon, is itself an indication of the shift that is underway. The adoption of HIT alone will not be sufficient. The Health Information Technology for Economic and Clinical Health (HITECH) Act is intended to foster adoption of HIT and its meaningful use through financial incentives and increasingly demanding reporting requirements that will be linked to reimbursement rates [72]. The definition of meaningful use is expected to evolve over time as the utility and capabilities of HIT improve. Currently, the definition emphasizes the electronic capture of coded health information, use of information to track key clinical conditions, communication of information for care coordination purposes, and initial reporting of clinical quality measures and public health information. A “meaningful user” will be evaluated in relation to a core set of objectives and related measures [73]. In 2013, the definition is expected to encompass data and process needs relevant to disease management, clinical decision support, medication management, patient access to their health information, transitions in care, and quality measurement and research. In 2015, the definition is expected to expand further to include a focus on improvements in quality, safety, and efficiency, decision support for national high-priority conditions, patient access to self-management tools, access to comprehensive patient data, and improving population health outcomes. Although the financial incentives (and disincentives) forthcoming as part of the HITECH legislation are intended to drive adoption of EHRs, truly “meaningful use” will depend on many socio-technical factors, including those outside the scope of this review (e.g., culture, finance, physician preferences) [74]. The effective translation of knowledge to practice will depend, in part, on what we know about computerized CDS and its impact on care processes and patient outcomes, factors that are central to advancing the vision inherent to HITECH.

Alerting systems provide a cautionary tale about the importance of designing systems that are likely to be “meaningfully used”. Currently, RCTs indicate that alerts fail on the most basic measure of utility. Physicians often ignore alerts and alert-aided advice (even though the advice is evidence-based); this strongly suggests that physicians do not perceive these “aids” as useful. Certainly, clinic culture and physician attitudes are important to motivating adoption of methods known to improve outcomes [75]. It is easy to blame the intended audience for not cooperating and overlook other factors. The proximal cause of an alert failure may be that it is poorly designed, poorly timed, designed for someone other than a physician (e.g., quality managers), or directed to the wrong person, factors that have a bearing on the structure and related utility of CPGs [76].

While simple reminder alerts directed to physicians have minimal impact, there is also little evidence to suggest that more sophisticated forms of CDS improve care processes and patient outcomes. Again, however, the lack of support may simply reflect flaws in how CDS processes are designed (i.e., do they address needs of the end user). As previously noted, structured CPGs are intimately linked to data inputs. More structured CPGs that specify the required content and format of data and the skill level required to manage the CPG will support more sophisticated means of providing guidance to the end user.

While alerts are increasingly being evaluated in randomized controlled trials, we know relatively little about the diversity of alerts actually used in clinical practice and the factors that govern the effectiveness of such alerts. Moreover, while RCT evidence serves as a gold standard, many important questions about CPGs and CDS will not be addressed by RCTs; the time and cost required to initiate RCTs limit the diversity of questions that can be answered. Observational data may also be valuable in providing guidance on what forms of CDS do and do not work [77]. In particular, a growing number of integrated health care systems are using ambulatory and inpatient EHRs, and there is intense interest in using the longitudinal clinical data from these systems for comparative effectiveness research [78]. Relatively little has been said about the diversity of CDS protocols used by such systems and the ability to link clinical care process and outcome measures to EHR log files that contain time-stamped data on CDS transactions, including the encounter during which an alert was presented, whether the alert was accessed (i.e., if it is not a hard stop or a simple display of information), how long the alert remained unacknowledged, etc. [79]. These data offer a potentially valuable source of comparative evidence on the effectiveness of various forms of CDS as actually used in clinical practice and may help to rapidly advance understanding of what forms of CDS do and do not work.
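
As an example of the kind of observational analysis such log files could support, the sketch below computes per-alert override rates from time-stamped CDS transactions; the log schema and action labels are hypothetical.

```python
# Sketch of an observational analysis: computing per-alert override rates
# from time-stamped CDS transaction logs. The log schema (alert_id, action,
# timestamp) and action labels are hypothetical.
from collections import defaultdict
from typing import Dict, Iterable, Tuple


def override_rates(log_rows: Iterable[dict]) -> Dict[str, Tuple[int, float]]:
    """Return {alert_id: (times fired, fraction overridden or ignored)}."""
    fired = defaultdict(int)
    overridden = defaultdict(int)
    for row in log_rows:
        fired[row["alert_id"]] += 1
        if row["action"] in {"override", "ignored", "dismissed"}:
            overridden[row["alert_id"]] += 1
    return {a: (n, overridden[a] / n) for a, n in fired.items()}


sample_log = [
    {"alert_id": "ldl_statin", "action": "accepted", "timestamp": "2013-01-05T09:12:00"},
    {"alert_id": "ldl_statin", "action": "override", "timestamp": "2013-01-05T10:40:00"},
    {"alert_id": "ddi_warfarin", "action": "dismissed", "timestamp": "2013-01-06T14:02:00"},
]
print(override_rates(sample_log))  # {'ldl_statin': (2, 0.5), 'ddi_warfarin': (1, 1.0)}
```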

CDS should be designed to serve the needs of the end user. Uni-dimensional or binary alerts designed to get physicians to do something or avoid doing something fall short for a number of reasons. Physicians are trained to do cognitively demanding tasks, to process complex information, and to make judgments in the face of uncertainty. Accordingly, they may not be effective or efficient in performing rudimentary tasks that are better suited for automation or completion by a less-skilled individual. Physicians also face demands to be more productive. This is not to say that simple alerts and reminders are not potentially useful to improve care processes and outcomes. Rather, forms of CDS should be hierarchically defined based on the specificity of the CPG, the complexity and specificity of the data required for deployment, the risks associated with various options, and patient preferences (Table 3). Together, these features will likely dictate the optimal timing of deployment in the care process, the optimal decision-maker(s) (i.e., physician, nurse, patient), and the utility of other forms of support (e.g., linked order sets) that are integral to the CDS to engage the end user. In this context, it will be important to consider how CPGs can be structured to allow physicians to do tasks that they could not do otherwise, or that help them to do tasks better and more efficiently.

Table 3 Simple and complex forms of CDS

The changes that are occurring in large delivery systems indicate that use of EHRs including CDS is changing work roles [80]. These changes have implications for CPGs and the need to articulate the skill level required for a given task. Moreover, where there is strong evidence for when and for whom a care process or treatment (e.g., pneumovax in older patients) should be done, it may be sensible to simply automate the task so that it occurs 100% of the time [81]. For example, all Type-II diabetics without a recent HbA1c should have this laboratory test completed at appropriate intervals, but neither the decision nor the completion of the test requires the involvement of the physician. In this example, CPGs could be structured to recommend a link between the HbA1c level, the importance of other covariates (e.g., liver function tests, kidney function test), who should manage the patient (i.e., nurse, primary care physician, endocrinologist), and the ongoing need for care (i.e., automate decision about next scheduled visit). Thus, where evidence about what to do is robust and understandable (e.g., management of hypertension, hyperlipidemia, etc.), risks are low, and within the limits of common sense, as much control as possible should be shifted to others. Where the risk of confusion and of making the “wrong decision” increase (e.g., as decision complexity increases), decision support tools may become increasingly important and useful for both the provider and others involved in the care processes. In addition, patient guidance and preferences are likely to be important in a shared decision approach to care. Future development of CPGs may consider the extent to which care processes and decisions can be assumed by others and where patient preference is important.
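
The sketch below illustrates the HbA1c example above as a semi-automated rule that queues the test and routes follow-up without physician involvement when control is adequate, escalating only when it is not; the retest interval, threshold, and routing labels are illustrative assumptions.

```python
# Sketch of the "automate what is routine" idea: a scheduled rule queues an
# HbA1c order and routes follow-up when a patient with type 2 diabetes has no
# recent result. Interval, threshold, and routing labels are illustrative
# assumptions, not a clinical protocol.
from datetime import date, timedelta
from typing import Optional


def hba1c_action(has_type2_dm: bool, last_hba1c_date: Optional[date],
                 last_hba1c_value: Optional[float], today: date) -> Optional[dict]:
    if not has_type2_dm:
        return None
    interval = timedelta(days=180)                 # illustrative retest interval
    if last_hba1c_date is None or today - last_hba1c_date > interval:
        return {"action": "queue_standing_hba1c_order", "route_to": "nurse"}
    if last_hba1c_value is not None and last_hba1c_value >= 9.0:
        # Poor control: escalate the routing decision rather than automate it.
        return {"action": "flag_for_review", "route_to": "primary_care_physician"}
    return {"action": "schedule_next_test", "due": (last_hba1c_date + interval).isoformat()}


print(hba1c_action(True, date(2012, 6, 1), 7.2, today=date(2013, 4, 19)))
```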

Depending on the risks, strength of evidence, complexity of the decision and intervention, and role of patient preferences, CPGs should designate the end users and also be structured with the end user in mind (i.e., patients, administrative staff, mid-level providers, or physicians). Changes in work roles will also affect the physician. Less time will likely be spent on routine care decisions that can be semi-automated or managed by others. More time will be spent on collaborative care and cognitively demanding care decisions.

The increased adoption of EHRs will open unique opportunities to move clinical knowledge beyond the threshold of clinical practices. However, it will be difficult to fulfill the vision for meaningful use without making substantial advances in standardization and codification of CPGs such that they can be more uniformly adopted across diverse clinical care settings.

In our view, the intersection of CPGs and CDS should be an area of active research to foster the development of actionable forms of CPGs. The AHRQ-funded CDS Consortium (CDSC) has studied CDS practices at five institutions with both commercially developed and internally developed EHR and CDS systems; the goal of this effort was to develop recommendations for CPG development activities that complement and build upon existing knowledge and systems. We support the seven focused recommendations, which highlight the interdependence of CPG and CDS development in achieving the vision of the “digital future” [17]. We summarize these recommendations in Table 4.

Table 4 CDSC recommendations for CPG development activities

Summary

If we are to realize the 21st century healthcare delivery system called for in the Institute of Medicine’s landmark reports on Quality and Information Technology [82, 83], and reduce the astronomical costs associated with this care, we must increase the efficiency and effectiveness of the clinical decision support that is delivered to clinicians through electronic health records at the point of care. To make these dramatic improvements will require significant changes to the way in which clinical practice guidelines are developed, incorporated into existing EHRs, and integrated into clinicians’ workflow at the point of care. Achieving these objectives should result in improvements in the quality and reductions in the cost of healthcare, both of which are necessary to ensure a 21st century delivery system that consistently provides safe and effective care to all patients.

References

  1. Turner T, Misso M, Harris C, Green S: Development of evidence-based clinical practice guidelines (CPGs): comparing approaches. Implement Sci. 2008, 3: 45-10.1186/1748-5908-3-45.

  2. Mazmanian PE, Davis DA, Galbraith R, American College of Chest Physicians Health and Science Policy Committee: Continuing medical education effect on clinical outcomes: effectiveness of continuing medical education: American College of Chest Physicians Evidence-Based Educational Guidelines. Chest. 2009, 135: 49S-55S. 10.1378/chest.08-2518.

  3. HealthCare.gov. Available at: http://www.healthcare.gov/index.html. Accessed 4/19/2013

  4. Center for Medicare and Medicaid Services. Medicare and Medicaid Programs; Electronic Health Record Incentive Program Final Rule. 2010, Available from: http://www.gpo.gov/fdsys/pkg/FR-2010-01-13/pdf/E9-31217.pdf. Accessed 4/19/2013.

  5. National Guideline Clearinghouse Guideline Index. Available at: http://guideline.gov/browse/by-topic.aspx. Accessed 4/19/2013.

  6. Larson E: Status of practice guidelines in the United States: CDC guidelines as an example. Prev Med. 2003, 36: 519-524. 10.1016/S0091-7435(03)00014-8.

  7. National Heart, Lung, and Blood Institute. National Asthma Education Program. Expert Panel on the Management of Asthma, United States. Dept. of Health and Human Services, National Institutes of Health. Expert Panel report 3. 2007, Bethesda, Md: U.S. Dept. of Health and Human Services, National Institutes of Health, National Heart, Lung, and Blood Institute, 07–4051:417-Available from: http://purl.access.gpo.gov/GPO/LPS93956

  8. McGlynn EA, Asch SM, Adams J: The quality of health care delivered to adults in the United States. N Engl J Med. 2003, 348: 2635-2645. 10.1056/NEJMsa022615.

  9. Mangione-Smith R, DeCristofaro AH, Setodji CM: The quality of ambulatory care delivered to children in the United States. N Engl J Med. 2007, 357: 1515-1523. 10.1056/NEJMsa064637.

  10. Flexner A: Medical Education in the United States and Canada: A Report to the Carnegie Foundation for the Advancement of Teaching (Bulletin No. 4). 1910, New York, NY: Carnegie Foundation

  11. Shiffman RN, Dixon J, Brandt C: The GuideLine Implementability Appraisal (GLIA): development of an instrument to identify obstacles to guideline implementation. BMC Med Inform Decis Mak. 2005, 5: 23-10.1186/1472-6947-5-23.

  12. Cabana MD, Rand CS, Powe NR: Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999, 282: 1458-1465. 10.1001/jama.282.15.1458.

  13. Mansouri M, Lockyer J: A meta-analysis of continuing medical education effectiveness. J Contin Educ Health Prof. 2007, 27: 6-15. 10.1002/chp.88.

  14. Stead WW, Lin H, National Research Council. Committee on Engaging the Computer Science Research Community in Health Care Informatics: Computational technology for effective health care: immediate steps and strategic directions. 2009, Washington, D.C.: National Academies Press

  15. Sittig D, Singh H: A New Socio-technical Model for Studying Health Information Technology in Complex Adaptive Healthcare Systems. Quality & Safety in Healthcare. 2010, 19 (Suppl 3): i68-i74. 10.1136/qshc.2010.042085. PMID: 20959322

  16. Wright A, Sittig DF, Ash JS, Sharma S, Pang JE, Middleton B: Clinical decision support capabilities of commercially-available clinical information systems. J Am Med Inform Assoc. 2009, 16: 637-644. 10.1197/jamia.M3111.

  17. Sittig DF, Wright A, Ash JS, Middleton B: A Set of Preliminary Standards Recommended for Achieving a National Repository of Clinical Decision Support Interventions. 2009, AMIA Fall Symposium, 614-618.

  18. Osheroff JA, Pifer EA, Teich JM, Sittig DF, Jenders RA: Improving Outcomes with Clinical Decision Support: An Implementer's Guide. 2005, Healthcare Information and Management Systems Society

  19. Berg M: Rationalizing medical work : decision-support techniques and medical practices. 1997, Cambridge, Mass: MIT Press

  20. Berner ES: Clinical decision support systems: State of the art. 2009, Rockville, Maryland: Agency for Healthcare Research and Quality; AHRQ Publication No. 09-0069-EF

  21. Kim S, Haug PJ, Rocha RA, Choi I: Modeling the Arden syntax for medical decisions in XML. Int J Med Inform. 2008, 77: 650-656. 10.1016/j.ijmedinf.2008.01.001.

  22. Hripcsak G: Writing Arden syntax medical logic modules. Comput Biol Med. 1994, 24: 331-363. 10.1016/0010-4825(94)90002-7.

  23. Wright A, Goldberg H, Hongsermeier T, Middleton B: A description and functional taxonomy of rule-based decision support content at a large integrated delivery network. J Am Med Inform Assoc. 2007, 14: 489-496. 10.1197/jamia.M2364.

  24. RxNorm Files. Available at: http://www.nlm.nih.gov/research/umls/rxnorm/docs/rxnormfiles.html. Accessed: 4/19/2013.

  25. The International Health Terminology Standards Development Organisation. SNOMED Clinical Terms® User Guide. Available at: http://ihtsdo.org/fileadmin/user_upload/doc/. Accessed: 4/19/2013.

  26. Forrey AW, McDonald CJ, DeMoor G: Logical observation identifier names and codes (LOINC) database: a public use set of codes and names for electronic reporting of clinical laboratory test results. Clin Chem. 1996, 42: 81-90.

  27. Jones JB, Snyder CF, Wu AW: Issues in the design of Internet-based systems for collecting patient-reported outcomes. Qual Life Res. 2007, 16: 1407-1417. 10.1007/s11136-007-9235-z.

  28. Sittig DF, Wright A, Simonaitis L: The state of the art in clinical knowledge management: an inventory of tools and techniques. Int J Med Inform. 2010, 79: 44-57. 10.1016/j.ijmedinf.2009.09.003.

  29. van der Sijs H, Aarts J, Vulto A, Berg M: Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006, 13: 138-147. 10.1197/jamia.M1809.

  30. Bates DW, Kuperman GJ, Wang S: Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. 2003, 10: 523-530. 10.1197/jamia.M1370.

  31. Abookire SA, Teich JM, Sandige H: Improving allergy alerting in a computerized physician order entry system. Proc AMIA Symp. 2000, 2-6.

  32. Miller PL, Frawley SJ, Sayward FG: Maintaining and incrementally revalidating a computer-based clinical guideline: a case study. J Biomed Inform. 2001, 34: 99-111. 10.1006/jbin.2001.1011.

  33. Kawamoto K, Houlihan CA, Balas EA, Lobach DF: Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005, 330: 765-10.1136/bmj.38398.500764.8F.

  34. Garg AX, Adhikari NK, McDonald H: Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005, 293: 1223-1238. 10.1001/jama.293.10.1223.

  35. United States Department of Health and Human Services. Personalized Health Care: Opportunities, Pathways, Resources. 2007, Available from: http://www.hhs.gov/myhealthcare/news/phc-report.pdf. Accessed July 26, 2010

  36. Approaching CDS in Medication Management. Available at: http://healthit.ahrq.gov/images/mar09_cds_book_chapter/CDS_MedMgmnt_ch_1_sec_2_five_rights.htm. Accessed 8/26/2010

  37. Gorman CA, Zimmerman BR, Smith SA: DEMS - a second generation diabetes electronic management system. Comput Methods Programs Biomed. 2000, 62: 127-140. 10.1016/S0169-2607(99)00054-1.

  38. Feldstein A, Elmer PJ, Smith DH: Electronic medical record reminder improves osteoporosis management after a fracture: a randomized, controlled trial. J Am Geriatr Soc. 2006, 54: 450-457. 10.1111/j.1532-5415.2005.00618.x.

  39. Butzlaff M, Vollmar HC, Floer B, Koneczny N, Isfort J, Lange S: Learning with computerized guidelines in general practice?: A randomized controlled trial. Fam Pract. 2004, 21: 183-188. 10.1093/fampra/cmh214.

  40. Dayton CS, Ferguson JS, Hornick DB, Peterson MW: Evaluation of an Internet-based decision-support system for applying the ATS/CDC guidelines for tuberculosis preventive therapy. Med Decis Making. 2000, 20: 1-6. 10.1177/0272989X0002000101.

  41. Sequist TD, Gandhi TK, Karson AS: A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. J Am Med Inform Assoc. 2005, 12: 431-437. 10.1197/jamia.M1788.

  42. Kuilboer MM, van Wijk MA, Mosseveld M: Computed critiquing integrated into daily clinical practice affects physicians’ behavior–a randomized clinical trial with AsthmaCritic. Methods Inf Med. 2006, 45: 447-454.

  43. Morris AH, Wallace CJ, Menlove RL: Randomized clinical trial of pressure-controlled inverse ratio ventilation and extracorporeal CO2 removal for adult respiratory distress syndrome. Am J Respir Crit Care Med. 1994, 149: 295-305. 10.1164/ajrccm.149.2.8306022.

  44. Ramnarayan P, Kapoor RR, Coren M: Measuring the impact of diagnostic decision support on the quality of clinical decision making: development of a reliable and valid composite score. J Am Med Inform Assoc. 2003, 10: 563-572. 10.1197/jamia.M1338.

  45. Ledley RS, Lusted LB: Reasoning foundations of medical diagnosis; symbolic logic, probability, and value theory aid our understanding of how physicians reason. Science. 1959, 130: 9-21. 10.1126/science.130.3366.9.

  46. Wright A, Sittig DF: A four-phase model of the evolution of clinical decision support architectures. Int J Med Inform. 2008, 77: 641-649. 10.1016/j.ijmedinf.2008.01.004.

  47. Berner ES: Diagnostic decision support systems: why aren’t they used more and what can we do about it?. AMIA Annu Symp Proc. 2006, 1167-1168.

  48. Miller RA, Masarie FE: The demise of the “Greek Oracle” model for medical diagnostic systems. Methods Inf Med. 1990, 29: 1-2.

  49. McDonald CJ: Protocol-based computer reminders, the quality of care and the non-perfectability of man. N Engl J Med. 1976, 295: 1351-1355. 10.1056/NEJM197612092952405.

    Article  CAS  PubMed  Google Scholar 

  50. Shea S, DuMouchel W, Bahamonde L: A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventive care in the ambulatory setting. J Am Med Inform Assoc. 1996, 3: 399-409. 10.1136/jamia.1996.97084513.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  51. Shojania KG, Jennings A, Mayhew A, Ramsay C, Eccles M, Grimshaw J: Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ. 2010, 182: E216-E225.

    Article  PubMed  PubMed Central  Google Scholar 

  52. Singh H, Arora HS, Vij MS, Rao R, Khan MM, Petersen LA: Communication outcomes of critical imaging results in a computerized notification system. J Am Med Inform Assoc. 2007, 14: 459-466. 10.1197/jamia.M2280.

    Article  PubMed  PubMed Central  Google Scholar 

  53. Harpole LH, Khorasani R, Fiskio J, Kuperman GJ, Bates DW: Automated evidence-based critiquing of orders for abdominal radiographs: impact on utilization and appropriateness. J Am Med Inform Assoc. 1997, 4: 511-521. 10.1136/jamia.1997.0040511.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  54. Iglehart JK: Health insurers and medical-imaging policy–a work in progress. N Engl J Med. 2009, 360: 1030-1037. 10.1056/NEJMhpr0808703.

    Article  CAS  PubMed  Google Scholar 

  55. Solberg LI, Vinz C, Trevis JE: A technology solution for the high-tech diagnostic imaging conundrum. Am J Manag Care. 2012, 18 (8): 421-425.

    PubMed  Google Scholar 

  56. Sistrom CL, Dang PA, Weilburg JB, Dreyer KJ, Rosenthal DI, Thrall JH: Effect of computerized order entry with integrated decision support on the growth of outpatient procedure volumes: seven-year time series analysis. Radiology. 2009, 251: 147-155. 10.1148/radiol.2511081174.

    Article  PubMed  Google Scholar 

  57. Chertow GM, Lee J, Kuperman GJ: Guided medication dosing for inpatients with renal insufficiency. JAMA. 2001, 286: 2839-2844. 10.1001/jama.286.22.2839.

    Article  CAS  PubMed  Google Scholar 

  58. Feldstein AC, Smith DH, Perrin N: Reducing warfarin medication interactions: an interrupted time series evaluation. Arch Intern Med. 2006, 166: 1009-1015. 10.1001/archinte.166.9.1009.

    Article  PubMed  Google Scholar 

  59. Ammenwerth E, Schnell-Inderst P, Machan C, Siebert U: The effect of electronic prescribing on medication errors and adverse drug events: a systematic review. J Am Med Inform Assoc. 2008, 15: 585-600. 10.1197/jamia.M2667.

    Article  PubMed  PubMed Central  Google Scholar 

  60. Blumenthal D, Tavenner M: The “Meaningful Use” Regulation for Electronic Health Records. N Engl J Med. 2010, 363: 501-504. 10.1056/NEJMp1006114.

    Article  CAS  PubMed  Google Scholar 

  61. Rosenbloom ST, Miller RA, Johnson KB, Elkin PL, Brown SH: Interface terminologies: facilitating direct entry of clinical data into electronic health record systems. J Am Med Inform Assoc. 2006, 13: 277-288. 10.1197/jamia.M1957.

    Article  PubMed  PubMed Central  Google Scholar 

  62. Schnipper JL, Linder JA, Palchuk MB: “Smart forms” in an electronic medical record: documentation-based clinical decision support to improve disease management. J Am Med Inform Assoc. 2008, 15: 513-523. 10.1197/jamia.M2501.

    Article  PubMed  PubMed Central  Google Scholar 

  63. McDonald CJ, Overhage JM, Tierney WM: The regenstrief medical record system: a quarter century experience. Int J Med Inform. 1999, 54: 225-253. 10.1016/S1386-5056(99)00009-X.

    Article  CAS  PubMed  Google Scholar 

  64. Ozdas A, Speroff T, Waitman LR, Ozbolt J, Butler J, Miller RA: Integrating “best of care” protocols into clinicians’ workflow via care provider order entry: impact on quality-of-care indicators for acute myocardial infarction. J Am Med Inform Assoc. 2006, 13: 188-196. 10.1197/jamia.M1656.

    Article  PubMed  PubMed Central  Google Scholar 

  65. Bonis PA, Pickens GT, Rind DM, Foster DA: Association of a clinical knowledge support system with improved patient safety, reduced complications and shorter length of stay among Medicare beneficiaries in acute care hospitals in the United States. Int J Med Inform. 2008, 77: 745-753. 10.1016/j.ijmedinf.2008.04.002.

    Article  PubMed  Google Scholar 

  66. Cimino JJ: Infobuttons: anticipatory passive decision support. AMIA Annu Symp Proc. 2008, 1203-1204.

    Google Scholar 

  67. Chaudhry B, Wang J, Wu S: Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006, 144: 742-752. 10.7326/0003-4819-144-10-200605160-00125.

    Article  PubMed  Google Scholar 

  68. Department of Health and Human Services. 45 CFR Part 170 RIN 0991–AB58. Health Information Technology: Initial Set of Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology. Federal Register. 2010, 77 (144): 44590-44654. Available at: http://www.gpo.gov/fdsys/pkg/FR-2010-07-28/pdf/2010-17210.pdf. Accessed 4/19/2013

    Google Scholar 

  69. Jha AK, DesRoches CM, Campbell EG: Use of electronic health records in U.S. hospitals. N Engl J Med. 2009, 360: 1628-1638. 10.1056/NEJMsa0900592.

    Article  CAS  PubMed  Google Scholar 

  70. DesRoches CM, Campbell EG, Rao SR: Electronic health records in ambulatory care–a national survey of physicians. N Engl J Med. 2008, 359: 50-60. 10.1056/NEJMsa0802005.

    Article  CAS  PubMed  Google Scholar 

  71. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J: The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009, 3: CD001096-

    PubMed  Google Scholar 

  72. Blumenthal D: Stimulating the adoption of health information technology. N Engl J Med. 2009, 360: 1477-1479. 10.1056/NEJMp0901592.

    Article  CAS  PubMed  Google Scholar 

  73. Blumenthal D: Launching HITECH. N Engl J Med. 2010, 362: 382-385. 10.1056/NEJMp0912825.

    Article  CAS  PubMed  Google Scholar 

  74. Sittig DF, Singh H: Eight rights of safe electronic health record use. JAMA. 2009, 302: 1111-1113. 10.1001/jama.2009.1311.

    Article  CAS  PubMed  Google Scholar 

  75. Modak I, Sexton JB, Lux TR, Helmreich RL, Thomas EJ: Measuring safety culture in the ambulatory setting: the safety attitudes questionnaire–ambulatory version. J Gen Intern Med. 2007, 22: 1-5.

    Article  PubMed  PubMed Central  Google Scholar 

  76. Sittig DF, Teich JM, Osheroff JA, Singh H: Improving clinical quality indicators through electronic health records: it takes more than just a reminder. Pediatrics. 2009, 124: 375-377. 10.1542/peds.2009-0339.

    Article  PubMed  PubMed Central  Google Scholar 

  77. Singh H, Thomas EJ, Sittig DF: Notification of abnormal lab test results in an electronic medical record: do any safety concerns remain?. Am J Med. 2010, 123: 238-244. 10.1016/j.amjmed.2009.07.027.

    Article  PubMed  PubMed Central  Google Scholar 

  78. Maro JC, Platt R, Holmes JH: Design of a national distributed health data network. Ann Intern Med. 2009, 151: 341-344. 10.7326/0003-4819-151-5-200909010-00139.

    Article  PubMed  Google Scholar 

  79. Singh H, Thomas EJ, Mani S: Timely follow-up of abnormal diagnostic imaging test results in an outpatient setting: are electronic medical records achieving their potential?. Arch Intern Med. 2009, 169: 1578-1586.

    PubMed  PubMed Central  Google Scholar 

  80. Ash JS, Sittig DF, Campbell E, Guappone K, Dykstra RH: An unintended consequence of CPOE implementation: shifts in power, control, and autonomy. AMIA Annu Symp Proc. 2006, 11-15.

    Google Scholar 

  81. Dexter PR, Perkins S, Overhage JM, Maharry K, Kohler RB, McDonald CJ: A computerized reminder system to increase the use of preventive care for hospitalized patients. N Engl J Med. 2001, 345: 965-970. 10.1056/NEJMsa010181.

    Article  CAS  PubMed  Google Scholar 

  82. Institute of Medicine, Committee on Quality of Health Care in America: Crossing the quality chasm: a new health system for the 21st century. 2001, Washington, D.C: National Academy Press

    Google Scholar 

  83. Kohn LT, Corrigan J, Donaldson MS: To err is human: building a safer health system. 2000, Washington, D.C.: National Academy Press

    Google Scholar 


Acknowledgments

This publication is derived in part from work supported under a contract with the Agency for Healthcare Research and Quality (AHRQ), Contract # HHSA290200810010.

This paper was commissioned by the Institute of Medicine in support of the report “Clinical Practice Guidelines We Can Trust”, released March 23, 2011.

Disclosures

The findings and conclusions in this document are those of the author(s), who are responsible for its content, and do not necessarily represent the views of AHRQ. No statement in this report should be construed as an official position of AHRQ or of the U.S. Department of Health and Human Services.

Identifiable information on which this report, presentation, or other form of disclosure is based is protected by federal law, Section 934(c) of the Public Health Service Act, 42 U.S.C. 299c-3(c). No identifiable information about any individuals or entities supplying the information or described in it may be knowingly used except in accordance with their prior consent. Any confidential identifiable information in this report or presentation that is knowingly disclosed is disclosed solely for the purpose for which it was provided.

Author information


Corresponding authors

Correspondence to James B Jones or Dean F Sittig.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Jones, J.B., Stewart, W.F., Darer, J.D. et al. Beyond the threshold: real-time use of evidence in practice. BMC Med Inform Decis Mak 13, 47 (2013). https://doi.org/10.1186/1472-6947-13-47


Keywords