Studies in Educational Evaluation

Research on data-based decision making has proliferated around the world, fueled by policy recommendations and the diverse data that are now available to educators to inform their practice. Yet, many misconceptions and concerns have been raised by researchers and practitioners. To better understand the issues, a session was convened at AERA’s annual convention in 2018, followed by an analysis of the literature based on the misconceptions that emerged. This commentary is an outgrowth of that exploration, providing research-based, theoretical, and practical evidence to dispel some of the misconceptions. Our objective is to survey and synthesize the landscape of the data-based decision making literature to address the identified misconceptions, and then to serve as a stimulus for changes in policy and practice as well as a roadmap for a research agenda.


The impetus
Data-based decision making (DBDM; or data-driven decision making), data use for short, has emerged and evolved as a key field in education over nearly two decades. DBDM has become important, in part, because policymakers have stressed the need for education to become an evidence-based field, causing educators to rely more on data and research evidence, and not just on experience and intuition. Accordingly, research on DBDM has paralleled policy mandates and emerging practice. With any evolving field, however, intent can sometimes be confused or misconstrued, or poor implementation can occur. Misconceptions about DBDM have arisen over the course of changing mandates from practice, policy, research, and theory. This article is meant to be a commentary that is grounded in the DBDM literature. The purpose of the article is to identify and address the misconceptions through the presentation of relevant research that either confirms or disconfirms the basis of the misconceptions.

Definitions and background
We begin with some basic definitions and background to assist the reader. Definitions for the process of data use differ, as do emphases. Based on a review of over 3000 reports and journal articles, Hamilton et al. (2009) define DBDM as the systematic collection and analysis of different kinds of data to inform educational decisions. Mandinach and Gummer (2016b) make clear the importance of considering diverse data sources in the decision-making process because, all too often, educators think only of student performance indices (i.e., assessment results) as educational data. This is a main issue raised among the misconceptions later in the paper. Research finds that effective data use requires multiple sources of qualitative as well as quantitative data, and not solely achievement data (Mandinach & Gummer, 2016b). Data use is a complex and interpretive process in which goals must be set and data identified, collected, analyzed, interpreted, and used to improve teaching and learning (Coburn, Toure, & Yamashita, 2009; Coburn & Turner, 2011; Mandinach & Jackson, 2012).
This interpretive transformation process involves a comprehensive skill set as part of an iterative inquiry process that informs decision making. Van der Kleij, Vermeulen, Schildkamp, and Eggen (2015) described DBDM as a formative assessment approach: assessments are used to support learning, and evidence is gathered, interpreted, and used to change the learning environment based on students' needs (Van der Kleij et al., 2015; Wiliam, 2011). Data use, in part, originated in the United States as a consequence of the No Child Left Behind (NCLB) Act, and has continued under the Every Student Succeeds Act (ESSA), in which learning outcomes were defined in terms of results and attaining specified targets (Wayman, Spikes, & Volonnino, 2013). This stimulated the use of data for informing teaching and learning in schools in the United States (Wayman, Jimerson, & Cho, 2012), but most often for the purposes of accountability and compliance rather than for continuous improvement (Hargreaves & Braun, 2013).
Policymakers have stressed the importance of data use to make education an evidence-based discipline. Researchers are studying diverse aspects of how data are being used in education and their impact. Although DBDM has existed for many years, some practitioners still maintain that it is a passing fad. Yet we maintain that effective educators have long used data to inform their practice, even as many things in education have changed to position data use differently. These changes can even be connected to the learning theories behind data use. Van der Kleij et al. (2015) analyzed the learning theories in which data use is grounded. They state that early initiatives of data use were based on neo-behaviorism and cognitivism (Stobart, 2008), with no explicit attention paid to the environment in which the teaching and learning occurred. According to Van der Kleij et al. (2015), data use focused on reaching preset attainment targets, checking whether those targets had been reached, and adapting the learning environment where needed. The focus was on teachers using assessments to check on individual students' abilities and delivering adequate instruction to those students. This aligns with the emphasis on accountability and compliance in such policies as NCLB and ESSA in the United States, where the subgroup reporting requirements sought to promote equity. Data use for accountability continues to be a prominent focus due to federal and state and/or national testing and compliance policies (Hargreaves & Braun, 2013; Nichols & Berliner, 2007). However, this focus did not take into account the variety of contexts in which the learning occurred, and it also led to a narrow focus on achievement data as the sole source of important data.
In the last decade a shift occurred from a sole focus on accountability to an emphasis on continuous improvement (Mandinach, 2012). According to Van der Kleij et al. (2015, p. 330), data use has moved towards a more sociocultural paradigm as it is currently 'focusing on continuously adapting learning environments to facilitate and optimize learning processes, taking into account learners' needs and individual characteristics.' Thus, instead of just acknowledging the context or controlling for it, the emphasis is on the process of data use within a particular context (Coburn & Turner, 2011; Schildkamp, Lai, & Earl, 2013; Supovitz, 2010).
This shift toward continuous improvement has important implications for data use. First, many criticisms have emanated from the pressures of data use for accountability; these criticisms are explored below. Second, and relatedly, research recognizes that students, their backgrounds, and their circumstances are complex, and that students face situational challenges that require educators to tap diverse data sources to gain a comprehensive understanding of them (Datnow, 2017). This means that a reliance only on test scores and indicators of student performance is inadequate. Educators need data such as demographics, attendance, health, transportation, justice, motivation, home circumstances (e.g., homelessness, foster care, potential abuse, poverty), and special designations (e.g., disability, language learners, bullying) to contextualize student performance and behavior. These other sources of data are not intended to replace essential data around student performance, but to provide explanations and context to help educators better understand and interpret what the data mean (Mandinach, Warner, & Mundry, 2019). Third, sophisticated technologies and apps now exist that enable educators to access and make effective use of diverse sources of data to improve the quality of educational decision making.

Evidence of impact
Some proponents of DBDM believe that the use of data can solve educational problems, but data use is not a solution to all problems. Like everything, the effectiveness of DBDM depends on many factors, including the intent of use. Critics are vocal about their concerns (Penuel & Shepard, 2016).
For some, DBDM seems to be an untenable, unsubstantiated, and irrational enterprise. Why, then, are there policies, pressures, and emphases on data use, given the challenges and criticisms? Clearly there are opportunities and affordances provided by data use. The literature has been growing as data use has become more embedded in practice. Research on DBDM finds that the components of data use can serve as either enablers or hindrances (Jimerson, Garry, Poortman, & Schildkamp, 2019). If done well, data use can be a positive activity; done poorly, it is a hindrance.
Educators (and other critics) sometimes wonder if data use makes a difference in educational practice. They ask if there is evidence of impact. Evidence to support or refute the claims can be found in research. For example, several studies show that, if used effectively, data use can lead to increased student achievement (Marsh, 2012). Schildkamp, Poortman, Ebbeler, and Pieters (2019) conducted a literature review and identified 11 data use interventions that have been studied scientifically and for which there was evidence of an impact on student achievement. All of these studies provide more than anecdotal evidence; most used a quasi-experimental or randomized controlled trial design. This confirms that data use, if done well, can actually lead to improved student learning and achievement.
Clearly DBDM must address both accountability and continuous improvement objectives, but there must be a balance. Teachers worry about the ramifications or the "gotchas" that data use for accountability might entail (Nichols & Berliner, 2007). There are concerns about data (i.e., test scores) being used for inappropriate decisions, including teacher evaluations. This concern applies both to summative data being used for formative purposes and the reverse. But as Bennett (2011) and Pellegrino (2010) both note, the differences are not necessarily distinct. We therefore invoke the wisdom of Cronbach (1988) that validity really is about interpretation, not just about the properties of the instrument or the data.
Given the controversies around DBDM, the explicit purpose of this commentary is to lay out the topics of concern and misconceptions and then explore the literature to address the challenges, opportunities, and practices. To support the contentions of the commentary, we review the literature and how the research addresses issues in DBDM, focusing on a select number of topics. We conclude with logical steps to move the field forward. It is our hope that this commentary will stimulate improvement among colleagues within the data field while informing other colleagues about DBDM.

Key misconceptions: what the literature says
The commentary is based on a range of relevant literature, including but not limited to several review studies in this area (Datnow & Hubbard, 2015; Hamilton et al., 2009; Hoogland et al., 2016; Schenke & Meijer, 2018). Our review of the literature yielded five key topics around which criticisms or misconceptions about data use exist. We specify the concerns, misconceptions, and issues around each general topic and then provide a review of the literature that relates to it. Using the landscape of the literature and our knowledge of DBDM, we then posit recommendations for the field to consider and to stimulate progress for those working in the area of data use. These recommendations are geared primarily toward researchers but also toward practitioners and policymakers. The general topics addressed in this section revolve around: (1) theories of action and the process of data use; (2) the goals of data use; (3) what constitutes data and the data use process; (4) data literacy; and (5) technology. The ordering of these topics is purposeful, although the misconceptions are systemically interrelated. We begin with theoretical issues and follow with infrastructure components.
Before we discuss the review, it is important to note that there, of course, is diversity in practice and that nothing is universal. Nuances in theory, research, and practice exist. The commentary attempts to present a balanced view of the literature. It also is important to note that much of the research has occurred in Europe (particularly the Netherlands and Belgium) and the United States, and to a lesser degree in New Zealand. Other countries are now engaging, and the research will follow.

Theories of action and the process of data use

Misconception 1: Data-driven interventions lack a theory of action.
Penuel and Shepard (2016) claim that data-driven interventions lack a theory of action. However, several theories of action, conceptual frameworks, and models of inquiry exist with regard to data use (e.g., Boudett et al., 2013; Coburn & Turner, 2011; Hamilton et al., 2009; Mandinach, Honey, Light, & Brunner, 2008; Marsh, 2012; Schenke & Meijer, 2018; Schildkamp & Poortman, 2015). Data use often starts with a certain goal that educators want to reach, usually related to improving the quality of teaching and learning in the school (e.g., student learning goals, aggregated achievement goals). It is essential that these goals are clear and measurable (Hamilton et al., 2009; Schildkamp, 2019). Multiple data sources can be used to determine whether these goals are reached. Next, educators need to make sense of these data (Vanlommel, Van Gasse, Vanhoof, & Van Petegem, 2017; Weick, 1995). Educators must collectively analyze and interpret the data to identify problems (i.e., when the set goals are not being met) and possible causes of these problems. Data use should not be an individual effort; collectively engaging in this sensemaking process is crucial because the implications regarding solutions to the problems, and consequent actions based on the analysis of the data, are often not self-evident (Mandinach et al., 2008; Marsh, 2012; Vanlommel et al., 2017). Collaboration may take place in a data team.
A data team is a group of educators who come together around the examination of data to discuss actionable strategies. Data teams have different compositions. They can be formed around grade levels or content, or across grade levels, and are led by a data coach (Farley-Ripple & Buttram, 2015; Huguet, Marsh, & Farrell, 2014; Schildkamp, Poortman, & Handelzalts, 2016). The assumption is that teachers collaborate through various kinds of teams. In these teams it is important to work toward continuous improvement and to try to address the needs of individual students through collaborative inquiry.
There are questions about whether data teams have the appropriate skills and knowledge, what kinds of inquiry processes they are using for what kinds of decision making, and whether there are adequate structures and resources to support the collaborative inquiry and sensemaking process. Several studies have begun to answer these questions (Bocala & Boudett, 2015; Bolhuis, Voogt, & Schildkamp, 2019; Datnow, Park, & Kennedy-Lewis, 2013). For example, the Data Team intervention developed in the Netherlands has been systematically researched to evaluate and theorize the (effects of the) professional development over an extended period of time in different countries (the Netherlands, Sweden, Belgium, England, and the United States). In this intervention, data teams consisting of six to eight teachers and school leaders collaboratively use data to solve a selected educational problem within the school. Research results (Ebbeler, Poortman, Schildkamp, & Pieters, 2017; Kippers, Poortman, Schildkamp, & Visscher, 2018) show that school leaders and teachers developed the necessary knowledge and skills to use data to improve education; an effect of the intervention on data literacy was found (effect sizes ranging from d = 0.60 to d = 0.71). Moreover, several data teams were able to solve their problem and improved student achievement (large effect sizes, ranging from d = 0.54 to d = 0.66).
Based on this sensemaking process, whether in a data team or not, different types of improvement actions can be developed and implemented, for example actions with regard to curriculum, instructional, and/or assessment changes (e.g., Gelderblom, Schildkamp, Pieters, & Ehren, 2016; Schildkamp et al., 2016). After these actions have been implemented, data need to be collected to determine whether the goals set at the beginning of the process have been reached. Although described here as a rather straightforward and linear process, in reality educators move back and forth between the different steps of the data use cycle, making it an iterative process. Moreover, data use is neither straightforward nor an exclusively rational process (Bertrand & Marsh, 2015; Kahneman & Frederick, 2005); it also involves professional judgement. For example, as stated by Schildkamp (2019, p. 8): 'the same data might have different meanings for different people; decisions can never be completely based on data, because people filter data through their own lenses and experiences, in which intuition also plays an important role (Greene & Gannon-Slater, 2017).' Moreover, confirmation bias plays a role, as people may try to fit data into a frame that confirms their pre-existing beliefs (Kahneman & Frederick, 2005; Kauffman, Reips, & Merki, 2016; Vanlommel & Schildkamp, 2018).
One more important aspect to consider in the process of data use is that this process has multiple stakeholders. The focus is often on teachers (and school leaders) using data, but in our view, students should not only be the recipients; they should be participants in the process of data use. Hamilton et al. (2009) noted that involving students as data-driven decision makers rose to the level of one of five recommendations in the Institute of Education Sciences' Practice Guide. Students can, together with teachers, examine their own test results (Kennedy & Datnow, 2011; Levin & Datnow, 2012). Students need to be actively involved in the data use process to enhance their commitment and motivation, which in turn can lead to enhanced learning (Fletcher & Shaw, 2012). However, a review study by Hoogland et al. (2016) into the use of data concludes that the role of the student in the data use process has not yet been studied much.

Recommendations.
Based on the literature, we formulated the following recommendations to stimulate progress in the field:
• Do not start with data, but with clear and measurable goals;
• Triangulate different data sources to capture the needs of diverse students;
• Collectively engage in a sensemaking process, for example in a data team;
• Connect professional judgement and data use to increase the quality of decision making;
• Involve students in the process of data use; and
• Conduct research into the role of students in the process of data use.

Goals of data use

Misconception 2: Data use is only for accountability purposes.
One of the main misconceptions is that data use is only for accountability purposes. Much of the criticism towards DBDM is related to accountability and compliance (Firestone & Gonzalez, 2007; Ingram, Louis, & Schroeder, 2004). Indeed, data use is inextricably linked to accountability. Moreover, data use is often connected to two distinct goals: school improvement and accountability. Tensions and conflicts have arisen between these different types of goals (Hargreaves & Braun, 2013).
For example, Penuel and Shepard (2016) comment that there is a narrow focus on raising standardized test scores as a primary goal for data-driven decision making interventions. One accountability-related misconception that we would like to address here is that accountability pressure is conflated with data use because of a sole focus on test scores rather than a broader view of diverse data sources that can inform about the whole child and provide an asset-based perspective (Atwood, Jimerson, & Holt, 2019; Garner, Kahn, & Horn, 2017; Mandinach & Gummer, 2016b). This issue is also related to the next misconception.
Moreover, because of the narrow accountability focus, some teachers use data superficially, supporting a deficit mindset. Data can confirm assumptions and challenge beliefs, but can also reinforce low expectations for low-achieving students (Bertrand & Marsh, 2015; Datnow, 2017). In contexts with too much accountability pressure, teachers often focus more on students' deficits than on their assets, and fail to make meaningful instructional change (Garner et al., 2017). Data in this context can even be used to marginalize or shame particular students (Garner et al., 2017; Neuman, 2016). Data use in these cases focuses only on achievement and not on learning. Accountability related to high-stakes test scores restricts teacher creativity, response, and dialogue, and provides a limited view of data that addresses only short-term goals (Au, 2007; Berliner, 2011; Datnow, Park, & Choi, 2018; Nichols & Berliner, 2007).
When there is a strong focus on accountability, data use can lead to narrowed curricula (Au, 2007; Berliner, 2011; Diamond & Cooper, 2007; Lipman, 2004; Nichols & Berliner, 2007) because of the narrow focus only on standardized assessment and on achievement in a narrow set of topics, such as literacy and numeracy (Berliner, 2011). The misconception important to address here is the idea that data use should focus only on standardized assessment and only on a narrow set of topics. Data can be used not only for subjects such as science, English, and mathematics, but also for topics such as the arts, physical education, and the wellbeing of students. The accountability pressure can also manifest itself in cultures where teachers feel the potential for retribution and punitive actions, shaming, and blaming, especially when their students do not meet expectations, and therefore have little trust in data use (Datnow et al., 2013; Ingram et al., 2004). As Bocala and Boudett (2015) note, it is important for teachers to feel trust and safety in their data use, while using evidence rather than anecdotes. Unfortunately, in high accountability contexts, many educators have the misconception that accountability goals outweigh doing what is best to help their students.
Furthermore, too much accountability pressure often leads to misuse of data, and even to abuse. Data use can lead to undue attention on students just below the proficiency threshold in order to increase a school's proficiency scores (Booher-Jennings, 2005). In this case teachers focus on the "bubble kids" (i.e., students just below the cut score used to report proficiency levels) on the assumption that those students will then reach mastery (Booher-Jennings, 2005; Moody & Dede, 2008). These teachers will focus most of their efforts on a specific type of student who can help improve the school's status on benchmarks and accountability indicators. Other possible negative effects of too much accountability pressure include gaming the system, cheating on tests to reach a certain benchmark or accountability indicator, teaching to the test, excluding certain (weaker) students from a test, and even marginalizing low-performing students and encouraging them to drop out (Booher-Jennings, 2005; Diamond & Cooper, 2007; Ehren & Swanborn, 2012; Hamilton, Stecher, & Yuan, 2008; Schildkamp et al., 2019). Accountability can demoralize teachers and pressure them to use inappropriate kinds of data and to use data inappropriately (Diamond & Spillane, 2004; Hubbard, Datnow, & Pruyn, 2014; Schildkamp & Tedlie, 2008).
However, this does not imply that data should never be used for accountability purposes. Accountability is needed as it makes a system more transparent, and it can be connected to data use for school improvement as data used in such a system can reveal aspects that need improvement (Tulowitzki, 2016). Data use for accountability and data use for school improvement are both needed. Earl and Katz (2006) stated in this light: "Accountability without improvement is empty rhetoric, and improvement without accountability is whimsical action without direction" (p. 12).
We argue here that it is crucial that data use starts with a certain school improvement goal, not with a focus solely on accountability and/or on the data available. Data use often focuses on student achievement as an important goal, but schools have other school improvement goals as well, such as the well-being of students, information literacy, and student self-regulation skills. Measuring progress towards these goals requires data other than traditional test scores (Schildkamp, 2019). Moreover, equity is becoming an increasingly important goal in education. This means that educators consult diverse data sources to examine the whole child, not just student performance indices. An equity lens seeks to adopt an asset-based perspective that capitalizes on student strengths, interests, and backgrounds. By challenging beliefs and assumptions and carefully framing conversations, teachers can examine discrepancies and hold high expectations for all students. With an explicit equity goal and the use of culturally responsive pedagogy (Ladson-Billings, 1995), data can help teachers redress inequities by using meaningful cultural resources and students' experiences (Athanases, Wahleithner, & Bennett, 2012; Datnow, 2017; Diamond & Cooper, 2007; Garner et al., 2017; Mandinach et al., 2019). The primary implication for data use is that educators need to access and use not only student performance indicators but also contextual and background information about students, from which they can make more informed decisions. Examining the context and the background of students provides rich data sources to help educators understand the culture, interests, and strengths that students bring to the classroom.
The emphasis on assets rather than deficits may be a difficult mindset shift for some educators, but the intent, as noted above, is to prevent educators from making predetermined and potentially inaccurate assumptions about a student based on group characteristics such as disability, ethnicity, religion, home circumstances, socio-economic status, or even being an athlete (e.g., the stereotype that "dumb jocks" cannot learn).
Schools have different types of data available, some of which have been collected for many years. The question is whether all these different data sources still serve their purposes. Society and schools are changing, so some data sources might no longer be as valuable or may have been collected for the wrong purpose. It is important to consider what the goal of the data collection is and why these things are being measured (Tulowitzki, 2016). It is important to prevent goal displacement (Lavertu, 2014), a situation in which what we can measure becomes our goal, instead of measuring what we value and believe our goals should be. Furthermore, educators may have developed new goals, and may need to think about new data to collect to monitor progress towards these new goals (Schildkamp, 2019).

Recommendations.
Based on the literature, we formulated the following recommendations to stimulate progress in the field:
• Balance the use of data for accountability and continuous improvement;
• Assume an asset-based model for data use rather than a punitive, deficit approach that is based solely on accountability and tends to further marginalize the most challenged students;
• Treat increased student achievement as an important goal for data use, but also focus on other important educational goals, such as well-being and equity; and
• Evaluate the data sources available in schools and school systems: Are all data sources still valuable? Is anything missing?

What constitutes data and the data use process

Misconception 3: Data are only test results.
Even assessment data are nuanced. The farther removed the data are from classroom practice, as with high-stakes testing, the less informative they are. We argue that instructional decisions need to be aligned more with local data than with state results that are too removed from the instructional process, with tests aligned to the curriculum. Data are also more than only test results. Data now must be diverse and both qualitative and quantitative, including socio-emotional indicators, attitudes, behavior, and more. Educators need formal data, that is, systematically collected data (Lai & Schildkamp, 2016), but they also need informal data.
Educators collect information on the needs of their students in everyday practice, for example by observing their students and by engaging in conversations with them. These data are often collected quickly, "on-the-fly" (Heritage, 2018; Klenowski, 2009), and carry the risk of reverting to experience and intuition rather than data. Such "on-the-fly" data, part of a formative assessment process, must be collected carefully and connected to student learning goals, with the objective of providing constructive feedback to students (Heritage, 2018). It is important that educators triangulate across a variety of data sources. Formal data come with disadvantages: student learning cannot be captured in a single test score, and a test score does not readily translate into the cause of performance or what to do instructionally. With the use of informal data there is a (bigger) risk of confirmation bias (Bolhuis, Schildkamp, & Voogt, 2016; Farrell & Marsh, 2016; Katz & Dack, 2013; Vanlommel & Schildkamp, 2018; Vanlommel et al., 2017). Bertrand and Marsh (2015) note the possibility that teachers will confirm their beliefs based on student characteristics related to results. Yet the positive use of data can serve to challenge beliefs and minimize confirmation bias (Love, Stiles, Mundry, & DiRanna, 2008). The intent is to promote the equitable use of data.
The ethics of data use is implied and assumed, but it is a skill set not readily covered in pre-service or in-service settings (Mandinach & Gummer, forthcoming). Ethical and responsible data use is foundational to data literacy (Data Quality Campaign, 2014; Mandinach & Gummer, 2016b). Data ethics extends past prevailing laws such as the Family Educational Rights and Privacy Act (FERPA) in the United States and the General Data Protection Regulation (GDPR) in Europe, moving beyond the protection of privacy and confidentiality of student data. Data ethics incorporates such skills and knowledge as understanding data quality, using multiple data sources, using valid data sources aligned to the targeted decision, and drawing valid interpretations from the given data. As noted above, the quest for equity also is a component, in that it focuses on addressing assumptions and mitigating confirmation bias, all parts of ethical data use.
A foundational concept of data use is that students cannot and should not be summarized by one data point. Put simply, students are too complex. Our perspective is that student performance data form the central source of data for teachers, but teachers now also need surrounding and contextual information from which to understand each student and to inform how they can design instructional steps to help that student. With the proliferation of homelessness, foster care, truancy, behavioral issues, medical challenges, bullying, and other contextual constraints, teachers must have access to those data as well to understand why students might be performing poorly and to determine appropriate courses of action. Such data support cultural responsiveness and social justice (Skrla, Scheurich, Garcia, & Nolly, 2004). Additionally, teachers need data on their own performance in the classroom to be able to address gaps in their own instruction and performance.
Although there are exceptions, research still depicts a somewhat constrained view of data. The relevant studies all focus on student performance indices. Teachers use different kinds of data for different kinds of decisions (Little, 2012; Spillane, 2012), yet they may not be using the most relevant data. For example, Farrell and Marsh (2016) found that state test data were mostly used for grouping. Benchmark tests were used for discerning patterns but were deemed untrustworthy.
Common assessments were more trusted and considered more reliable but the most valued data were the classroom-specific data that were most closely aligned to instruction and student work. This study provided valuable insights but is limited due to only looking at student performance data, not the full set of information.
When it comes to using data for instructional decision making, several studies point to problems with the actual use of different data sources to improve instruction (e.g., Hoover & Abrams, 2013; Olah, Lawrence, & Riggan, 2010). Problems identified across studies include: a lack of (access to) different types of data (Schildkamp & Kuiper, 2010); failure to perform the deeper analysis of assessment data needed to yield valuable instructional insights (Hoover & Abrams, 2013); a focus on using assessment data for test preparation and instruction aimed at raising test scores, without influencing teaching practice (Garner et al., 2017); superficial use of data for accountability and triage purposes (Booher-Jennings, 2005; Garner et al., 2017; Lai & Schildkamp, 2016); interim data that help teachers identify which students need help but do not provide sufficient information about what to teach and how to teach it (Goertz, Olah, & Riggan, 2009); and interim assessment data that alone do not help teachers develop a deep understanding of students' learning of specific content or of students' misconceptions (Garner et al., 2017; Goertz et al., 2009).
Several of these problems relate to an overreliance on assessment data, but these problems are also related to a lack of ability to interpret the data, and interpretation is what allows teachers to determine on which data to act (Farrell & Marsh, 2016). Data must be transformed into actionable knowledge as part of the decision-making process (Mandinach et al., 2008). Thus, one challenge is how to transform the data into actionable steps that create lasting instructional impact, rather than cursory reteaching and repetition.
The findings from most of these studies indicate that teachers often use incomplete or even the wrong data for the kinds of decisions with which they are confronted. They do not use a full range of data to gain a comprehensive view of their students (Mandinach & Gummer, 2016b, 2016c). As Baker, Linn, Herman, and Koretz (2002) noted, student achievement data may be primary, but it is essential to also have additional data about student characteristics to contextualize student performance. Having a comprehensive view of students has become increasingly important in light of the need to attend to diversity and equity (Park, St. John, Datnow, & Choi, 2017).
We provide an illustrative example that does not focus on student performance. One recent study (Atwood et al., 2019) is a model for why educators sometimes need to look beyond traditional or typical data sources to make decisions that impact students. In many instances decisions focus on instruction and how to improve student performance, but at other times DBDM extends beyond student performance and requires access to and examination of a broader spectrum of data. Atwood and colleagues examined how a school dealt with food insecurity based on what was apparently a student's theft of food. At a glance, educators might have thought the student in question had behavioral issues. But by looking more deeply and triangulating data, the educators realized the student was hungry, her family faced food insecurity, and they needed assistance. The school was able to mobilize a strategy to help this student and others like her by examining diverse data sources rather than making the most apparent interpretation from the most obvious information.
In sum, when data use is more broadly focused on multiple measures and a formative perspective, with the goal of addressing the whole child to improve instruction and learning, inform educational decisions, and reflect on practice, data use can be a powerful tool for continuous improvement. The whole child perspective may be particularly important in developing countries, as reflected by a special issue of the Journal of Professional Capital and Community that focuses on building capacity through the use of a systems approach and evidence to inform policy and practice.

E.B. Mandinach and K. Schildkamp
The major misconception is the conflation of data literacy and assessment literacy: stakeholders do not understand the differences, but the differences are very real and important for research, theory, and practice (Beck, Morgan, & Whitesides, 2019; Mandinach & Gummer, 2011; Mandinach, Kahl, Parton, & Carson, 2014).
Research shows that educators struggle with the use of data. With the proliferation of data, educators are often overwhelmed and need strategies for culling through the mounds of data (Hamilton et al., 2009). The often-used phrase "drowning in data" is a reality. Many educators do not feel comfortable using data (Piro, Dunlap, & Shutt, 2014). Educators struggle, for example, with setting clear and measurable goals, collecting data, and making sense of data (e.g., Gelderblom et al., 2016; Schildkamp et al., 2016). They struggle to identify problems of practice and pose researchable questions (Means, Chen, DeBarger, & Padilla, 2011). Moreover, they may not understand how to use data effectively and responsibly, without violating student privacy and confidentiality (Mandinach, Parton, Gummer, & Anderson, 2015). Educators sometimes fail to conduct the right types of analysis, and even more often they have difficulties connecting the data to their own instruction in the classroom and translating the data into an action plan (e.g., Brown, Schildkamp, & Hubers, 2017; Schildkamp & Kuiper, 2010; Schildkamp & Poortman, 2015; Schildkamp et al., 2016). Further, this lack of capacity can lead to poor decisions and the misuse of data (Daly, 2012; Kahneman & Klein, 2009; Mandinach & Gummer, 2016b).
Having sophisticated technologies and appropriate data are foundational elements in data use, but educators must know how to use data effectively and responsibly; that is, they must have some level of data literacy (Data Quality Campaign, 2014; Mandinach & Gummer, 2016b, 2016c). A long-term concern remains that there is a lack of internal capacity and a lack of adequate preparation at the pre-service and in-service levels, beginning with assessment literacy (Mandinach & Gummer, 2013; Mandinach, Friedman, & Gummer, 2015; Reeves & Honig, 2015; Reeves, 2017; Schafer & Lissitz, 1987; Wise, Lukin, & Roos, 1991) and morphing into the broader construct, data literacy. Data literacy is seen as a broader construct in which educators use diverse sources of data, not just assessments, to make informed decisions. Mandinach and Gummer (2016b) define the construct:

Data literacy for teaching is the ability to transform information into actionable instructional knowledge and practices by collecting, analyzing, and interpreting all types of data (assessment, school climate, behavioral, snapshot, longitudinal, moment-to-moment, etc.) to help determine instructional steps. It combines an understanding of data with standards, disciplinary knowledge and practices, curricular knowledge, pedagogical content knowledge, and an understanding of how children learn. (p. 14)

Mandinach and Gummer (2013) have long argued that there is a lack of human infrastructure for data use that must be addressed as early as possible in an educator's career, starting at the pre-service level and carrying forward. As Means, Padilla, and Gallagher (2010) note, professional development around data must be ongoing and sustained. Unfortunately, that is not the case in the United States, where professional development around data use tends to have a low priority and therefore is not well addressed.
Recent attention to data literacy in teacher preparation programs indicates its growing importance (Mandinach & Gummer, 2016c; Mandinach & Nunnaley, 2017; Reeves, 2017). In a survey with a large and representative sample of educator preparation programs, the programs reported that they are teaching data literacy (Mandinach, Friedman et al., 2015). That said, an in-depth analysis of course syllabi from the study indicated that programs focus on assessment literacy rather than data literacy, although it is possible that some non-responding programs do teach data literacy. In a second survey of administrators and faculty of teacher preparation programs, results indicated that the institutions want data literacy integrated into pre-service curricula (Mandinach & Nunnaley, 2017). At the in-service level, educators often do not have adequate support in their schools with resources such as data coaches or data teams (Lai & McNaughton, 2013; Schildkamp & Kuiper, 2010; Schildkamp & Poortman, 2015). Professional development, often embedded within a content domain, can improve data skills, but few comprehensive models of professional development have been scientifically developed and examined (e.g., Boudett et al., 2013; Lai et al., 2014; Love et al., 2008; Schildkamp et al., 2018). For example, the Using Data model (Love et al., 2008) creates data coaches and data teams and strives for educators to attain 13 high-capacity data strategies, one of which focuses on designing culturally proficient instructional strategies to address equity and diversity issues. Such a focus has the potential to create more equitable education for all students through the use of data. But the issue is combining that knowledge with pedagogical content knowledge to determine the needed instructional steps. Data literacy requires more than just identifying which students need help; it also requires identifying students' learning needs.
Data literacy is also more than being trained to use a particular data system, data-related application, or assessment system. Further, any professional development or training should focus on the actual use of data, not just the technical skills. It should help teachers change their practice by learning how to transform data into actionable instructional steps while integrating their knowledge of content and pedagogy. For example, Kippers et al. (2018) studied the process of taking educational action based on data. The study found that throughout this complex process, teachers need to combine their skills in using information with their expertise about teaching and (their) students. Without data, teachers do not know the gap between students' current learning and their learning goals, and without expertise, teachers do not know how to close this gap (Mandinach & Gummer, 2016a).

Recommendations. Recommendations based on our literature review include:
• Make use of the data literacy definition and framework that lays out the skills, knowledge, and dispositions educators need to use data effectively;
• Delineate the continuum of data literacy from novice to expert, particularly identifying what the midpoints look like;
• Conduct more research on how data skills interact with content knowledge and pedagogical content knowledge (Mandinach & Gummer, 2016b);
• Re-design the programs for the preparation and training of current and future educators by introducing and integrating data literacy into the curricula; and
• Recognize the importance of merging culturally responsive pedagogy with data literacy (Mandinach et al., 2019) to take a whole child perspective and an equity lens while assuming an asset-based model.

Technologies to support DBDM have proliferated, ranging from sophisticated data warehouses to apps on mobile devices (Means et al., 2010; Wayman, Cho, & Richards, 2010). However, educators may not have technologies aligned with their educational objectives, or may have technologies that generate information leading to overly simplified or ill-conceived interpretations that are misleading (Kahneman & Klein, 2009; Wayman et al., 2010), in part because the graphical representations these tools offer fail to present the data in a meaningful way. Some people (e.g., Penuel & Shepard, 2016) even accuse the data-based decision making movement of "selling out to the vendors." One of Penuel and Shepard's biggest complaints is that the developers of assessment systems produce reports that summarize results into red, yellow, and green categories that indicate to the user which students are failing, borderline, or passing, the so-called stop light. According to Penuel and Shepard's comments at a large AERA session, this form of presentation oversimplifies the results and fails to provide a roadmap for instructional steps.
However, these categories may provide a starting point for further analysis within each sub-group. New technological opportunities arise almost every day, and many technologies do not use the stop light approach. Tools are improving and becoming more sophisticated in collecting and storing (real-time) data, and in visualizing and analyzing these data (e.g., data warehouses, dashboards, data lockers, data analytics, data mining tools, machine learning), moving beyond the stop light categories. Take, for example, data use in personalized learning environments. These environments make it possible for teachers and students to collect and have access to diverse data sources supported by interfaces with technologies (Mandinach & Miskell, 2018; Pane, Steiner, Baird, & Hamilton, 2015). Further investments are needed in the design, development, implementation, and evaluation of systems and tools that can support teaching and learning in schools.
Two issues here are the lack of interoperability among the silos of data and teachers' knowledge of how to triangulate across data sources. The technologies to support data use need to attend to "high tech" (e.g., the development of high-quality tools) and "human touch" (e.g., making sure that educators can actually use these tools to benefit the learner) (Schildkamp, 2019). To realize the potential of data use, expertise is needed in the field of technology (e.g., the vendors) as well as in the fields of learning and psychology, as data use is still mostly a human endeavor (Schildkamp, 2019). Recommendations based on our literature review include:
• Use technology to support the data use process, but also engage in further in-depth analysis;
• Invest in (connecting different types of) systems and tools that match the needs of the users; and
• Engage in educational studies to design, develop, implement, and evaluate systems and tools that can support teaching and learning in schools.

Conclusion and discussion: steps to move the field forward
Let us return to the original criticisms that motivated this review to identify key themes and implications. Research and practice in the area of DBDM have made great strides over the past two decades, as policymakers have urged the field of education to become more evidence-based. In no way is the use of data a panacea or the sole source of information to inform practice. Educator experience and professional judgement count, but must be used in conjunction with data, especially now that understanding students has become more complex. This means that the data use field needs to move from neo-behaviorist and cognitivist perspectives on data use to a more socio-cultural paradigm. The focus should be on continuously adapting instruction in the classroom and beyond, to facilitate and optimize students' learning processes, taking into account learners' needs and individual characteristics.
The increasing complexity of students, their backgrounds, and their circumstances should be an impetus for a broad definition of data use that includes all types of qualitative and quantitative data, formal and informal. It is essential to consider the whole child with diverse data sources that go beyond traditional, quantitative student performance measures. This need also requires educators' skill sets to move from assessment literacy to the broader conceptualization of data literacy (Data Quality Campaign, 2014; Mandinach & Gummer, 2016b; Mandinach et al., 2014). As Bocala and Boudett (2015) note, "The goal is not just getting teachers to be comfortable with data but allowing the profession to evolve to a place where understanding of data is thoroughly integrated with the work of learning and teaching" (p. 8).
Of course, student learning and achievement are important, but the extension of data to diverse sources may influence students and the educational process. Adopting an equity lens may well be the most important contribution that the DBDM field can make in education; that is, the shift to understanding the whole child, with context and other variables helping to enhance the interpretation of student performance through cultural responsiveness. There are implications for practice. Educators will need to look beyond performance data to understand the student. It will require an asset-based model that focuses on student strengths, interests, and contexts. We recognize, however, that the flip side of this equity lens is the potential for confirmation bias, as discussed above. That said, we firmly believe that one of the strengths of DBDM, if done effectively, appropriately, and responsibly, is that data use enables educators to make more culturally sensitive and equitable decisions based on their knowledge of their students and the contextual factors that may impact them on a daily basis. This focus has implications for how teacher candidates and current educators acquire competence with data through educator preparation programs and professional development.
We have stressed the need to focus on data use for continuous improvement rather than just accountability and compliance, a major philosophical shift. No doubt data will always be used to some extent to meet accountability requirements. However, there should also be a foundation for data use to inform the improvement process, whether at the student, classroom, school, district, or federal level. The more closely tied data are to the target of improvement, the more effectively progress can be monitored and action steps taken. This involves addressing proximal goals rather than focusing on distal accountability objectives. Critics have argued that such continuous improvement is more a business model derived from organizational learning (Senge, 1990) than an educational process. We disagree. The use of data to inform educational improvement can provide a roadmap and actionable steps to inform practice. Christman et al. (2009) apply an organizational learning framework and collaborative inquiry to data use. This framework includes an iterative process in which a problem is identified and action steps are outlined to address it, with the objective of continuous improvement. Data can be a source of information for educators to help students learn, but also to help educators improve their own classroom processes, instructional actions, and behavior. Firestone and Gonzalez (2007) note that data for continuous improvement can address organizational learning and instructional improvement using a long-term approach. In this way, as also argued by Van der Kleij et al. (2015), data use can be seen as an approach to formative assessment, where the focus is on using data to support student learning. But educators need to know how to make the data actionable; that is, they need to understand how to translate the data into pedagogy or other actionable steps to address the particular issues.
Effective data use also will shift the classroom to a more student-centered environment, where students can become a vital part of the educational process (Hamilton et al., 2009). Student involvement is a fundamental principle underlying formative assessment (Heritage, 2010). Key steps for involving students in the data process have been identified in the literature (Hamilton et al., 2009). First, by using data, students can better understand performance criteria and expectations. Second, timely and constructive feedback based on data is an essential part of the instructional process, as is the provision of tools to help students learn from the feedback. Finally, reviewing data with students provides a better understanding of performance and may motivate learning. However, we do need to acknowledge that the role of the student in the data use process has received too little attention so far; only a few studies have addressed it (Hoogland et al., 2016). More research is urgently needed on how to include students in the process of data use, so that it leads to ownership, student learning, and ultimately increased student achievement.
The effective use of data must be grounded in teacher beliefs (Datnow & Hubbard, 2015; Prenger & Schildkamp, 2018) about the importance of data use and data literacy (Data Quality Campaign, 2014; Mandinach & Gummer, 2016b, 2016c). The acquisition of this skill set and these dispositions must be a lifelong learning process for educators. As noted above, introducing data use to educators must begin during their pre-service preparation and be reinforced throughout their careers (Mandinach & Gummer, 2013; Mandinach & Nunnaley, 2017; Reeves, 2017). It must become an ingrained part of practice, for example through working in data teams and with knowledgeable data coaches (Love et al., 2008). We believe that working in teams (e.g., grade-level teams, subject matter teams) led by data coaches is the way forward, as data use is a complex sensemaking process that does not take place in isolation. It requires collective sensemaking and dialogue (Vanlommel & Schildkamp, 2018), focused on the questions: What can we do as educators to help our students learn? What are the actionable steps we can take to positively impact the instructional process or affect better educational decisions?
We have reviewed some of the strategies around effective data use that can provide the foundation and impetus for policy, practice, and research. We urge the field to work toward a better understanding of the actual data use process. We recognize that data use, if conducted properly and in good faith with an equity lens, can have a positive impact in addressing the needs of all students, regardless of circumstances. Taking the equity perspective can impact how educators are prepared to use data across the continuum of their careers. It can impact the focus of courses in educator preparation programs as well as professional development and in-service trainings. Finally, the practice field must take seriously the need to develop data literacy in all educators, current and future. This requires the mobilization of changes in preparation programs and the development of appropriate curriculum materials (Mandinach & Nunnaley, 2017). We hope this commentary will serve as a stimulus to changes in policy and practice as well as a roadmap for a research agenda.