The gap between predicted and measured energy performance of buildings: A framework for investigation

https://doi.org/10.1016/j.autcon.2014.02.009

Highlights

  • Critical review of the literature on the building energy performance gap

  • Differentiation between three types of performance gap

  • Probabilistic probe into the gap between simulated and monitored data

  • Performance gap changes over time and is dependent on context of observation

Abstract

There is often a significant difference between the predicted (computed) energy performance of buildings and the actual energy use measured once buildings are operational. This article reviews the literature on this ‘performance gap’. It discerns three main types of gap: (1) between first-principle predictions and measurements, (2) between machine learning and measurements, and (3) between predictions and display certificates in legislation. It presents a pilot study that attempts an initial probabilistic probe into the performance gap. Findings from this pilot study are used to identify a number of key issues that need to be addressed in future investigations of the performance gap in general, especially the fact that the performance gap is a function of time and external conditions. The paper concludes that the performance gap can only be bridged by a broad, coordinated approach that combines model validation and verification, improved data collection for predictions, better forecasting, and changes in industry practice.

Introduction

Within the building industry, there is increasing concern about the mismatch between the predicted energy performance of buildings and their actual measured performance, typically referred to as ‘the performance gap’ [1], [2], [3], [4]. Rapid deployment of automated meter reading (AMR) technology, now typically harvesting data at hourly or even half-hourly intervals, is making the performance gap increasingly visible. The magnitude of this gap is significant, with reports suggesting that measured energy use can be as much as 2.5 times the predicted energy use [4]. Growing pressure from environmental challenges and rising energy prices makes it important to address this performance gap, with clients and the general public expecting new high performance buildings to meet increasingly stringent energy efficiency targets. While it seems reasonable to allow for some variation in both predictions and measurements, due to the uncertainties inherent in predictions and the data scatter inherent in measurements, the evidence suggests that the gap is presently too wide to be acceptable.

Bridging the gap between predicted and measured performance is crucial if the design and engineering stage is to provide serious input to the delivery of buildings that meet their (quantified) ambitions, such as High Performance Buildings, Zero Carbon and Net Zero Energy Buildings. Bridging the gap is also crucial if the industry wants to deliver buildings that are robust to change, that maintain good performance throughout their lifetime, and that are engineered to adapt to changing use conditions in terms of ‘occupant proofing’ or ‘climate change proofing’. Furthermore, it is a key prerequisite for novel modes of building delivery and facility management, enabling concepts such as performance-based building or performance contracting, where occupants purchase a working environment with specified comfort boundaries rather than hardware (building and systems) that might or might not deliver such an environment [5], [6]. In a wider context, the performance gap erodes the credibility of the design and engineering sectors of the building industry, and leads to general public scepticism of new High Performance Building concepts.

Energy efficiency is only one of various performance aspects of buildings; it is highly likely that similar gaps exist between predicted and measured indoor air quality, thermal comfort, acoustic performance, daylighting levels and others. However, the building industry and research community presently focus on the energy performance gap; this may be because energy metering is more prevalent and easier to implement than measurement of the other aspects. This paper aligns itself with this general focus on energy.

Energy performance of buildings can be studied at various levels of resolution. The primary view used in most studies is the annual energy use of the whole building for heating and cooling purposes. However, care is needed regarding the inclusion or exclusion of additional energy use for appliances, lighting, hot water and others. Energy performance can also be studied at higher temporal resolution, using monthly, weekly, daily or even hourly data. A further differentiation relates to the object of study in energy performance analysis: at the design stage (prediction) this is linked to design intent, whereas post-construction measurements apply only to the building as actually constructed (instantiation). The energy performance gap typically compares the predicted performance of the design intent with the observed performance of the realized building over a full year.
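
To make the role of temporal resolution concrete, the following minimal Python sketch (not from the paper) aggregates hypothetical half-hourly AMR readings to monthly and annual totals and compares them with equally hypothetical predicted values. The file name, column names and numbers are placeholders; the point is only that monthly over- and under-predictions can partly cancel out in an annual figure.

```python
# Illustrative sketch (not from the paper): comparing predicted and metered
# energy use at different temporal resolutions. File name, column names and
# the predicted profile are hypothetical placeholders.
import pandas as pd

# Half-hourly automated meter readings (kWh per interval), indexed by timestamp.
metered = pd.read_csv("amr_readings.csv", parse_dates=["timestamp"],
                      index_col="timestamp")["kwh"]

# Hypothetical predicted monthly totals (kWh), e.g. exported from a simulation tool.
predicted_monthly = pd.Series(
    [42e3, 38e3, 35e3, 28e3, 22e3, 18e3, 16e3, 17e3, 21e3, 27e3, 34e3, 40e3],
    index=pd.period_range("2013-01", periods=12, freq="M"),
)

# Aggregate metered data to the same resolution before comparing.
metered_monthly = metered.resample("M").sum().to_period("M")

# Relative gap per month: positive values mean the building uses more than predicted.
gap_monthly = (metered_monthly - predicted_monthly) / predicted_monthly * 100

# The same comparison at annual resolution can tell a different story,
# because monthly over- and under-predictions partly cancel out.
gap_annual = (metered_monthly.sum() - predicted_monthly.sum()) / predicted_monthly.sum() * 100
print(gap_monthly.round(1))
print(f"annual gap: {gap_annual:.1f}%")
```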

Some discrepancy between prediction and measurement is inevitable due to numerical errors in simulation and experimental variation in any observation [7], but achieving reasonable agreement has been a key aim of tool developers ever since the inception of energy performance prediction methods in the 1960s [8]. A good historical overview of efforts in this direction is provided by Strachan et al. [9]; although focussed on the development of the ESP-r simulation program, it is also applicable to many other similar efforts. Their paper describes the validation of ESP-r in the context of a series of IEA Annexes (BESTEST) and related validation projects starting from the late 1970s. The key approaches used in much of this work are analytical validation, inter-program comparison and empirical validation, with the latter mostly based on results obtained from dedicated test cells; see for instance [10]. Yet these validation approaches are not without criticism. For instance, Williamson [11] has pointed out that the analytical approach requires strong constraints and thus often does not reflect the real world, while inter-program comparison does not guarantee that any of the tools studied reflects what happens in the real world. Empirical validation, in turn, is typically only possible for simple situations, not for full building complexity. In general, the field of verification, validation and testing (sometimes abbreviated as VVT) is still under development [12], [13], [14]. Interest is now also emerging on the measurement side, most notably through the International Performance Measurement and Verification Protocol (IPMVP) [15], [16]. Rapid developments in monitoring and data mining techniques, including cheap sensors, radio-frequency identification (RFID) tags, and ubiquitous positioning, provide an increasingly high-resolution map of reality and hence set higher benchmarks for performance predictions [17].
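
As an illustration of how agreement between simulated and metered series is often quantified in measurement-and-verification practice (for example in the context of IPMVP-style reporting and model calibration), the sketch below computes the normalised mean bias error (NMBE) and the coefficient of variation of the RMSE, two metrics widely used for this purpose. The data are hypothetical, and acceptance thresholds and degrees-of-freedom conventions vary between protocols.

```python
# Illustrative sketch (not from the paper): two error metrics commonly used in
# measurement-and-verification practice to judge agreement between simulated
# and metered series, e.g. when calibrating models. The arrays are placeholders.
import numpy as np

def nmbe(measured: np.ndarray, simulated: np.ndarray) -> float:
    """Normalised mean bias error in percent (some protocols divide by n - p)."""
    return float(np.sum(measured - simulated) / (measured.size * measured.mean()) * 100)

def cv_rmse(measured: np.ndarray, simulated: np.ndarray) -> float:
    """Coefficient of variation of the RMSE in percent."""
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return float(rmse / measured.mean() * 100)

# Hypothetical monthly totals in kWh.
measured = np.array([44.1, 40.2, 33.8, 29.5, 23.0, 19.4, 18.1, 18.9, 22.7, 29.9, 36.5, 43.8]) * 1e3
simulated = np.array([42.0, 38.0, 35.0, 28.0, 22.0, 18.0, 16.0, 17.0, 21.0, 27.0, 34.0, 40.0]) * 1e3

print(f"NMBE: {nmbe(measured, simulated):.1f}%  CV(RMSE): {cv_rmse(measured, simulated):.1f}%")
```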

Indications of the ‘performance gap’ as addressed in this work started to appear from the mid-1990s [18], with continuous coverage to the present day [1], [19], [20], [21], [22]. It must be noted that this performance gap is positioned in a different context than the above validation efforts: it addresses the differences between prediction and measurement of the energy performance of a complete building, including the full complexities of sub-systems, control settings, occupant behaviour, climate conditions, and others. It is also important to emphasize that a true prediction, made while the project is still in the design stage, is based only on a description of a building, not an actual object, except where the design involves the renovation of an existing building; see for instance Sanguinetti [23].

This article develops a framework for further investigation of the magnitude of the performance gap, and for R&D efforts towards narrowing or bridging the gap. It first provides a critical review of the current literature on the subject, both in terms of root causes and solutions, and then develops a fundamental position that distinguishes three different views of the energy performance gap. From this perspective the discussion then focuses on a pilot study that attempts an initial probabilistic probe into the performance gap. Finally, findings from the pilot study are used to identify a number of key issues that need to be addressed within future investigations of the performance gap in general.
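
To indicate what a probabilistic probe into the gap can look like, the following sketch is a deliberately simplified stand-in, not the pilot study's actual model: it propagates assumed input uncertainties through a toy degree-day heating calculation and compares the resulting distribution of predicted annual energy use with a metered value that carries its own measurement uncertainty, so that the gap itself becomes a distribution rather than a single number. All distributions and numbers are illustrative assumptions.

```python
# Minimal illustrative sketch (not the pilot study's actual model): a probabilistic
# view of the performance gap. Uncertain inputs of a toy degree-day heating model
# are sampled, yielding a distribution of predicted annual energy use, which is then
# compared with a metered value that itself carries measurement uncertainty.
# All distributions and numbers below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Uncertain model inputs (hypothetical ranges).
u_value    = rng.normal(0.35, 0.05, n)       # average envelope U-value, W/m2K
area       = 8_000.0                         # heat-loss area, m2
hdd        = rng.normal(2_000, 150, n)       # heating degree-days, K*day
efficiency = rng.uniform(0.75, 0.90, n)      # seasonal heating system efficiency

# Toy steady-state prediction of annual heating energy use (kWh).
predicted_kwh = u_value * area * hdd * 24 / 1000 / efficiency

# Metered annual use with an assumed +/- 5% metering/allocation uncertainty (kWh).
measured_kwh = rng.normal(210_000, 0.05 * 210_000, n)

# The performance gap as a distribution, not a single number.
gap_pct = (measured_kwh - predicted_kwh) / predicted_kwh * 100
print(f"median gap: {np.median(gap_pct):.0f}%  "
      f"90% interval: [{np.percentile(gap_pct, 5):.0f}%, {np.percentile(gap_pct, 95):.0f}%]")
print(f"P(measured > predicted) = {np.mean(measured_kwh > predicted_kwh):.2f}")
```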

Section snippets

Root causes

Literature on the energy performance gap suggests various causes for the mismatch between prediction and measurements. These causes can be grouped into three main categories: causes that pertain to the design stage, causes rooted in the construction stage (including hand-over), and causes that relate to the operational stage. Note that the specific issues causing a performance gap will vary from one building to another; in many cases there will be a combination of several issues.

Within the

Ongoing efforts to bridge the gap

Suggestions presented in the literature on how best to bridge this gap are generally aligned with the root causes, covering design and prediction, construction, and measurement once buildings are operational. As the performance gap for a specific building can stem from any of the causes listed, there is a need to address the whole field [25]. As stated in the Zero Carbon Hub report: “In developing a solution to the problem of underperformance it is clear that the focus should be on improving the

Methodology

From the review of literature on the performance gap, it becomes obvious that there are different viewpoints regarding the gap. Any research directed at a deeper investigation of the gap, or at efforts to bridge it, requires a solid position with regard to these different viewpoints. Thus, Section 5 of this paper develops a classification that discerns three main types of performance gap.

The work reported here is based on the premise that uncertainties will be present in both energy

Performance gap typology

While at first sight the concept of a performance gap between predicted and measured energy performance seems simple enough, in reality it is more complex: in the world of building design and engineering (as well as research), there are various approaches to both prediction and measurement. Additionally, there is another layer of building regulation which involves both prediction and measurement; this substantially complicates the discussion about the performance gap. The resulting complex

Pilot study

A pilot study, focussing on the Roland Levinsky Building at Plymouth University, has been undertaken to explore the feasibility of investigating and quantifying the performance gap while taking into account the uncertainties in both prediction and measurement. In the performance gap classification, this is a Type 1 study, comparing ‘first principle’ models with actual measurements.

The Roland Levinsky Building is the subject of a series of computational studies [54], [69] and monitoring, and

A framework for further investigation of the performance gap

Following from the literature review and the pilot study, and focussing mainly on the Type 1 performance gap, the following issues need to be addressed in further research into the performance gap:

  • 1. Efforts trying to quantify the performance gap need to accommodate the fact that the magnitude of the performance gap is dependent on time, contextual factors like climate and building use, as well as the temporal resolution at which the performance gap is studied, as demonstrated in the pilot study

Discussion and conclusion

Bridging the energy performance gap is crucial if the building design/engineering stage is to provide serious input into the delivery of buildings that meet their (quantified) ambitions. This paper has presented an in-depth review of the literature on the subject. It has then developed a typology that discerns three types of gap: (1) between first-principle predictions and measurements, (2) between machine learning and measurements, and (3) between predictions and display certificates in legislation.

Acknowledgments

The work described in this paper has been funded through the Royal Academy of Engineering and Leverhulme Senior Research Fellowship, Reference Number 10226/48. The author wishes to thank Godfried Augenbroe and Yuming Sun for deep discussions on the subject of this paper, which have significantly shaped the work reported, and Darren Pearson of C3Resources, who was instrumental in obtaining AMR data for the pilot study. ModelCenter® has been used in the described work under academic license.

References (72)

  • E. Ryan et al., Validation of building energy modeling tools under idealized and realistic conditions, Energy Build. (2012)
  • Z. Yu et al., A methodology for identifying and improving occupant behaviour in residential buildings, Energy (2011)
  • M. Chung et al., Development of a software package for community energy system assessment—Part I: building a load estimator, Energy (2010)
  • A. Molin et al., Investigation of energy performance of newly built low-energy buildings in Sweden, Energy Build. (2011)
  • W. Lee et al., Benchmarking the performance of building energy management using data envelopment analysis, Appl. Therm. Eng. (2009)
  • D. O'Sullivan et al., Improving building operation by tracking performance metrics throughout the building lifecycle (BLC), Energy Build. (2004)
  • H. Koziolek, Performance evaluation of component-based software systems: a survey, Perform. Eval. (2010)
  • P. Raftery et al., Calibrating whole building energy models: an evidence-based methodology, Energy Build. (2011)
  • W. Shen et al., Systems integration and collaboration in architecture, engineering, construction, and facilities management: a review, Adv. Eng. Inform. (2010)
  • E. Curry et al., Linking building data in the cloud: integrating cross-domain building data using linked data, Adv. Eng. Inform. (2013)
  • W. Tian et al., Uncertainty and sensitivity analysis of the performance of an air-conditioned campus building in the UK under probabilistic climate projections, Autom. Constr. (2011)
  • A. Tsanas et al., Accurate quantitative estimation of energy performance of residential buildings using statistical machine learning tools, Energy Build. (2012)
  • A. Kusiak et al., Modeling and optimization of HVAC systems using a dynamic neural network, Energy (2012)
  • S. Cho et al., Effect of length of measurement period on accuracy of predicted annual heating energy consumption of buildings, Energy Convers. Manag. (2004)
  • J. Jin et al., Thermal characteristic prediction models for a free-form building in various climate zones, Energy (2013)
  • D. Djurdjanovic et al., Watchdog Agent—an infotronics-based prognostics approach for product performance degradation assessment and prediction, Adv. Eng. Inform. (2003)
  • W. Chung, Review of building energy-use performance benchmarking methodologies, Appl. Energy (2011)
  • S. Kelly et al., Building performance evaluation and certification in the UK: is SAP fit for purpose?, Renew. Sustain. Energy Rev. (2012)
  • C. Turner et al., Energy performance of LEED for new construction buildings (Final Report) (2008)
  • Carbon Trust, Closing the Gap: Lessons Learned on Realising the Potential of Low Carbon Building Design (2011)
  • Zero Carbon Hub, A Review of the Modelling Tools and Assumptions: Topic 4, Closing the Gap between Designed and Built Performance (2010)
  • C. Menezes et al., Predicted vs. actual energy performance of non-domestic buildings: using post-occupancy evaluation data to reduce the performance gap, Appl. Energy (2012)
  • N. Almeida et al., A framework for combining risk management and performance-based building approaches, Build. Res. Inf. (2010)
  • X. Zhang et al., Optimal performance-based building facility management, Comput. Aided Civ. Infrastruct. Eng. (2010)
  • W. Oberkampf et al., Verification and Validation in Scientific Computing (2010)
  • J. Clarke, Energy Simulation in Building Design (2001)