Accuracy in NatHERS assessments is something we all tend to take for granted.
In this post, guest contributor Rebecca Robins from Efficiency Assessments asks: ‘Do we need to start looking at the accuracy of the comparison rather than the accuracy of the assessment?’

Have you ever thought how strange it is that in NatHERS we work with quite complex software that is predominantly used for the simple task of demonstrating a basic compliant or not-compliant result?
NatHERS software can model very complex scenarios and is capable of detailed and accurate analysis. This capability has led to more and more features of the software being incorporated as a mandatory part of all assessments. Including these features in every assessment should, in theory, increase the accuracy of rating results. However, the quality assurance issues identified annually by the AAOs show a decline in alignment with the expected outcomes, despite the requirement that all Assessors enter data in accordance with the NatHERS Technical Note, which details the data entry requirements for each of these features.
The question becomes: when we are using the software for compliance only, could we mandate requirements that use fewer parts of the software, leaving some items set to a default value? Doing this could make the input data more consistent and, therefore, the output data a more accurate comparison. Do we need to start looking at the accuracy of the comparison rather than the accuracy of the assessment?
The reality of the NatHERS rating is that it gives a result out of 10. Theoretically, this is a comparative result that assumes all the data input into all assessments is the same, so that the outputs can be compared in an ‘apples with apples’ scenario. This is never going to be entirely true, given that differences and variables will occur through professional judgement, individual interpretation and human error.
If some defaults were set to minimise data input inaccuracies, wouldn’t this also create a comparative setting? For example, these defaults may mean assessing all dwellings:
- in a greenfield situation, taking away the effect of neighbouring buildings on shading and cross-flow ventilation;
- with only default windows and constructions, reducing the risk of errors created when products or systems are used by Assessors without adequate experience or research skills to create accurate products and use them appropriately; or
- with a set percentage, number and type of ceiling penetrations, removing a step that is often hard to adhere to when the assessment is done at planning stage (as in NSW), or that is undone when installation (illegally) occurs after inspection, meaning the rating no longer holds true.
The ‘inaccurate data’ that all Assessors would create could then be used to recalibrate a new rating goal that allows for the impact of these assumptions (e.g. MJ/m².annum or stars). In this scenario, the consumer would only need to know whether the dwelling has complied or not.
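To make the recalibration idea concrete, here is a minimal sketch, assuming invented numbers throughout (the cap, the uplift factor and the dwelling load are hypothetical illustrations, not NatHERS values):

```python
# Hypothetical illustration only: all numbers below are invented, not NatHERS values.

# Suppose the current compliance cap for a climate zone is 114 MJ/m2.annum,
# and analysis showed that assessing every dwelling with the proposed defaults
# (greenfield siting, default windows, fixed ceiling penetrations) raises
# modelled loads by roughly 8% on average. The cap could be recalibrated
# once, centrally, so the same stock of dwellings still passes at the same rate.

CURRENT_CAP = 114.0          # MJ/m2.annum (hypothetical climate-zone cap)
DEFAULTS_UPLIFT = 1.08       # hypothetical average effect of the mandated defaults

RECALIBRATED_CAP = CURRENT_CAP * DEFAULTS_UPLIFT  # 123.12 MJ/m2.annum

def complies(modelled_load_mj_per_m2: float) -> bool:
    """Pass/fail under the recalibrated, defaults-based cap."""
    return modelled_load_mj_per_m2 <= RECALIBRATED_CAP

# A dwelling modelled with the defaults at 120 MJ/m2.annum would pass,
# because the cap has been shifted to absorb the defaults' average impact.
print(complies(120.0))  # True (120 <= 123.12)
```

The point is not the numbers but the mechanism: if everyone models with the same defaults, the benchmark can be shifted once to absorb their average impact, and the pass/fail comparison stays ‘apples with apples’.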
Perhaps we need two NatHERS compliance pathways. One, based on defaults, gives the consumer a pass or fail certificate. The other provides a star rating and uses the software at a higher level, allowing those who want to use the software as a design tool, or to show their clients just how cost-effective, energy-efficient and comfortable their house is, comparatively, out of 10.
This detailed assessment pathway may, for example, require further experience, training, examination or accreditation before an Assessor is permitted to use the software in this way.
An example of a ‘simple’ comparative system already exists in the ACT as part of the point-of-sale disclosure of a star rating. The EER certificates required are calculated using FirstRate4 (FR4), a first generation NatHERS software; the currently approved software we use today is second generation. FirstRate4 does not even include a calculation engine. Instead, it correlates data gathered about similar properties from the two other first generation software packages, NatHERS and BERS, both containing early versions of the Chenath engine. All of these packages used four zones, with rooms combined by type, and a limited default range of products and glazing inclusions. Data entry was quick and simple, making it easier to achieve consistency.

The fact that the output is a correlation rather than a calculation broadens the margin for error, and is therefore more suited to dwellings that are already built, where the Assessor has to make numerous assumptions and often measure up the dwelling for assessment. There have also been studies showing that pre-building and post-building assessments do not align in a large percentage of cases, further begging the question: why try to be so accurate in the first place if, when all is said and done, the rating produced does not correlate with the final dwelling the occupants inhabit, even when the same software and methodology is used?
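For readers unfamiliar with the distinction, here is a toy sketch of a correlation-style rating, assuming invented reference data and a naive similarity measure (this is purely illustrative and is not FirstRate4's actual methodology):

```python
# Purely illustrative: invented reference properties and a crude similarity
# lookup, not FirstRate4's actual method.

# Each reference property: (floor_area_m2, glazing_fraction, insulation_r, stored_rating)
REFERENCE_PROPERTIES = [
    (120.0, 0.15, 2.5, 5.0),
    (180.0, 0.25, 1.5, 3.5),
    (250.0, 0.30, 3.5, 6.0),
]

def correlated_rating(area: float, glazing: float, insulation: float) -> float:
    """Return the stored rating of the most similar reference property.

    A correlation approach never simulates the dwelling itself; its accuracy
    depends entirely on how well the reference set matches the dwelling.
    """
    def distance(ref):
        ref_area, ref_glazing, ref_ins, _ = ref
        # Crude normalised distance across the three features.
        return (abs(area - ref_area) / 250.0
                + abs(glazing - ref_glazing)
                + abs(insulation - ref_ins) / 3.5)

    best = min(REFERENCE_PROPERTIES, key=distance)
    return best[3]

# A 130 m2 dwelling with 16% glazing and R2.4 insulation is closest to the
# first reference property, so it simply inherits that property's 5.0 stars.
print(correlated_rating(130.0, 0.16, 2.4))  # 5.0
```

A simulation engine, by contrast, would compute a load from the dwelling's own physics; the correlation approach can only inherit what its reference set offers, which is exactly why its margin for error is broader.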
So, what are your thoughts? Does every assessment require absolute accuracy? What areas do you think could be set to a default level? Could we create an extra income stream for NatHERS Assessors by having both an application and an ‘as built’ stage assessment? Let us know in the comments section below!