This indicator is answered separately for each property type. These answers are also scored per property type, resulting in multiple scores for the same indicator. Scores are aggregated across property types by taking a weighted mean of the property type scores, weighted by the percentage of GAV reported for each property type in R1.1.
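The GAV-weighted aggregation across property types can be sketched as follows (a minimal sketch; the function and variable names are ours, and the property types and percentages in the example are illustrative, not prescribed by GRESB):

```python
def aggregate_scores(scores_by_type, gav_pct_by_type):
    """Weighted mean of per-property-type scores, weighted by the
    percentage of GAV reported for each property type (R1.1)."""
    total_weight = sum(gav_pct_by_type[t] for t in scores_by_type)
    return sum(scores_by_type[t] * gav_pct_by_type[t]
               for t in scores_by_type) / total_weight

# Example: Office scores 6.0 with 75% of GAV, Retail scores 4.0 with 25%
score = aggregate_scores({"Office": 6.0, "Retail": 4.0},
                         {"Office": 75, "Retail": 25})
# → 5.5
```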
The score of this indicator equals the sum of the scores achieved by:
- Data coverage = 8 points;
- Like-for-Like performance improvement = 2 points;
- Like-for-Like data availability = 0.5 points;
- Asset-level data reporting = 1.5 points.
Data coverage:
Data coverage percentages are calculated and scored separately against different benchmarks for landlord-obtained and tenant-obtained data for each property type, where "landlord-obtained" and "tenant-obtained" are defined as:
- Landlord-obtained data:
  - Managed Assets: base building, tenant space purchased by landlord, and whole building.
- Tenant-obtained data:
  - Managed Assets: tenant space purchased by tenant;
  - Indirectly Managed Assets: whole building.
Benchmarks are constructed by following the steps below:
- Check if there are at least 12 respondents with coverage percentages greater than 0% and less than 100% within the same region. If so, make the benchmark the quartiles of the distribution of these percentages.
- If the step above failed, check if there are at least 12 respondents with coverage percentages greater than 0% and less than 100% across regions. If so, make the benchmark the quartiles of the distribution of these percentages.
- If the step above failed, use static cut-off points of 25%, 50% and 75% to make the benchmark.
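The benchmark construction steps above can be sketched as follows (a minimal sketch; the peer coverage lists are assumed inputs, and `statistics.quantiles` with `n=4` yields the three quartile cut points):

```python
import statistics

def build_benchmark(regional_pcts, global_pcts, min_peers=12):
    """Return the benchmark cut points (b1, b2, b3) for coverage scoring.

    Tries the regional peer group first, then the cross-regional (global)
    group, and falls back to the static cut-off points otherwise."""
    for pcts in (regional_pcts, global_pcts):
        # only respondents with coverage strictly between 0% and 100% count
        eligible = [p for p in pcts if 0 < p < 100]
        if len(eligible) >= min_peers:
            b1, b2, b3 = statistics.quantiles(eligible, n=4)
            return (b1, b2, b3)
    # fewer than 12 eligible peers in both groups: static cut-offs
    return (25.0, 50.0, 75.0)
```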
Referring to the three benchmark numbers as b1, b2 and b3 where b1 < b2 < b3, these numbers split the coverage percentages between 0% and 100% into four intervals. The score achieved by a respondent depends on which interval their coverage percentage lands in, except that a coverage percentage of exactly 0% or 100% always receives a fixed score. The relationship between coverage percentages and scores is described in the table below:
| Coverage percentage | Fraction of maximum score |
| --- | --- |
| 0% | 0/4 |
| (0%, b1) | 1/4 |
| [b1, b2) | 2/4 |
| [b2, b3) | 3/4 |
| [b3, 100%] | 4/4 |
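The table can be expressed as a small scoring helper (a sketch; the function name and the example cut points are ours):

```python
def coverage_fraction(pct, b1, b2, b3):
    """Fraction of the maximum data-coverage score for a coverage
    percentage, given benchmark cut points b1 < b2 < b3."""
    if pct <= 0:
        return 0 / 4   # exactly 0% always scores nothing
    if pct >= 100:
        return 4 / 4   # exactly 100% always scores full points
    if pct < b1:
        return 1 / 4
    if pct < b2:
        return 2 / 4
    if pct < b3:
        return 3 / 4
    return 4 / 4

# With the static cut-offs 25/50/75, a 30% coverage lands in [b1, b2)
coverage_fraction(30, 25, 50, 75)
# → 0.5
```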
The resulting scores are then aggregated into a single score using a weighted mean with weights determined by floor area, except for base building and tenant space: base building has a static weight of 40% and tenant space a static weight of 60%. Because tenant space has both a landlord-obtained and a tenant-obtained section, the 60% weight is shared between the two based on relative floor area. If a respondent reports on both base building plus tenant space and whole building, then base building plus tenant space is given a weight based on floor area, which is then split further using the 40%/60% weights.
Like-for-Like performance improvement:
Like-for-Like performance is scored based on the percentage change in consumption using a methodology identical to the scoring of data coverage, with two exceptions: a lower value (for example a negative one) that lands in a lower quartile always results in a higher or equal score, and scores are aggregated using Like-for-Like consumption in the previous year as weights instead of floor area. If the GRESB reporting universe does not contain a sufficient number of peers to construct a global benchmark (minimum of 12), the benchmark uses static cut-off points at -5%, -2.5% and 0%.
We will refer to the three benchmark numbers as b1, b2 and b3 where b1 < b2 < b3. These split the LFL percentage changes into four intervals. As for data coverage, the score achieved by a respondent depends on which interval their LFL percentage change lands in, but the points given for each interval depend on the relationship between the mean, the median and a 0% change. The tables below describe which percentage change results in which score under each of these relationships:
If 0 < mean and median < mean:

| Condition | Score |
| --- | --- |
| LFLpc < b1 | 3/3 |
| b1 ≤ LFLpc < b2 and LFLpc ≤ mean | 2/3 |
| b2 ≤ LFLpc < b3 and LFLpc ≤ mean | 1/3 |
| b3 ≤ LFLpc or LFLpc > mean | 0/3 |

If mean ≤ 0:

| Condition | Score |
| --- | --- |
| LFLpc < b1 | 3/3 |
| b1 ≤ LFLpc < b2 and LFLpc ≤ 0 | 2/3 |
| b2 ≤ LFLpc < b3 and LFLpc ≤ 0 | 1/3 |
| b3 ≤ LFLpc or LFLpc > 0 | 0/3 |

If 0 < mean ≤ median:

| Condition | Score |
| --- | --- |
| LFLpc < b1 | 3/3 |
| b1 ≤ LFLpc < b2 and LFLpc ≤ mean | 2/3 |
| b2 ≤ LFLpc or LFLpc > mean | 0/3 |
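The three regimes can be combined into one scoring helper (a sketch; the function name is ours, and the example uses the static cut-off points -5%, -2.5% and 0%):

```python
def lfl_fraction(lfl, b1, b2, b3, mean, median):
    """Fraction of the maximum LFL performance score for an LFL
    percentage change, under the three mean/median regimes."""
    if lfl < b1:
        return 3 / 3   # best band in every regime
    # regimes with mean > 0 cap partial scores at the mean; mean <= 0 caps at 0
    cap = mean if mean > 0 else 0
    if mean > 0 and mean <= median:
        # regime "0 < mean <= median": only the [b1, b2) band scores partially
        if lfl < b2 and lfl <= cap:
            return 2 / 3
        return 0 / 3
    # regimes "0 < mean and median < mean" and "mean <= 0"
    if lfl < b2 and lfl <= cap:
        return 2 / 3
    if lfl < b3 and lfl <= cap:
        return 1 / 3
    return 0 / 3

# Example: static cut-offs, mean = median = -1% (the "mean <= 0" regime)
lfl_fraction(-3, -5, -2.5, 0, -1, -1)
```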
Like-for-Like data availability:
Points for Like-for-Like data availability are awarded if any Like-for-Like data is provided and not excluded by the GRESB outlier check.
Asset-level data reporting:
Points relating to asset-level data reporting are granted if participants report their energy consumption values at asset-level.
Open text box: The content of this open text box is not used for scoring, but will be included in the Benchmark Report.
Outlier checks:
GRESB performs two outlier checks for the data provided in this indicator, one based on the energy consumption intensity per square meter and one based on the percentage change in like-for-like consumption.
Intensity outliers:
For intensities, GRESB checks whether the reported values result in an intensity outside a range of expected values. If the value is outside that range, then the respondent is requested to provide an explanation for why their data is abnormal and this explanation is then checked in combination with statistics on the distribution of intensities for the same property type. If the explanation is not accepted, then the respondent will be scored as if they didn't provide the data associated with the explanation.
Like-for-like outliers:
For like-for-like changes, GRESB checks whether the provided values result in absolute percentage changes greater than a threshold between 10% and 20% depending on the like-for-like values reported for the previous year. Higher values result in a lower threshold for what is deemed abnormal. As for intensities, if an outlier is flagged the respondent is prompted to explain the abnormal value and the explanation is then checked in combination with statistics on like-for-like changes for the given property type. Data associated with explanations which are not accepted are treated as if they were not provided for all scoring purposes.
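The check above can be sketched as follows. Note that GRESB only states that the threshold lies between 10% and 20% and decreases as the prior-year value grows; the interpolation below, including the `small` and `large` bounds, is a hypothetical illustration, not GRESB's actual rule:

```python
def lfl_outlier_threshold(prev_year_value, low=10.0, high=20.0,
                          small=100.0, large=100_000.0):
    """HYPOTHETICAL mapping from prior-year like-for-like value to the
    outlier threshold: 20% for small values, 10% for large ones, with
    an assumed linear interpolation in between."""
    if prev_year_value <= small:
        return high
    if prev_year_value >= large:
        return low
    frac = (prev_year_value - small) / (large - small)
    return high - frac * (high - low)

def is_lfl_outlier(pct_change, prev_year_value):
    """Flag absolute percentage changes above the threshold."""
    return abs(pct_change) > lfl_outlier_threshold(prev_year_value)
```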