Decision Proposal 288 - Non-Functional Requirements Revision #288

Closed
CDR-API-Stream opened this issue Jan 9, 2023 · 57 comments
Labels: Category: API (a proposal for a decision to be made for the API Standards) · Industry: All (this proposal impacts the CDR as a whole, all sectors) · Status: Decision Made (a determination on this decision has been made)

@CDR-API-Stream
Contributor

CDR-API-Stream commented Jan 9, 2023

This decision proposal contains a proposal for changes to Non-Functional Requirements and the Get Metrics end point based on feedback received through various channels.

The decision proposal is embedded below:
Decision Proposal 288 - Non-Functional Requirements Revision.pdf

Consultation on this proposal will close on the 7th April 2023.

Note that the proposed changes have been republished in this comment and this comment

Consultation will be extended to obtain feedback on these updated changes until the 5th May 2023, subsequently extended to the 12th May 2023 and then to the 19th May 2023.

@JamesMBligh JamesMBligh changed the title Decision Proposal 288 - Placeholder Decision Proposal 288 - Non-Functional Requirements Revision Jan 9, 2023
@CDR-API-Stream added the Category: API, Status: Open For Feedback and Industry: All labels on Feb 25, 2023
@CDR-API-Stream
Contributor Author

The opening comment has been updated and the Decision Proposal is now attached.

The Data Standards Body welcomes your feedback.

@ranjankg2000

In the recommended “consent metrics”, we could potentially look at splitting the authorisation count for accounts between individual and non-individual entities. This could provide an indication of adoption among businesses.

@AusBanking-Christos

On behalf of the ABA, can we please request an extra week for our Members to respond. Thank you.

@CDR-API-Stream
Contributor Author

Of course, no problem. We'll extend the consultation until the 7th of April.

@damircuca

We recommend making the “GET /admin/metrics” endpoint publicly accessible without any authentication or protection. This change would provide numerous benefits to the ecosystem.

Ref: https://consumerdatastandardsaustralia.github.io/standards/#get-metrics

  • It would increase transparency by allowing stakeholders to see what is working and what is not. Currently, the only way to determine whether a data holder is operational is by encountering errors or checking the CDR website that hosts a status report. With public access to metrics, stakeholders can easily access this information and make informed decisions.

  • Organisations would be able to incorporate these statistics into their consent flow, which would improve the consumer experience. By querying the API, organisations can determine which data holders are operational and avoid presenting options that may result in errors or unsuccessful experiences for the consumer.

  • Public access to metrics would encourage the development of tooling and reports that non-developers can use when working within the ecosystem. This would empower first line support personnel to field calls about potential issues with individual data holders and improve overall support within the ecosystem.

We believe that restricting access to the “GET /admin/metrics” endpoint only to the ACCC and individual data holders limits the potential benefits to the ecosystem. By allowing public access, ADRs and other stakeholders can make better-informed decisions and plan their approach to each data holder more effectively.
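
As an illustration of the kind of tooling public access could enable, here is a minimal sketch of a health check against Get Metrics. It is hypothetical: it assumes the endpoint were reachable without authentication (it is not today); the function name, base URL handling and the 99.5% cut-off are illustrative choices, while the request shape (x-v version header, 'period' query parameter, rate-string availability field) follows the published Get Metrics endpoint.

```python
import requests

# Hypothetical sketch: assumes GET /admin/metrics were publicly reachable
# without authentication. Request shape (x-v header, 'period' parameter)
# follows the published Get Metrics endpoint; base URL is a placeholder.
def data_holder_looks_healthy(base_url: str, min_availability: float = 0.995) -> bool:
    resp = requests.get(
        f"{base_url}/admin/metrics",
        params={"period": "CURRENT"},
        headers={"x-v": "3"},
        timeout=5,
    )
    if resp.status_code != 200:
        return False  # treat any failure as "do not present this holder"
    data = resp.json()["data"]
    # availability.currentMonth is a rate string (e.g. "0.999") in the
    # published schema -- parse defensively in real code.
    return float(data["availability"]["currentMonth"]) >= min_availability
```

An ADR consent flow could run a check like this before listing a data holder, and skip or flag holders that fail it.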

@nils-work
Member

Hi @damircuca

Are you finding cases where the Get Status endpoint does not accurately represent implementation availability (i.e., because you encounter unexpected unavailability), or that there is not enough detail (on specific endpoint availability, for example) for it to be useful when initiating a series of requests?

@jimbasiq

Hi @damircuca

Are you finding cases where the Get Status endpoint does not accurately represent implementation availability (i.e., because you encounter unexpected unavailability), or that there is not enough detail (on specific endpoint availability, for example) for it to be useful when initiating a series of requests?

Hi @nils-work

If you are able, please take a look at https://cdrservicemanagement.atlassian.net/servicedesk/customer/portal/2/CDR-3328 to see an example of when the Get Status endpoint has not worked sufficiently.

You can see the failure reflected in the CDR performance dashboard screengrab below. Availability is apparently 100%, but a 50% drop in API traffic is visible, i.e. APIs are down; specifically, 500 errors on the data retrieval APIs.

[screengrab: CDR performance dashboard showing the drop in API traffic]

@nils-work
Member

Thanks @jimbasiq, I'm not sure if I'll be able to access that ticket, but I'll check.

As a general comment, and it may not have been the case, but my initial thinking is that a scheduled outage may produce this effect (I note the drop in traffic appears to be over a weekend).

The Availability metric (at ~100%) would not be affected by a scheduled outage, but any invocations (resulting in 500s) may still be recorded and reported in Metrics (though there is not an expectation of this during an outage).

This makes it appear that either the Status SCHEDULED_OUTAGE was ignored by clients and about 50% of the average invocations were still being received (perhaps only some endpoints were affected), or the status was incorrectly reported as OK during an unexpected outage (of about 3 days) but only about 50% of invocations could actually be logged.

If it was an unexpected outage, the Status response should have been PARTIAL_FAILURE or UNAVAILABLE and Availability should have been about 90% for the month.
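
For concreteness, the arithmetic behind that ~90% figure (a sketch, assuming scheduled outages are simply excluded from the unavailability count, consistent with the description above):

```python
# A 3-day unexpected outage in a 30-day month, with scheduled outages not
# counted as unavailability, gives roughly 90% availability.
def monthly_availability(days_in_month: float, unscheduled_outage_days: float) -> float:
    return (days_in_month - unscheduled_outage_days) / days_in_month

print(round(monthly_availability(30, 3), 3))  # 0.9 -> ~90% for the month
```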

@damircuca

Hey @nils-work generally the more data that is available the more options we have on how to incorporate it within the delivery of CDR services.

The Get Status endpoint is very coarse, and doesn't provide enough depth for us. Whereas metrics has a lot more detail that can be used to better support customers, implement a more refined CDR CX flow, and also help with fault resolution.

Even now, whenever a support ticket is raised, we find ourselves going straight to the metrics report (available via CDR site) to see what the lay of the land looks like before we respond back to our customers.

Further to this, even with scraping we have been surfacing metrics such as performance metrics, stability percentages and more, which we find valuable to drive decisions and future enhancements.

Realise it may be a big ask, but it would be valuable, and would also raise transparency and accountability within the ecosystem, which is equally important for making the CDR a success.

@CDRPhoenix

Rather than opening up the Get Metrics endpoint to the public, I think it is worthwhile to allow the public to sign up and download the raw Get Metrics data that the ACCC collects. This would still place the responsibility for chasing non-responding Data Holder brands with the ACCC, and won't flood Data Holder brands with Get Metrics requests.

As a side note, stemming from ADRs hitting maximum TPS thresholds, I think it is also worthwhile to revisit batch CDR requests and whether there is a need for something like a Get Bulk Transactions endpoint for banking, if there is a use case for fetching transactions periodically. The DSB could also consider creating a best practice article on ZenDesk about ADR calling patterns, i.e. whether you need to perform a Get Customer Detail and Get Account Detail every time you want to pull transactions down. Otherwise any increase in traffic thresholds will be soaked up by "low value" calls and we will be forever chasing more and more bandwidth.

@CDR-API-Stream
Contributor Author

In response to @ranjankg2000:

In the recommended “consent metrics”, we could potentially look at splitting the authorisation count for accounts between individual and non-individual entities. This could provide an indication of adoption among businesses.

Thank you for this feedback. This is a good idea to incorporate

@CDR-API-Stream
Contributor Author

CDR-API-Stream commented Mar 28, 2023

In response to @damircuca:

Making the metrics API public is a very interesting idea. The DSB will discuss this internally with Treasury and the ACCC to identify any policy reasons why this would not be possible. There are really no technical reasons why this would be an issue, provided there were low non-functional requirements to ensure Data Holder implementations didn't need to be over-scaled.

The other option, provided by @CDRPhoenix, where the data is made available from the Register, is also something that could be investigated.

@damircuca

One thing to consider, which @CDRPhoenix touched on, is to open up the data that the ACCC collects rather than forcing the Data Holders to make changes on their end. Sorry for stating the obvious, you're likely considering this already 🤷🏻‍♂️

@ACCC-CDR

The ACCC supports the changes outlined in Decision Proposal 288. These changes will improve the accuracy of the information captured through Get Metrics and better support the estimation of consumer uptake.

The ACCC suggests a further change to the PerformanceMetrics value. Currently, it is proposed that this value be split into unauthenticated and authenticated metrics. The ACCC suggests that splitting this value by performance tier (i.e. Unauthenticated, High Priority, Low Priority, Unattended, etc.) would better align these measures with the metrics reported for invocations and averageResponse. This change would assist the ACCC's monitoring of Data Holders' compliance with the performance requirements.

The ACCC notes suggestions by participants regarding the availability of Get Metrics data. As flagged by the DSB above, the ACCC will continue to collaborate with its regulatory partners to assess how Get Metrics data can most effectively enhance the CDR ecosystem but suggests that such measures should be considered separately from this decision.
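
To make the suggested split concrete, one possible shape for a tier-split performance metric is sketched below. This is illustrative only; the field names are placeholders, not the final Get Metrics schema.

```python
# Illustrative only: a possible tier-split of PerformanceMetrics, with an
# aggregate value retained alongside the tiers for continuity. Field names
# are placeholders, not the final Get Metrics schema.
performance_example = {
    "performance": {
        "aggregate":       {"currentMonth": "0.998"},  # overall NFR view
        "unauthenticated": {"currentMonth": "0.999"},
        "highPriority":    {"currentMonth": "0.997"},
        "lowPriority":     {"currentMonth": "0.996"},
        "unattended":      {"currentMonth": "0.995"},
        "largePayload":    {"currentMonth": "0.991"},
    }
}
```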

@cuctran-greatsouthernbank

cuctran-greatsouthernbank commented Apr 3, 2023

Overall this will be a large-sized change for Great Southern Bank to implement. Given we have already planned work up until July 2023, it would be much appreciated if the obligation date for this change can be at least 6 months once the decision is made.

Issue: Error code mappings.
Out of the two options proposed, we prefer option 1 - split the error counter by HTTP error code with corresponding counts. This will give a better understanding of which error codes the application returned. Currently, all 5XX errors are treated as the same.

Issue: Lack of consent metrics.
We need clarification on historical records - are we looking for counts (authorisations/revocations) from the beginning, or for the last 8 days? All other metrics carry the last 8 days' data, but the customer count and recipient count do not.

Issue: Scaling for large data holders.
We prefer Option 2 - tiered TPS and Session count NFRs based on the number of active authorisations the data holder has.
The customer base and the uptake of Open Banking at Great Southern Bank remain relatively small compared to the big banks. This option will help us reduce the cost of maintaining our infrastructure to meet the current TPS requirements. We can proactively manage active authorisations and scale up gradually as required. Depending on the tiered thresholds, we can potentially look at ensuring we meet the current tier plus the next tier up, to cater for any sudden influx of registrations.
Further consultation to define the tiered thresholds would be much appreciated.

@anzbankau

We are broadly supportive of the proposal to uplift the CDR's non-functional requirements as outlined in Decision Proposal 288. The decision proposal covers a range of topics, and we suggest any proposed implementation schedule be priority driven, with careful consideration given to improved consumer outcomes, ecosystem value and impact to data holders (i.e., cost, time, and complexity of implementation).

Specific points of feedback as follows:

  • Get Metrics Issues, authenticated and unauthenticated: We support the recommendation to "Split between authenticated and unauthenticated".
  • Get Metrics Issues, granularity alignment to NFRs: We are not clear on the benefit, consumer or otherwise, of reporting on an hourly basis. We suggest that the benefit be clearly articulated before changes are made.
  • Get Metrics Issues, lack of consent metrics: As above, we suggest that the consumer benefit of this proposal be clearly articulated, as the effort to implement is likely to be significant.
  • NFR Issues, scaling for large data holders: We support scaling at brand level and the uplift of the TPS ceiling in accordance with an appropriate demand forecast. We have yet to see evidence that the current TPS ceiling is inadequate and/or adversely affecting consumer outcomes. We suggest an evidence-based approach to forecasting future demand, so that holders can plan implementations with sufficient lead time. We note that open banking digital channels are not like-for-like with existing banking channels; the CDR rules for entitlements mean that there are additional performance considerations for data holders. We welcome the opportunity to work with the DSB to review the current demand on the system. We do not support removing site-wide NFRs.
  • NFR Issues, NFRs for incident response: We are broadly supportive of NFRs for incident response and endorse a more transparent approach to tracking issues and resolution. NFRs for incident resolution are problematic as there is no easy way to guarantee resolution times, particularly with complex issues which require interactions between ADRs, consumers and data holders; these interactions can be laboured and transactional, owing to the limited information which can be exchanged outside of the CDR. We are also unclear how issues can be objectively and consistently classified in terms of severity and prioritisation without independent mediation.
  • Implementation considerations: Per the earlier point, implementation should be priority-based with appropriate consideration given to the ecosystem's capacity for change and demonstrable consumer benefit. A more predictable change cadence with sufficient lead time for implementation is recommended.
  • Get Metrics changes, errors: We recommend counting by HTTP status code rather than URN.

@kristyTPGT

TPGT appreciates the opportunity to provide feedback in relation to Decision Proposal 288. Please find our feedback attached.
DP-288 Final Response.pdf

@johnAEMO

johnAEMO commented Apr 6, 2023

AEMO thanks you for the opportunity to respond to this Decision Proposal

In terms of feedback to the getMetrics API, AEMO has the following comments:

  1. The definitions of each field could be made clearer by including the httpStatus codes applicable to each field. That is:
    • Availability – is presumed to be the % of all requests not returning a 5xx series status code
    • Performance – is presumed to be for all successful requests (200s) within thresholds
    • Invocations – is presumed to be all requests (successful or not)
    • Average response – is presumed to be for all successful requests (200s)
    • Tps fields – is presumed to be all requests (successful or not)
    • Errors – are presumed to be all 5xx series status code responses
    • Rejections – are presumed to be all 429 status code responses (traffic threshold limits)
    Missing:
    • Other 400 series errors should be reported. These are not currently reported in getMetrics, are in some instances significant in number, and would complete the overall picture of request quality
    • 95th percentile – is statistically a more useful indicator of overall performance spread than the currently requested mean
    (A sketch of these presumed mappings appears at the end of this comment.)

  2. NFRs for AEMO as a secondary data holder: AEMO does not have access to the fields necessary to determine whether a request is Customer Present or Not Present. At this stage we have assumed the customer is present, except where multiple service points are requested (getServicePoints API only).

  3. Performance Observations
    AEMO currently has issues with its Usage APIs’ performance in providing large payloads and is undertaking a proof of concept to identify where and how to best address this. There are two changes in the industry that will increase payload size in the short term and medium term. Both changes will likely impact performance.
    • In the short term, the tranche 2 obligation for complex requests is expected to include multiple additional service points in one request, and that will impact performance.
    • In the medium term, the industry is planning to accelerate the upgrade from basic meters to interval meters as part of the AEMC's 'Review of the regulatory framework for metering services'. The objective of the review is to replace all basic meters with 5-minute interval meters in the NEM; each meter will provide 288 interval reads per day per unit of measure.

While we accept that AEMO is obliged to service every request it receives, there are some observations we have already made that may improve the ADRs’ experience of this service:
• AEMO receives interval meter Usage data from meter data providers at best the day after the reading is taken – multiple API requests within a day will not yield any more up-to-date data. Interval meter readings of 5-30 minutes are used by the energy industry to settle the market and to charge retailers for the energy their consumers have used during each interval, (who in turn use this to bill their consumers). While the meter might read every 5-30 minutes, this does not indicate the frequency that usage data is circulated across the industry.
• AEMO Basic meter Usage data is typically read on a 1-3 monthly basis and it too is shared across the energy industry at best the day after reading. Similarly, multiple requests in a day will not yield more up-to-date data.
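
Sketch of the presumed mappings in point 1 (illustrative only; assumes request records of (HTTP status, response time in ms) and a per-endpoint performance threshold):

```python
from statistics import quantiles

# Illustrative computation of the presumed metric definitions in point 1,
# from per-endpoint request records of (http_status, response_time_ms).
def summarise(records: list[tuple[int, float]], threshold_ms: float) -> dict:
    total = len(records)
    errors = sum(500 <= status <= 599 for status, _ in records)
    rejections = sum(status == 429 for status, _ in records)
    success_times = [ms for status, ms in records if 200 <= status <= 299]
    return {
        "invocations": total,                           # all requests
        "errors": errors,                               # 5xx responses
        "rejections": rejections,                       # 429 responses
        "availability": 1 - errors / total if total else None,
        "performance": (sum(ms <= threshold_ms for ms in success_times)
                        / len(success_times)) if success_times else None,
        "averageResponse": (sum(success_times) / len(success_times))
                           if success_times else None,
        # 95th percentile of successful response times: the spread
        # indicator suggested above as more useful than the mean.
        "p95Response": (quantiles(success_times, n=20)[-1]
                        if len(success_times) >= 2 else None),
    }
```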

@AusBanking-Christos

6 April 2023

Submitted on Thursday, 6 April 2023 via the Consumer Data Standards Australia GitHub site

Dear @CDR-API-Stream

ABA Response to Decision Proposal 288 – Non-Functional Requirements

The Australian Banking Association (ABA) welcomes the opportunity to respond on behalf of our Members regarding DP 288 Non-Functional Requirements.

The ABA has met with Members to discuss DP 288 in more detail and provides the following feedback.

A point raised by Members centred on the Dynamic Client Registration (DCR) Response Time NFR. Members noted that the response times for DCR in the current Consumer Data Standards (the Standards) can prove challenging to comply with, given the additional latency taken by Accredited Data Recipients (ADRs) to undertake the registration request JWT validation required by the Standards. This additional latency, which Data Holders (DHs) rely on in relation to ADR outbound connection whitelisting, is not split out from the times noted in the Standards. We propose that the DSB reconsiders amending response times to reflect this. As a point of reference, we include a link to last year's Dynamic Client Registration Response Time NFR #409.

We note DP 288 confirms that DCR will not be subject to change here but is reserved for a future revision of the Register Standards. We ask the DSB to reconsider our Members' position and address their concerns on this point as part of the development requirements in DP 288.

Equally, we thank the DSB for providing clarity on the origins of the six new Consent Metrics (new authorisations) introduced in DP 288, where the DSB explained that the ACCC requested these specific new Consent Metrics so the ACCC can determine where customers are dropping off in the consent flow, and whether this is occurring at the ADR or DH end.

We propose an open discussion or workshop with the ACCC regarding their request for additional consent metrics as a way to understand and improve consent drop-off rates. The cost and effort to add these metrics, when aggregated across all DHs, is significant. We propose that a small number of DHs that between them cover most consent flow types (essentially the different OTP delivery mechanisms) volunteer to provide the metrics requested on a one-off basis, as input for a study into improving consent flow UX, which is presumably what the ACCC wants the metrics for in the first place. This would lead to a faster outcome and be cheaper not only for all DHs but also for the volunteers (as they would not be extending the Metrics API, only collating the data on a one-off basis). We also note that the consent flow is likely to change radically because of Action Initiation and the introduction of FAPI 2.0 and RAR.

Should the above volunteer proposal not be accepted, some Members have commented on the DP 288 section on Implementation Considerations, which includes the six new Consent Metrics. We note that the DSB acknowledged it was prepared to ramp up the implementation schedule over an extended period; an initial proposal raised by the DSB was for five years as a potential implementation period. The ABA welcomes this proposal by the DSB, as it allows our Members to better resource and budget for these and other priorities, including those planned for future CDR implementation (e.g., Action Initiation and Non-Bank Lending).

We would also ask that the DSB further considers how it would prefer to update the CX journey negative path. At any point along the customer journey the customer can decide to cancel, and there can be multiple reasons why (regardless of whether the customer is still on the ADR side or the DH side); it is not only about the customer hitting a technical issue and being unable to continue the journey. Ideally, at the point of customer-initiated cancellation, data should be collected as to why the customer decided to cancel, using a "standardised" set of reasons Members can all report on. Currently this data is not collected (by either ADR or DH) when the customer cancels, as doing so is deemed to introduce friction and is not in the DSB's CX flow.

We generally understand the DSB's proposal to balance TPS ceiling obligations against the number of consents held by each bank. This is intended by the DSB to be a fairer allocation of investment across individual banks, as opposed to setting a fixed figure that, for some smaller Members, may result in excessive systems costs to meet a TPS ceiling they are not likely to reach.

Members have expressed challenges with TPS thresholds around provisioning for peak times. Members have suggested further workshops be facilitated by the ACCC and DSB on how to address TPS and response time concerns and achieve a fair and reasonable model across all industries and emerging areas like Action Initiation. Members believe this approach could better serve reaching a resolution than direct feedback to a DP.

We would rather have a staged lift in TPS that is tied to a realistic industry consensus forecast. If the increase is staged over a number of years, we would also like a mechanism to periodically revise the required TPS as more data becomes available. Alternatively, if a formulaic approach tied to consents is taken, we would expect that the formula be deployed in a manner that gives Members enough time to budget for and implement system uplifts to cater for increased TPS NFRs, including systems changes for third party service providers.

We also propose that demand management is considered. For example, demand from ADRs could be spread across 24 hours, and not 3 hours in the early morning. This could be enforced through hourly quotas. Another consideration is restricting the number of times that slow moving data is queried. If a given data set is only updated daily, then this could be flagged with a new metadata field that the ADRs would have to respect and only request that data once a day.

In conclusion, we note DP 288 raises challenges for smaller DHs around TPS and consents, with a few options raised by the DSB to remediate under the heading Scaling for large DHs. One proposal is to 'increase the site wide TPS and Session Count NFRs'. Some Members have requested evidence-based data on what the DSB sees, or foresees, in the ecosystem to warrant a change, and on the types of changes being proposed by the DSB.

Further discussions or workshops with the ACCC and the DSB on these NFRs and other appropriate matters, to understand how this potential proposal could be applied efficiently, would benefit our Members, given that if it were applied by the DSB to accommodate a rise in TPS ceiling thresholds, it would likely result in significant investment for affected ABA Members.

We thank the DSB again for the opportunity to respond on behalf of our Members, as we are equally thankful for the DSB extending our response date by a week.

We look forward to continuing our engagement and thank the DSB for its support in these matters.

Yours sincerely

Australian Banking Association

@Telstra-CDR

Please find attached feedback from Telstra
DP288 - Feedback.pdf

@CDR-API-Stream
Contributor Author

Thanks everyone for all of the feedback. There was a lot that came in just before or over the Easter weekend. We are going through the feedback and will respond incrementally over the next couple of days.

We will leave this consultation open during this time and for a further couple of days so that everyone can respond to what we will be proposing to take to the chair.

@CDR-API-Stream
Contributor Author

CDR-API-Stream commented Apr 12, 2023

Submitted via email on the 6th of April 2023

NAB Response to Decision Proposal 288 – Non-Functional Requirements

National Australia Bank Ltd (NAB) welcomes the opportunity to respond to Decision Proposal 288 Non-Functional Requirements. Due to technical issues, we have not been able to submit our response via GitHub. As such, we provide our response to certain items below.

Dynamic Client Registration

As per the previous GitHub issues listed below, we request that the DCR performance threshold be increased.
Dynamic Client Registration Response Time NFR · Issue #409 · ConsumerDataStandardsAustralia/standards-maintenance · GitHub
CDR Data Holders outbound connection whitelisting · Issue #418 · ConsumerDataStandardsAustralia/standards-maintenance · GitHub

Whilst we acknowledge that further consultation on DCR has been opened under Noting Paper 289 – Register Standards Revision, we request that the increase in the DCR performance threshold be implemented as a quick fix whilst a strategic path forward is discussed as part of Noting Paper 289.

Scaling NFRs for Large Data Holders

We suggest further workshops be facilitated by the DSB and ACCC on how to address the TPS issue and achieve a fair and reasonable model across all industries and emerging areas like action initiation. We prefer a staged lift in TPS that is tied to a realistic industry consensus forecast. We also suggest that ADRs factor TPS thresholds into their implementations, as Data Holders should not be forced to invest in expanding their capabilities due to ADR implementation choices, i.e. using heavy batch processes to request data in bulk. As the API Availability threshold is set to 99.5% per month and API performance requirements enable fast data sharing, the ecosystem should be moving towards real-time, on-demand data.

API Response Times

Based on the interesting points raised in GitHub issue #566 (Optionality of critical fields is facilitating data quality issues across Data Holder implementations · Issue #566 · ConsumerDataStandardsAustralia/standards-maintenance · GitHub), we believe that NFRs should enable the data sharing ecosystem rather than constrain it. The current NFRs were made binding without extensive consultation or any consideration of the unique challenges presented by the legacy systems that hold CDR Data. We believe the focus of the CDR at this stage should be on data quality and adoption rather than imposing arbitrary, restrictive performance requirements. As one of the CDR principles is that the experience should be commensurate with digital channels, API response times should also be aligned. We strongly recommend that each API performance threshold is increased by at least 1000ms.

NFRs for incident response

NAB is strongly of the view that service level agreements for incident response must consider implementations where multiple data holders (and potentially third parties) are involved. Such incidents take a considerable amount of time, effort, and coordination between all involved parties. The CDR service management portal should also be uplifted to allow multiple parties to work on an incident and have visibility into it.

Impracticality of Current API Performance Requirements for Complex White Label Implementations

Context

With the acquisition of Citigroup’s consumer banking business, NAB is now the CDR Data Holder for white label credit cards issued under Card Services, Coles Financial Services, Kogan Money Credit Cards, Qantas Premier Credit Cards, Suncorp, Bank of Queensland, and Virgin Money Australia. Whilst some of these white label products are completely serviced by NAB (including CDR Data sharing and data sharing consent), some are serviced in partnership with other institutions including other ADIs that have their own separate CDR obligations. Further adding to the complexity, CDR data sharing was implemented using a third-party service provider. Figures 1 and 2 below visualise current implementations:

[Figures 1 and 2: diagrams of the current white label implementations]

When these solutions were implemented, the direction was to prioritise customer experience and consistency with existing digital (and non-digital) servicing models, with additional considerations including technical complexity, scalability, compliance deadlines and opportunities to improve existing channel integration. The understanding at the time was that the non-binding NFRs would undergo a robust consultation prior to becoming binding, and that the consultation would factor in the complexity of white label arrangements, especially ones where multiple parties are involved to provide optimal customer experience.

API Response Time Requirements

Current NFRs for API response times are not achievable for white label implementations where one ADR-facing party must integrate with multiple Data Holders to provide CDR Data. The NFRs measure individual API response times; however, in a complex white label implementation there are multiple steps that need to be completed in the background to:

  • establish a secure connection between multiple Data Holders;
  • authenticate the customer in context; and
  • retrieve CDR Data.

An additional consideration in this scenario is network latency, especially in instances where the infrastructure of the involved parties is not in the same region or country.

This consequently means that even in a scenario where each individual Data Holder meets the prescribed API response times, the nature of the implementation is such that the ADR-facing API response time will exceed the threshold.

NAB is of the view that the issue could be addressed by increasing the API response time thresholds across the board, which we believe would have a broader positive impact on the ecosystem. It would alleviate NFR pressures on Data Holders, who are often in a position where they must make trade-offs to remain compliant with NFRs. NAB believes that the focus of the CDR ecosystem should remain on customer experience and adoption.

Alternatively, the metrics reporting could be enhanced to allow ADR facing Data Holders to report metrics based on their own environment, with additional fields to report on data sharing metrics of another Data Holder that supplies CDR Data via a private integration. NAB would welcome the opportunity to contribute to a discussion regarding the development of new metrics applicable to complex white label arrangements.

@jimbasiq

Considering the comments around the difficulty of Data Holder implementation whilst balancing other work and obligations, Basiq would be supportive of a phased delivery approach; further discussion is required to agree and prioritise the "most useful" and "easier to implement" metrics. I would prefer to have several of the most useful metrics in three months rather than all metrics in 12 months.

@jimbasiq

On the topic of a TPS metric: it is always going to be a challenge for Data Holders to "right size" their infrastructure in order to avoid negatively affecting consumers. For instance, crystal balls or true elastic scalability would be required to set TPS and Session Count NFRs based on the number of active authorisations the data holder has.

Can I suggest the TPS metric drive the ongoing obligation, i.e. Data Holders do not just report on TPS but on % utilisation of their current limit. If metrics show TPS is regularly exceeding a defined threshold (e.g. 90%), the Data Holder should be obligated to raise their TPS.
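
A sketch of how that utilisation-driven obligation could be evaluated (the 90% threshold, seven-day window and function name are illustrative, not part of any standard):

```python
# Illustrative check for the suggested utilisation-driven obligation: a
# Data Holder reports daily peak TPS against its current limit, and a
# sustained breach of the utilisation threshold triggers an obligation to
# raise the limit. Numbers and window are placeholders.
def must_raise_tps(daily_peak_tps: list[float], current_limit: float,
                   threshold: float = 0.9, days_required: int = 7) -> bool:
    breaches = sum(peak / current_limit > threshold for peak in daily_peak_tps)
    return breaches >= days_required

# e.g. peaks near 290 TPS against a 300 TPS limit on every day of the week
print(must_raise_tps([285, 290, 275, 288, 295, 281, 287], 300))  # True
```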

@jimbasiq

One last comment on the Consent Metrics.

For abandonedConsentCount
The number of initiated consent authorisation flows abandoned (for any reason) during the period, reported on a daily basis

Could we get more granular than "for any reason"? A Data Holder should be able to detect the difference between:

  • abandon without interaction from the consumer
  • failure to successfully authenticate
  • stage at which the consumer abandoned post authentication

@CDR-API-Stream
Contributor Author

Here are the proposed changes to the Non-Functional Requirements for further feedback. These are candidate changes to be proposed to the Chair unless there is feedback indicating they should change:

Non-Functional Requirement Changes

Tiering of Traffic Thresholds

As there was consensus support for a tiered approach to traffic thresholds based on the number of active authorisations, the DSB is proposing amendments to the standards as outlined below. These thresholds have been developed from the data the DSB has been able to obtain regarding actual TPS and authorisation metrics for existing data holders.

The following statements in the standards in the Traffic Thresholds section will be amended:

For secure traffic (both Customer Present and Unattended) the following traffic thresholds will apply:

  • 300 TPS total across all consumers

For Public traffic (i.e. traffic to unauthenticated end points) the following traffic thresholds will apply:

  • 300 TPS total across all consumers (additive to secure traffic)

These statements will be replaced with:

For secure traffic (both Customer Present and Unattended) the following traffic thresholds will apply:

  • For Data Holders with 0 to 2,000 active authorisations, 200 TPS total across all consumers
  • For Data Holders with 2,001 to 5,000 active authorisations, 300 TPS total across all consumers
  • For Data Holders with 5,001 to 10,000 active authorisations, 350 TPS total across all consumers
  • For Data Holders with 10,001 to 25,000 active authorisations, 400 TPS total across all consumers
  • For Data Holders with 25,001 to 50,000 active authorisations, 450 TPS total across all consumers
  • For Data Holders with more than 50,000 active authorisations, 500 TPS total across all consumers

For Public traffic (i.e. traffic to unauthenticated end points) the following traffic thresholds will apply:

  • 300 TPS total across all consumers (additive to secure traffic)

Note that this will be a reduction in expectation for the vast majority of existing Data Holders and will be an increase in expectation for a small number of the most active Data Holders.

It is proposed that these changes will be tied to a Future Dated Obligation of Obligation Date Y23 No. 5 (13/11/2023).
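
Expressed as a lookup, the proposed secure-traffic tiers above read as follows (a sketch mirroring the bullet list; not normative):

```python
# The proposed secure-traffic TPS tiers above, expressed as a lookup.
# Mirrors the bullet list exactly; not normative.
SECURE_TPS_TIERS = [
    (2_000, 200),    # 0 - 2,000 active authorisations
    (5_000, 300),    # 2,001 - 5,000
    (10_000, 350),   # 5,001 - 10,000
    (25_000, 400),   # 10,001 - 25,000
    (50_000, 450),   # 25,001 - 50,000
]

def secure_tps_threshold(active_authorisations: int) -> int:
    for upper_bound, tps in SECURE_TPS_TIERS:
        if active_authorisations <= upper_bound:
            return tps
    return 500  # more than 50,000 active authorisations

print(secure_tps_threshold(1_500))   # 200
print(secure_tps_threshold(60_000))  # 500
```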

NFRs for Low Velocity Data

To differentiate the calling of low velocity data sets by ADRs, the following text will be added to the Data Recipient Requirements sub-section in the Non-functional Requirements section of the standards.

Low Velocity Data Sets

For endpoints that provide access to data that is low velocity (ie. the data does not change frequently) the Data Recipient is expected to cache the results of any data they receive and not request the same resource again until the data may reasonably have changed.

For low velocity data sets, if the same data is requested repeatedly a Data Holder may reject subsequent requests for the same data during a specified period.

Identified low velocity data sets are to be handled according to the following table noting that:

  • the Velocity Time Period is a continuous period of time in which calls beyond a specific threshold MAY be rejected by the Data Holder
  • the Allowable Call Volume is the threshold number of calls to the same resource for the same arrangement above which calls MAY be rejected by the Data Holder
Data Set | Impacted Endpoints | Velocity Time Period | Allowable Call Volume
NMI Standing Data | Get Service Point Detail | 24 hours | 10 calls
Energy Usage Data | Get Usage For Service Point, Get Bulk Usage, Get Usage For Specific Service Points | 24 hours | 10 calls
DER Data | Get DER For Service Point, Get Bulk DER, Get DER For Specific Service Points | 24 hours | 10 calls

As this change is really an expansion of the existing requirement that ADRs minimise traffic with Data Holders, and most ADRs should already be caching this highly cacheable data, no future dated obligation will be placed on this change. Feedback on this aspect of the proposal is welcome.
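
As an illustration of how a Data Holder might apply the allowance (a sketch only: the in-memory store, and keying by arrangement plus full request path including any page parameters, are implementation assumptions rather than part of the proposal):

```python
import time
from collections import defaultdict

# Illustrative Data Holder-side check for the low velocity allowance: up
# to 10 calls to the same resource for the same arrangement within a
# rolling 24-hour window, after which requests MAY be rejected (e.g. 429).
WINDOW_SECONDS = 24 * 60 * 60
ALLOWED_CALLS = 10

_calls: dict[tuple[str, str], list[float]] = defaultdict(list)

def may_reject(arrangement_id: str, resource_path: str) -> bool:
    now = time.time()
    key = (arrangement_id, resource_path)
    # drop calls that have aged out of the rolling 24-hour window
    _calls[key] = [t for t in _calls[key] if now - t < WINDOW_SECONDS]
    _calls[key].append(now)
    return len(_calls[key]) > ALLOWED_CALLS
```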

@anzbankau

(Quoting the DSB's proposed Non-Functional Requirement Changes above, in particular the tiered secure-traffic thresholds rising from 200 TPS to 500 TPS for Data Holders with more than 50,000 active authorisations.)

We are supportive of tiering to allow lower thresholds for Data Holders with fewer users, however we do not agree that the maximum TPS levels should change at this time. 

Any uplift of TPS beyond 300 TPS has a large impact on Data Holders. Given this, the proposal to introduce new tiers of up to 500 TPS should be progressed through a dedicated Decision Proposal. This will allow Data Holders visibility of this impactful change, and time to assess implementation considerations.

@Origin-Rachel

I think there needs to be a distinction between the type of consumer that these non-functional requirements apply to. I refer in particular to rules around response times for the "Get Bulk" APIs, which have no upper limit as to the number of accounts expected in the response. This means, for example, that the expected response time for "Get Bulk Billing" for a consumer with 1 account is the same for a consumer with 10, 50, or 100 accounts, when in practice increasing the number of accounts naturally results in an increased response time due to the amount of data requested.

I suggest reviewing the practicality of applying the same response time to every CDR customer. I note that these requirements seem written predominantly for retail/mass market consumers; it is possible for business customers in energy to have more than 100 accounts.

@AusBanking-Christos

Dear DSB,

In light of the discussion today with the DSB and our Members, who are seeking further opportunity to provide additional feedback, can we please request extending the consultation for another week to 19 May 2023.

Kindest Regards,
Australian Banking Association

@cuctran-greatsouthernbank

We also agree with @AusBanking-Christos and would like to request an extension of the consultation period to 19 May 2023.

Kind regards,
Great Southern Bank

@CDR-API-Stream
Contributor Author

This consultation will be extended until the 19th May as requested. The DSB would prefer not to extend this consultation any further beyond this date.

We understand that modification of the NFRs will need to be an ongoing process based on objective data. It would appear that this may require changes to the NFRs to be supported by a more specific, regular and engaged consultation process.

To that end we are planning a series of workshops specifically on NFRs for the ecosystem in late July or early August. These workshops will be used to work with the community to create an ongoing consultation process for NFRs that works for everyone, as well as to canvass the community about any issues and solutions related to the NFR standards that the community wishes to raise.

More details on these workshops will be announced in due course.

@johnAEMO

Comment on the NFRs for low velocity data:

  • The proposed allowable call volume of 10 will need to make allowances for instances where there are multiple pages - we have just seen calls with 3 service points for Usage data which have spanned 8 pages. The presumption is that each page can be called up to 10 times?
  • This NFR is required! AEMO has been subject to hits requesting a different page of Usage data for the same 3-NMI request every 6 seconds over the last 3 days. While we could reconfigure our infrastructure to allow for this type of request pattern, we would contend that it is atypical and allowance should not be made for it.

@perlboy
Contributor

perlboy commented May 17, 2023

The feedback for this DP is significant, making it difficult to comment further. What I'll note here is that the DP appears to discuss both the NFRs and the Metrics endpoint simultaneously, when the reality is that they are two separate spheres. Essentially, NFRs set the thresholds and are more of a structural architecture concern, while Metrics report on them, which is more of an engineering activity.

I suggest a more focused pair of DPs be proposed so that feedback can more easily be targeted at the specific areas.

@AusBanking-Christos

18 May 2023

Submitted on Thursday, 18 May 2023 via: Consumer Data Standards Australia - GitHub site

Dear @CDR-API-Stream

ABA Follow Up Response to Decision Proposal 288 – Non-Functional Requirements

ABA welcomes the plan for a series of workshops on NFRs for the CDR ecosystem. The proposed multistakeholder approach will lay the foundations for a shared and transparent capacity planning framework that balances the needs of all participants while ensuring appropriate customer outcomes.

The workshops will provide the opportunity for richer performance data to be assessed when setting NFR standards. ABA member banks commit to working with the DSB ahead of the meeting to identify a consistent data set that will be the most useful contribution to the workshop process.

We welcome the opportunity to contribute toward the development of NFR standards that will result in a sustainable and predictable capacity planning model for all CDR participants.

@jimbasiq

Basiq's feedback on the proposed Traffic Thresholds amendment is that we are generally supportive but still concerned about the upper boundary. The highest limit dictated in

For Data Holders with more than 50,000 active authorisations, 500 TPS total across all consumers

seems low considering Basiq currently has considerably more than 50,000 screen scrape active authorisations with each of the major banks, some in the hundreds of thousands. We intend to move all of these connections from screen scraping to open banking CDR connections.

If the CDR is to move data sharing away from screen scraping, it needs to both support the existing load and provide some overhead. I don't believe the current proposal does this.

@AGL-CDR

AGL-CDR commented May 18, 2023

Thank you for the opportunity to provide feedback on this area of discussion.

AGL does not support the tiering of thresholds for TPS for energy.

This is because:

  • There is currently no evidence that energy Data Holders are failing to meet existing TPS obligations.
  • There is currently no evidence that CDR consumers accessing energy Data Holders have adverse experiences because of limitations inherent in the TPS NFR’s.
  • Using Active Authorisations as a predictive benchmark for TPS volumes has not been established for energy. These brackets are therefore not based on 'real world' observed TPS levels when it comes to energy-related traffic.
  • Active Authorisations in energy may fluctuate substantially more than in the banking sector because of the Multisite concept where hundreds (or possibly thousands) of authorisations may be initiated in a 24 hour period. This may result in traffic volume spikes that are difficult to plan for and may cause a temporary spike above a given TPS threshold before returning to previous ‘normal’ volumes.
  • Ramping up capacity to meet hypothetical volumes is unrealistic as it will result in costly infrastructure upgrades that arrive after such spikes occur and there’s currently no evidence that higher TPS volumes would be sustained after an initial spike subsides. AGL would then incur considerably higher infrastructure costs associated with lengthy periods of unutilised capacity.

AGL requests that these proposed changes be delayed and revisited (for energy) until such time as at least twelve months of real-world traffic volumes have been observed following Tranche 3 Large Retailer Go Live (November 2024).

@commbankoss

Regarding additional consent metrics, CBA suggests a similar outcome, i.e. improvements to the consumer experience of the consent flow to reduce drop-off rates, could be achieved through a consultative approach with ADRs and DHs. Our recommendation is that a sample of relevant consent metrics be amalgamated by participants and provided to the DSB as input. This approach would be more cost-effective for the ecosystem, achieve a similar outcome, and avoid regret spend if authentication and consent flows are matured to enable Action Initiation in the future.

@anzbankau

In light of recent discussions, ANZ requests that the proposed tiering remains conditional upon the outcome of the forthcoming workshops. Given the complex nature of open banking systems, meeting the revised tiering is unlikely to be a simple scaling out exercise. The workshops must consider that data holders will require extensive capacity planning, design and implementation activities.

@NationalAustraliaBank

NAB welcomes the plan for further workshops on NFRs. As the topic appears to be of great interest to the CDR community, we recommend these workshops be scheduled sooner rather than later to maintain the positive momentum.

We also acknowledge DSB feedback regarding white label implementations and are keen to engage with DSB and any other interested participants to explore the topic in detail. With regards to the proposed Get Metrics future dated obligations, we request that they are pushed back by one release cycle, i.e. that the v4 FDO is aligned with Y24 #1 (11/03/2024) and v5 is aligned with Y24 #2 (13/05/2024).

@WestpacOpenBanking

Westpac welcomes the opportunity to respond to the additional proposals added to DP288.

Scaling NFRs for Large Data Holders

A tiered approach by activity is an improvement on the current standard. Nevertheless, Westpac suggests that the proposal needs to be evidence-based prior to structuring the tiering levels and thresholds. Our evidence suggests that current activity in the ecosystem does not warrant the unusually high thresholds in the current proposal. We welcome the opportunity to discuss the TPS proposal in the planned workshops for July-August, and we support earlier comments that this is not ready for presentation to the DSB Chair.

Westpac notes that it is difficult to set a fair and adequate TPS level without the context of the use-cases that the ecosystem wants to support, since some use-cases require more load than others. We suggest that the focus should be on activity growth in the medium-term future only. Handling of larger volumes can be revisited as the ecosystem matures, with a clearer pipeline of future use-cases and activity types flowing through the ecosystem. This would allow better allocation and direction of investment, aligned to the Government's intention as announced in the recent Budget.

Proposed changes to existing metrics

Westpac is broadly supportive of the proposed changes to existing metrics.

Proposed new metrics

Westpac notes from various comments above that there may be various uses for the statistics around 'abandonment by stage' by different parties within the ecosystem (regulators, ADRs, DHs, incumbents, and prospects). We suggest the following improvements to increase the value of the new metrics prior to implementing changes:

  • Efficiently scope the purpose of the metrics prior to implementation: A more prudent approach to obtaining insights into abandonment issues would be to apply a "lean start-up approach"; for example, to first collect data on a one-off basis from Data Holders (i.e. manually produced) and make the results available for the ecosystem to consume and analyse. A consultation after this exercise would give better clarity to all participants regarding their needs, and provide better input as to whether the type of metric is sufficient and relevant to their needs/uses.

  • Increasing value and insights from metrics:

    • Authorisation and abandonment metrics need to be associated with the software product that the authorisations are for.

    • To increase customer adoption in the ecosystem, it is more meaningful to understand the driver of the customer's decision; deriving metrics from DHs presupposes that the problem of customer abandonment relates to the stages of the authorisation. From a consumer viewpoint, the reason for customer abandonment on average is unlikely to correlate to technical issues encountered during the DH's stages of the authorisation flow (e.g. the customer changing their mind about using the ADR software product, cold feet over data-sharing, or staff testing a select component of the flow).

    • Westpac supports the ABA's position that the DSB allow a negative CX journey path with a "standardised set of reasons" that can be presented to customers at the point of abandonment. Furthermore, in advance of the suggested CX changes, a lean low-cost approach could be to launch a standardised one-off survey (e.g. via Survey Monkey) distributed to genuine ADR customers who did not complete the data-sharing authorisation (i.e. excluding staff testing activities).

Westpac also notes that there are many comments and questions around the definitions of the metrics that need to be discussed and resolved prior to presentation to the DSB Chair. Considering that the nature and size of the change vary depending on these definitions, it would be more appropriate to set the delivery timelines after the conclusion of the discussions or workshops. We ask that, in light of the current backlog of standards changes, a minimum of 9 months be provided to allow organisations to budget, resource and deliver. The ecosystem cannot sustain ongoing urgent revisions to standards, as we have recently experienced with FAPI 1.0.

@JohnMillsEnergyAustralia

Thank you for this opportunity to make a submission. EnergyAustralia submits the following:

With the energy sector having only recently gone live, the existing NFRs for us as a Data Holder remain untested by the ADR usage seen to date. The need to revise the NFRs so dramatically, and then to apply these to the energy sector, would therefore be premature.

We are aligned with the AGL submission made on this topic, which is reflective of the energy sector.

It appears that a staged approach, retaining the existing NFRs for energy, may well prove more suitable to support a nascent CDR sector like energy. This would avoid the risk of over-funding capacity. A sectoral approach should be based on CDR usage statistics from the energy sector, such that when it reaches the maturity of the banking sector it would move to the next stage of NFRs. This would provide more appropriate NFRs for more mature sectors, and retain the existing NFRs for sectors new to the CDR, like energy, for their first two years.

Publication of metrics of overall usage remains of benefit. However, more detailed publication of NFR metrics on such small usage volumes is not presently of industry benefit (until the two-year point following CDR implementation), and only then if volumes increase. Such limited usage will skew the figures and potentially misrepresent any conclusions drawn.

Further, we specifically endorse the final paragraph of the AGL submission on AEMO performance, which concludes: “AGL considers that it would be appropriate for AEMO to establish its own service desk arrangement for the resolution of tickets directly with ADRs and reduce administrative pressures on data holders to manage these issues.”

@ACCC-CDR

Ecosystem metrics data exposed by participants via the Get Metrics API is published by the ACCC on the CDR Performance Dashboard. Several of the changes proposed here will result in breaking changes for the Performance Dashboard, including breaking the continuity of historical data and the ability to compare metrics between versions, which is a mandatory requirement for us to operate and regulate the system. While breaking changes are sometimes necessary to advance the API and enrich metrics data over time, the averageTps and peakTps overall values need to be maintained in the metrics payload to facilitate this continuity. As the CDR ecosystem grows, our experience shows that it is impractical for all participants to transition between API versions in unison. A carefully constructed transition plan will therefore be necessary to ensure CDR metrics data relating to all participants remains publicly available. Given that we will be responsible for making the necessary changes to the Performance Dashboard and already work closely with participants, we will take the lead on this transition. Retirement dates for v3 should not be set until we have developed a transition plan.

In reference to the JSON schemas for v4 and v5 as posted in the above comment, we provide the following feedback:

  • If the plan is to introduce additional granularity into the metrics provided, we need to ensure that the overall values are retained for continuity of historical data.
  • As previously stated in our comment above we would like to split the performance metric by performance tier (i.e. Unauthenticated, High Priority, Low Priority, Unattended etc.)
  • Providing an aggregated performance metric for all authenticated calls does not provide enough granularity to assess the stated NFR.

@CDR-API-Stream
Contributor Author

Thanks everyone for the final feedback. This thread will now be locked; responses to feedback will be posted and a final decision created for submission to the Chair.

Note that the final decision on TPS thresholds may take a few more days as the DSB have been offered additional data that may influence the specifics of the thresholds to be set.

@ConsumerDataStandardsAustralia ConsumerDataStandardsAustralia locked and limited conversation to collaborators May 25, 2023
@CDR-API-Stream
Contributor Author

Response to feedback on low latency data clusters:

  • There is general consensus that this approach is appropriate
  • There hasn't been any feedback seeking a change to the current proposal

In response to the question from AEMO: yes, the current proposal would allow each page of usage history to be called up to 10 times per day. If the threshold needs to be adjusted based on actual experience, that can be consulted on in the future. In the interim, this should allow calls being made every couple of minutes to be managed to protect core systems.
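As a rough illustration only, the threshold could be enforced with a per-consumer, per-page daily counter along these lines; the limit of 10 calls per page per day is the only detail drawn from the proposal, and everything else (key shape, storage) is an assumption:

```typescript
// Sketch only: caps each page of usage history at 10 calls per consumer
// per day. A production implementation would use a shared store
// (e.g. Redis) rather than an in-memory Map.
const DAILY_PAGE_LIMIT = 10;
const callCounts = new Map<string, number>();

function allowUsageHistoryCall(consumerId: string, page: number): boolean {
  const day = new Date().toISOString().slice(0, 10); // e.g. "2023-05-29"
  const key = `${consumerId}:${page}:${day}`;
  const count = callCounts.get(key) ?? 0;
  if (count >= DAILY_PAGE_LIMIT) {
    return false; // reject, e.g. with HTTP 429 Too Many Requests
  }
  callCounts.set(key, count + 1);
  return true;
}
```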

@CDR-API-Stream
Contributor Author

Response to feedback on changes to the Get Metrics API:

Additional Changes

  • As per the request from the ACCC, we will add fields for the existing aggregated metrics for the purposes of trend continuity. These will be included in both v4 and v5 of the Get Metrics API
  • As per the request from the ACCC, instead of splitting performance metrics between authenticated and unauthenticated, we will split these metrics by performance tier (a hypothetical sketch of the resulting shape follows this list). A metric for the aggregated NFR of 99.5% of all invocations within performance requirements will be retained. This will be included in v5 of the Get Metrics API
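To make the tier split concrete, the following is a hypothetical sketch of what a tier-split performance object could look like; the field names are illustrative assumptions, not the final v5 schema:

```typescript
// Hypothetical shape only — the authoritative schema is whatever is
// published in the standards. Each value is a rate string (e.g. "0.995")
// representing the percentage of invocations within the performance
// requirement, one entry per day of the reporting period.
interface PerformanceMetricsV5 {
  aggregate: string[];       // retained whole-of-site 99.5% NFR, for trend continuity
  unauthenticated: string[]; // remaining fields: split by performance tier,
  highPriority: string[];    // as requested by the ACCC
  lowPriority: string[];
  unattended: string[];
  largePayload: string[];
}
```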

Implementation Considerations

  • In response to the request for a later implementation date for the Get Metrics changes, we propose retaining the existing date for v4 and pushing back the date for v5 by one milestone. The v4 date is being retained so that the tranche 3 energy retailers going live at the end of this year will report data consistently in the new format from the beginning, and to align with feedback that it would be better to begin receiving the improved data earlier rather than later
  • A deprecation date for v3 of Get Metrics will not be set at this time. The ACCC have indicated they will provide a schedule for their adoption of v4 and v5; once this is done, a deprecation date for v3 can be set.

Responses to other feedback

Some participants suggested other mechanisms for providing data rather than updating the Get Metrics API. We have not modified our proposal in response to this feedback for two reasons:

  1. We were specifically asked by the regulator to improve the Get Metrics API to assist in the ongoing management of the ecosystem and to reduce the cost of ad hoc reporting. These changes are in line with that request
  2. Our reception of suggestions relating to the voluntary provision of data is coloured by the lack of response to the DSB's past requests for voluntarily provided data. It is not clear that a voluntary mechanism would be effective

The suggestion that authorisation abandonment metrics should be broken down by software product has not been incorporated into the proposal. Doing so would increase implementation costs while providing minimal value, as the proposed metrics only come into play once the software product's part of the process has successfully completed (i.e. the customer has already accepted the proposed consent presented by the ADR). The metrics are therefore only representative of the data holder's screens, which are common across all software products. We may consider this feedback in the future if the concept of data recipient metrics is introduced to the CDR.
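For illustration, the kind of stage-based abandonment breakdown under discussion might look like the sketch below; the stage names are assumptions rather than the published schema:

```typescript
// Hypothetical stage names, for illustration only. Counts cover the data
// holder's own screens, which is why a per-software-product split adds
// little: every flow reaching these stages has already passed the ADR's
// consent step.
interface AbandonmentMetrics {
  abandoned: number; // total abandoned authorisation flows for the period
  abandonedByStage: {
    preIdentification: number;   // left before identifying themselves
    preAuthentication: number;   // identified but did not authenticate
    preAccountSelection: number; // authenticated but selected no accounts
    preAuthorisation: number;    // selected accounts but did not confirm
  };
}
```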

It was suggested that energy retailers should not be required to provide metrics until the two-year point of their implementation. This is not consistent with the requirements applied to the banking sector in the past but, more importantly, it is not a question being considered in this consultation, so no action is being taken on this feedback.

@CDR-API-Stream CDR-API-Stream added Status: Feedback Period Closed The feedback period is complete and a final decision is being formulated and removed Status: Open For Feedback Feedback has been requested for the decision labels May 26, 2023
@CDR-API-Stream
Contributor Author

CDR-API-Stream commented May 29, 2023

The DSB has received data from multiple banks relating to the number of active authorisations and TPS levels. Each of the banks that provided detailed data asked that it be kept confidential, so it will not be published, but it gives a stronger evidence base for setting the TPS tiering.

As a result of this data it would appear that our initial proposal (which was based only on number of customers) was far too aggressive and should be altered significantly.

The new proposed tiering for site wide authenticated peak TPS will therefore be as follows (a sketch of the stepping appears after the list):

  • For Data Holders with 0 to 10,000 active authorisations, 150 peak TPS total across all consumers
  • For Data Holders with 10,001 to 20,000 active authorisations, 200 peak TPS total across all consumers
  • For Data Holders with 20,001 to 30,000 active authorisations, 250 peak TPS total across all consumers
  • For Data Holders with 30,001 to 40,000 active authorisations, 300 peak TPS total across all consumers
  • For Data Holders with 40,001 to 50,000 active authorisations, 350 peak TPS total across all consumers
  • For Data Holders with 50,001 to 60,000 active authorisations, 400 peak TPS total across all consumers
  • For Data Holders with more than 60,000 active authorisations, 450 peak TPS total across all consumers
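Expressed as code, the stepping is a 50 TPS increase per additional 10,000 active authorisations, from a 150 TPS floor to a 450 TPS cap; `siteWidePeakTps` below is an illustrative helper, not part of the standards:

```typescript
// Sketch of the proposed tiering: peak TPS rises by 50 for each additional
// 10,000 active authorisations, from a 150 TPS floor to a 450 TPS cap.
function siteWidePeakTps(activeAuthorisations: number): number {
  const tier = Math.max(1, Math.ceil(activeAuthorisations / 10_000));
  return Math.min(100 + tier * 50, 450);
}

// e.g. siteWidePeakTps(5_000) === 150; siteWidePeakTps(25_000) === 250;
// siteWidePeakTps(75_000) === 450
```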

Note that the implications of this tiering strategy are as follows:

  • No existing data holder will have any immediate obligation increase
  • The majority of data holders will get a significant reduction in their site wide obligation (most data holders fit inside the lowest two tiers currently)
  • Energy retailers will get a significant reduction in obligation. The tranche 3 retailers going live at the end of the year will probably remain in the bottom tier for the medium term
  • For ADRs that are planning to migrate screen scraping customers there should now be enough headroom for interacting with the larger banks as the number of consents being established grows

Responses to specific feedback by participants is as follows:

  • There is consensus support for a tiering approach to site wide peak TPS thresholds.
  • A number of data holders requested that no changes be made until after the proposed workshops planned for later this year. Considering the length of time already invested in consulting on this issue and the existing problems being articulated by the ADR community, this is not the preferred approach.
  • There was a request to exclude energy retailers from these changes. The NFRs in the standards are deliberately cross-sectoral and the intent is for this to remain the case. The reason for the requested exclusion appears to be ensuring that the obligations on energy retailers are not increased. As the proposed changes result in a significant reduction in obligation for energy retailers (including the tranche 3 retailers), these concerns appear to be adequately addressed.
  • There was a suggestion to change the NFRs specifically for energy C&I customers. This feedback may be important to address but does not seem aligned to the specific issue being consulted on. It will be noted and raised again during the NFR workshops to be held later this year.

@CDR-API-Stream
Contributor Author

The Data Standards Chair has approved Decision 288 attached below:
Decision 288 - Non-Functional Requirements Revision.pdf

It is intended that this decision will be published in the standards in v1.25.0 in the next two weeks.

@CDR-API-Stream CDR-API-Stream added Status: Decision Made A determination on this decision has been made and removed Status: Feedback Period Closed The feedback period is complete and a final decision is being formulated labels Jun 21, 2023
@nils-work nils-work added this to the v1.25.0 milestone Jul 11, 2023
@nils-work
Member

Standards version 1.25.0 has now been published, incorporating the changes detailed in the decision above.
