Data61 is supporting the development of API standards to help Australian consumers provide trusted organisations of their choice with access to their data, in safe, customer-friendly ways. Its first focus is the banking sector. The aim? A more innovative, open banking sector with consumers able to make informed choices about their finances: whether that’s using product comparison services, getting personalised budgeting and accounting support or making decisions about the bank that’s best for them.
The Engineering Working Group focuses on demonstrating these API Standards through the delivery of usable software artefacts.
The Engineering team are pleased to announce a new release of our testing artefacts, which have been updated to conform to version 1.0.0 of the Data Standards.
The key artefacts of interest in this release are the payload validation tool, available as a standalone command line tool, and our parameterised test tool, which is executed as a Maven goal and can be found in the reference-test suite.
For a quick overview of how the two tools work, please refer to these two diagrams:
Nick Hamilton Interim Engineering Lead
It has been a while, but the CDS Engineering team can now give an update on our progress.
Firstly, Nick Hamilton is joining the team as interim Lead Engineer. Nick will draw upon his previous project delivery experience in the FinTech/banking sector and his work as an engineer on earlier versions of the CDR Standards and InfoSec to provide technical guidance as we continue to produce artefacts to support the CDR ecosystem. We are also pleased to announce that we are adding an additional Senior Engineer to the team, who will supplement Fei’s good work on the Engineering artefacts.
Over the last few weeks the Engineering team has been working on consolidating the existing 10 git repositories into a single repo. This will make it easier for the community to navigate the artefacts and to better understand their relationships. It will also enable better dependency and version management and will simplify our continuous integration. The cds-models, api-model and cds-conformance repos have been consolidated into reference-test, which can be found in the java-artefacts repo (https://github.com/ConsumerDataStandardsAustralia/java-artefacts/).
Over the next month, the Engineering team will be directing our efforts towards updating all our artefacts to align them with the latest versions of the CDS standards. We expect to release a 0.9.3 version first, with subsequent versions (0.9.4, 0.9.5 and 0.9.6) all to be released by the 30th of September (subject to there being no significant rule updates that might cause major revisions to the Standards). An important component of our effort is updating the payload validation tool in the reference testing suite.
For those who are not familiar with our CDS testing suite, it has two components. The first component is a Payload Validation Tool, which checks the payload responses returned from Data Recipients. This tool accepts a JSON file as input (either locally or via a URL) and reports on the “correctness” of the structure of the data. It can be used to check the validity of endpoints defined in the Standards. The second tool, the Parameterised Test Tool, provides a way for testers to supply configuration files that specify custom input parameters to send to CDS endpoints, along with the response data that is expected to be returned. This tool utilises the open-source Serenity testing library (http://www.thucydides.info/#/whatisserenity) to automatically make endpoint requests with the custom input parameters and confirm that the responses are as expected. Currently it only supports the Product API, but work will begin on extending it to encompass the Customer APIs in November.
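To give a flavour of what structural payload checking involves, here is a minimal, illustrative sketch in Java. This is not the Payload Validation Tool’s actual implementation: the `PayloadChecker` class, the schema map and the field names are all hypothetical (the real tool validates against the structures defined in the Standards). The idea is simply that a parsed payload is compared against an expected structure, and any missing or mis-typed fields are reported.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PayloadChecker {

    // Hypothetical expected structure for a product payload:
    // required field name -> expected Java type.
    static final Map<String, Class<?>> PRODUCT_SCHEMA = Map.of(
            "productId", String.class,
            "name", String.class,
            "effectiveFrom", String.class);

    /**
     * Checks a parsed payload against the expected structure and
     * returns a list of errors; an empty list means it conforms.
     */
    static List<String> validate(Map<String, Object> payload,
                                 Map<String, Class<?>> schema) {
        List<String> errors = new ArrayList<>();
        for (Map.Entry<String, Class<?>> field : schema.entrySet()) {
            Object value = payload.get(field.getKey());
            if (value == null) {
                errors.add("missing required field: " + field.getKey());
            } else if (!field.getValue().isInstance(value)) {
                errors.add("wrong type for field: " + field.getKey());
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        // "effectiveFrom" is absent, so one structural error is reported.
        Map<String, Object> payload = Map.of(
                "productId", "P-001",
                "name", "Everyday Account");
        System.out.println(validate(payload, PRODUCT_SCHEMA));
    }
}
```

The actual tool does considerably more (URL input, full endpoint coverage, detailed reporting), but the pattern of comparing a response against the declared structure is the essence of payload validation.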
Once the Engineering artefacts have been updated to the latest version of the Standards API, our engineering efforts will focus on implementing support for the InfoSec API into the artefacts. This is expected to be a major update that may take several months of engineering effort.
While our planned deliverables are currently confined to use in a desktop sandbox, we would welcome any contributions from the community that might extend them to support a cloud-based environment. We have created simple licensing agreements to ensure any individual or corporate contributions are recognised and these Harmony-based agreements will be published on the engineering website in the near future.
As always, if there are any questions regarding our artefacts, please feel free to contact the Engineering team, or raise a git issue if you find a bug or want to suggest a feature enhancement.
We look forward to working with you as we head toward implementing the CDS.
Most of the CDR community are already aware that Stuart Low has left Data61 to pursue other interests.
We thank Stu for his passionate contribution to the CDS discussions and particularly for his work developing the related engineering outputs.
The return of Perlboy is already evident and we look forward to his constructive involvement to help make the CDR regime a success!
All the best going forward, Stu!
Recently I have updated some 0.9.3 artefacts.
The updates are:
It’s been a little while but I have now published updated documentation for the 0.9.3 artefacts.
In addition I’ve uploaded and linked the Workshop Round 2 material.
It’s been a few weeks since I last sent an update, so I thought it was prudent to send one now, but I will keep it short.
Both Sprint 4 and Sprint 5 focused on internal code review, with significant tweaks to ensure the code developed for the artefacts had a high quality baseline. Following the result in Canberra, it was important that we realign our work with the intended CDR delivery schedule, so that the tools we deliver are the most useful relative to the expectations the government has of Data Holders.
Consequently, we are now focused on the July 1 target timeline for Product Reference Data and on functionally demonstrating all of the components needed to make this a reality.
As promised I have now published a video on YouTube giving a quick demonstration of the artefacts as they stand right now.
Feedback always welcome and we look forward to providing further updates soon.
It’s always great when effort across many different areas pays off. Looking back on the past 4-6 weeks and what we’ve accomplished, we are proud of how far we have come. It’s also a time to take stock and improve on what has been developed (more on that later).
This week we made available a number of CDS Codegen generated artefacts:
Java Client Library available on Sonatype
Once again it has been a pretty big fortnight, as we got deep into taking the Models built in Sprint #1 and initiating the production of Codegen. While we haven’t quite got to where we wanted to be by now, we are pretty happy with the progress and have now made available Product API focused samples for a Client, Server Stubs (using Spring) and a Model Holder.
For the adventurous, these are available directly from source now, and we have added a task to Sprint #3 to publish Docker containers and produce a quick showcase video. Within Sprint #3, we will use these samples as input templates for the ongoing code generation work currently occurring.
With Easter upon us, we completed a retrospective, and the final report is available in the Sprint #2 documentation. All up, 30,000 (!!) lines of additional code have now been added to our repositories in Sprint #2 alone, and we ticked over 300+ commits across our repositories this sprint. We suspect this will only accelerate further as we progress.
Well, it has been a pretty full-on fortnight as we completed our first major sprint, but we are happy to report that, after quite a few long days, we achieved our major target of defining the Standards in Java Models and generating a swagger specification that is mostly identical to the Standards swagger. We completed a retrospective and the final report is available in the Sprint #1 documentation; we will look to finalise Sprint #2 planning on Monday.