Data61 is supporting the development of API standards to help Australian consumers provide trusted organisations of their choice with access to their data, in safe, customer-friendly ways. Its first focus is the banking sector. The aim? A more innovative, open banking sector with consumers able to make informed choices about their finances: whether that’s using product comparison services, getting personalised budgeting and accounting support or making decisions about the bank that’s best for them.
The Engineering Working Group focuses on demonstrating these API Standards through the delivery of usable software artefacts.
Most of the CDR community is already aware that Stuart Low has left Data61 to pursue other interests.
We thank Stu for his passionate contribution to the CDS discussions and particularly for his work developing the related engineering outputs.
The return of Perlboy is already evident and we look forward to his constructive involvement to help make the CDR regime a success!
All the best going forward, Stu!
Recently I have updated some 0.9.3 artefacts.
The updates are:
It’s been a little while but I have now published updated documentation for the 0.9.3 artefacts.
In addition, I’ve uploaded and linked the Workshop Round 2 material.
It’s been a few weeks since my last update, so I thought it prudent to send one, though I will keep it short.
Both Sprint 4 and Sprint 5 focused on internal code review, with significant tweaks to ensure the code developed for the artefacts had a high-quality baseline. Following the result in Canberra, it was important that we realign our work with the intended CDR delivery schedule so that the tools delivered are as useful as possible relative to the government’s expectations of Data Holders.
Consequently, we are now focused on the July 1 target for Product Reference Data and on functionally demonstrating all of the components needed to make this a reality.
As promised I have now published a video on YouTube giving a quick demonstration of the artefacts as they stand right now.
Feedback always welcome and we look forward to providing further updates soon.
It’s always great when effort across many different areas pays off. Looking back on the past 4–6 weeks, we are proud of what we have achieved so far. It’s also a time to take stock and improve on what has been developed (more on that later).
This week we made available a number of CDS Codegen generated artefacts:
Java Client Library available on Sonatype
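To illustrate what consuming the unauthenticated Product Reference Data endpoint looks like, here is a minimal sketch using only the JDK’s built-in HTTP client rather than the published client library. The base URI is a hypothetical placeholder, and the `x-v` version value is assumed for illustration; consult the Standards for the correct endpoint version.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ProductsRequestExample {
    // Hypothetical data holder base URI -- substitute a real endpoint.
    static final String BASE = "https://data.holder.example.com/cds-au/v1";

    public static HttpRequest buildProductsRequest() {
        return HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/banking/products"))
                .header("x-v", "1") // endpoint version header used by the Standards (value assumed here)
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildProductsRequest();
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending the request is then a one-liner with `java.net.http.HttpClient`; the sketch stops at request construction so it can be inspected without network access.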
Once again it has been a pretty big fortnight as we dove deep into the Models built in Sprint #1 and initiated the production of Codegen. While we haven’t quite reached where we wanted to be by now, we are pretty happy with the progress and have now made available Product API focused samples for a Client, Server Stubs (using Spring) and a Model Holder.
For the adventurous, these are available directly from source now, and we have added a task to Sprint #3 to publish Docker containers and produce a quick showcase video. Within Sprint #3 we will use these samples as input templates for the ongoing code generation work currently occurring.
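The samples above are Spring-based; as a framework-free sketch of the shape a generated server stub takes (an interface per resource, with implementations supplying the data), consider the following. All names here are illustrative only and do not reflect the actual generated artefacts.

```java
import java.util.List;

// Illustrative shape of a generated server stub: an API interface
// plus a concrete implementation. Names are hypothetical.
interface BankingProductsApi {
    List<String> listProductIds();
}

class InMemoryBankingProductsApi implements BankingProductsApi {
    private final List<String> ids;

    InMemoryBankingProductsApi(List<String> ids) {
        this.ids = ids;
    }

    @Override
    public List<String> listProductIds() {
        return ids;
    }
}

public class ServerStubSketch {
    public static void main(String[] args) {
        BankingProductsApi api =
                new InMemoryBankingProductsApi(List.of("everyday-account", "term-deposit"));
        System.out.println(api.listProductIds());
    }
}
```

In the real stubs, the framework wires such an interface to HTTP routes; the separation lets Data Holders plug in their own backing implementation.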
With Easter upon us we completed a retrospective, and the final report is available in the Sprint #2 documentation. All up, 30,000 (!!) lines of additional code have now been added to our repositories in Sprint #2 alone, and we ticked over 300+ commits across our repositories this sprint. We suspect this will only accelerate further as we progress.
Well, it has been a pretty full-on fortnight as we completed our first major sprint, but we are happy to report that after quite a few long days we achieved our major target: defining the Standards in Java Models and generating a swagger specification that is mostly identical to the Standards swagger. We completed a retrospective, and the final report is available in the Sprint #1 documentation; we will look to finalise Sprint #2 planning on Monday.
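In miniature, the "Standards as Java Models" approach looks like plain model classes whose structure mirrors the specification. The field names below follow the published BankingProduct schema but are an illustrative subset, and the annotations and tooling that actually drive swagger generation are omitted.

```java
// Illustrative subset of a BankingProduct model as a plain Java class.
// The real Models carry metadata/annotations from which the swagger
// specification is generated; this sketch shows only the data shape.
public class BankingProduct {
    private final String productId;
    private final String name;
    private final String description;
    private final String brand;

    public BankingProduct(String productId, String name,
                          String description, String brand) {
        this.productId = productId;
        this.name = name;
        this.description = description;
        this.brand = brand;
    }

    public String getProductId() { return productId; }
    public String getName() { return name; }
    public String getDescription() { return description; }
    public String getBrand() { return brand; }
}
```

Keeping the Models as the single source of truth means the swagger output can be diffed against the Standards swagger, which is exactly the "mostly identical" check described above.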
As communicated previously we have been working towards an initial set of feedback requests. The first two of these are now published and open for comment (see title links) on the GitHub engineering issues list.
Our intention is to have these formally submitted as Decision Proposals following an initial feedback period. Our target feedback window is 14 days, but we are open to extensions when “it makes sense”.
The Engineering Working Group has had a busy fortnight with extensive planning and definition of goals and deliverables, along with lower-level task generation for Sprints #1 to #3. In addition, Fei Yang joined the team, providing additional engineering capability.