About this report
The following report gives feedback to assist assessors with general issues and trends that have been identified during external moderation of the internally assessed standards in 2025.
It also provides further insights from moderation material viewed throughout the year and outlines the Assessor Support available for Digital Technologies.
Please note this report does not introduce new criteria, change the requirements of the standard, or change what we expect from assessment.
Insights
92004: Create a Computer Program
Performance overview
This standard involves developing a computer program to perform a task. The program needs to include selection and iteration, use data stored in a collection, and be documented with comments. Evidence of both testing and debugging is also required to show that the program works as expected.
Evidence that met the requirements of the standard:
- Showed the code and either allowed the code to be run or showed the program running as a screencast.
- Included specific inputs that had been tested and the results of the testing, including expected results and actual results.
- Gave examples of debugging as before-and-after screenshots, or as descriptions of changes made to code, either to fix issues in the program or to improve the program as a result of testing.
Practices that need strengthening
To achieve, a program needs to use all the techniques listed in Explanatory Note (EN) 2. The main issues seen are the absence of an iterative structure or loop coded by the student and the absence or trivial use of a collection.
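As an illustration only (the standard does not prescribe any particular program or task), a minimal Python fragment using all three of these techniques might look like the following. The quiz-scores scenario is hypothetical.

```python
# Illustrative only: a hypothetical program showing a collection,
# a student-coded loop, and selection working together.
scores = [72, 45, 88, 91, 30]   # collection: a list of quiz scores

total_passed = 0
for score in scores:            # iteration: a loop coded by the student
    if score >= 50:             # selection: conditional logic
        total_passed += 1

print(f"{total_passed} of {len(scores)} students passed")
```

A loop that only exists inside a library call, or a collection that is created but never iterated over, would not demonstrate these techniques in the same way.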
Testing evidence will show specifically which expected cases have been entered into the program and the outcomes of these tests. Good practice suggests that a final test of completed code will show that the program achieves the expected task.
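One way to record this evidence, sketched here with a hypothetical ticket-price task, is to list each expected case alongside its expected and actual results:

```python
# Hypothetical example: expected cases for a ticket-price function,
# with expected and actual results recorded side by side.
def ticket_price(age):
    if age < 16:
        return 5.00
    return 9.50

# (input, expected result) pairs chosen before running the program.
expected_cases = [(10, 5.00), (16, 9.50), (40, 9.50)]

for age, expected in expected_cases:
    actual = ticket_price(age)
    outcome = "pass" if actual == expected else "FAIL"
    print(f"age={age} expected={expected} actual={actual} -> {outcome}")
```

The same information can equally be presented as a table of screenshots; what matters is that both the expected and the actual results are visible.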
Students are also expected to provide evidence of debugging, showing the errors or issues identified during testing and how they were fixed or improved. Debugging should be based on actual testing by running the code, rather than incremental development across multiple versions (which does not demonstrate debugging on its own).
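Debugging evidence can be as simple as a before-and-after pair with a note on the fault found while running the code. A hypothetical example:

```python
# Hypothetical debugging evidence: a fault found by running the code,
# with the before and after versions and the reason for the change.

# BEFORE: `if score > 50` reported a score of exactly 50 as "Fail",
# when the task required 50 to pass (found by testing input 50).

def grade(score):
    if score >= 50:    # AFTER: changed > to >= so the boundary passes
        return "Pass"
    return "Fail"

print(grade(50))
```

The key elements are the error observed during testing, the change made, and the reason for it.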
For game programs, students should prepare a test plan outlining the tests they intend to run and the expected results (including boundary and invalid cases), to ensure that the game meets the task requirements. They should then provide clear evidence showing whether the game passes each test case or else a video showing that the game works as expected.
For Merit, boundary testing is required. This involves selecting data that will test each boundary. Ideally, testing will include values on, above, and below each boundary. Evidence should show what data has been tested and what the results are of each test. It is expected that the program, when run, will behave correctly on and around each boundary.
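A minimal sketch of this, assuming a hypothetical pass mark of 50, tests values below, on, and above the boundary:

```python
def grade(score):
    # Pass mark boundary at 50 (hypothetical task rule).
    return "Pass" if score >= 50 else "Fail"

# Boundary testing: below, on, and above the boundary of 50.
for score, expected in [(49, "Fail"), (50, "Pass"), (51, "Pass")]:
    actual = grade(score)
    print(f"score={score} expected={expected} actual={actual}")
```

Evidence in this form makes it immediately clear which boundary has been tested and that the program behaves correctly around it.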
Also, for Merit, comments should be used to clarify the purpose of code sections, particularly conditional logic and loop structures. Comments that simply describe what a code block does at a superficial level, e.g. “run the quiz”, may not be sufficient to demonstrate understanding or intent. Comments should instead communicate reasoning and purpose. Line-by-line commenting is not required, and should be avoided unless it adds meaningful clarity.
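To illustrate the difference with a hypothetical example: the first comment below only restates the code, while the second explains the reasoning behind it.

```python
scores = [40, 55, 70]

# Superficial (restates the code): "loop through scores and add to total"
# Purposeful (explains intent): accumulate only passing scores so the
# total reflects students who met the pass mark, not every attempt.
passing_total = 0
for score in scores:
    if score >= 50:
        passing_total += score

print(passing_total)
```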
Excellence requires testing to show that the program works correctly when invalid data is input, and the program should behave robustly when run.
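A minimal sketch of this kind of robustness, assuming a hypothetical program that reads an age as text: invalid entries are rejected rather than crashing the program.

```python
def parse_age(text):
    """Return the age as an int, or None if the input is invalid."""
    try:
        age = int(text)
    except ValueError:           # non-numeric input, e.g. "abc"
        return None
    if age < 0 or age > 120:     # out-of-range values are also invalid
        return None
    return age

# Invalid-data tests expected at Excellence: none of these should crash.
for entry in ["abc", "-5", "200", "17"]:
    print(entry, "->", parse_age(entry))
```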
Assessors are asked to provide programs in runnable format, such as .py files for Python. Alternatively, a video of the program running could be provided in addition to readable code.
Issues specific to NCEA-Auto assessment
For schools using the NCEA-Auto assessment, students should write and test their programs within the system to ensure that evidence of testing and debugging is captured in the generated report.
Teachers are still expected to verify the work against the standard’s criteria. At Merit level, the use of succinct, descriptive variable names and comments that clarify the purpose of code sections needs to be confirmed by the teacher. Explicit evidence of the student running boundary testing is also required, even if the program passes the system’s automated testing. Teachers should verify which test runs demonstrate boundary testing.
The same requirement applies to invalid testing for Excellence, where explicit evidence must be presented and verified by the teacher.
92005: Develop a digital technologies outcome
Performance overview
This standard involves describing and developing an outcome using digital tools or techniques, and testing the outcome to show that it performs as intended. The complexity of the outcome should be appropriate for Level 6 of the New Zealand Curriculum.
Evidence that met the requirements of the standard:
- Described clearly the features and functions of the outcome to be developed that would enable the outcome to meet its intended purpose.
- Used tools or techniques beyond the basics within the chosen software.
- Included evidence of testing the outcome against the set requirements and specifications to ensure it was fit for the purpose.
Strong evidence arose when individual projects allowed for a range of purposes and outcomes.
Practices that need strengthening
A Digital Technologies outcome needs to be described in terms of purpose, user requirements, and specifications. Ideally these descriptions will be personalised by each student. The final outcome will be tested to confirm that it works and that it meets the requirements and specifications.
Files submitted should include the outcome in editable format along with evidence of testing. Testing should specify what has been tested, the result of each test, and changes or improvements made. A log of the development process is not required. Tools or techniques could be documented by the student or the assessor for less common outcomes.
For Merit, evidence needs to make clear what improvements have been made as a result of the student’s own testing. Logging incremental development is different from testing to improve fitness for purpose. Where students have requested feedback from other people as part of their development process, this often limits their evidence of improvements made as a result of their own testing. Ideally students will show before and after screenshots for each improvement made. Conventions for the outcome’s domain need to be followed, and could be documented by the student or the assessor.
For Excellence, it is expected that the outcome is fully fit for purpose once improvements from trialling with others have been made. It is also expected that the requirements and specifications described by the student are all met. Evidence needs to support how the tools or techniques have been used optimally, and could be documented by the student or the assessor.
Supporting information about conventions and optimal use can be found in the Internal Assessment Activities and related Assessment Schedules on the NCEA website.
91896: Use advanced programming techniques to develop a computer program
Performance overview
This standard involves the development of a program to perform a specified task. Evidence must show the use of at least two advanced techniques during the development. The standard requires a focus on testing and debugging the program and on documenting the code with comments.
Strong evidence arose when code could be run, or screencasts confirmed the function of the program, and when testing evidence was presented in an organised way, showing each input being tested and whether the expected outputs were generated.
Practices that need strengthening
This standard requires the use of at least two advanced techniques. For a program developed using a game engine, responding to GUI events typically counts as one technique. Another technique is required to achieve.
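As a hedged sketch only (the qualifying advanced techniques are listed in the standard's explanatory notes), a game that already responds to GUI events might add a second technique such as a function with parameters and a return value operating on a collection of records. The player-record scenario below is hypothetical:

```python
# Hypothetical second technique: a parameterised function with a
# return value, operating on a list of dictionaries (player records).
players = [
    {"name": "Ari", "score": 120},
    {"name": "Mele", "score": 95},
]

def top_player(records, minimum):
    """Return the highest-scoring player at or above `minimum`, or None."""
    best = None
    for record in records:
        if record["score"] >= minimum and (
            best is None or record["score"] > best["score"]
        ):
            best = record
    return best

print(top_player(players, 100))
```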
The new version of this standard requires boundary testing at Merit level. Boundary values or cases, particularly in a game setting, need to be clearly specified and tested. Where students are developing their own games, teachers need to ensure that at least one boundary exists.
For Merit, conventions appropriate to the programming language should be followed. For less common languages, conventions could be documented by the student or the teacher. Comments need to describe code function and behaviour at key points throughout the program. Line-by-line commenting is not required.
For Excellence, comprehensive testing of a range of scenarios for each input is expected, including expected, boundary, and invalid cases.
91900: Conduct a critical inquiry to propose a digital technologies outcome
Performance overview
This standard requires students to undertake an inquiry process broad enough to allow for a range of possible digital technologies outcomes. Research and analysis undertaken will allow for the inquiry focus to be refined before a proposal is written. Risks and mitigations are explained, and the proposal is related back to the research.
Strong evidence arose when students began by documenting a range of issues of concern to them. For example, one task began with this statement: “You have the opportunity to develop your own inquiry focus based on your genuine care and curiosity about the world around you.”
Practices that need strengthening
There are key differences between this inquiry standard and the corresponding Level 2 standard.
At Level 3, the inquiry focus must be refined before proposing an outcome. Beginning with a broader inquiry focus and open questions enables explorations of a range of possible digital technologies outcomes. The outcome should emerge from the inquiry, rather than being predetermined.
For Achieved, relevant risks should be identified and ways to mitigate these risks explained. More than one risk is expected. For example, the risk of not meeting the project deadline could be mitigated by using planning tools and setting milestones.
The proposed outcome must include sufficient detail to support further development, such as the purpose, end users, physical and functional requirements, technical specifications, and any required resources such as hardware and software.
For Merit, different perspectives should be compared and contrasted. These may include viewpoints from potential end users, from individuals of different age groups or educational or cultural backgrounds, or from people with positive and negative attitudes towards the issue.
Possible future opportunities and their impacts need to be discussed. This discussion should draw on the findings from the inquiry research and may suggest (for example) how the outcome could be expanded in the future to address additional needs or adapted for use in different environments.
Strengths and weaknesses should relate to the proposed outcome rather than the inquiry process. More than one of each is expected.
Evidence of effectively managing milestones and inquiry progression is also required at Merit level. A timeline on its own, without indication of how it was followed or adjusted, is not sufficient.
For Excellence, ideally all sources used should be critiqued for bias and inaccuracies. Findings from the research should also be critiqued for accuracy, relevance, reliability, and significance.
91901: Apply user experience methodologies to develop a design for a digital technologies outcome
Performance overview
This standard involves developing a range of designs that apply user experience methodologies. One design is then chosen for modelling and testing so it can be developed further. Relevant implications are explained and addressed.
Strong evidence arose when it was clear how the user experience methodologies had influenced the range of design ideas.
Practices that need strengthening
This standard differs from other Level 3 standards as, at Achieved level, relevant implications must be explained in terms of how each one relates to the design and how it can be applied. At Merit level, students evaluate how each implication has been addressed.
For this standard, user experience methodologies should be investigated and applied. Evidence should demonstrate a correct understanding of the selected methodologies.
For Achieved, at least three initial designs are required, with one clearly chosen for further development. Developing a single design incrementally or iteratively is not sufficient. A final design must be presented with enough detail to clearly demonstrate the intended outcome.
Assessor Support
NZQA offers online support for teachers as assessors of NZC achievement standards. This support includes:
- Exemplars of student work for most standards
- National Moderator Reports
- Online learning modules (generic and subject-specific)
- Clarifications for some standards
- Assessor Practice Tool for many standards
- Webcasts
Exemplars, National Moderator Reports, clarifications and webcasts are hosted on the NZC Subject pages on the NZQA website.
Online learning modules and the Assessor Practice Tool are hosted on Pūtake, NZQA’s learning management system. You can access these through the Education Sector Login.
Log in to Pūtake
We may also provide a speaker to present at national conferences on request from national subject associations. At the regional or local level, we may be able to provide online support.
Please contact assessorsupport@nzqa.govt.nz for more information or to lodge a request for support.