Embracing AI in student assessments

27 August 2025

By NZQA Chief Executive, Dr Grant Klinkum

NZQA has been using AI tools to help improve the quality, consistency, and speed of our work since 2019.

With recent Government announcements on proposed changes to senior secondary qualifications, it’s timely to share NZQA’s existing AI work and our future intentions.

In May this year, over 55,000 students sat the writing literacy assessment, which is usually attempted by students in Year 10. NZQA used an AI-powered “Automated Text Scoring” tool to mark these assessments, letting us return results to students 3.5 weeks earlier than we could last year. This gives more time for students who did not achieve to prepare for the next assessment opportunity.

Using AI to mark high-stakes assessments is a bolder step than most other countries have taken. We have taken a careful, gradual approach, starting with small trials and building up to larger pilots.

In 2024, following an earlier small-scale pilot, we tested the AI tool on 36,000 writing samples and found it was just as reliable as human markers.

To be extra sure, experienced human markers double-checked over a third of the May 2025 results - especially those close to the achievement boundary. If the AI and human scores didn’t match, we used the human mark.

The AI algorithms we use are trained specifically on thousands of previous writing assessment responses. This, rather than relying on generic AI tools, is the strength of NZQA’s approach.

We expect AI will increasingly become part of our suite of digital tools applied to a wider range of tasks, much like digital banking has become part of financial services.

As we move forward, we’ll continue to focus on privacy, security, ethical guidelines, avoiding bias, quality assurance and data sovereignty. We are fully aligned with the Government’s guidance and aspirations for adopting AI safely and responsibly.

To support this, we are growing our own expertise further to ensure the technologies we build or adopt are customised to New Zealand’s cultural context.

This year, we are planning to test AI marking - using past digital exam data - for selected end-of-year exams. We want to see if the same AI that works well for a generic writing assessment can also be used for a wide range of subjects.

We’re also exploring how AI could help with exam development. For example, it could create a first draft of an exam paper for our team of experts to finalise, or help check the quality of human-developed exams.

Broader rollouts will only occur where AI proves to be as good as - or better than - human performance.

AI will not replace our subject assessment specialists. It will act as a member of the team, helping make decisions faster, more consistently, and based on evidence. In the foreseeable future, this is not about cost-cutting - it’s about improving quality, timeliness and consistency.

In 2021, NZQA developed an AI solution to detect exam breaches, and we continue to build on and refine these capabilities.

Soon after that, we built an AI solution to support customers seeking information from our website. We are now enhancing it to provide tailored responses to a range of questions by intelligently navigating NZQA website content. Our AI tool handles over 250,000 student and customer interactions each year.

A working assumption is that AI large language models (LLMs) will improve exponentially over the coming years.

This will enable us to use AI as an integrated tool inside all aspects of managing a robust national qualifications system, including our work to support internal and external assessment in any new senior secondary school qualifications.

If used responsibly and cautiously, AI has the potential to strengthen our education system. With our deep expertise in assessment, we are well positioned to lead this work with a strong focus on student success, care and transparency.