Personalised, scalable, and academically rigorous
With the rise of artificial intelligence tools like ChatGPT, Copilot, and Gemini, traditional text-based assessments are increasingly at risk of academic integrity breaches. Viva-style oral assessments offer a promising alternative by allowing students to demonstrate their understanding verbally. However, scaling vivas for large cohorts presents challenges of resource allocation, consistency, and accessibility.
iViva is an AI-powered system designed to enhance viva-style assessments by generating personalised questions, supporting multimodal practice, and providing immediate feedback. By automating key aspects of the viva process while maintaining human oversight, iViva aims to make vivas more scalable, consistent, and accessible for both educators and students.

Build personalised viva questions that align with rubrics and standards.

Practice viva responses with guided prompts and immediate feedback.

Coordinate scalable, ethical assessment workflows with human oversight.
Guided practice space with text and voice, giving students iterative feedback to build confidence before the viva.
A workspace for staff to efficiently generate rubric-aligned viva questions tailored to submissions and assessment standards.

Primary Contact — Course Director, Bachelor of IT (FSE)

Co-Lead — Course Director, Bachelor of Commerce (MQBS)

Teaching & Leadership Academic (FMHHS)

Professor of Educational Technologies (FoA)

Learning Design and Production Lead (FSE)

Project Administrator

UX/UI Designer and Development Lead

Software Developer

Software Developer

Software Developer

Software Developer
From project initiation to beta testing — follow iViva's journey month by month.
We formalised the project scope and objectives, confirmed collaboration with the AI development team, and identified two core user workflows: Student Practice and Question Generation. Initial feasibility discussions around AI-supported viva assessment laid the groundwork for everything that followed.
A multidisciplinary student development team was recruited and onboarded, with formal roles allocated across front-end, back-end, AI integration, UX research, and documentation. We selected our AI model approach, defined the integration strategy, and established the foundational front-end architecture and data flows.
We conducted in-depth interviews with academic staff to surface key concerns and refine our feature set. AI prompt structures were designed with care, and a formal Human Research Ethics Application (HREA) was submitted to ensure our research met institutional standards.
Front-end development commenced alongside the knowledge ingestion pipeline. We integrated Azure-based secure data storage hosted in Australia and began early structured testing of our AI models — validating core prompt logic ahead of the next build phase.
Regeneration logic was implemented with a hybrid notes system supporting both global and per-item instructions. Transcript handling and an AI-assisted review interface were integrated, the Help & Guides knowledge base was developed for students and staff, and internal stress testing covered file handling, permissions, and AI edge cases.
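The hybrid notes model described above — one global instruction that applies to every regenerated question, plus optional per-item overrides — can be sketched roughly as follows. The class and field names here are illustrative assumptions for the sake of the example, not iViva's actual implementation.

```python
# Illustrative sketch (not iViva's actual code): merging a global
# regeneration note with optional per-question notes.

from dataclasses import dataclass, field


@dataclass
class RegenerationNotes:
    """Hybrid notes: one global instruction plus per-item overrides."""
    global_note: str = ""
    per_item: dict[str, str] = field(default_factory=dict)

    def instructions_for(self, item_id: str) -> str:
        """Combine the global note with any note specific to this question."""
        parts = [self.global_note, self.per_item.get(item_id, "")]
        return " ".join(p for p in parts if p)


notes = RegenerationNotes(
    global_note="Use plain language suitable for first-year students.",
    per_item={"q2": "Focus on the testing strategy in section 3."},
)
print(notes.instructions_for("q2"))
# "Use plain language suitable for first-year students. Focus on the testing strategy in section 3."
```

Under this scheme, a question with no per-item note simply falls back to the global instruction, which is what makes a single "regenerate all" action and targeted per-question tweaks coexist cleanly.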
The team conducted comprehensive internal alpha testing across file input robustness, AI question personalisation, summative workflow integrity, accessibility features, and role-based permissions. Edge cases were identified and documented, the UI was refined based on usability findings, and improved error handling and loading states were shipped.
Our testing cohort expanded to include real teaching staff and students. We collected qualitative and quantitative feedback, finalised the Help & Guides support documentation, and prepared mid-grant progress documentation — marking a significant milestone on the path toward full deployment.