Empowering Viva Assessments

Personalised, scalable, and academically rigorous

The Problem

With the rise of artificial intelligence tools such as ChatGPT, Copilot, and Gemini, traditional text-based assessments are increasingly at risk of academic integrity breaches. Viva-style oral assessments offer a promising alternative, allowing students to demonstrate their understanding verbally. However, scaling vivas for large cohorts raises challenges of resourcing, consistency, and accessibility.

Our Solution

iViva is an AI-powered system designed to enhance viva-style assessments by generating personalised questions, supporting multimodal practice, and providing immediate feedback. By automating key aspects of the viva process while maintaining human oversight, iViva aims to make vivas more scalable, consistent, and accessible for both educators and students.

Who Is iViva For?

Educators

Build personalised viva questions that align with rubrics and standards.

Students

Practise viva responses with guided prompts and immediate feedback.

Support Teams

Coordinate scalable, ethical assessment workflows with human oversight.

Two Core Functions

Student Practice Tool

A guided practice space supporting text and voice responses, giving students iterative feedback to build confidence before the viva.

Personalised Question Generation

Lets staff efficiently generate rubric-aligned viva questions tailored to student submissions and assessment standards.

Our Team

Charanya Ramakrishnan

Primary Contact — Course Director, Bachelor of IT (FSE)

A/Prof. Prashan Karunaratne

Co-Lead — Course Director, Bachelor of Commerce (MQBS)

A/Prof. Josephine Paparo

Teaching & Leadership Academic (FMHHS)

Prof. Matt Bower

Professor of Educational Technologies (FoA)

Matthew Robson

Learning Design and Production Lead (FSE)

Signe Duff

Project Administrator

Thanh Thanh Vo-Pham

UX/UI Designer and Development Lead

Minh Vy Ha

Software Developer

Scott Xu

Software Developer

James Kim

Software Developer

Thien Duc Trac

Software Developer

Progress

From project initiation to beta testing — follow iViva's journey month by month.

September 2025

Project Initiation & Strategic Alignment

We formalised the project scope and objectives, confirmed collaboration with the AI development team, and identified the project's two core functions: Student Practice and Question Generation. Initial feasibility discussions around AI-supported viva assessment laid the groundwork for everything that followed.

October 2025

Team Formation, Architecture Design & System Planning

A multidisciplinary student development team was recruited and onboarded, with formal roles allocated across front-end, back-end, AI integration, UX research, and documentation. We selected our AI model approach, defined the integration strategy, and established the foundational front-end architecture and data flows.

November 2025

Stakeholder Consultation & Ethics Submission

We conducted in-depth interviews with academic staff to surface key concerns and refine our feature set. The initial AI prompt structures were drafted, and a formal Human Research Ethics Application (HREA) was submitted to ensure our research met institutional standards.
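
The exact prompt designs are internal, but as a rough sketch, a structured question-generation prompt of this kind might be assembled as below. The interface, field names, and wording are all hypothetical illustrations, not iViva's actual prompts.

```typescript
// Hypothetical sketch of a structured prompt for rubric-aligned viva
// question generation. All names and wording here are illustrative,
// not iViva's actual prompt design.
interface PromptContext {
  unitName: string;          // e.g. the unit the assessment belongs to
  rubricCriteria: string[];  // criteria each question must target
  submissionExcerpt: string; // relevant excerpt of the student's work
}

function buildQuestionPrompt(ctx: PromptContext, count: number): string {
  return [
    `You are assisting an educator in ${ctx.unitName}.`,
    `Generate ${count} viva questions that probe the student's own submission.`,
    `Each question must target one of these rubric criteria:`,
    ...ctx.rubricCriteria.map((c, i) => `${i + 1}. ${c}`),
    `Submission excerpt:`,
    ctx.submissionExcerpt,
    `Return one question per line, tagged with the criterion it assesses.`,
  ].join("\n");
}
```

Structuring prompts around explicit rubric criteria, rather than free-form requests, is one way to keep generated questions traceable back to assessment standards.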

December 2025

Technical Build Phase I

Front-end development commenced alongside the knowledge input ingestion pipeline. We integrated Azure-based secure data storage hosted in Australia and began early structured testing of our AI models — validating core prompt logic ahead of the next build phase.
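
For context, the sketch below shows one common way such an upload could look in Node.js using the official @azure/storage-blob SDK. The container name and environment variable are assumptions, and Australian data residency is a property of the storage account's region (for example, Australia East) rather than anything set in client code.

```typescript
// Minimal sketch of storing a submission in Azure Blob Storage via the
// official @azure/storage-blob SDK. Names are illustrative; data
// residency is determined by the region the storage account is
// provisioned in, not by this client code.
import { BlobServiceClient } from "@azure/storage-blob";

async function storeSubmission(fileName: string, data: Buffer): Promise<string> {
  const service = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING! // account provisioned in an Australian region
  );
  const container = service.getContainerClient("submissions"); // hypothetical container name
  await container.createIfNotExists();
  const blob = container.getBlockBlobClient(fileName);
  await blob.uploadData(data); // uploadData accepts a Buffer in Node.js
  return blob.url;
}
```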

January 2026

Technical Build Phase II & Feature Expansion

Regeneration logic was implemented with a hybrid notes system supporting both global and per-item instructions. Transcript handling and an AI-assisted review interface were integrated, the Help & Guides knowledge base was developed for students and staff, and internal stress testing covered file handling, permissions, and AI edge cases.
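
To make the hybrid notes idea concrete, here is a minimal sketch, with invented names, of how a global instruction and a per-item note could be combined when a single question is regenerated.

```typescript
// Illustrative sketch of a hybrid notes system: one global instruction
// applies to every question, while per-item notes refine individual
// regenerations. All names here are hypothetical.
interface NotesStore {
  global: string;               // applies to all questions
  perItem: Map<string, string>; // questionId -> item-specific note
}

function regenerationInstructions(notes: NotesStore, questionId: string): string[] {
  const instructions = [notes.global];
  const itemNote = notes.perItem.get(questionId);
  if (itemNote) {
    instructions.push(itemNote); // appended last so it can refine the global note
  }
  return instructions;
}

// Usage: both layers of notes flow into the regeneration request.
const notes: NotesStore = {
  global: "Keep questions at second-year difficulty.",
  perItem: new Map([["q3", "Focus on the error-handling section of the report."]]),
};
console.log(regenerationInstructions(notes, "q3"));
```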

February 2026

Alpha Testing

The team conducted comprehensive internal alpha testing across file input robustness, AI question personalisation, summative workflow integrity, accessibility features, and role-based permissions. Edge cases were identified and documented, the UI was refined based on usability findings, and improved error handling and loading states were shipped.

March 2026

Beta Testing (Current)

Our testing cohort expanded to include real teaching staff and students. We collected qualitative and quantitative feedback, finalised the Help & Guides support documentation, and prepared mid-grant progress documentation — marking a significant milestone on the path toward full deployment.