Designing assessment and minimising risks

The new assessment framework will require consideration (and mitigation) of new integrity risks. This page will be updated soon; in the meantime, visit the Teaching & Learning Response to Artificial Intelligence (AI) page for more information.

Assessment design is instrumental in shaping what students learn and how they engage with the learning process. To ensure students can equitably demonstrate their learning, teachers have a responsibility to set well-designed assessment tasks. Moreover, assessment and curriculum design form a key part of prevention strategies for maintaining academic integrity. 

Unit coordinators must consider how their assessment design facilitates student learning and how the learning can be assured. Ensuring that students meet the learning outcomes and demonstrate the knowledge to be awarded their qualification will require multiple, inclusive and contextualised approaches to assessment. 

You can read more on designing assessment on the Teaching Resources Hub.

Determining the assessment schedule

Assessment categories and types are listed in the Assessment Procedures 2011, Schedule 1. There are five assessment categories: Exams, Skills-based assessments, Submitted work, In-class assessments and Group work. 

The subcategories are: Final exam; In-semester exam; Placements; Skills-based evaluation; Creative assessments/demonstrations; Assignment; Honours thesis; Dissertation; Tutorial quiz, small test or online task; Small continuous assessment; Presentation; Optional assignment or small test; Participation; Group assignment; Group presentation.

Unit of study coordinators are required to specify the assessment regime for their unit of study in Sydney Curriculum (including information about assessment types, weighting, duration and timing) and to publish an online unit of study outline.
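
Purely as an illustration of the kind of information an assessment regime captures, the sketch below models a unit's tasks and runs a basic sanity check that weightings sum to 100%. The data structure, field names and values are hypothetical, not the Sydney Curriculum schema; Sydney Curriculum remains the authoritative record.

```python
from dataclasses import dataclass

# Hypothetical structure: not the Sydney Curriculum schema.
@dataclass
class AssessmentTask:
    name: str
    category: str      # one of the five categories, e.g. "Exams", "Group work"
    weight_pct: float  # contribution to the final mark, in percent
    duration: str      # e.g. "1 hour", "2000 words"
    week_due: int

def check_regime(tasks: list[AssessmentTask]) -> None:
    """Basic sanity check before publishing a unit of study outline."""
    total = sum(t.weight_pct for t in tasks)
    if abs(total - 100.0) > 1e-9:
        raise ValueError(f"Weightings sum to {total}%, expected 100%")

# Illustrative values only.
regime = [
    AssessmentTask("In-semester exam", "Exams", 30, "1 hour", 7),
    AssessmentTask("Report", "Submitted work", 40, "2000 words", 11),
    AssessmentTask("Group presentation", "Group work", 30, "15 minutes", 13),
]
check_regime(regime)  # raises if the weightings do not add up
```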

For Semester 1, 2025, unit coordinators need to determine whether each assessment is secured or unsecured, following the ‘two-lane approach’, and consider the permitted level of AI use. See more on the Teaching & Learning Response to Artificial Intelligence (AI) page.

Identifying risks

Policy requires that unit coordinators review and renew assessments to eliminate or minimise opportunities for students to gain unfair advantage through plagiarism or academic dishonesty. Unit coordinators should review the assessment regime each time a unit is offered, including redesigning assessment tasks to prevent breaches of academic integrity from recurring. Assessment tasks should not be reused in a way that would give some students an advantage, or an opportunity for advantage.

Schedule One of the Academic Integrity Procedures includes a Summary of Assessment Types, Risks and Mitigating Strategies. Each faculty will have its own processes for conducting this risk assessment, and you may wish to use this easy-to-use risk assessment matrix template (DOCX, 32k) to help you reduce risk in your units of study.
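
The DOCX template above is the supported artefact for this process. Purely to illustrate how a likelihood-by-impact matrix works, the sketch below combines two hypothetical 1–3 ratings into a risk level; the scales and thresholds are invented for the example and are not taken from the template.

```python
# A conventional 3x3 likelihood-by-impact matrix. Scales and thresholds are
# invented for this example; they are not taken from the University template.
def risk_level(likelihood: int, impact: int) -> str:
    """Combine two 1-3 ratings (1 = low, 3 = high) into a risk level."""
    if not (1 <= likelihood <= 3 and 1 <= impact <= 3):
        raise ValueError("ratings must be between 1 and 3")
    score = likelihood * impact  # ranges from 1 to 9
    if score >= 6:
        return "high"    # e.g. an unsupervised online quiz relied on for assurance
    if score >= 3:
        return "medium"
    return "low"         # e.g. an invigilated on-campus exam

print(risk_level(likelihood=3, impact=2))  # -> high
```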

Responding to generative AI

Generative artificial intelligence (AI) has created new challenges for educators seeking to design assessments that provide assurance of learning and uphold academic integrity. Assessment reform is necessary to foster productive and responsible ways for students to engage with AI, while also balancing the risks posed by AI. Students must be able to demonstrate disciplinary and graduate AI skills, and the ability to work ethically with such technologies. Assessments must include appropriate checkpoints to validate such learning.

Starting from Semester 1, 2025, the default position is to allow AI in assessment, except for examinations and in-semester tests, unless expressly prohibited by the unit coordinator. Unit coordinators should provide guidance and examples of generative AI use relevant to the assessment or unit of study. Assessment instructions should explain how students can integrate AI outputs into their submissions. Additionally, students should be supported to develop the digital and information literacy skills needed to verify the accuracy of AI outputs and recognise misinformation.

Examples of how to adapt assessment practices to respond to generative AI and other integrity risks:

  • Generally, adjust marking rubrics to privilege critical thinking, accuracy, and the use and integration of sources.
  • For written assessments (e.g. essay, reflection, report), consider ways of assuring attainment of learning outcomes in supervised on-campus settings, such as through draft sessions in tutorials.
  • While turning written tasks into multimodal assessments (e.g. presentation, video, poster) may not significantly improve assurance of learning, it can make them more authentic and motivating.
  • For multimodal artefacts, include a live Q&A session after delivery to verify student knowledge and skills.
  • Scaffold writing tasks as appropriate to support student learning, for example staging the writing process with in-class activities or requiring  submission of drafts to demonstrate process.
  • Relying solely on instructing students to use unit-specific content, recent information, local context, unique data sets, personal reflections, or their own experiences is not enough, as such tasks are easily completed with AI. Providing local and current cases or unique data sets that are not easily replicable is still recommended, however, as it can help prevent contract cheating and collusion.
  • Require assessments to be submitted as Word documents to provide greater metadata for investigating academic integrity breaches (see the metadata sketch after this list). Students should also keep drafts of assignments for up to one year.
  • Require students to submit acknowledgements of the AI tools and outputs they used. Students should retain records of the AI outputs used.
  • Regularly revise assessment topics and questions to prevent collusion, contract cheating, and AI use.
  • Mark student work primarily composed during class time.
  • Conduct live debates, Q&As, panel discussions, and interactive oral assessments to assure knowledge.
  • Consider implementing compulsory weekly work-in-progress submissions to monitor ongoing student engagement.
  • Establish a shared understanding of expectations around AI use and shared responsibility for submissions.
  • For group work, follow the guidelines above for the relevant assessment types.
  • Consider formative peer review of individual contributions to the group work.
  • For skills-based assessments, supervision lowers the risk of AI misuse and other breaches.
  • For unsupervised assessments (e.g. software use), ensure key learning outcomes are also assessed in a supervised environment.
  • Consider reweighting any unsupervised quizzes.
  • All examinations and in-semester tests must be invigilated by University-approved invigilators, either in-person on campus or through online human proctoring. 
  • Online assessments (e.g. quizzes, case studies) are high risk. If these are relied upon to provide assurance of learning, they should be redesigned into a more secure form of assessment, such as interactive oral assessments, viva voces, or other in-class assessments such as a Q&A.
  • Image-based questions are not a sustainable solution as AI advances. Browser plugins that read questions off the LMS and provide AI-suggested answers make online quizzes highly unassured assessments.
  • For in-person exams and tests, the risk is lower, but try to maximise the authenticity of the assessment, for example by allowing devices, using case studies, or permitting certain resources; note that these measures are not 100% cheat-proof.
  • Avoid reusing past exam questions, or substantially rewrite them if reuse is essential.
  • Ensure exams focus on authentic demonstration of skills rather than mere knowledge recall.
  • See the Exams process guide for information on the creation and submission of in-semester and final exams into Canvas. 
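
On the Word-document bullet above: .docx files embed core properties (author, creation and modification times, revision count) that can support an investigation. The snippet below is a minimal sketch of reading those properties with the third-party python-docx package; the filename is hypothetical, and formal investigations remain the role of the Office of Educational Integrity rather than individual markers.

```python
# Minimal sketch: inspect the core properties embedded in a .docx file.
# Requires the third-party package python-docx (pip install python-docx).
from docx import Document

doc = Document("submission.docx")  # hypothetical filename
props = doc.core_properties

print("Author:           ", props.author)
print("Created:          ", props.created)
print("Last modified:    ", props.modified)
print("Last modified by: ", props.last_modified_by)
print("Revision count:   ", props.revision)
```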

Detecting academic integrity breaches

Research shows that individual markers are often the most effective at making initial detections, based on their knowledge of a student’s ability and the relevance of their responses to questions. While there are tools available to assist in this process, academic judgment remains essential when interpreting whether plagiarism or other forms of academic dishonesty (e.g. recycling, peer-to-peer plagiarism or collusion) have occurred.

The Academic Integrity Policy (2022) requires students to submit all text-based written assignments to similarity detection software. We use Turnitin for this purpose. Turnitin searches for matches between the text in student assignments and text sourced from the internet, published works and student assignments previously submitted to Turnitin.
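
Turnitin’s matching algorithm is proprietary, so the sketch below is only a toy illustration of the general idea behind text matching: shared word n-grams between a submission and a source indicate overlapping text. The function names and passages are invented for the example and do not reflect how Turnitin works internally.

```python
# Toy illustration of text matching: count shared word n-grams between a
# submission and a source. Turnitin's actual algorithm is proprietary.
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission: str, source: str, n: int = 5) -> float:
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

src = "assessment design is instrumental in shaping what students learn and how they engage"
sub = "assessment design is instrumental in shaping what students learn in their degree"
print(f"{overlap(sub, src):.0%}")  # fraction of shared 5-grams
```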

With the approval of the Deputy Vice-Chancellor (Education), different similarity-detection software may be used for other types of assessments. Where similarity-detection software is used, it must also be declared in your unit of study outline.

New third-party learning, teaching and assessment technologies must be submitted to the eTools Review Committee in accordance with the Learning and Teaching Policy.

Turnitin released an AI detection tool in April 2023. The University has chosen not to release this tool for broad use due to concerns over its reliability, the potential for false positives, and potential biases. The Office of Educational Integrity has access, however, and will use it alongside other tools when investigating reported suspected breaches.

If you suspect that a student has used generative AI inappropriately in an assessment submission, report it as a case to the Office of Educational Integrity for further investigation. Never submit student work to AI detection software yourself; doing so breaches student privacy and intellectual property.