The AI and Assessment Town Hall on 9 April 2025 provided an important update on how the University is adapting assessment design in response to the growing impact of generative AI.
Led by Professor Adam Bridgeman, Pro-Vice-Chancellor (Educational Innovation), and Professor Danny Liu, Professor of Educational Technologies, the Town Hall detailed the challenges that generative AI presents to our existing assessments, our progress in implementing the new Sydney Assessment Framework, implications for Semester 2 2025, and support available to help staff align existing assessments to new categories and types.
The key priority for Semester 2 is re-mapping your unit’s assessments to the new Sydney Assessment Framework. While most of the technical mapping is being centrally supported in partnership with the faculty teams, your review and input are essential.
Educational designers are re-mapping existing assessments to the new categories and types. You will receive suggested re-mappings for your unit(s) – review and approve them, or request changes. Emails will be sent to unit coordinators as re-mapping is completed during April and May.
If the re-mapping doesn't reflect your unit's intent, you can suggest alternative mappings.
Once approved, central teams will enter the data into Akari on your behalf. You will then follow normal faculty/school processes for unit outline approval.
The University is committed to supporting academics through this transition. A range of resources, tools, and people are available to help you understand and apply the new Sydney Assessment Framework.
Updates, resources, and case studies will continue to be shared via:
- AI in Assessment intranet page
- Teaching@sydney resources on AI
- The AI for Educators website
- Faculty-level communications
Download the AI and Assessment Town Hall presentation (PDF, 3.16 MB).
During the Town Hall, attendees asked questions on a range of themes.
Refer to last year’s Teaching@Sydney article for a comprehensive list of frequently asked questions and answers, including those from November’s Town Hall. On this page, you will also find responses to other common questions raised during the Town Hall. In some instances, questions have been combined with others on the same topic or rephrased for clarity.
How will secure assessments be implemented for online and remote students, particularly in postgraduate programs?
Secure assessment for online students must involve in-person supervision. This may include supervision at partner universities, hospitals, workplaces, or other approved locations. While this is more complex than online invigilation, it is necessary to meet integrity requirements and needs to be considered at a program level – for example, by placing supervised assessment in a capstone unit of a postgraduate course. An exceptions or risk-based process will be available during the transition.
Is a secure assessment required per unit or per program?
The University must assure learning at the program level. Not every unit must include a secure assessment, but programs must have secure assessments at key points to demonstrate achievement of learning outcomes. Ensuring adequate secure assessment for every program and pathway may require assessment and even curriculum redesign in the coming years. For Semester 2, the emphasis is on mapping our current assessments to the new framework so that this work can begin.
Will secure assessments become hurdle tasks, and how should they be weighted within the unit?
In many cases, it will be natural for Secure (Lane 1) assessments to become hurdle tasks – such as when they assure learning at the program level. Such hurdle tasks may become part of “milestone” and “stage gate” points in courses, so it is important that a program-level approach is considered.
How are secure assessments being managed logistically given increased demand (e.g., space, invigilation, scheduling)?
The Exams Office and faculties are collaborating to manage logistical challenges. Some secure assessments will be delivered during semester, including oral exams, practicals, and in-person Q&A. Support for mapping, supervision, and scheduling is ongoing.
Is the University increasing resources to support additional workload for professional and academic staff, especially with the shift to in-person assessments?
Program-level thinking will enable us to plan the volume of secure assessments and has been shown to reduce workload for educators and students. As noted above, ensuring adequate secure assessment for every program and pathway may require assessment and even curriculum redesign in the coming years; for Semester 2, the emphasis is on mapping current assessments to the new framework so that this work can begin. Resources are available to assist educators in this process and in assessment redesign.
Can coordinators still prohibit the use of AI in open (Lane 2) assessments?
No. The option to prohibit AI in unsecured (Lane 2) assessments will be removed from Sydney Curriculum after Semester 1, 2025. From Semester 2, 2025, AI use will be permitted in all open assessments. Educators are, however, encouraged to guide students in how and when to use generative AI; in many cases, they may well recommend that AI not be used.
What should I do if using AI in assessments conflicts with ethics approvals, privacy, or publication policies (e.g., in research-based or WIL units)?
In these cases, students must be instructed not to submit sensitive, personal, health, or unpublished data into generative AI tools. This is not strictly a matter of educational integrity, but of legal, ethical, and research compliance. The University has clarified that content generated by others (e.g. student submissions), proprietary teaching resources, and research data must not be uploaded to public AI tools. A version of the existing University guardrails for staff on the intranet will be published on the Current Students website ahead of Semester 2.
What if students or staff have moral objections to using AI?
These concerns should be acknowledged and discussed. AI is becoming a foundational technology like electricity or the internet, and students will need to understand it to be prepared for the future. Dialogue is encouraged, and ethical concerns can be addressed through learning conversations.
Ethical and moral objections are part of critical engagement. University-wide discussions on responsible use are encouraged and supported.
Can we make a statement that unethical AI use is a breach of academic integrity, similar to contract cheating?
Under the Academic Integrity Policy as it applies from Semester 2, 2025, it will be a breach of academic integrity to use AI to complete or contribute to a Secure assessment without approval, or to complete or contribute to an assessment without appropriate acknowledgement. The Policy also stipulates that putting University teaching or course materials, content generated by another student, IP from external partners, or any person’s personal or health information into a generative AI tool is a breach of integrity. Students should be reminded of their obligation to comply with all University policies during their candidature, including when they are using AI for their studies and assessments. This includes the Academic Integrity Policy, the Student Charter, the Student Discipline Rule and the Acceptable Use of ICT Resources Policy.
The AI section of the Academic Honesty Education Module (AHEM) includes information about responsible use of AI, and it will be updated again ahead of Semester 2. The University has also developed guardrails about the safe and responsible use of AI for staff. A version of these will be published on the Current Students website ahead of Semester 2.
How do I make open assessments less vulnerable to AI misuse and more meaningful for learning?
Open assessments provide opportunities for students to learn and receive feedback on their understanding and the quality of their work. The AI for Educators website gives examples of each Open assessment type. Good Open assessments are those that promote learning and provide students with actionable feedback.
Open assessments can be paired with a Secure assessment if a task involves both assessment for and of learning. For example, a task might involve a presentation followed by Q&A in class. The former is an Open assessment in which students are aided in researching and preparing a presentation, perhaps using AI to help create the slides. The latter is a Secure assessment if carried out in class with suitable supervision.
Are AI-assisted tasks like Canvas quizzes still valuable for learning and marking?
Very much so! Such quizzes fall in the ‘Practice and application’ category and are extremely useful for students to check their understanding and receive feedback on areas of weakness, either before or after class. Students greatly appreciate having such opportunities to cement their knowledge.
How will we detect and respond to AI misuse in assessments, especially if we can’t rely on detection tools?
The focus is shifting from detecting cheating to detecting learning. Secure assessments are designed to confirm the presence of learning. Educators are encouraged to guide students toward helpful AI use and design assessments that reveal whether learning has taken place.
If, in their academic judgment, a marker suspects that a student’s assessment shows signs of AI use that has not been acknowledged by the student, they can and should report this to the Office of Educational Integrity via the reporting form, and the OEI will conduct a review. Not appropriately acknowledging AI use is a breach of academic integrity.
Will the University update its definition of academic integrity in the age of AI?
The Academic Integrity Policy has undergone extensive review in relation to AI for implementation in Semester 2. Integrity now focuses on securing and demonstrating learning outcomes. Use of AI must be transparent, responsible, and aligned with assessment purpose. Secure assessments allow equitable and enforceable control over AI use, while open assessments require educator guidance and student accountability.
The University's approach aligns with the Australian AI Ethics Principles and defines integrity in terms of both secure verification and relevant learning.
Can students just cite AI use and avoid academic integrity breaches?
The updated Academic Integrity Policy stipulates that unit coordinators may specify in the unit of study outline or assessment instructions what form of citation/acknowledgement is required in relation to AI. This could include requiring a log of students’ AI inputs and outputs, where appropriate.
If no specification is made, the Policy provides that students must acknowledge the use of generative AI tools by listing: the name and the version of the tool used; the publisher; the uniform resource locator (URL); and a brief description of the context in which the tool was used. For example, an acknowledgement might read: ‘Microsoft Copilot (GPT-4, Microsoft, https://copilot.microsoft.com) was used to brainstorm the structure of this report.’
Additionally, simply citing AI does not absolve students of other integrity responsibilities outlined in the Policy. The Policy provides that students remain accountable for the accuracy and originality of their work, including any errors or fabricated content produced by AI.
What is being done to address contract cheating alongside AI misuse?
Contract cheating remains a very serious issue. It is treated differently from AI use, which is increasingly embedded into productivity tools and workplace practices.
The University’s Office of Educational Integrity has seen evidence of contract cheating providers updating their marketing and service offerings for the era of AI, with many likely to harness AI to complete student work. Any staff member can report suspected contract cheating to the Office of Educational Integrity, who will conduct a thorough review. Severe penalties are applied in cases where contract cheating is substantiated.
The Office of Educational Integrity continues to educate students on the risks and consequences of engaging with contract cheating services, and works with faculties and units across the University to detect and disrupt these services where possible.
Are students being supported in learning how to use AI ethically and effectively?
Students have access to a dedicated Canvas site, co-designed and written with students, which outlines responsible and effective AI use. Activities on using AI have been introduced into transition units, and the Library holds regular sessions for students. A mandatory module is not currently planned, as it may be less effective than in-class activities led by educators. Staff are encouraged to model responsible AI use and encourage student engagement through discussions, low-stakes tasks, and safe experimentation with AI.
There were lots of questions around the workload involved in securely assessing students: has the University considered that many of its issues are the result of its unsustainable expansion (in terms of both curriculum complexity and student numbers)?
The changes in our Assessment Framework and integrity policy reflect the need for our courses to both assure learning and be relevant to contemporary technologies such as generative AI. Universities across the world, both big and small, are considering or implementing similar approaches to ours. The volume and placement of Secure assessments should be considered at a program rather than at a unit of study level. As shown elsewhere, program-level design can reduce the volume of assessment and the workload for students and staff.
Are these changes being driven by TEQSA?
TEQSA helps to ensure that higher education providers are compliant with relevant legislation, including the Higher Education Standards Framework (Threshold Standards) 2021. This legislation requires providers to assure course (i.e. program) learning outcomes.
TEQSA has not yet issued prescriptive guidance but encourages institutions to demonstrate robust approaches to academic integrity and assessment in the age of AI. The University’s two-lane approach is featured as good practice in TEQSA’s Gen AI strategies toolkit. TEQSA has indicated that demonstrating assurance of learning in the age of Gen AI will be a strong feature of future guidance.
Will changes to unit outlines or assessment types still require normal approval processes?
Yes. Any substantial changes will go through standard faculty-based approval mechanisms and all unit outlines will need to be approved as normal.
How will the new assessment types map to special considerations outcomes? Will there be a new matrix?
Schedule 3 in the Assessment Procedures details this mapping. The new matrix mapping the new assessment types to special consideration outcomes has been approved and will be published shortly on the intranet. Educational designers and academic support teams are already using the draft matrix to guide implementation, and it will be available to all staff for consistency in applying special consideration procedures.
How are we addressing equity concerns in student access to AI tools?
Only University-provided AI tools, like Microsoft Copilot, may be required in assessments. This ensures that no student is disadvantaged due to cost or access. Equity considerations under the Higher Education Support Act (HESA) also prohibit mandating external paid AI tools.
Will AI ethics be explicitly taught to students?
AI ethics are embedded in learning conversations and supported by the University's adherence to Australia's AI Ethics Principles. Students are encouraged to critically engage with ethical implications of AI use in their discipline.
What’s the long-term vision for assessment at the University in the age of AI?
The University envisions an evolving approach where assessment continuously adapts to technological developments. Secure assessments will provide checkpoints for verifying learning, while open assessments encourage authentic engagement with disciplinary content and AI literacy. Program-level design will ensure both integrity and relevance.
Long-term reform includes embedding AI capabilities across disciplines, evolving heat maps and assessment plans, and focusing on feedback, authenticity, and sustainable assessment.
How is promoting AI compatible with the University’s sustainability goals?
The University has a responsibility to ensure its graduates are equipped to be leaders in contemporary society. As generative AI becomes ubiquitous in the tools, services, and processes that we use in our work and lives, the University needs to continue to develop ethical and productive leaders who can positively influence areas such as policy and software development to ensure that environmental and other impacts are at the fore. Educators are encouraged to use their expertise in these areas to engage students in discussions on all of these matters.
Do we need to have a set proportion between Secure and non-secure lane assessments?
This question is best considered at the program level, not at the unit of study level. Programs will need to have the proportion of Secure assessments needed to ensure the progression of students through them and the assurance of learning at each stage. This is likely to differ substantially according to the nature of the course and the disciplines involved.
How do we weight Lane 1 vs Lane 2 assessments, given that some fraction of students will use a quick AI route to generate Open assessment submissions without much learning involved? Can Open assessments have any significant weight?
Marks for Open assessments are part of the feedback students receive on the quality of their understanding and work. If students do not actually learn through Open assessments, they will not succeed in the Secure assessments in their program and they will not graduate.
Re the ‘no middle ground’ thinking with these lanes, what about assessments that combine secure and open elements? E.g. writing a report (open) on data generated as an assessment task in a lab (secure)?
The two parts of the task described assess different learning outcomes, would probably need different academic adjustments and forms of special consideration, and have different integrity settings. They should therefore be described as two assessments: (1) writing a lab report using suitable technologies and software (Open), and (2) a Q&A in the laboratory (Secure).
Can you walk us through a practical two-lane approach in a subject without a programmatic approach to assessment design? (I.e. programmatic is something coming... but I'm keen to apply this approach now, at a subject level.)
For Semester 2, the primary task is to re-map current unit assessments to the new framework. As this is being done, course and program coordinators should begin to consider where pathways lack sufficient secure assessment and equally where there is too much. Responsibility for the security of the degree lies at the faculty/course/discipline level rather than at the unit level.
Unit coordinators are asked to engage in ensuring the re-mapping process is accurate, and then in the discussions needed to plan at the program level. Assessment re-design at the unit level to address program-level assurance is not requested – it would require additional work and may need to be re-done later.
Does the 2-lane approach mean that all Lane 1/'secured' assessments will become hurdle tasks?
This will depend on the program. We may see more hurdle tasks being introduced but this should be planned at the program level.
Essays are "open", but surely there is a difference between an essay that is developed over time through smaller tasks and then discussed in an oral exam and an essay that is not developed and discussed in this way. Is another lane needed?
If a task actually involves several sequenced activities, then it should be described as separate assessments. This helps to clarify for students what is required, ensures the correct adjustments are applied if appropriate, ensures students receive formative feedback and ensures that integrity is correctly represented. For example, a writing task that covers most of a semester might involve (1) a literature review, (2) in class writing tasks, (3) a draft, (4) submission of the final document and (5) an in-class oral. Activities (1), (3) and (4) are Open whilst (2) and (5) are Secure.
How can we ensure that lane 1 assessment is secure for students who have scheduling adjustments requiring us to run their assessment at a different time than their peers?
Any adjustments or forms of special consideration must not reduce the integrity of the Secure assessment. Secure assessments should be planned at the program level so that the volume is suitable and their integrity can be assured.
Are lane two assessments formative only, or can they still be summative?
In most units, Open assessments will be for marks with the grade reflecting the level of understanding and quality of the work and functioning as part of the feedback the student receives. The function of Open assessments will depend on the role they play in the programs the units contribute to. If ‘summative’ refers to the program, then Open assessments are most naturally formative. If ‘summative’ refers to the individual unit, then Open assessments may be formative or summative.