GRASP

Re-negotiating academic integrity in the age of GenAI writing


February 20th, 2026, by Sascha Szpilewski Tag(s): GenAI, Academic writing

As a former HDR student, I never experienced writing as merely a way to transcribe my understanding; it was an integral part of how I developed that understanding in the first place. Knowledge emerged and took shape not only through the research I conducted but through the repeated act of rewriting it, moving slowly from awkwardly framed paragraphs toward clearer expression, and thus deeper understanding. My scholarly identity evolved through this slow, often uncomfortable practice of thinking in words.

Now, generative AI (GenAI) sits deeply entangled within that process, ready to autocomplete, correct, restructure, or “improve” my sentences at the click of a button. As Sarah Eaton and colleagues (2025) argue, we have now entered the post‑plagiarism era, in which the boundaries between human and GenAI contributions are increasingly blurred, and in which traditional academic integrity models focused on textual originality are no longer sufficient.

Writing ≠ transcription

Research on HDR academic writing practices suggests that writing is more than a technical skill to be acquired. It contributes to the development and construction of scholarly identity and epistemic authority (Hyland, 2002), functions as a pedagogical exercise through which scholars are formed rather than merely assessed (Aitchison & Lee, 2006), and, as Lindsay (2020) argues, is fundamentally an act of thinking in words. Within this understanding of writing as a site of knowledge production, the introduction of GenAI into academic writing practices represents more than a new tool for improving research outputs; it signals an epistemic shift.

Scaffolding or offloading?

Research suggests that GenAI can support productivity gains and provide cognitive scaffolding for HDR students, particularly in brainstorming and structuring their work (Mabirizi et al., 2025). It can also offer feedback that serves different and complementary functions to human feedback (Henderson et al., 2025).

However, scholars such as Dawson (2020) express concern about the long‑term effects of over‑reliance on these collaborations. They warn that such dependence may lead to cognitive offloading, replacing the very intellectual labour through which scholarly identity is traditionally formed, and reducing opportunities for deep learning and critical engagement (Kasneci et al., 2023).

GenAI is not going away. HDR students already use it, and policing its use has proved impossible, with universities consistently several steps behind emerging language‑processing tools. As such, the question arises whether we need to reconceptualise our understanding of plagiarism and academic integrity, and whether the use of GenAI in academic writing practices supports scholarly development or, through long‑term reliance, quietly displaces it.

From policing text to making thinking visible

Institutional guidance documents increasingly allow the use of GenAI under conditions of transparency and responsibility, with an emphasis on the student remaining accountable. Some encourage disclosure, and most advise students to “check with your supervisor.” However, while institutions are clear about what is permitted, they are far less explicit about how exactly HDR candidates can integrate GenAI without undermining their own authorship or scholarly development. The supervisor’s willingness and capacity to mediate in this space also remain uncertain, to say the least.

This is where the post‑plagiarism framework (Eaton et al., 2025) becomes useful, as academic integrity can no longer be established through policing textual originality alone. A new approach is required, one that positions academic integrity alongside authorship, accountability, and the visibility of cognitive labour. In an HDR context, writing is less about producing a dissertation than about becoming a researcher.

We are already here

Little academic writing today remains unmediated by GenAI. Spellcheck, grammar tools, autocomplete, and GenAI summaries (to mention just a few) now permeate not only Google but also academic databases such as ProQuest.

However, epistemological shifts in writing and knowledge production are not new. The printing press redefined plagiarism and the dissemination of knowledge, and the emergence of the World Wide Web and search engines did the same. We are now living through another such shift, one that affects the very fabric of how knowledge is produced.

It is time to move the conversation from whether to allow and how to police GenAI toward how to confront the epistemic shift it represents. For HDR students, the stakes extend far beyond plagiarism and compliance. If writing is thinking, then safeguarding scholarly development requires safeguarding the conditions under which thinking occurs, especially when machines are watching.


References

Aitchison, C., & Lee, A. (2006). Research writing: problems and pedagogies. Teaching in Higher Education, 11(3), 265–278. https://doi.org/10.1080/13562510600680574

Dawson, P. (2020). Cognitive Offloading and Assessment. In D. Boud, P. Dawson, M. Bearman, R. Ajjawi, & J. Tai (Eds.), Re-imagining University Assessment in a Digital World (pp. 37–48). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_4

Eaton, S. E., Moya Figueroa, B. A., McDermott, B., Kumar, R., Brennan, R., & Wiens, J. (2025). What should we be assessing exactly? Higher education staff narratives on gen AI integration of assessment in a postplagiarism era. Assessment and Evaluation in Higher Education, 1–20. https://doi.org/10.1080/02602938.2025.2587246

Henderson, M., Bearman, M., Chung, J., Fawns, T., Buckingham Shum, S., Matthews, K. E., & de Mello Heredia, J. (2025). Comparing generative AI and teacher feedback: student perceptions of usefulness and trustworthiness. Assessment and Evaluation in Higher Education, 1–16. https://doi.org/10.1080/02602938.2025.2502582

Hyland, K. (2002). Authority and invisibility: authorial identity in academic writing. Journal of Pragmatics, 34(8), 1091–1112. https://doi.org/10.1016/S0378-2166(02)00035-8

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, Article 102274. https://doi.org/10.1016/j.lindif.2023.102274

Lindsay, D. R. (2020). Scientific writing: Thinking in words (2nd ed.). CSIRO Publishing.

Mabirizi, V., Katushabe, C., Muhoza, G., & Rugasira, J. (2025). A systematic review of the impact of generative AI on postgraduate research: opportunities, challenges, and ethical implications. Discover Artificial Intelligence, 5(1), Article 238. https://doi.org/10.1007/s44163-025-00495-3


Please make any anonymous comments/feedback, or suggestions for further posts, at this link. If you would like to get in touch, or write a post for the Ideas Hub blog, please email karen.miller@curtin.edu.au


Photo by Bhautik Patel on Unsplash