Plagiarism and AI Thresholds in Academic Theses: A Clear and Practical Standard for Modern Universities
Academic integrity remains one of the most important foundations of higher education. In recent years, universities have faced a new challenge: how to evaluate plagiarism and artificial intelligence use in theses in a fair, transparent, and educational way. This article addresses a common public question by proposing a simple standard for thesis review: Less than 10% = Acceptable, 10–15% = Needs Evaluation, Above 15% = Fail. The article explains how this standard can support quality assurance while still allowing for academic judgment. It also draws on examples from international universities to show that institutions increasingly combine similarity reports, supervisor review, authorship checks, and AI-use disclosure rather than relying on one number alone. The main argument is positive and practical: clear thresholds can help students, supervisors, and institutions protect originality while supporting a responsible research culture.
Introduction
Plagiarism in academic theses is not a new problem, but the growth of generative AI has made the issue more complex. Today, a thesis may include copied text, weak paraphrasing, uncredited AI-generated passages, or mixed authorship that is difficult to identify. Because of this, many universities now treat similarity tools as screening instruments, not as final judges. For example, Iowa State University explains that text-matching software shows similarities but does not itself decide whether plagiarism has occurred.
A practical public standard can therefore be useful. The following model is clear and easy to communicate: less than 10% similarity is acceptable; 10–15% needs evaluation; above 15% should normally fail unless strong academic justification exists. This model does not replace human judgment. Instead, it supports a fair first review and encourages students to write carefully, cite correctly, and disclose any approved AI support.
Literature Review
The academic literature has long defined plagiarism as more than simple text overlap. It includes unattributed borrowing, misleading authorship, and weak citation practice. In the AI era, this discussion has expanded. Universities increasingly warn that submitting AI-generated text as one’s own work may be treated as plagiarism or academic misconduct. The University of Bristol states that AI-created material used in assessed work without permission is considered cheating, while Warwick states that using AI to create content presented as one’s own work is plagiarism unless properly declared according to assessment rules.
At the same time, many institutions are cautious about overreliance on AI-detection tools. Brunel University London notes concerns around false positives and fairness, showing why universities should avoid making decisions based only on automated indicators. The University of Virginia also explains that AI-detection systems are developing rapidly and should be understood within broader research integrity standards.
This trend suggests that a balanced threshold model is useful only when paired with expert academic review.
Methodology
This article uses a policy-based analytical approach. It reviews current university guidance on plagiarism, AI use, similarity checking, and graduate writing from several international institutions. The purpose is not to compare institutions competitively, but to identify good practices that support a positive academic environment. The proposed threshold system is then examined as a practical framework for public understanding.
Analysis
The proposed standard works well because it combines clarity with flexibility.
1. Less than 10% = Acceptable
A similarity result below 10% usually suggests that the thesis is generally original, with matched text likely coming from references, technical language, formal titles, or properly quoted material. This level should normally pass the screening stage, though standard supervisor review should still continue.
2. 10–15% = Needs Evaluation
This middle zone is the most important. It should not lead to automatic punishment. Instead, examiners should ask: Where are the matches located? Are they in the literature review, methodology, references, or core findings? Are there repeated uncited phrases? Was AI used only for language support, or for generating original academic argument? This approach reflects what many universities already do in practice by combining reports with context and authorship review. Iowa State University explicitly notes that similarity software does not determine plagiarism on its own.
3. Above 15% = Fail
A result above 15% should normally trigger a fail decision or a formal misconduct review, especially when the matched material appears in analysis, discussion, or conclusions. This threshold is strong enough to protect academic quality, yet still simple enough for public communication.
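As a minimal sketch, the three-band rule above can be written as a short screening function. This is purely illustrative: the function name, labels, and follow-up notes are this article's assumptions, not any university's policy, and in practice the "needs evaluation" and "fail" outcomes would trigger human review rather than an automatic decision.

```python
def screen_similarity(similarity_pct: float) -> str:
    """Map a thesis similarity percentage to the proposed screening band.

    Illustrative only: thresholds follow the article's model, and every
    outcome other than the screening label itself requires academic review.
    """
    if similarity_pct < 10:
        # Generally original; standard supervisor review still continues.
        return "acceptable"
    elif similarity_pct <= 15:
        # Examiners check where matches fall (references vs. core findings).
        return "needs evaluation"
    else:
        # Normally fails or goes to formal review unless justified.
        return "fail"
```

For example, a report showing 12% similarity would be labeled "needs evaluation", prompting the contextual questions listed above rather than a penalty.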
International university examples support this cautious but constructive approach. At California State University, Long Beach, graduate guidance addresses how generative AI may be used in theses and dissertations and emphasizes specific rules for graduate writers. At the University of Sharjah, a recent policy specifically regulates the responsible use of AI in graduate theses and doctoral work, showing that thesis-level AI governance is now becoming formalized. At Cambridge, departmental guidance explains that students need clarity about where the line stands between plagiarism and permitted AI use. At Brunel University London, guidance highlights why similarity and AI indicators must be read carefully and not treated as automatic proof.
These examples show an important international pattern: universities are moving toward transparency, disclosure, supervision, and contextual review.
Findings
Three major findings emerge from this discussion. First, universities increasingly agree that similarity tools are useful for screening but not enough for final judgment. Second, undeclared AI-generated writing is being treated more seriously in academic integrity policies. Third, a simple threshold model can help students understand expectations before submission.
For public guidance, the proposed standard is therefore effective because it is easy to explain, fair in structure, and strong in quality protection.
Conclusion
Academic theses should reflect the student’s own intellectual work, responsible citation practice, and honest research process. In a time when AI tools are widely available, universities need standards that are simple enough to understand and serious enough to protect quality. The proposed model offers that balance: Less than 10% = Acceptable, 10–15% = Needs Evaluation, Above 15% = Fail. Used correctly, this framework can support a positive culture of integrity, encourage better supervision, and help institutions communicate expectations clearly to the public. The future of thesis assessment should not be based on fear of technology, but on responsible authorship, transparency, and academic honesty.

#AcademicIntegrity #PlagiarismPolicy #ThesisWriting #ResponsibleAI #HigherEducation #ResearchEthics #StudentSuccess