
Plagiarism and AI Thresholds in Academic Theses: Global Practices and Standards

  • Writer: OUS Academy in Switzerland
  • Apr 7
  • 6 min read

Abstract

Academic institutions worldwide use similarity detection tools and policies to preserve the integrity of theses. In this article, a three-band standard is proposed: 0–10% = Acceptable, 10–15% = Needs Evaluation, Above 15% = Fail. We review existing practices in universities, explore real policy samples, describe workflows, analyze challenges with AI, and offer recommendations. The goal is to help universities, supervisors, and students adopt clear, fair, and effective standards to ensure originality while recognizing legitimate overlap.


Introduction

Academic theses are major documents in higher education. They represent original research, argument, analysis, and synthesis. Plagiarism, the borrowing of another's words, ideas, or data without proper attribution, threatens the credibility of that work. At the same time, the use of AI tools (for drafting, language correction, idea generation) introduces new ethical questions.

Universities use tools such as Turnitin and iThenticate to detect text overlap. But numeric similarity scores can mislead: a small overlap may hide serious plagiarism, while a larger overlap may be benign (methods sections, quotes, common definitions). With the rise of generative AI, universities also need policies that define acceptable and unacceptable AI use in theses.

This article examines what universities usually do now, introduces the 0–10%, 10–15%, >15% similarity-range model, and discusses how real policy practice aligns or diverges. It aims to help institutions establish or refine their guidelines, supervisors understand evaluation, and students prepare responsibly.


Literature Review

  • Studies show that plagiarism detection tools are helpful but not sufficient. They cannot detect intent, and they do not always recognize poor paraphrasing or false citations.

  • Research has stressed that high similarity alone does not equal misconduct. What matters is where the overlap occurs (results versus literature review, for example), whether proper citation is present, and whether the student can explain the overlap.

  • With AI tools, recent academic writing on integrity underscores disclosure of use, transparency, and process evidence (drafts, prompt history). There is also concern about false positives in AI detection tools, especially for students who are non-native speakers.

  • Some universities have published policy statements or advice about AI, noting that detection tools must be used with caution, that students must acknowledge AI assistance, and that substantive writing by AI is generally disallowed unless explicitly permitted.

One real example: the University of Washington in the United States has guidance that distinguishes “editorial assistance” from substantive AI assistance. It requires responsible use and disclosure. Although it does not publicly define exact similarity thresholds like “0–10% / 10–15% / >15%,” its policy illustrates the combination of human judgment, ethical guidance, and process documentation.


What Universities Usually Have: Common Practices

A survey of academic integrity policies and interviews with faculty suggest that the following practices are common:

  1. Similarity / Text-Matching Tools

    • All or most universities require theses to be submitted to a similarity detection tool.

    • The reports show the percentage of matching text and highlight the exact sources of overlap.

  2. Manual Review

    • Supervisors and examiners do not rely on the tool’s percentage alone to judge plagiarism.

    • They check where the overlap appears, how much is quoted or attributed, and whether the paraphrasing is adequate (a small triage sketch follows this list).

  3. Academic Integrity Modules / Training

    • Many institutions provide training in citation, paraphrasing, and research ethics.

    • Workshops may address AI tools, how to use them ethically, and how to avoid plagiarism.

  4. AI Disclosure Policies

    • Increasingly, universities require students to state whether they used AI in writing, which kind of tool, and for what purpose (grammar, drafting, idea generation).

    • Some policies allow limited use (grammar checking, style editing) but prohibit or severely limit use for substantive writing (analysis, literature review, conclusions) unless approved.

  5. Viva / Defense / Oral Examination

    • Oral defenses help verify authorship. Students may be asked to explain or read aloud portions, or defend their data, analysis, and argument.

    • This helps examiners determine whether the student fully understands what they submitted.

  6. Revision & Sanctions

    • If similarity is high or AI use was not disclosed, students may be required to revise sections.

    • In serious cases, universities hold academic misconduct proceedings, which may lead to failure of the thesis, suspension, or other penalties.
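
As the list above suggests, examiners look past the headline percentage to where the overlap sits and whether it is attributed. The following is a minimal sketch in Python of that triage step, assuming a hypothetical match record; the field names, thresholds, and section names are illustrative, and real tools such as Turnitin expose their own report formats:

```python
from dataclasses import dataclass

@dataclass
class Match:
    """One overlapping passage from a similarity report (illustrative fields)."""
    source: str    # where the matching text was found
    words: int     # length of the overlapping passage
    section: str   # e.g. "literature review", "results"
    cited: bool    # is the source properly attributed?

def flag_for_review(matches: list[Match], min_words: int = 40) -> list[Match]:
    """Return the matches an examiner should inspect by hand: long uncited
    passages, plus any overlap in critical sections (assumed names below)."""
    critical = {"results", "discussion", "conclusions"}
    return [m for m in matches
            if (not m.cited and m.words >= min_words) or m.section in critical]

report = [
    Match("Smith 2019", 55, "literature review", cited=True),
    Match("Unattributed blog post", 80, "literature review", cited=False),
    Match("Jones 2021", 25, "results", cited=True),
]
for m in flag_for_review(report):
    print(f"Review: {m.words} words from {m.source} in {m.section} (cited={m.cited})")
```

This sketch flags the long uncited passage and the overlap in the results section, while letting short, cited matches pass to a quick spot check.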


The Three-Band Similarity Model: 0–10%, 10–15%, >15%

Here is the proposed model, showing how each band might be interpreted and applied; challenges and safeguards are discussed in later sections. A minimal sketch of the band logic follows the table.

Similarity Range | Interpretation / Proposed Policy | What Examiners Should Do

0–10% | Acceptable. Normal overlap from citations, common phrases, and technical terms. | Spot-check any large overlapping chunks; ensure they are properly cited; accept if clean.

10–15% | Needs Evaluation. Overlap may be significant; risk of patchwriting or poor paraphrase. | Manually review all overlapping parts; ask the student for clarification; request revisions if needed.

Above 15% | Fail (or conditional fail) if overlap is widespread, unattributed, or in critical sections. | Examine closely; possibly reject the thesis or require major revisions; trigger academic conduct procedures if intentional misconduct is suspected.
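
To make the bands concrete, here is a minimal sketch in Python of how the proposed thresholds could be encoded. The function name and messages are illustrative, and any real policy would pair this mapping with the manual review described in the table:

```python
def classify_similarity(percent: float) -> tuple[str, str]:
    """Map a similarity percentage to the proposed band and a suggested
    examiner action. Thresholds follow the proposed 0-10 / 10-15 / >15
    model; the wording is illustrative, not an official standard."""
    if not 0 <= percent <= 100:
        raise ValueError("similarity must be between 0 and 100")
    if percent <= 10:
        return ("Acceptable",
                "Spot-check large overlapping chunks; accept if properly cited.")
    if percent <= 15:
        return ("Needs Evaluation",
                "Review all overlaps manually; ask the student to clarify; "
                "request revisions if needed.")
    return ("Fail (or conditional fail)",
            "Examine closely; require major revisions or reject; consider "
            "misconduct procedures if intent is suspected.")

for score in (4.2, 12.5, 23.0):
    band, action = classify_similarity(score)
    print(f"{score:>5.1f}% -> {band}: {action}")
```

Note that the boundary scores themselves (exactly 10% and 15%) fall into the lower band in this sketch; an institution adopting the model would need to state explicitly how boundary values are treated.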


Analysis: How This Model Aligns with University Practices

  • Many universities, while not using exactly these percentages, follow a similar structure: low overlap is often accepted, moderate overlap is reviewed carefully, and high overlap can lead to failure or misconduct proceedings.

  • The 10–15% band is realistic: many reports show that moderate overlap arises in literature reviews, in methods sections, or where students reuse boilerplate descriptions. Universities often demand manual evaluation here.

  • The >15% band corresponds to what many institutions treat as serious overlap, especially when large blocks are unattributed or when the key argument or results sit in overlapping text.

  • The model gives students clarity, so they know roughly what to expect, while preserving examiner discretion to consider context, intent, and contribution.


Challenges and Considerations

  1. Context Matters

    • Overlap in literature review is different from overlap in conclusions or discussion.

    • Some technical or methodology text inherently uses standard wording; multiple theses may use similar phrases.

  2. False Positives

    • Automatic detection may flag common phrases, references, or inherently standard definitions.

    • AI detection tools have error margins; non-native English writers may use phrasing that triggers flags.

  3. AI Use Complexity

    • Tools vary widely: grammar correction, content generation, and paraphrasing engines raise very different concerns.

    • Students may not understand what constitutes unacceptable AI use.

  4. Variability Between Disciplines

    • In some fields (e.g. law, humanities), quotations, citations, and text reuse are more common.

    • In STEM, code, formulas, or standard methods may trigger matches automatically and should be judged differently.

  5. Supervision and Student Training

    • If supervision is weak or students are not taught citation and paraphrasing skills, similarity problems and AI misuse increase.

    • Institutions need to invest in education, resources, and supervision.


Findings: Typical Outcomes and Best Practices

Policies and published cases point to several practices that tend to work:

  • Transparency: When students know policy thresholds, allowable AI use, and how similarity reports will be judged, they are less likely to inadvertently violate rules.

  • Process Documentation: Collecting draft versions, keeping supervisor feedback, tracking AI tool use (prompts, output) helps in evaluation and defense.

  • Oral Defense: It remains a powerful tool to clarify misunderstandings, check whether the student really understands the work, and discourage misattributed or AI-generated content.

  • Consistency: Units or faculties that apply thresholds consistently (with discretion) reduce confusion and perceived unfairness.

  • Flexibility with Oversight: Allowing students to revise moderate overlap cases is better than an all-or-nothing approach. A chance to correct improves learning and maintains standards.

  • AI Policy Clarity: Best practices distinguish between acceptable and unacceptable AI uses, demand disclosure, and avoid punishing legitimate, limited AI help.


Sample Real Policy Example

One real policy worth noting is from an American university that distinguishes editorial AI assistance from substantive AI writing. Students are required to disclose use of generative AI, stating the tool name and purpose. The policy also states that AI detection scores are advisory, not definitive. Examiners are instructed to review drafts, ask for clarification, and use oral defense elements to confirm authorship.

This real example shows that while universities may not publicly post exact numeric thresholds, many do have firm practices around disclosure, human judgment, and process. It supports a model that pairs numeric similarity bands with surrounding policy on AI use and review.


Recommendations

Based on what universities usually do, combined with the three-band model (0–10%, 10–15%, >15%), here are recommendations for institutions:

  1. Adopt the Similarity Bands in Policy

    • Clearly state in thesis guidelines that 0–10% is acceptable, that 10–15% triggers evaluation, and that >15% may lead to failure or misconduct proceedings.

  2. Define “What Counts”

    • Clarify what overlap is acceptable: quotes, citations, standard methods.

    • Clarify what overlap is unacceptable: unattributed text, large copied sections, and AI-generated content without disclosure.

  3. Require AI Use Disclosure

    • Students should disclose whether they used tools such as AI language models or paraphrasing tools and for what purpose, and should provide records (prompts, outputs); a sketch of such a disclosure record follows this list.

  4. Provide Draft Review and Supervisor Input

    • Supervisors should review early drafts and use similarity tools at the proposal and mid-thesis stages to catch issues early.

  5. Use Oral Defense / Viva

    • Incorporate questions that test the student’s understanding of key sections, data, and arguments to guard against ghostwriting or overreliance on AI.

  6. Set Up Revision Paths

    • For cases in the 10–15% band, or marginally above 15% where the overlap sits only in less critical parts, allow supervised revision instead of an immediate fail where possible.

  7. Offer Training and Support

    • Workshops on paraphrasing, citation, academic writing, and ethics.

    • Advice about AI tools: how to use them ethically and what to avoid.

  8. Consistency and Fairness

    • Apply policy uniformly across departments and disciplines.

    • Ensure examiners have guidance and training to interpret similarity reports and AI disclosures.
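
To illustrate recommendation 3, the following is a minimal sketch of what a structured AI-use disclosure record might look like. The fields and the completeness check are assumptions for illustration, not a standard form:

```python
from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    """A student's declaration of AI use for one thesis (illustrative fields)."""
    tool: str                        # e.g. a named language model or paraphrasing tool
    purpose: str                     # "grammar", "style editing", "idea generation", ...
    sections_affected: list[str] = field(default_factory=list)
    prompts_archived: bool = False   # were prompts and outputs kept as evidence?

def disclosure_gaps(d: AIDisclosure) -> list[str]:
    """Return the gaps an administrator should follow up on before review."""
    gaps = []
    if not d.tool:
        gaps.append("tool name missing")
    if not d.purpose:
        gaps.append("purpose not stated")
    if not d.prompts_archived:
        gaps.append("no prompt/output records kept")
    return gaps

d = AIDisclosure(tool="generic language model", purpose="grammar checking",
                 sections_affected=["introduction"])
print(disclosure_gaps(d))  # -> ['no prompt/output records kept']
```

Keeping such records alongside draft versions gives examiners process evidence to weigh against the similarity report and the oral defense.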


Conclusion

Universities usually handle plagiarism and AI issues with a combination of detection tools, manual review, disclosure policies, and oral defenses. Although few institutions publicly adopt exact thresholds, the proposed model—0–10% Acceptable, 10–15% Needs Evaluation, >15% Fail—fits well with many existing practices. It offers clarity, fairness, and a structure for evaluation, while preserving necessary human judgment and ethical oversight.

Institutions adopting this model should combine it with strong policies on AI use disclosure, evidenced authorship, supervisor engagement, student training, and viva/defense mechanisms. When properly implemented, such a framework can maintain high academic standards and support authentic scholarship in an era of rapid technological change.


Keywords: plagiarism thresholds, AI use policy, academic theses integrity, similarity bands, thesis evaluation


