NEW CHALLENGES FROM GENERATIVE ARTIFICIAL INTELLIGENCE: FABRICATION, FALSIFICATION AND PLAGIARISM
DOI: https://doi.org/10.55003/JIE.25103

Keywords: Generative artificial intelligence, Fabrication, Falsification, Plagiarism, Technology ethics

Abstract
Generative Artificial Intelligence (GenAI), built on the Transformer architecture, poses substantial challenges to academic integrity and intellectual property protection. This article identifies three distinct patterns of misconduct. Fabrication occurs in three forms: imitation of an author's style falsely attributed to that author, fabricated academic articles structured to appear credible, and synthetic online identities designed to mislead readers. Falsification arises when AI manipulates source material until its original meaning is lost, with decontextualized summarization producing systematic factual distortion. Plagiarism has become increasingly difficult to detect: AI-generated paraphrasing circumvents conventional detection software, while the unauthorized use of copyrighted material for model training remains legally unresolved. Of 16 detection tools tested, only Copyleaks, Turnitin, and Originality.ai achieved consistently high accuracy, and existing tools misclassified more than half of non-native English writers' texts as AI-generated, indicating significant detection bias against Thai and other non-Anglophone scholars. No tool exists for Thai-language detection. The article proposes an integrated framework addressing legal reform, detection technologies, educational strategies, platform governance, and public policy.
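The detection bias reported above can be illustrated with a deliberately simplified sketch. Statistical detectors tend to score text by how predictable a language model finds it; text with simple, high-frequency vocabulary (common in non-native writing) scores as "too predictable" and gets flagged. The following toy unigram-based scorer is purely illustrative: it is not the method used by Copyleaks, Turnitin, or Originality.ai, and the corpus and threshold are arbitrary assumptions.

```python
import math
from collections import Counter

# Tiny background corpus standing in for a detector's reference data (illustrative only).
CORPUS = "the cat sat on the mat the dog ran in the park".split()
COUNTS = Counter(CORPUS)
TOTAL = len(CORPUS)
VOCAB = len(COUNTS)

def avg_logprob(text):
    """Average per-token log-probability under an add-one-smoothed unigram model."""
    tokens = text.lower().split()
    score = sum(math.log((COUNTS.get(t, 0) + 1) / (TOTAL + VOCAB)) for t in tokens)
    return score / max(len(tokens), 1)

def naive_ai_flag(text, threshold=-2.5):
    """Flag text the background model finds 'too predictable' as machine-generated.

    Plain, high-frequency vocabulary scores high -- the same property that
    leads real detectors to over-flag non-native English writers.
    """
    return avg_logprob(text) > threshold

print(naive_ai_flag("the cat sat on the mat"))                            # True: common words, flagged
print(naive_ai_flag("quantum entanglement defies classical intuition"))   # False: rare words, not flagged
```

The point of the sketch is that the flag depends only on lexical predictability, not on whether a human wrote the text, which is why likelihood-threshold detectors systematically penalize plainer prose.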
References
Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., Chen, C., Olsson, C., Olah, C., Hernandez, D., Drain, D., Ganguli, D., Li, D., Tran-Johnson, E., Perez, E., ... Kaplan, J. (2022). Constitutional AI: Harmlessness from AI feedback. https://arxiv.org/abs/2212.08073.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., ... Amodei, D. (2020). Language models are few-shot learners. In Advances in Neural Information Processing Systems. https://arxiv.org/abs/2005.14165.
Copyright Act B.E. 2537. (1994, December 21). Royal Thai Government Gazette. Vol. 111, Part 23 Kor, pp. 1–34. (in Thai)
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8/.
Henderson, P., Hu, J., Romoff, J., Brunskill, E., Jurafsky, D., & Pineau, J. (2020). Towards the systematic reporting of the energy and carbon footprints of machine learning. Journal of Machine Learning Research, 21(248), 1–43.
Jawahar, G., Sagot, B., & Seddah, D. (2019). What does BERT learn about the structure of language? https://aclanthology.org/P19-1356.pdf.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Kirchenbauer, J., Geiping, J., Wen, Y., Katz, J., Miers, I., & Goldstein, T. (2023a). A watermark for large language models. https://proceedings.mlr.press/v202/kirchenbauer23a/kirchenbauer23a.pdf.
Kirchenbauer, J., Geiping, J., Wen, Y., Shu, M., Saifullah, K., Kong, K., Fernando, K., Saha, A., Goldblum, M., & Goldstein, T. (2023b). On the reliability of watermarks for large language models. https://openreview.net/pdf?id=DEJIDCmWOz.
Kreps, S., McCain, R. M., & Brundage, M. (2022). All the news that's fit to fabricate: AI-generated text as a tool of media misinformation. Journal of Experimental Political Science, 9(1), 104–117.
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21.
Perkins, M. (2023). Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. https://open-publishing.org/journals/index.php/jutlp/article/view/635/635.
Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-generated text be reliably detected? https://openreview.net/pdf?id=NvSwR4IvLO.
Samuelson, P. (2023). Generative AI meets copyright. Science, 381(6654), 158–161.
Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning and Teaching, 6(1), 1–10.
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313.
Turnitin. (2023). AI writing detection: The facts. https://www.turnitin.ph/solutions/topics/ai-writing.
Van Noorden, R., & Perkel, J. M. (2023). AI and science: What 1,600 researchers think. Nature, 621(7980), 672–675.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. https://arxiv.org/pdf/1706.03762.
Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making. https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html.
Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. https://arxiv.org/pdf/2306.15666.
Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. https://arxiv.org/pdf/1905.12616.
Zhang, Y., Sun, S., Galley, M., Chen, Y. C., Brockett, C., Gao, X., Gao, J., Liu, J., & Dolan, B. (2020). DIALOGPT: Large-scale generative pre-training for conversational response generation. https://arxiv.org/pdf/1911.00536.
License
Copyright (c) 2026 Journal of Industrial Education

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
"The opinions, content, and use of language in the articles are the responsibility of the authors."