
Testing of detection tools for AI-generated text

  • Debora Weber-Wulff (Author)
  • Alla Anohina-Naumeca (Author)
  • Sonja Bjelobaba (Author)
  • Tomas Foltynek (Author)
  • Olumide Popoola (Author)
Research output: Contribution to journal, Article, Peer-reviewed

Open access

Publication metrics

SciVal metrics
  • Citations: 338
  • FWCI: 50.10
  • Number of authors: 8
  • Article percentile: 99
  • Top percentile: 1

PlumX metrics
  • Captures: 566
  • Social media: 214
  • Mentions: 185
  • Citations: 321

Abstract

Recent advances in generative pre-trained transformer large language models have emphasised the potential risks of unfair use of artificial intelligence (AI) generated content in an academic environment and intensified efforts to find solutions for detecting such content. The paper examines the general functionality of detection tools for AI-generated text and evaluates them based on accuracy and error type analysis. Specifically, the study seeks to answer research questions about whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text, and whether machine translation and content obfuscation techniques affect the detection of AI-generated text. The research covers 12 publicly available tools and two commercial systems (Turnitin and PlagiarismCheck) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable, and that they exhibit a main bias towards classifying output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of the tools. The study makes several significant contributions. First, it summarises up-to-date similar scientific and non-scientific efforts in the field. Second, it presents the results of one of the most comprehensive tests conducted so far, based on a rigorous research methodology, an original document set, and a broad coverage of tools. Third, it discusses the implications and drawbacks of using detection tools for AI-generated text in academic settings.
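The accuracy and error-type analysis the abstract describes can be sketched as a standard binary-classification tally: each document has a true origin (human or AI) and a detector verdict, and the two error types are human text flagged as AI (false positives) and AI text missed (false negatives). The function and sample data below are illustrative assumptions, not the study's actual evaluation code or dataset.

```python
def evaluate(results):
    """results: list of (true_label, predicted_label) pairs,
    where each label is either 'human' or 'ai'."""
    tp = sum(1 for t, p in results if t == "ai" and p == "ai")        # AI text correctly flagged
    tn = sum(1 for t, p in results if t == "human" and p == "human")  # human text correctly passed
    fp = sum(1 for t, p in results if t == "human" and p == "ai")     # human text wrongly flagged as AI
    fn = sum(1 for t, p in results if t == "ai" and p == "human")     # AI text missed by the detector
    return {
        "accuracy": (tp + tn) / len(results),
        "false_positives": fp,
        "false_negatives": fn,
    }

# Illustrative sample of (true, predicted) pairs: a detector that misses most
# AI text, mirroring the human-written bias the study reports.
sample = [
    ("ai", "human"), ("ai", "human"), ("ai", "ai"),
    ("human", "human"), ("human", "human"), ("human", "ai"),
]
print(evaluate(sample))  # accuracy 0.5, 1 false positive, 2 false negatives
```

A detector with many false negatives looks "safe" (few students wrongly accused) but fails at its stated purpose, which is why the study weighs the two error types separately rather than reporting accuracy alone.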

Publication information

Output type

Research output: Contribution to journal, Article, Peer-reviewed

Original language

English

Article number

26

Pages from-to (number of pages)

Pages 1-39 (39 pages)

Journal (volume, issue)

International Journal for Educational Integrity (Volume 19, Issue 1)

Publication milestones

  • Published - 25/12/2023

Publication status

Published - 25/12/2023

External publication ID

  • Scopus: 85180443619

Funding details

The authors wish to thank their colleague Július Kravjar from Slovakia who contributed a full set of test documents to the investigation. The authors also wish to thank their colleagues from Turkey, Salim Razı and Özgür Çelik, who participated in the initial stages of the discussions about this research endeavour, but due to the devastating earthquake in February 2023 were not able to contribute further. The tool similarity-texter was created as part of the bachelor’s thesis of Sofia Kalaidopoulou and is based on Dick Grune's sim_text algorithm. It was submitted to the HTW Berlin in 2016 and is available under a Creative Commons BY-NC-SA 4.0 International License at https://people.f4.htw-berlin.de/~weberwu/simtexter/app.html. ChatGPT was NOT used to tweak any portion of this publication.