
Data derived from the paper GPT detectors are biased against non-native English writers. The study authors carried out a series of experiments passing essays to a number of GPT detection models. Juxtaposing detector predictions for essays written by native and non-native English writers, the authors argue that GPT detectors disproportionately classify real writing from non-native English writers as AI-generated.

Usage

detectors

Format

A data frame with 6,185 rows and 9 columns:

kind

Whether the essay was written by a "Human" or "AI".

.pred_AI

The class probability from the GPT detector that the input text was written by AI.

.pred_class

The uncalibrated class prediction, encoded as if_else(.pred_AI > .5, "AI", "Human").

detector

The name of the detector used to generate the predictions.

native

For essays written by humans, whether the essay was written by a native English writer or not. These categorizations are coarse; essays labeled "Yes" may in fact have been written by people who do not write English natively. NA indicates that the essay was not written by a human.

name

A label for the experiment that the predictions were generated from.

model

For essays that were written by AI, the name of the model that generated the essay.

document_id

A unique identifier for the supplied essay. Some essays were supplied to multiple detectors. Note that some essays are AI-revised derivatives of others.

prompt

For essays that were written by AI, a descriptor for the form of "prompt engineering" passed to the model.

For more information on these data, see the source paper.
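The encoding of .pred_class described above can be reproduced directly from .pred_AI. A minimal sketch, assuming the dplyr package is installed; the recomputed_class column name is illustrative, not part of the dataset:

```r
library(detectors)
library(dplyr)

# Recompute the uncalibrated class prediction from the class probability,
# mirroring the encoding documented for .pred_class. The .5 threshold is
# the one stated in the column description.
detectors |>
  mutate(recomputed_class = if_else(.pred_AI > .5, "AI", "Human")) |>
  select(.pred_AI, .pred_class, recomputed_class)
```

Because the predictions are uncalibrated, a probability near .5 carries no guarantee that the detector is equally unsure about either class.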

Examples


detectors
#> # A tibble: 6,185 × 9
#>    kind  .pred_AI .pred_class detector     native name  model document_id prompt
#>    <fct>    <dbl> <fct>       <chr>        <chr>  <chr> <chr>       <dbl> <chr> 
#>  1 Human 1.00     AI          Sapling      No     Real… Human         497 NA    
#>  2 Human 0.828    AI          Crossplag    No     Real… Human         278 NA    
#>  3 Human 0.000214 Human       Crossplag    Yes    Real… Human         294 NA    
#>  4 AI    0        Human       ZeroGPT      NA     Fake… GPT3          671 Plain 
#>  5 AI    0.00178  Human       Originality… NA     Fake… GPT4          717 Eleva…
#>  6 Human 0.000178 Human       HFOpenAI     Yes    Real… Human         855 NA    
#>  7 AI    0.992    AI          HFOpenAI     NA     Fake… GPT3          533 Plain 
#>  8 AI    0.0226   Human       Crossplag    NA     Fake… GPT4          484 Eleva…
#>  9 Human 0        Human       ZeroGPT      Yes    Real… Human         781 NA    
#> 10 Human 1.00     AI          Sapling      No     Real… Human         460 NA    
#> # ℹ 6,175 more rows
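As a starting point for the comparison the paper makes, one might tabulate how often each detector classifies human-written essays as AI, split by native English writer status. This is a sketch of one possible summary using dplyr, not an analysis from the paper:

```r
library(detectors)
library(dplyr)

# Among human-written essays, the rate at which each detector's
# uncalibrated class prediction is "AI", by native English writer status.
detectors |>
  filter(kind == "Human") |>
  group_by(detector, native) |>
  summarize(rate_flagged_ai = mean(.pred_class == "AI"), .groups = "drop")
```

Grouping by detector matters here: document_id values are shared across detectors, so pooling rows without grouping would mix predictions from different models on the same essays.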