I've noted before that because AI detectors produce false positives, it's unethical to use them to detect cheating.
Now a new study shows it's even worse: not only do AI detectors falsely flag human-written text as AI-written, they do so in a biased way.
It’s definitely not as good as people think it is. The best description I’ve heard is that AI outputs “hallucinations”: it only needs to look plausible, it doesn’t have to be right.
Which is why using it to detect cheating is a concern. You’d hope it would only be used as a first pass, to be reviewed by a human later, but some people are going to assume the AI is infallible and leave it at that.