Sharon Buechler

The AI Detector Minefield: Unveiling Individual Vulnerability to False Accusation & the Legal Obligation to Do Something About It

Artificial intelligence (AI) has infiltrated numerous facets of our lives, from boosting efficiency in industry to reshaping how we interact with information. As AI becomes more pervasive, the need to scrutinize its efficacy intensifies, particularly in domains where its judgments carry significant consequences. One such domain, ironically, involves AI detector algorithms designed to distinguish human-generated content from AI-generated content. While AI holds promise in many applications, there are serious concerns about the accuracy of these detectors and the potential consequences of their misjudgments, especially in academic settings.



[Image: Weighing individual vulnerability against AI detection]


I decided to try feeding my own writing through one of these so-called detectors (I won’t name it, but you can message me if you want to know). I had spent a lot of time writing a couple of essays for law school admission. They were 100% my own work. After sending in my application, I started reading about AI in academics and the effort to detect when students use AI to draft their essays. My writing scores on entrance exams (which can’t be faked, as far as I am aware) were quite high, so I felt fairly safe. It turns out that when I ran my own work through an AI detector, both of my essays came back with a 100% chance that they were written by AI.

It's bad enough that these detectors claim to detect AI when they are so inaccurate, but giving a quantitative likelihood score, which implies a false sense of precision, is even more concerning... particularly when that score comes back at 100%. 100%? I guess that makes me an AI! I have so many questions!


  • What are the consequences of this unfortunate assessment of my work?

  • What recourse do I have should the academic institutions to which I’m applying decide to use this tool and believe I didn’t write my own essays?

  • And if I am accepted, how can I feel safe submitting an essay knowing my writing is apparently exactly like AI?

  • How exactly are these detectors making determinations?

  • How can they get away with implying any level of accuracy for an individual piece of writing?

  • Even if these detectors correctly identify AI-generated content at the population level, what happens to the individuals whose authentic human work is erroneously flagged? (A back-of-the-envelope illustration follows this list.)

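That last question is really a question about base rates. Here is a back-of-the-envelope illustration; every number in it is assumed purely for the sake of argument and is not taken from any real detector or institution:

```python
# Illustration of the base-rate problem with AI detectors.
# All numbers below are assumptions for the sake of argument.

def flagged_but_human(human_share, false_positive_rate, true_positive_rate):
    """Fraction of flagged essays that were actually written by a human."""
    ai_share = 1.0 - human_share
    false_flags = human_share * false_positive_rate  # humans wrongly flagged
    true_flags = ai_share * true_positive_rate       # AI text correctly flagged
    return false_flags / (false_flags + true_flags)

# Assumptions: 90% of submitted essays are human-written, the detector
# catches 95% of AI-written text, and it wrongly flags 5% of human text.
share = flagged_but_human(human_share=0.90,
                          false_positive_rate=0.05,
                          true_positive_rate=0.95)
print(f"{share:.0%} of flagged essays would still be human-written")  # ~32%
```

In other words, a detector that sounds impressively accurate at the population level can still be wrong about roughly a third of the individual essays it flags, simply because most submitted essays were human-written in the first place.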

The legal ramifications of erroneous AI detection are troubling. If a student's academic prospects are hindered by a false positive from an AI detector, who bears the responsibility? Should the company behind the detector be held accountable for the resulting harm? Should the school be required to inform the applicant or student when decisions are made based on an AI detection tool? These questions need answers if educational institutions are going to continue relying on AI tools for tasks like plagiarism detection and automated grading.


The broader societal implications of flawed AI detection also cannot be overlooked. In a world where AI permeates decision-making processes from hiring to loan approvals, the consequences of misjudgment by AI detectors extend far beyond academia. When individuals are unfairly disadvantaged or marginalized because of inaccuracies in AI detection, those errors perpetuate systemic biases and injustices.


Addressing the challenges posed by AI detectors requires a multifaceted approach. Transparency regarding the algorithms and methodologies used by these detectors is essential. Users should have a clear understanding of how decisions are made and the potential limitations of the technology. Assessing and testing accuracy, especially in judging individual works, is critical and must be transparent. Flaws need to be acknowledged, understood, and managed.
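As one example of what that transparency might look like, here is a minimal sketch of an approach often described in public discussions of AI detectors: scoring text by how predictable it is to a public language model (its perplexity), on the theory that AI-generated prose tends to be more predictable. The model, the threshold, and the decision rule below are illustrative assumptions, not any vendor's actual method:

```python
# A toy, perplexity-based "detector" sketch. This is NOT any vendor's
# method; it illustrates one commonly described approach and why it can
# misfire on polished human prose, which can also be highly predictable.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return math.exp(out.loss.item())

def naive_ai_verdict(text: str, threshold: float = 60.0) -> str:
    """Toy decision rule: flag text whose perplexity falls below an
    arbitrary, hand-picked threshold."""
    ppl = perplexity(text)
    verdict = "flagged as AI-like" if ppl < threshold else "treated as human"
    return f"perplexity={ppl:.1f} -> {verdict}"

print(naive_ai_verdict("The quick brown fox jumps over the lazy dog."))
```

The point of the sketch is not the specific numbers but the shape of the decision: a single opaque score, an arbitrary threshold, and no way for the writer to see or contest either.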


Most critical to protecting both the community and individuals is the creation of accountability mechanisms that mitigate the risks of erroneous AI detection. If a detector's assertions prove unfounded or misleading, there should be repercussions, so that users are not harmed by flawed technology:


  1. AI detection companies must be held accountable for claims they make regarding the accuracy of their detectors.

  2. AI detection companies need to be both legally and ethically accountable for the judgments their tools make about individual works.

  3. Institutions must be accountable for decisions they make based on AI detection tools and how they use the information detectors provide.


While AI holds promise in many applications, the accuracy of AI detectors and the implications of their errors must be critically examined, particularly in sensitive domains like academia. The potential for false results poses significant risks to individuals, necessitating careful scrutiny and accountability measures. As we navigate the complexities of the AI revolution, we must prioritize ethical considerations and strive for transparency, fairness, and accountability in the development and deployment of AI technologies.



P.S. This article itself came back with a 95% chance of being written by AI… I promise it was not.


CONTACT ME TO SUBSCRIBE OR LEARN MORE ABOUT THIS TOPIC






