Back in 2005, when I founded Gurufi.com, most of the MBA applicants I helped with their personal statements had essays with interesting ideas that needed developing, polishing, and editing. An interview, a few insightful questions, a couple of follow-ups, and a deft editorial hand would turn those rough clumps of coal into diamonds. Now, most of the essays I get are relatively well-organized and have no grammatical errors… but they’re boring as hell. What happened? ChatGPT.
I get it; I use AI to summarize and compose work emails to colleagues, and I even used ChatGPT to help me lose 30 pounds of fat and gain 10 pounds of muscle this year. It’s a great tool, but you shouldn’t use it to write your personal statement.
Over the course of the next five posts, I am going to talk about the dangers of AI, as well as the good, ethical hacks for using AI to guide and refine your writing process. But first, I want to begin where many applicants start: do AdComs care, and will I get caught? The short answers are: absolutely… and maybe.
Schools Are Catching More AI-Written Essays Than Ever
MBA programs are more alert, more equipped, and more aggressive in identifying AI-generated writing than ever before. This is a relatively new development. Last year, schools were certainly aware of the issue, but they largely took a “wait and see” approach. Schools knew students were experimenting with ChatGPT, but very few institutions used systematic screening tools, and even fewer had clear policies on what qualified as AI-generated writing. Most admissions teams simply didn’t have the infrastructure or experience to differentiate AI text from human writing reliably.
That era is over.
A Year Ago: The “Wait and See” Phase
In the course of doing our GMATClub Podcast, we’ve talked to lots of representatives from various schools, and they mostly tell the same story: last year, they didn’t quite know how to approach the issue, but as the year wore on, it became increasingly clear that many essays were substantially written by AI, and it was a problem. To begin with, they had anecdotal suspicions: essays that felt strangely flat, generic, or tonally inconsistent. As I’ve noted in the past, when you’re reading dozens of essays per week and hundreds per year, the AI-generated texts stick out like a sore thumb. Last year, most schools didn’t ding those essays too hard because they didn’t yet have explicit policies in place for them, nor did they have the technical tools to match general suspicion with reliable evidence. As such, many committees treated the influx of suspicious essays as a data-gathering opportunity rather than a disciplinary issue.
This Year: Schools Are Using Real AI-Detection Tools
Fast-forward to the current admissions cycle, and the landscape has transformed. Most top programs now use either commercial products or in-house software to detect AI, and readers at many top schools have undergone training to help them spot AI-generated text. In other words, schools are no longer guessing. They are screening. Admissions officers have told us that they routinely flag essays for review, and not just when an AI detection score is high, but when a piece of writing simply feels artificial. In some cases, human readers catch AI essays that detectors score as “low likelihood.” The software is only one part of the system; professional judgment is the other.
This is the part many applicants misunderstand. You can run your essay through a detector and see a reassuringly low “20% AI” score. That won’t protect you. Why? Because committees rely heavily on qualitative signals. Does the writing sound uncannily polished but emotionally flat? Is the narrative strangely generic, as if it could belong to anyone? Is the vocabulary oddly inflated or inconsistent with other parts of the application? Does it lack personal depth, nuance, contradiction, vulnerability, or specificity? The question I often ask myself is, “does it feel like a person is in there?” When the answer is ‘no’ and your essay “feels AI-ish,” you may still get dinged even with a low score. Detectability is not just a number; it’s a vibe.
That said, many of you will use commercial AI-detection software, get a number, and not know what to do with it. So, what numbers do schools actually care about? A common misconception is that any AI detection score is fatal. That’s not how schools operate; nor is that a proper understanding of the algorithms. Admissions officers know the tools are imperfect. They also understand that applicants legitimately use grammar-checkers, editors, and online writing tools. Moreover, an essay that is highly organized or structured will, even if 100% written by a human, likely get an AI detection score of 35% or higher.
So here’s a guideline for interpreting these numbers:
- Most schools don’t care about a score below ~70%. In fact, in the interviews we’ve done, 70% is the lowest cited number. What they care about is whether the writing reads like a real human being with a real story.
- A score in the 40–60% range may still be totally acceptable if the essay feels personal, specific, and voice-consistent.
- Internally, our hard ceiling is 50% (again, well below the 70% cap, and legitimately a number that a human-written essay might produce).
- Ideally, we think a score of about 30% is what to aim for, especially when we “clean” essays with high AI scores.
- Finally, you should NOT aim for a 0% score. If you get it, great, but remember that these algorithms work by predicting the next bit of text and seeing how closely your text aligns with their predictions, so often the only way to fool them is to write an essay full of unusual words, phrases, and structures that border on random. That’s obviously suboptimal.
If your essay feels AI-ish, the best route forward is to get help from a seasoned writer or capable consultant. The real issue often arises because you haven’t done the necessary introspection and brainstorming. At Gurufi.com, we specialize in that. We ask questions designed to make you a bit uncomfortable, think more deeply, and push beyond your comfort zone, and from there we help you build a stronger essay.
If you’re in a time pinch and can’t build your essay from the ground up, we also routinely work with drafts that initially score 80–95% AI and bring them down dramatically while strengthening authenticity and narrative clarity. Our editors know how to identify AI-like rhythms, rewrite sections without losing your content, and preserve your voice while reducing AI fingerprints, thus rebuilding generic passages into a genuine personal narrative.
Tomorrow’s Post Preview: How Do You Use AI Without Getting Into Trouble?
If AI is so dangerous, should you avoid it entirely? Not at all. Subsequent blogs will explain, with actual examples and tricks you can use, how AI can be an outstanding brainstorming partner, idea organizer, and clarity enhancer. But it should not be the author of your final text. The essay has to sound like you, reflect your lived experiences, and convey your authentic voice. In other words… a PERSON has to do the writing. Starting tomorrow, we’ll explain how to do this.