Quick disclaimer: I am an admissions consultant and have spent the past 18+ years helping applicants crack top schools. Naturally, a lot of people now start by asking me whether admissions consulting (MBA or any other form) is still relevant in the world of AI, and what value a human touch or an experienced consultant brings to the admissions process. I am glad I stumbled upon this post, because I really want to explain what a very experienced admissions consultant can do for a storyline and an essay that ChatGPT or any other AI tool cannot.
1. An AI tool cannot identify a weak tangent the way an experienced professional can.
Let me give you an example. I was recently working on a contribution essay for London Business School with an applicant who sent me a very well-crafted draft, everything super polished. To him it looked like the final version, and he expected me to make micro changes. Instead, I had to reject every one of his contributions. Not because the writing was bad, but because over nearly two decades I have built an understanding of what kinds of contributions actually work for London Business School: what the right litmus test is for judging whether a contribution is strong enough, and what kind of contribution the school would truly value from an international applicant who has never worked outside his home country. How does an applicant like that show something meaningful?
The risk with AI-generated essays is that they sound polished but generic. And if the applicant cannot judge whether a contribution passes the right litmus test, then on what basis would he change the narrative or deepen what that contribution actually means?
And this is not just about essays. There are so many situations and questions in a business school application where this kind of depth matters. Take recommendation letters, for example. Most applicants help their recommenders identify strengths and weaknesses to write about — and this is where things get very tricky. When a school like Wharton or Ross asks your recommender, "What are some of the weaknesses of this applicant?" — you need to write weaknesses that do not damage your entire case but still tell a very authentic story. There is a very fine art to this. You need weaknesses that feel real, that show self-awareness, but that do not make the admissions committee question whether you belong in their program.
Now, if a human being has worked on literally thousands of recommendation letters — has seen the same weaknesses play out across various categories of industries and functions — they know exactly what works and what does not. They know which weaknesses come across as genuine growth areas and which ones raise red flags. They know how to connect a weakness to a larger narrative of development without making it sound rehearsed. They know, for instance, that a weakness framed one way can make an applicant look coachable and self-aware, while the same weakness framed slightly differently can make them look like a liability. AI does not have that calibration. AI can give you a grammatically perfect, well-structured weakness — but it does not know whether that weakness will quietly kill your candidacy or strengthen it. Only someone who has seen how admissions committees react to thousands of these across schools, industries, and applicant profiles can make that call.
I want you to think about this disconnect. Only an experienced consultant can look at something and ask, "What am I comparing this against?" When I have an essay or a recommendation draft in front of me, a baseline built over the last two decades tells me whether it is weak, and I can point to 27 different places where I say, "I want to take this in a completely different direction."
2. A strong written critique takes AI help to the next level.
I also realised that when I gave a very strong, detailed written critique of these essays, applicants would go back to the AI, feed in my critique, and use it to generate stronger ideas. That is how they made very effective use of AI tools to take their essays to the next level. The whole point I am trying to make is that AI is deep and powerful, but it can only work with what you ask it to bring forth. If you as an applicant do not know the drawbacks or limitations of your current essay, then no matter what prompt you give, you will just generate a machine-polished version of a weak essay.
It takes a good consultant maybe five minutes to look at an essay and say, "This does not fit the bill." And then that consultant can come back with specific reasons — "The depth is missing here, this angle is off, these specific areas need reworking." The applicant uses those inputs, brainstorms, and comes back with a stronger version. The same applies to recommendations — a consultant can look at a weakness your recommender has written and immediately say, "This will hurt you. Change it." That level of school-specific, profile-specific judgement is not something you can prompt out of any AI tool.
Otherwise, going down the AI rabbit hole when you do not know whether the critique fits the bill can take you in a very, very wrong direction.
3. If everyone is using AI, how will schools differentiate?
Everyone is using AI. If every applicant produces well-polished essays for business schools with AI, on what basis will schools differentiate between strong applicants? One of two things can happen. One, schools will abolish the essay process entirely and move towards something more human, such as video essays or interviewing more and more people, because they no longer trust written content. Or two, schools' expectations of an essay will rise significantly: the level of depth, the originality of insight, and the extent of meaning one has to generate will all need to be far stronger.
AI is a very, very strong tool — but you can use it effectively only if you understand what the missing angle is.