RonPurewal (Expert)
Joined: 15 Nov 2013
Official Explanation:

Until last year, the dominant standardized test for applicants to a certain class of professional graduate schools included a 30-minute writing sample, to which the schools’ admissions staff received access. This writing sample was eliminated last year.

Robust test security guaranteed that every writing sample was, in fact, written by the named applicant. Admissions officers, concerned that applicants could enlist artificial-intelligence (A.I.) chatbots to compose their application essays, have called on the test maker to restore the writing sample. These officials claim that, if an applicant’s admission essays and writing sample are sufficiently similar in style, that applicant can be trusted to have written the admission essays without the help of A.I. Conversely, they say, the authenticity of an applicant’s essays should be flagged for further investigation if those essays differ enough in style from the applicant’s writing sample.

The five statements below consist of three observations that, if true, weaken the public position of the admissions officers quoted above; one observation that, if true, neither weakens nor strengthens their position; and one observation that, if true, weakens their position IF the primary writing style of A.I. chatbots conforms to a particular set of generalizations, but strengthens that position otherwise.

Select Irrelevant for the consideration that neither strengthens nor weakens the admissions officers’ position, and Depends for the consideration that weakens the officers’ position if A.I. chatbots’ writing styles converge on certain paradigms but that strengthens it if they do not. Make only two selections, one in each column.

The officers’ position, in essence, is this: “If your application essays read like your GMAT or GRE essay, then we can be confident that you wrote them yourself. If they don’t, then there’s a significant likelihood that you outsourced them to a chatbot—something we’ll have to look further into.”

As with most lines of inferential reasoning, there are two potential ways to weaken this one: “false positives” (essays that are unlike the writing sample but are the applicant’s authentic work) and “false negatives” (essays that resemble the applicant’s writing sample—leading the admissions staff to dismiss possible suspicions—but that were in fact written by A.I.). Any feasible, reasonably straightforward path that will lead to large numbers of either of these will WEAKEN the admissions officers’ position. To STRENGTHEN the officers’ position, we need a credible reason why NEITHER of the possibilities above will occur in large numbers.

Let’s examine what each choice does to the likelihood of “false positives” and/or “false negatives” as laid out above.

CHOICE 1: Highly educated aspiring professionals almost uniformly gravitate in their academic and career-related writing towards a highly specific, extensively socially conditioned writing style and prescribed hierarchy of values known as “corporatese”:

Standardized-test writing samples and graduate school application compositions both fall under the umbrella of “academic and career-related writing”—so, if the content of choice 1 is true, then the vast majority of applicants will write both of them with heavily “corporatese” style and content IF they write them both themselves. Under these conditions, essentially all essays authentically written by the named applicant will be extremely close stylistic matches for the applicant’s writing sample, because both will be written in “corporatese”. Therefore there will be essentially no false positives.

What about false negatives (essays that were written by a chatbot, but that still closely match the style of the applicant’s own writing sample)? That’s going to depend fundamentally on how chatbots tend to write—specifically, whether the standard/primary/default writing style of A.I. chatbots also hews closely to the tenets of “corporatese” writing.

If chatbots write primarily in “corporatese”, then EVERYTHING—the writing samples, the application essays actually written by applicants, and the application essays pumped out by chatbots—will be written predominantly in “corporatese”, for practically ALL applicants. In this case, the writing sample will be of no value in calling out chatbot-written work, so choice 1 weakens (actually not just weakens, but altogether invalidates) the officers’ position if the chatbots write corporatese text by default.

If chatbots write primarily in any non-“corporatese” style, then chatbot-created essays will stylistically clash with applicants’ writing samples (which will be corporatese)—whereas essays that are the applicants’ own honest work (which will also be corporatese) will read just like those samples. This is exactly what the officers are asserting will happen in both directions, so in this case choice 1 STRENGTHENS the admissions officers’ position.

The DEPENDS column should therefore be marked for choice 1.

CHOICE 2: Admissions officers encourage applicants to interact as widely as possible with their school’s students, faculty, and alumni, and to write application essays informed by those exchanges. The writing sample, on the other hand, is written by the applicant alone in a secure solitary environment:

This consideration provides evidence that the content of application essays can (...and should, according to the admissions staff themselves) be influenced by other people’s input, while the content of the writing sample cannot.

The content of these compositions, however, has no relevance to the admissions officers’ position or plan. The writing sample is valuable (according to the quoted admissions officers) solely as a standard reference for each applicant’s writing style—in other words, as a demonstration of HOW the applicant writes, not WHAT.

There is no inherent commonsense reason to suspect that feedback on the content of an essay, whether thoughtful or otherwise, will change anything about the author’s writing style—nor does the passage say anything explicitly to that effect. Choice 2 is therefore irrelevant to the admissions officers’ plans and goals.

The IRRELEVANT column should therefore be marked for choice 2.

CHOICE 3: Applicants are given just 30 minutes to read and process the prompt and then plan and write an essay for the writing sample, but are given 4 to 8 months to perform the same steps for application essays:

A 30-minute writing assignment will never be more than a “rough draft” of the crudest imaginable kind. The writer must choose a topic and plan (in a broad-brushstrokes sense) the essay in the absolute least amount of time feasibly possible, in order to preserve enough time to physically type the essay with at least cursory attention to proper mechanics and clarity of meaning.

The topic of a 30-minute essay will therefore just be the first relevant illustration/argument/narrative to pop into the writer’s mind; the essay itself will be formulaic, in the literal sense (following some sort of formula or template that the writer will have memorized and rehearsed in advance), with important points stated simply—or even simplistically—either as audaciously sweeping generalities or as standalone anecdotes, and never anywhere between. Exceptions, nuance, complexity, and context-dependence cannot be included because there just isn’t enough time for the writer to cover them fairly. Nor will the writer be able to enliven the most rhetorically important parts of the essay—such as its first and last few sentences—with idiosyncratic, creative, or unusually compelling turns of phrase; those parts will instead be written with crude, banal kludges, like “introductions” or “conclusions” that are actually just word-for-word repetitions of thesis points already stated elsewhere... among many other possible adaptations to the unforgivingly short timeframe of the assignment.

In short, a 30-minute essay must be stripped entirely of everything that constitutes the craft or art of writing, with all those things replaced by literal formulas and templates that, at the cost of being boring, simplistic and trite, at least ensure that the applicant will put actual words on the screen quickly enough to finish within half an hour.

All of the above represent stylistic differences between 30-minute essays and compositions on any more ‘normal’ timeline (months, weeks, or even days). With several months to prepare an essay of potentially great significance (admissions essays could become the decisive factor in charting one’s entire remaining career path), applicants will make every effort to craft essays that have none of the above characteristics of 30-minute essays. Instead of just going with the first viable topic that floats to mind, applicants can spend days or weeks brainstorming, narrowing and then finally selecting the topics that they are best able to develop in relation to the prompts. They can put drafts through as many revisions as needed to arrive at final versions that are truly their best work and that bring their candidacies to life in the minds of admissions staff. In sum, they can—and should—produce essays that are crafted, in all the ways that 30-minute compositions cannot possibly be.

Therefore, if the statements in choice 3 are true, authentic student essays will practically never resemble the same students’ writing samples—meaning that the admissions officers’ ‘security’ plan will end up flagging almost every HONESTLY written essay, including ALL of the very best ones, as potentially suspicious. Needless to say, a 100 percent false-positive rate is a very bad outcome, so choice 3 completely obliterates (= very, very strongly weakens) any ostensible value of the plan.

Choice 3 is thus one of the three weakeners, which do not receive a mark in either column.

CHOICE 4: The writing sample is limited to objective analysis of factual statements that are provided with the prompt—making the applicant’s personality, values, experiences and knowledge all immaterial to the task. The schools’ application essays, on the other hand, are deeply personal reflections that require introspection into, and articulation of, the applicant’s fundamental values and priorities:

If choice 4 is true, then the schools’ application essays will constitute one of the few most intensely personal tasks that will ever be asked of the applicant in her/his lifetime—as wholly appropriate for a process whose results will largely determine the basic direction of the applicant’s remaining professional life—while the writing sample that admissions offices are proposing to use as a style reference sits all the way at the other pole of that spectrum: drearily objective, impersonal in the absolute, and self-contained (complete with canned facts) to ensure that nobody’s personal experiences even might crack open a window of insight into the prompt.

Variations in the writing style of any one person flow from differences in how she/he personally relates—or doesn’t—to various writing tasks. On that variable of “personal relation/engagement with the writing task”, the writing sample and the application essays differ as much as any two imaginable writing tasks possibly could, so, logically, we should expect the respective finished products to be as unlike each other, stylistically, as any two pieces of writing from that single applicant ever get (and moreover, just as for choice 3, the magnitude of the difference will probably be even greater for the very best essays submitted to each school). At bare minimum, the admissions officers’ assumption that the styles should match—closely enough to serve as a ‘two-factor authentication’ of sorts—is clearly not reasonable here.

Like choice 3, choice 4 tells us (among other things) that the admissions officers will flag almost every single authentic student essay, including all of the best ones, as ‘suspicious’ merely for differing so much from a 30-minute, template-based grind piece with zero personal significance to the writer. Choice 4 is therefore another ‘super weakener’ like choice 3.

CHOICE 5: When fed a moderate-sized sample of an individual’s writing, leading A.I. chatbots have typically developed the capacity to produce future compositions in a writing style that forensic linguistic analysts cannot reliably distinguish from the individual’s own:

Choice 5 says that A.I. chatbots can learn to write just like you—so much like you, in fact, that even professionals who make an actual living building arguments off of their analysis of writing samples will look at an A.I. output and think that it’s something you wrote yourself.

(This will only be true once you’ve fed a bunch of your own writing to the A.I. as training data. However, anybody using A.I. specifically to cheat on a writing task will absolutely always answer Yes to “Would you like the A.I. to write in a style that looks like yours?”—so all of the cheaters for choice 5 will cheat with an A.I. that has been given the required training data to “write just like them”.)

Therefore, if choice 5 is true then there will be no consistently identifiable differences between authentic student essays and the outputs of A.I. engines of the type described in choice 5. There just won’t be any workable way to tell them apart—not even for professionals who specialize in writing-sample-based detective work, so certainly not for random university adcomm officers.

This, in turn, automatically invalidates the officers’ plan (regardless of what the plan’s specifics may be). If the considerations above are true, then the A.I.-written impostor essays will be able to pass any test or criterion that authentic student essays can pass, and any test or criterion that flunks A.I. imitation essays will also flunk authentic student essays written in strict observance of the rules.

Choice 5 is thus the third ‘weakener’ (or, perhaps more accurately, destroyer) of the adcomm officers’ position, so it goes unmarked in either column.

The overall answer is choice 1 DEPENDS and choice 2 IRRELEVANT.
Attachment:
GMAT-Club-Forum-7w8seq0c.png
egmat (e-GMAT Representative)
Joined: 02 Nov 2011
Let me walk through this step by step.

The admissions officers' position rests on one core idea: compare the WRITING STYLE of the secure writing sample against application essays. If styles match, the applicant wrote the essays. If styles differ, suspect AI.

This logic has a hidden assumption: a human applicant's style should naturally be consistent across both tasks, while AI-generated text would stylistically diverge from the applicant's authentic sample.

Now let's classify each statement:

Statements 3, 4, and 5 all WEAKEN the officers' position:
- Statement 3: A 30-minute timed essay vs. 4-8 months of polished writing will naturally produce different styles — even from the same human. This creates false flags.
- Statement 4: Objective factual analysis vs. deeply personal reflection naturally demand different writing styles — again, legitimate style differences unrelated to AI.
- Statement 5: AI can actually mimic an individual's writing style so well that experts can't tell the difference. This means AI-written essays COULD match the writing sample, defeating the whole detection scheme.

Statement 2 is IRRELEVANT (Column A): Yes, applicants talk to students and alumni before writing essays, while the writing sample is solitary. But gathering ideas and perspectives from others affects CONTENT, not WRITING STYLE. The officers' test is purely about stylistic comparison. Whether you brainstormed alone or after 50 conversations, your sentence structure, word choice, and voice remain yours. This neither helps nor hurts their position.

Statement 1 is DEPENDS (Column B): If virtually all educated professionals write in 'corporatese,' then everyone's writing sample will be in that style. Now ask: does AI also default to corporatese?
- IF YES (AI converges on corporatese): AI-written essays would stylistically match the applicant's writing sample. The style test can't catch anyone. This WEAKENS the officers' position.
- IF NO (AI does NOT write in corporatese): AI-written essays would look stylistically different from the applicant's universally corporatese writing sample. The style test works perfectly. This STRENGTHENS the position.

So the correct pairing is Row 2 for Irrelevant and Row 1 for Depends.

Answer: 2A, 1B