Until last year, the dominant standardized test for applicants to a certain class of professional graduate schools included a 30-minute writing sample, which was passed on to admissions staff at each school to which the test taker eventually applied. Revisions made to this testing paradigm last year eliminated the writing sample, depriving admissions officers of a reference composition that, thanks to the test's robust identity verification procedures, was guaranteed to have been written by the named applicant.

Admissions officers at a number of these schools, increasingly concerned that applicants could enlist an artificial-intelligence (A.I.) chatbot to compose their application essays, have called for the test maker to produce a standalone version of its former writing sample, for applicants to take under the same identity verification and testing conditions as the test itself. While these officials certainly do not expect to catch all, or perhaps even most, applicants who attempt to outsource the creation of their application essays, they publicly maintain that the verified writing sample is an important 'security check': they would dismiss potential concerns about A.I. ghostwriting for applicants whose eventual application essays are, in a broad sense, stylistically parallel to the writing sample, while flagging for suspicion and further investigation any application essays that stylistically diverge from the named applicant's verified writing sample in any obvious way.

Below are three observations that (if true) weaken the public position of the admissions officers quoted above; one observation that neither weakens nor strengthens their position; and one observation that weakens their position IF the primary writing style of leading A.I. chatbots conforms to a certain set of generalizations, but strengthens that position otherwise.
Select Irrelevant for the consideration that neither strengthens nor weakens the admissions officers’ position, and Depends for the consideration that weakens the officers’ position if A.I. chatbots’ writing styles converge on certain paradigms but that strengthens it if they do not. Make only two selections, one in each column.
Official Explanation: The officers' position, in essence, is this: "If your application essays read like your GMAT or GRE essay, then we can be confident that you wrote them yourself. If they don't, then there's a significant likelihood that you outsourced them to a chatbot—something we'll have to look further into."
As with most lines of inferential reasoning, there are two potential ways to
weaken this one: “false positives” (essays that are
unlike the writing sample but are the applicant’s authentic work) and “false negatives” (essays that
resemble the applicant’s writing sample—leading the admissions staff to dismiss possible suspicions—but that were in fact
written by A.I.). Any feasible, reasonably straightforward path that will lead to large numbers of either of these will
WEAKEN the admissions officers’ position. To
STRENGTHEN the officers’ position, we need a credible reason why
NEITHER of the possibilities above will occur in large numbers.
Let’s examine what each choice does to the likelihood of “false positives” and/or “false negatives” as laid out above.
CHOICE 1: Highly educated aspiring professionals, despite having widely varying styles of personal expression in nonprofessional contexts, gravitate in their academic and career-related writing towards a highly specific, extensively socially conditioned writing style and prescribed hierarchy of values that has been dubbed "corporatese", with extremely few exceptions or significant departures from the socially conditioned paradigms: Standardized-test writing samples and graduate-school application compositions both fall under the umbrella of "academic and career-related writing". So, if the content of choice 1 is true, then the vast majority of applicants will write both of them in a heavily "corporatese" style IF they write them both themselves. Under these conditions, essentially all essays authentically written by the named applicant will be extremely close stylistic matches for the applicant's writing sample, because both will be written in "corporatese". Therefore there will be essentially no false positives.
What about false negatives (essays that were written by a chatbot, but that still closely match the style of the applicant’s own writing sample)? That’s going to depend fundamentally on how chatbots tend to write—specifically, whether the standard/primary/default writing style of A.I. chatbots also hews closely to the tenets of “corporatese” writing.
If chatbots write primarily in “corporatese”, then EVERYTHING—the writing samples, the application essays actually written by applicants, and the application essays pumped out by chatbots—will be written predominantly in “corporatese”, for practically ALL applicants. In this case, the writing sample will be of no value in calling out chatbot-written work, so
choice 1 weakens (actually not just weakens, but altogether
invalidates) the officers’ position
if the chatbots write corporatese text by default.
If chatbots write primarily in any non-“corporatese” style, then chatbot-created essays will stylistically clash with applicants’ writing samples (which will be corporatese)—whereas essays that are the applicants’ own honest work (which will also be corporatese) will read just like those samples. This is exactly what the officers are asserting will happen in both directions, so
in this case choice 1 STRENGTHENS the admissions officers' position. The
DEPENDS column should therefore be marked
for choice 1.
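The two-sided dependence in choice 1 can be sketched as a toy model. (Everything here is a hypothetical illustration: the function name, the single-label notion of "style", and the labels themselves are assumptions for demonstration, not anything stated in the passage.)

```python
# Toy model of the officers' screening rule, assuming (hypothetically) that
# each piece of writing can be reduced to a single dominant style label.

def flag_for_review(sample_style: str, essay_style: str) -> bool:
    """The officers' rule: flag any essay that diverges from the writing sample."""
    return essay_style != sample_style

# Scenario from choice 1: every applicant's own academic writing is "corporatese".
applicant_style = "corporatese"

# Case A: chatbots also default to "corporatese" -> every A.I.-written essay
# passes the check (a false negative for every cheater); the rule catches nothing.
ai_style_case_a = "corporatese"
case_a_catches_cheaters = flag_for_review(applicant_style, ai_style_case_a)

# Case B: chatbots default to some other style -> honest essays pass and
# A.I. essays are flagged, which is exactly the behavior the officers claim.
ai_style_case_b = "conversational"
case_b_passes_honest = not flag_for_review(applicant_style, applicant_style)
case_b_catches_cheaters = flag_for_review(applicant_style, ai_style_case_b)
```

Under case A the check is worthless (choice 1 weakens the position); under case B it behaves exactly as advertised (choice 1 strengthens it), which is why the answer depends on the chatbots' default style.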
CHOICE 2: Admissions officers encourage applicants to interact as widely as possible with their school's students, faculty, and alumni and to write application responses thoughtfully informed by those exchanges. The writing sample, on the other hand, is written by the applicant alone in a secured environment entirely devoid of human interaction: This consideration provides evidence that the
content of application essays can (...and should, according to the admissions staff themselves) be influenced by other people’s input, while the content of the writing sample cannot.
The
content of these compositions, however, has no relevance to the admissions officers’ position or plan. The writing sample is valuable (according to the quoted admissions officers) solely as a standard reference for each applicant’s writing
style—in other words, as a demonstration of HOW the applicant writes, not WHAT.
There is no inherent commonsense reason to suspect that feedback on the content of an essay, whether thoughtful or otherwise, will change anything
about the author’s
writing style—nor does the passage say anything explicitly to that effect. Choice 2 is therefore irrelevant to the admissions officers’ plans and goals.
The
IRRELEVANT column should therefore be marked for
choice 2.
CHOICE 3: Applicants are given just 30 minutes to read and process the prompt and then plan and write an essay for the writing sample, but are given 4 to 8 months to perform the same steps for application essays: A 30-minute writing assignment will never be more than a "rough draft" of the crudest imaginable kind. The writer must choose a topic and plan (in a broad-brushstrokes sense) the essay in the least time feasibly possible, in order to preserve enough time to physically type the essay with at least cursory attention to proper mechanics and clarity of meaning.
The topic of a 30-minute essay will therefore just be the first relevant illustration/argument/narrative to pop into the writer’s mind; the essay itself will be formulaic, in the literal sense (following some sort of formula or template that the writer will have memorized and rehearsed in advance), with important points stated simply—or even simplistically—either as audaciously sweeping generalities or as standalone anecdotes, and never anywhere between. Exceptions, nuance, complexity, and context dependence cannot be included because there just isn’t enough time for the writer to cover them fairly. Nor will the writer be able to enliven the most rhetorically important parts of the essay—such as its first and last few sentences—with idiosyncratic, creative, or unusually compelling turns of phrase; those parts will instead be written with crude, banal kludges, like “introductions” or “conclusions” that are actually just word-for-word repetitions of thesis points already stated elsewhere... among many other possible adaptations to the unforgivingly short timeframe of the assignment.
In short, a 30-minute essay must be stripped entirely of everything that constitutes the
craft or
art of writing, with all those things replaced by literal formulas and templates that, at the cost of being boring, simplistic and trite, at least ensure that the applicant will put actual words on the screen quickly
enough to finish within half an hour.
All of the above represent stylistic differences between 30-minute essays and compositions on any more ‘normal’ timeline (months, weeks, or even days). With several
months to prepare an essay of potentially great significance (admissions essays could become the decisive factor in charting one’s entire remaining career path), applicants will make every effort to craft essays that have none of the above characteristics of 30-minute essays. Instead of just going with the first viable topic that floats to mind, applicants can spend days or weeks brainstorming, narrowing and then finally selecting the topics that they are best able to develop in relation to the prompts. They can put drafts through as many revisions as needed to arrive at final versions that are truly their best work and that bring their candidacies to life in the minds of admissions staff. In sum, they can—and should—produce essays that are
crafted, in all the ways that 30-minute compositions cannot possibly be.
Therefore, if the statements in choice 3 are true, authentic student essays will practically never resemble the same students’ writing samples—meaning that
the admissions officers' 'security' plan will end up flagging almost every HONESTLY written essay, including ALL of the very best ones, as potentially suspicious. Needless to say, a 100 percent false-positive rate is a very bad outcome, so choice 3 completely obliterates (that is, very strongly weakens) any ostensible value of the plan. Choice 3 is thus one of the three weakeners, which do not receive a mark in either column.
Choice 4: The writing sample is limited to objective analysis of objective, analytical prompts that are presented together with all facts of any possible relevance—making the applicant's personality, values, life experience and even aggregate factual knowledge all immaterial to the task. The schools' application essays, on the other hand, are deeply personal reflections that require introspection into, and frank articulation of, the applicant's fundamental core values and primary life goals: If choice 4 is true, then the schools' application essays will constitute one of the few most intensely personal tasks that will ever be asked of the applicant in her/his lifetime—as is wholly appropriate for a process whose results will largely determine the basic direction of the applicant's remaining professional life—while the writing sample that admissions offices are proposing to use as a style reference sits all the way at the other pole of that spectrum: drearily objective, impersonal in the absolute, and self-contained (complete with canned facts) to ensure that nobody's personal experiences even
might crack open a window of insight into the prompt.
Variations in the writing style of any one person flow from differences in how she/he personally relates—or doesn't—to various writing tasks. On that variable of "personal relation/engagement with the writing task", the writing sample and the application essays differ as much as any two imaginable writing tasks possibly could, so, logically, we should expect the respective finished products to be as
unlike each other, stylistically, as any two pieces of writing from that single applicant ever get (and moreover, just as for choice 3, the magnitude of the difference will probably be even greater for the very best essays submitted to each school). At bare minimum, the admissions officers’ assumption that their styles should
match—closely enough to serve as a ‘two-factor authentication’ of sorts—is clearly not reasonable here.
Like choice 3, choice 4 tells us (among other things) that the admissions officers will flag almost every single
authentic student essay, including
all of the best ones, as ‘suspicious’ merely for differing so much from a 30-minute, template-based grind piece with zero personal significance to the writer. Choice 4 is therefore another ‘super weakener’ like choice 3.
Choice 5: When fed a moderate-sized sample of an individual's writing, leading A.I. chatbots have typically developed the capacity to produce future compositions in a writing style that forensic linguistic analysts cannot reliably distinguish from the individual's own: Choice 5 says that A.I. chatbots can learn to write
just like you—so much like you, in fact, that even professionals who make an actual living building arguments off of their analysis of writing samples will look at an A.I. output and think that it's something you wrote yourself.
(This will only be true once you've fed a bunch of your own writing to the A.I. as training data. However, anybody using A.I. specifically to cheat on a writing task will absolutely always answer Yes to "Would you like the A.I. to write in a style that looks like yours?"—so all of the cheaters for choice 5 will cheat with an A.I. that has been given the required training data to "write just like them".)
Therefore, if choice 5 is true then
there will be no consistently identifiable differences between authentic student essays and the outputs of A.I. engines of the type described in choice 5. There just won’t be
any workable way to tell them apart—not even for professionals who specialize in writing-sample-based detective work, so certainly not for random university adcomm officers.
This, in turn, automatically invalidates the officers' plan (regardless of what the plan's specifics may be). If the considerations above are true, then the A.I.-written impostor essays will be able to pass any test or criterion that authentic student essays can pass, and any test or criterion that flunks A.I. imitation essays will also flunk authentic student essays written in strict observance of the rules.
Choice 5 is thus the third ‘weakener’ (or, perhaps more accurately, destroyer) of the adcomm officers’ position, so it goes unmarked in either column.
The overall answer is
choice 1 DEPENDS and
choice 2 IRRELEVANT.