This post originally appeared on the Christian Scholar’s Review blog and is reproduced with permission.
In one of Grammarly’s ubiquitous advertisements, a hapless writer uses the popular AI-based editing software to improve an interpersonal communication. The furrowed brows accompanying the initial attempt (“we might be able to find a way to make this happen”) give way to gratified smiles as the sentence becomes “we can find a way to make that happen.” The “uncertain” version is replaced by the Grammarly-approved “confident” version. The advertisement (88 million views at the time of writing) is headed “Get Your Tone Just Right with Grammarly.” Getting rid of the signals of uncertainty matters, the narrator admonishes, “because when you write with just the right confident tone, you can meet any challenge to move your project forward.” This version of the Grammarly narrative is particularly direct, and Grammarly does offer support for a range of tones, but it is not the only ad that presents qualifiers of truth claims as “unnecessary words” earmarked for prompt deletion.
Grammarly helps with grammar, but its marketing messages foreground its ability to shape the tone of one’s self-presentation through language, whether at work or at school. This is, of course, an important skill and an area in which software can assist self-awareness. What is hard to know from the ads is whether these straying writers are being saved from the vice of pusillanimity or drawn into the vice of vainglory. According to Grammarly’s pithy parables of AI-assisted self-renewal, what matters for success is less the precise truth of our words than our willingness to make concisely confident claims.
A recent incident with another AI editor similarly caught my attention. I had a passage from Paul’s first epistle to the Corinthians open on my computer when Microsoft’s AI “Editor” suggested an improvement to the Apostle’s unhappy prose. Like Grammarly, the Editor embedded in Microsoft Word claims not only to catch errors in grammar and spelling but to offer “suggestions for refining your writing.” Here is the passage to which it took exception:
The eye cannot say to the hand, “I do not need you,” nor in turn can the head say to the foot, “I do not need you.” On the contrary, those members that seem to be weaker are essential, and those members we consider less honorable we clothe with greater honor… (1 Corinthians 12:21-23a, NET)
The Editor highlighted the words “seem to be” in the phrase “seem to be weaker” and displayed a warning: “Conciseness: Words expressing uncertainty lessen your impact.”
Bear with me while I throw conciseness to the winds, sinner that I am, and dwell in more detail on this one correction before circling back around to bigger questions.
First, notice that the proposed edit would significantly change the meaning of Paul’s statement. It seems to me (I express in these words an unrepentant degree of uncertainty) that Paul is here delicately balancing two matters. On the one hand we have an affirmation that every member of Christ’s body has an inalienable value that is not grounded in their own attractiveness, identity, or achievement. On the other hand, we have the stubbornly persisting human experience that some of those around us feel much easier to respect, like, and get along with than others.
If we were to ignore for the moment pesky details like the original Greek and go with the Editor, Paul would say “those that are weaker are essential.” In doing so he would join forces with the impulse to judge. His argument would collapse into something like “yes, there really are undesirable people in the church, but you have to put up with them.” This version would do little to challenge our condescension toward those we perceive as less smart, less able, and less worthy because it would affirm the accuracy of our perception while asking us to be noble about it. Paul’s hints of uncertainty in his actual sentence insert gaps between perception and reality. His verbal sidesteps (those who “seem to be” weak, whom “we consider” less honorable) create space for a different train of thought. They suggest that our prejudicial value judgments are not only subjective but a false representation of the true map of value. Those others who seem to be less are in fact essential. This way of putting it is not a condescending accommodation for those who just scraped in alongside us more worthy types, but a basic reality that invalidates our superior airs. Try applying both versions of the sentence to differences based on, say, disability, race, or gender, and you will quickly see why this distinction matters quite a lot. The suggested edit would distort the passage, a point missed by the simple equation of “conciseness” with good writing.
There is a second problem. The rationale that the Editor offers for the suggested change is false. It is sometimes true that “words expressing uncertainty lessen your impact,” but only sometimes and in certain circumstances. Sometimes expressing uncertainty is exactly what is needed. Sometimes it may increase impact by successfully communicating vulnerability, or fragile hope, or by making an argument more precise and pointed. Like the Grammarly ads, the Editor has swallowed its own advice. It could have offered a more accurate (but presumably less “impactful”) version: “words expressing uncertainty might sometimes lessen your impact but are also often necessary and appropriate.” One important feature of good academic writing is precision, claiming only as much as is warranted. (If this were a standard feature of other kinds of writing, I would have purchased fewer things that did not do quite what was promised.) Such precision positively requires facility with expressions of degrees of uncertainty. The Editor does not seem very invested in offering that kind of help.
And then there is the infamous ChatGPT and the specter of the AI doing the writing for our students wholesale. I first tried this out via the GPT-3-based “personal assistant” in the Craft writing app. Like the much-debated ChatGPT interface, the Craft AI assistant can generate essays or other texts based only on a provided topic. For amusement, I asked it to write an essay on chapter 4 of the Didactica Magna. The book is a seventeenth-century educational treatise by Comenius, and the chapter was fresh in my memory from working on a translation of it. It seemed obscure enough that it would not be well represented in the AI source corpus. I gave the AI assistant two attempts.
The second AI draft successfully identified the author of the book (the first got it wrong). Both drafts offered some broadly accurate background information. The prose flowed well. When it came to the specific topic, however, the app invented from whole cloth a fictional account of the content of chapter 4. This, of course, matches the experience of others with the current iteration of AI writing, and my aim in mentioning it here is not to point out one more time that the content can be unreliable. What struck me instead was that, faced with a lack of source information, the text generator did not seem to have available such moves as admitting a lack of knowledge, recommending further investigation, or expressing degrees of uncertainty. It modeled a more extreme version of the Grammarly marketing ethos: if you don’t know the answer, make something up but state it directly and confidently, without equivocation.
The media kerfuffle around ChatGPT can give the impression that it is a unique harbinger of change, yet its derivatives and cousins are already embedded in our basic digital writing tools. What has been intriguing me is less the much-discussed possibility of students cheating and more the question of what kind of writing may be encouraged by AI assistants. Can an AI writing tool model or foster intellectual humility and a care for precision with the truth? Can a Christian concern for such matters withstand an apparently pervasive cultural emphasis on confident self-promotion? Can we respond to AI developments with something more pedagogically constructive than broad-brush fear that our students will cheat?
Raising such questions is not in itself an argument for or against AI text editing tools. It is just a reminder that our digital tools, like our tools in general, are not all-purpose. They provide us with specific affordances. Those affordances nudge us toward specific habits of work. Those habits of work influence the way we move through the world and the personas that we shape. Against that background, I find myself wondering whether the ChatGPT debates might be obscuring a less obtrusive transformation: the reshaping of the selves we perform through the slow drip of a thousand tiny edits and a countless host of ads for grammar checkers. Even as AI helps students with some verbal skills that we have long labored to teach, it may be creating a further need to more intentionally teach the skills involved in critically processing its suggestions.