Is ChatGPT Good or Bad for Higher Ed?

By Mark J. Drozdowski, Ed.D.
Updated on January 26, 2023
Higher education is in panic mode over this new chatbot, though the angst may be somewhat overblown.
Image Credit: Gabby Jones / Bloomberg / Getty Images

  • Released last year, ChatGPT is a free chatbot that provides answers and solutions in a conversational tone.
  • Some faculty fear the tool will enable students to cheat by passing off bot-generated content as their own.
  • Others believe it can help students formulate ideas and hone their writing skills.
  • A test drive of ChatGPT demonstrated its abilities and limitations.

As a writer and writing professor, I’m both fascinated and frightened by the new artificial intelligence tool ChatGPT.

This chatbot can generate complex text far faster than I can — and do a pretty good job of it. College students naturally have caught on to this, using ChatGPT as a shortcut to completing writing assignments. Some might prefer the term “cheating.”

I took ChatGPT for a test drive to find out just how worried I should be.

What Is ChatGPT?

Launched last November by the artificial intelligence research company OpenAI, ChatGPT (the clunky name is short for “generative pre-trained transformer”) is a free language model chatbot designed to interact with users in a conversational manner.

It can answer questions, write an essay, craft a poem, produce a recipe, compose song lyrics, fashion a cover letter, and explain complex math problems, among other tasks.
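ChatGPT itself lives in a web browser, but OpenAI also exposes its models programmatically. Here’s a minimal sketch of a query, assuming the pre-1.0 version of OpenAI’s Python package and an API key of your own; the model, prompt, and parameters are illustrative, not anything drawn from this article:

```python
import openai  # OpenAI's official Python package (pre-1.0 interface)

# ChatGPT runs as a web app; this sketch queries a sibling model
# (text-davinci-003) through OpenAI's completions API instead.
openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain legacy admissions in two sentences.",
    max_tokens=120,
    temperature=0.7,  # higher values make the output more varied
)

print(response.choices[0].text.strip())
```

The temperature parameter controls how much the output changes from run to run, which is roughly what you see when you click “regenerate” on the website.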

It’s the next evolutionary step in predictive text systems. When writing an email or a text message, you’ve no doubt seen your computer or phone trying to anticipate the next word or phrase. This technology can be remarkably accurate and quite often annoying.
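To see the guessing game underneath, here’s a toy version of next-word prediction in Python: a bigram model that simply counts which word tends to follow which. ChatGPT replaces the counting with a neural network trained on billions of words, but the core task, predicting the next token, is the same:

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in a corpus,
# then suggest the most frequent follower.
corpus = "the cat sat on the mat and the cat slept on the couch".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed word after `word`."""
    candidates = followers.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("the"))  # -> "cat" (seen twice after "the")
print(predict_next("sat"))  # -> "on"
```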

ChatGPT is not a search engine like Google or Yahoo. It doesn’t scan the internet in real time for information. Instead, it generates responses from patterns it learned during training on a huge body of text: books, Wikipedia entries, articles, Reddit conversations, and other forms of writing. All told, more than 300 billion words went into its artificial brain.

Given the immense amount of misinformation found on the web, though, ChatGPT can provide erroneous answers. When I asked it a simple question about a book I wrote a few years ago, it offered a mix of correct and incorrect content — and different wrong answers each time I told it to “regenerate” its response.

To its credit, the tool knows its limitations, offering a disclaimer warning users about potential inaccuracies. Another drawback is that its knowledge doesn’t extend beyond 2021, so it can’t comment on recent events. Don’t ask it about the death of Queen Elizabeth II.

Yet, it can synthesize information and produce thoughtful content incredibly fast, and its writing chops are pretty decent, even if some copy feels rather stilted. The New York Times calls it “the best artificial intelligence chatbot ever released to the general public.”

No wonder it attracted more than 1 million users in the first five days following its release. In fact, it’s proven so popular that it’s frequently at capacity and turns away new sessions, at least in my experience.

Whether this new technology is exciting or terrifying depends on your perspective. But when a Washington Post article on ChatGPT includes a statement verifying that the piece was indeed written by reporters and not the tool itself, we know we’ve crossed some sort of technological threshold and entered a new dimension where the demarcation between reality and science fiction is dangerously blurred.

Should ChatGPT Worry College Professors?

During its first few months, ChatGPT has caused much hand-wringing among faculty, who naturally are anxious about the technology’s potential to disrupt teaching and learning and offer students easy shortcuts. Yet the chatbot itself doesn’t believe faculty have much to worry about.

Here’s what it had to say:

It is unlikely that ChatGPT or other language models should cause concern for college professors. While language models like ChatGPT have the ability to generate human-like text, they do not possess the ability to think or understand the context of what they are writing.

In fact, ChatGPT produced several paragraphs downplaying the threats college faculty perceive.

And that’s perhaps a good thing. The tool can function as a starting point, offering basic thoughts to nudge users in certain directions. But some faculty fear students will instead pass off generated content as their own.

“Academia didn’t see this coming,” said Furman University philosophy professor Darren Hick, “so we are blindsided. I reported [a cheating incident] on Facebook, and my friends said, ‘Yeah, I caught one too.'”

Submitting an essay generated by ChatGPT certainly constitutes cheating, but it’s not exactly plagiarism unless we anthropomorphize the bot. Tools such as Turnitin, a plagiarism-detection service, won’t prove useful in these cases.

Enter Edward Tian, a computer science and journalism student at Princeton, who created a program called “GPTZero” that detects machine-generated copy. More tools like his will likely crop up over the next few months, and Turnitin has announced it will incorporate AI detection into its own service.
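Tian has said GPTZero leans on measures like “perplexity,” meaning how predictable a passage is to a language model, since machine-generated prose tends to be more uniformly predictable than human writing. Here’s a rough sketch of that idea using the freely available GPT-2 model through Hugging Face’s transformers library; this is my illustration, not GPTZero’s actual code:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Score text by perplexity under GPT-2: lower perplexity means the text
# is more predictable to the model, which detectors treat as a hint that
# it may be machine-generated. (GPTZero also weighs "burstiness," the
# variation in predictability across sentences.)
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the inputs as labels makes the model report its own
        # average next-token loss; exponentiating gives perplexity.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

essay = "Higher education can be considered both a public and a private good."
print(f"Perplexity: {perplexity(essay):.1f}")
```

Scores like these are hints, not proof, which is one reason professors remain wary of relying on detectors alone.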

Other faculty, including Marc Watkins, a writing professor at the University of Mississippi, are less worried.

“Let me calm your fears,” he wrote in Inside Higher Ed. “Students … will come to use this technology to augment the writing process, not replace it.”

The situation, some say, is akin to the advent of the handheld calculator, which didn’t eliminate math education but merely enhanced it.

But when AI-generated essays can slip through the professorial cracks and earn passing grades, something has to change. One tactic faculty are adopting is crafting writing assignments too nuanced for bots to handle, playing on ChatGPT’s own admission that it can’t think on its own or understand context.

“Gone are prompts like ‘write five pages about this or that,'” wrote Kalley Huang in The New York Times. “Some professors are instead crafting questions that they hope will be too clever for chatbots and asking students to write about their own lives and current events.”

Another way to outmaneuver students is to have them write assignments in the classroom while monitoring their computer activity. That might work in some cases, but given that 3 in 4 students now take at least one online course, such oversight isn’t always possible.

I teach online, and all my writing assignments are asynchronous. Will the emergence of AI cause me to revise my assignments? Perhaps, if I detect an influx of bot-generated content, assuming I can.

For now, I’ll side with those who adopt an optimistic stance and believe AI can be a complementary tool, helping writers of all skill levels generate ideas, overcome writer’s block, and hone their craft.

“Once we bring the hype around ChatGPT into line with reality,” Amherst College English professor Christopher Grobe wrote in The Chronicle of Higher Education, “we can grapple with the true effects of generative AI: Far from replacing human intelligence, it will provide new starting points for some of the processes we routinely use to think.”

AI’s Potential to Upend Journalism

What concerns me more, honestly, is the potential of AI infiltrating writing and journalism and rendering writers obsolete.

Since 2016, The Washington Post has used an in-house bot called Heliograf to churn out stories, covering topics such as the Olympic Games and congressional elections. Who needs journalists when bots can do the job faster and cheaper, presumably with an acceptable degree of accuracy?

We may become as scarce as Walmart cashiers.

Yet surely ChatGPT couldn’t generate a thoughtful article on a topic as complex as legacy admissions, could it? Well, here’s what it had to say:

The use of legacy admissions, which gives preferential treatment to applicants who have family members who have attended the college or university, is a controversial topic. Supporters argue that legacy admissions help to maintain the institution’s traditions and foster a sense of community. Opponents argue that legacy admissions can perpetuate privilege and disadvantage qualified applicants from underrepresented groups.

And so forth.

Or how about the philosophical conundrum of whether higher education is a public or private good?

Higher education can be considered both a public and a private good.

From a public good perspective, higher education is considered a public good because it benefits society as a whole. It can lead to a more educated and skilled workforce, which can result in economic growth, higher productivity and innovation, and a more informed citizenry. Higher education can also lead to social and political benefits, such as greater civic engagement and reduced crime rates.

From a private good perspective, higher education is considered a private good because it provides benefits to the individual who receives it, such as increased earning potential, improved job prospects, and personal growth and development. Higher education can also provide social and cultural benefits to the individual, such as broadening perspectives and increasing opportunities for social and professional networks.

Not exactly riveting copy, but it’s certainly passable and technically correct. In time, will people researching such topics still consult journalists and other experts or simply type their queries into bots like ChatGPT and accept whatever consensus-driven results they spit out?

If the goal is to glean “information” that’s homogenized and filtered through algorithms, then ChatGPT and its successors will suffice. But anyone who craves original thinking, a unique voice and style, and the opportunity to connect their mind with someone else’s will likely find a bot’s offerings rather stale and uninspiring.

I suppose this means I can rest easy knowing that, at least for the immediate future, people will keep coming to websites like this one to get what they want. The human race thus far remains undefeated.