
Let’s Chat about ChatGPT

By Caroline Barnhill

A few decades ago, the idea of having a robot magically write your term paper was the wish of many a college student. Today? It doesn’t feel like such a pipe dream. Artificial intelligence is already being widely used across industries – so it was only a matter of time before it worked its way into the educational and professional creative landscape.

We sat down with Bill Rand, McLauchlan Distinguished Professor of Marketing and executive director of the Business Analytics Initiative at Poole College of Management, to learn more about ChatGPT, its risks and what educators need to keep in mind going forward.

What is ChatGPT?

ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot launched by OpenAI, an artificial intelligence (AI) research lab, in November 2022. It was trained on large amounts of text data using machine learning techniques, which allows it to string together words in a meaningful way. You can ask it a question and the tool generates an answer based on patterns in the data it was trained on.
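
For readers curious about what asking the underlying model a question looks like in code, here is a minimal sketch using OpenAI’s Python client. ChatGPT itself runs in a web interface; the model name, prompt and setup below are illustrative assumptions, and the call requires an API key in the OPENAI_API_KEY environment variable.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Ask the chat model a question and print its generated answer.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name for illustration
    messages=[
        {"role": "user", "content": "Give me a five-paragraph summary of Thoreau's Walden."}
    ],
)
print(response.choices[0].message.content)
```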

Can you give us some context on what led to the debut of ChatGPT?

There has been a natural evolution of AI. It first showed large-scale promise in the early 2000s with computer vision, a branch of AI that essentially taught computers how to “see” and identify objects. We still have lots of applications of that today, such as driverless cars. From there, using those same technologies, there was a shift toward developing artificial neural networks – systems built to imitate the human brain. This led to a large interest in generative AI systems that can read a block of text and formulate something similar to it. Later, those efforts expanded to help those models retain what they learned and apply that information to future attempts. So basically, that’s the evolution you see from OpenAI’s GPT-1 to GPT-2 to GPT-3 to GPT-3.5, which is the series under which ChatGPT was developed.

What kinds of things can you ask ChatGPT?

Anything. You can ask ChatGPT to give you a five-paragraph summary of Thoreau’s Walden, or you could be like my daughter, who asked it to generate a song about donkeys with big butts. And interestingly, its answers will never be the same. So the song it generates for me about donkey butts would be totally different from my daughter’s song. However, ChatGPT generates language based on the compilation of information it has found – even if that information isn’t correct. This means that ChatGPT can provide a long, detailed summary of a book that’s completely wrong, since results are based on data that was scraped from all over the web. While pieces of information might seem correct or relevant, the tool cannot guarantee that it’s accurate.

A lot of academics are really concerned about ChatGPT and how it might lead to issues such as plagiarism. What are your thoughts?

From what I’m hearing, ChatGPT has English departments across the country stressed! As I mentioned before, this tool has the ability to write good responses to any prompt you ask it. Even though it’s trained from data on the internet, it generates its own text that is based on that training and so doesn’t perfectly match any previous document. As a result, plagiarism detectors won’t be able to pick up on it. Even if the style is similar to something else that was written, it won’t be copied verbatim.

But fear not. Like I said, ChatGPT may be able to provide a summary of a book, but that summary might be completely inaccurate. Also, developers have created something called GPTZero, which detects writing that was autogenerated. 

This semester, I explained in my syllabus that students can’t use autogenerated text without my permission. I also warned them that I may run their responses through a GPT detector and if I discover that they used an AI tool, they’ll have to redo it as a verbal assignment. On the other hand, I have colleagues who teach AI courses and they are actually encouraging their students to use these tools. Personally, I’m not particularly worried about it being an issue, but I wanted to put a statement about it in my syllabus to keep my students from lazily using AI tools as a substitute for real learning. 

How do you see ChatGPT being used in higher education in the future?

I posted about ChatGPT on my social media feeds and I’ve gotten some interesting feedback, especially from colleagues and friends of mine who are educators. Some of them are doing cool things – like asking their students to use ChatGPT to generate an essay, and then having them critique, review and rewrite the essay themselves. That’s a great use of ChatGPT as a teaching tool. 

Beyond that, for any kind of field where you have to generate and create a lot of content, ChatGPT could help. As professors, we often limit the scope and amount of writing that we assign to students because we don’t want them overwhelmed by the task of putting sentences together when the idea is to teach broader content. In the same way that calculators help us do math faster, ChatGPT helps us generate text faster… with the caveat that it may be incorrect. 

How might you use ChatGPT in one of your marketing courses?

In my digital marketing class, my students have to turn in examples of digital marketing content they developed for a client. They could potentially use ChatGPT or a similar tool to generate 100 Instagram posts for the company and then select the best 25. Having a broader base to pull from gives them a better idea of how a social media campaign could run. The same goes for content creators in the real world. They can use ChatGPT to help them generate content, which they then edit and refine. It elevates everyone, shifting their role from writing content to editing and refining it.
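
As a rough illustration of that workflow, here is a hedged sketch of how the “generate many drafts, keep the best” idea might look with OpenAI’s Python client. The model name, prompt and helper function are assumptions for illustration, not part of the course assignment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_instagram_posts(brand_brief: str, n: int = 100) -> list[str]:
    """Generate n candidate Instagram captions from a short brand brief."""
    drafts = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model name; any chat model would work
            messages=[{
                "role": "user",
                "content": f"Write one short, upbeat Instagram caption for this client: {brand_brief}",
            }],
            temperature=1.0,  # a higher temperature encourages varied drafts
        )
        drafts.append(response.choices[0].message.content)
    return drafts

# The student (or content creator) then reviews the drafts and keeps the best 25 by hand.
candidates = draft_instagram_posts("a local coffee shop launching a fall menu")
```

The human selection step is the point of the exercise: the model supplies volume and variety, while the student supplies the judgment about which drafts are worth keeping.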

What are the biggest pitfalls of ChatGPT in its current form?

As I already mentioned, the responses generated by ChatGPT are not necessarily accurate – because accuracy wasn’t the goal of the tool. Also, bias is always an issue with AI. Machines don’t create knowledge on their own and it’s the same with ChatGPT. It simply reflects the knowledge that’s available on the internet back to us. As a result, any biases present on the internet are going to be present in its responses. 

Lastly, it’s random. Every time you pose a question, you’re going to get a different answer. And the quality of the answer could vary widely. So, when it comes to content creation, we still need a human. Sure, you can ask ChatGPT to write you a song about donkey butts and use the first one that’s given – but it might be the 65th response that’s going to be a viral hit!

This post was originally published in Poole Thought Leadership.