ChatGPT composes everything from one-word responses to entire essays. But how does it actually work, and could its written responses replace human creations and ideas? Here’s what to know, according to scientists at the University of Texas at Dallas.
1. CHATGPT IS A 'LARGE LANGUAGE MODEL.'
ChatGPT is a specific kind of artificial intelligence called a “large language model,” according to Xinya Du, an assistant computer science professor at UTD. It’s been trained on a large amount of data and text, including code and information from the Internet.
If we ask ChatGPT what word follows in the phrase “Four score and seven years …,” it fills in the blank with the word most likely to go next, based on the data it’s been trained on.
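For readers who want to see that next-word guessing in action, here is a minimal sketch using the openly available GPT-2 model, an earlier relative of ChatGPT whose weights are public, through the Hugging Face transformers library. ChatGPT's own model can't be downloaded, so GPT-2 stands in purely for illustration.

```python
# A minimal sketch of next-word prediction, using the public GPT-2 model
# via the Hugging Face "transformers" library as a stand-in for ChatGPT
# (whose model weights are not publicly available).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Four score and seven years"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_tokens, vocab_size)

# The last position holds the model's scores for what should come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# Print the five most likely next words and their probabilities.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>10}  {prob.item():.3f}")
```

A well-trained model assigns most of the probability to a word like "ago," because that continuation appears so often in its training text.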
The tech that powers ChatGPT isn’t new: It builds on a series of earlier OpenAI language models dating back to 2018, including GPT, GPT-2 and GPT-3. ChatGPT is trained on an even bigger dataset and is presented on an easy-to-use website — both of which have helped fuel its online popularity.
2. IT CAN'T 'THINK' ON ITS OWN.
When asked “What is 2 plus 2?” ChatGPT came back with a swift answer: “The sum of 2 and 2 is 4.”
UTD scientists said ChatGPT doesn’t solve math problems by crunching numbers. Instead, when asked the above question, the chatbot determined the best way to answer based on what’s been previously said on the topic.
“Presumably, somewhere on the Internet, someone has typed the sequence, ‘What’s two plus two? Oh, that’s four.’ Whether that’s in some sort of chat, some sort of online forum, Quora question or something,” said Jessica Ouyang, a computer science professor at UTD.
This underscores a key limitation to ChatGPT: It’s only as smart as the data it’s been fed. The chatbot has been trained on data through 2021, and while that could change, it has “limited knowledge of world and events” since then, according to OpenAI’s website.
3. IT CAN BE USEFUL.
ChatGPT generates its answers one word at a time, Ouyang said, and scans the words it’s already produced to decide on the next one. This allows the chatbot to produce conversational sentences that make logical sense.
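To illustrate that word-by-word loop, the sketch below again uses GPT-2 as a public stand-in and simply picks the single most likely next token at each step (greedy decoding). The real chatbot samples from its probability distribution rather than always taking the top choice, but the loop structure is the same idea.

```python
# A simplified sketch of word-by-word (autoregressive) generation: at each
# step the model re-reads everything written so far and appends the single
# most likely next token. GPT-2 is used here as a public stand-in; ChatGPT
# samples from the distribution instead of always taking the top token.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

token_ids = tokenizer.encode("The sum of 2 and 2 is", return_tensors="pt")

with torch.no_grad():
    for _ in range(10):  # generate up to 10 more tokens
        logits = model(token_ids).logits
        next_id = logits[0, -1].argmax()  # most likely next token
        token_ids = torch.cat([token_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(token_ids[0]))
```

Each pass through the loop conditions on the growing text, which is how the chatbot keeps its sentences coherent from one word to the next.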
Gopal Gupta, a computer science professor at UTD, said ChatGPT can be useful for creative projects like poems or as an aid for legal briefs or emails. The chatbot can also generate formulaic posts or captions for social media and product descriptions.
Dale MacDonald, an associate dean of research and creative technologies at UTD, said he could see the chatbot being used to write copy for advertisements.
4. IT MAKES MISTAKES.
ChatGPT produces answers based on a huge amount of data, including information from the Internet. But, of course, not everything on the Internet is true.
If we ask ChatGPT to answer a scientific question, or even to write the intro to a scientific paper, it can produce an answer. But Gupta said it’s difficult to verify whether the information is true. Users can ask ChatGPT to cite its sources, but it can still produce attributions that may not be accurate.
Online sources can also be biased. According to OpenAI’s website, while the company has attempted to make ChatGPT refuse inappropriate requests, “it will sometimes respond to harmful instructions or exhibit biased behavior.”
5. IT'S NOT AS INTELLIGENT AS HUMANS.
For artificial intelligence to be as smart as humans, Gupta said, it needs to be able to reason.
As humans, we can make a decision without all the necessary information, Gupta said. Replicating this phenomenon in a computer is no small feat, and ChatGPT hasn’t accomplished it yet.
ChatGPT isn’t the first of its kind, and it likely won’t be the last. While it can carry a conversation or answer a question, it doesn’t “know” or “feel” anything — all it can do is fill in the blank, word by word.
It can also comment on its own limitations. When asked to end this article in one sentence, ChatGPT responded: “While ChatGPT can generate creative outputs and provide answers to certain questions, its limitations and potential for biased and inaccurate responses highlight the need for caution when using AI language models.”
Interested in learning more about ChatGPT? RSVP for an upcoming panel discussion hosted by the University of Texas at Dallas, in partnership with The Dallas Morning News.
“ChatGPT: Fact vs Fiction”
WHAT: A new generation of artificial intelligence tools can generate text that sounds more human than ever. What are its advantages and pitfalls?
WHEN: Thursday, March 2, 7-8:30 p.m.
WHERE: ATEC Lecture Hall at the University of Texas at Dallas
WHO: Panelists include UT Dallas computer science professors Jessica Ouyang, Gopal Gupta and Xinya Du, and Dale MacDonald, an associate dean of research and creative technologies.
RSVP: The event is free, and everyone is welcome.
©2023 The Dallas Morning News, Distributed by Tribune Content Agency, LLC.