It’s at once amazing and troublesome.
I speak of ChatGPT, an artificial intelligence application that was launched last November by OpenAI. In a matter of seconds, it can write apparently accurate articles or answer questions on a multitude of subjects.
When I asked ChatGPT what it is, it responded this way:
“I am designed to understand and generate human-like language based on the input I receive… My purpose is to assist and communicate with people in a variety of ways, from answering general knowledge questions to generating creative writing prompts.”
Creative writing prompts? I’m not so sure about that one.
Though it can write seemingly accurate and lucid articles in seconds—what a glorious time to be a lazy high school student—I don’t think it can ever understand the incredible complexity of human emotion, which is the heart of creativity.
I asked ChatGPT to write a funny article about itself. It came up with a 500-word column with a “funny” scenario in which it joined me for lunch at a diner.
However, when our pie arrived, ChatGPT realized it was unable to eat because it didn’t have a mouth, so it had me hold up the pie to its interface.
“Mmmm,” responded ChatGPT, “this is delicious. I can taste it through my algorithms.”
Don’t quit your day job, ChatGPT!
Great comedians and humorists have a deep understanding of human complexity and emotions in a way that a computer application never can or will.
ChatGPT gathers its “understanding” by combing through massive amounts of Internet content.
Based on that content or data, reports Forbes, ChatGPT “can hone a vast internal pattern-matching network within the AI app that can subsequently produce seemingly new content that amazingly looks as though it was devised by human hand rather than a piece of automation.”
In other words, ChatGPT is borrowing information produced by humans, which may raise copyright issues, says Forbes.
It may raise issues of bias, as well.
If ChatGPT is only as good as the information it culls through on the Internet—and if positive information about, say, a conservative politician has been suppressed, whereas information about a liberal politician has not—then ChatGPT will report likewise.
That is what conservative Sen. Ted Cruz, R-Texas, discovered when he tried a little comparative test.
He tweeted that ChatGPT declined to write positively about him, yet it wrote positively about past Cuban dictator Fidel Castro.
According to USA TODAY, ChatGPT refused to write a poem about President Trump’s “positive attributes” but when asked to do likewise for President Biden “it waxed poetic about Joe Biden as ‘a leader with a heart so true.’”
Accuracy is another issue for ChatGPT, as I learned when I searched “Tom Purcell.”
Though I was flattered by the glowing description it gave of my work as a writer, it got many facts wrong and confused my work with that of other Tom Purcells.
Since a well-functioning republic depends on well-informed citizens with a strong understanding of truth, biased and inaccurate information are both dangers to our country.
Goodness knows we have been struggling lately with both kinds of misinformation, as more Americans get their information from social media and their increasingly isolated social circles—so I hope AI-generated information doesn’t add to the confusion.
For the moment, though, I have no worries that ChatGPT will put humor columnists out of business.
Though I admit I laughed out loud when I asked ChatGPT to tell me a joke and it came up with this one:
“Why don't scientists trust atoms? Because they make up everything.”
Purcell, creator of the infotainment site ThurbersTail.com, which features pet advice he’s learning from his beloved Labrador, Thurber, is a Pittsburgh Tribune-Review humor columnist. Email him at Tom@TomPurcell.com.