
Computers Can’t Tell Jokes

“If you could master any language in the world, what would it be?” “C++.”

It’s a classic programming joke. The humor is in the irony: to a programmer, language skills matter less than technical ones. Humor, I’m told, doesn’t thrive in technology. Computers can’t understand it.

And, some would argue, neither can engineers.

But the computer bit is not quite accurate. Chatbots based on large language models, like ChatGPT, don’t understand things the way we do.

Yet with enough data, they can communicate like us. They can even repeat jokes when prompted, just like some older people I know.

So they might be able to tell jokes, but only if they’ve heard them before. After all, chatbots predict the next word in a phrase based on billions of similar inputs.
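
To make that claim concrete, here is a deliberately tiny sketch in Python of next-word prediction from counted examples. Everything in it, the corpus, the predict function, the weights, is invented for illustration; real chatbots use neural networks trained on billions of examples, not a lookup table like this one.

```python
import random
from collections import Counter, defaultdict

# A toy sketch of the column's claim: predict the next word from counts
# of what has followed it before. (Hypothetical data; real language
# models are neural networks, not frequency tables.)
corpus = (
    "if you could master any language in the world what would it be "
    "when life gives you lemons make lemonade "
    "when life gives you lemons make a whiskey sour"
).split()

# Count, for each word, which words have followed it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    """Return a likely next word, or None if we've never seen this word."""
    counts = following.get(word)
    if not counts:
        return None
    # Sample in proportion to frequency, so "lemons" usually leads to
    # "make" but the model can still surprise us now and then.
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict("gives"))   # -> "you"
print(predict("klaxon"))  # -> None: it can only repeat what it has heard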

In some sense we do this too. Practice makes perfect. Knowledge is power. When life gives you lemons… make a whiskey sour.

When life gives a language model a few lemons, it probably won’t mix a whiskey sour. For humans, A is not always A. For computers, it must be.

Unless told otherwise a hundred times, a computer will always make lemonade out of lemons.

So computers don’t work with words the way you do, unless, before answering, you compare every phrase you hear against a hundred others you’ve heard before (and throw in something random every now and then).

You could argue that humans, too, create meaning from what we’ve heard before, with some variation. Even so, we can’t reduce language to data processing, especially when the data consist only of the written word.

There is more to a joke than its wording. Language is more than writing. It flows, it stops short, and it carries gestures and sounds that are not words.

Some sounds are more enjoyable to people than others. “Klaxon” is funny. “Stolid” is not. “Diphthong” is funny. “Melody” is not. “Whack” is funny. “Antidisestablishmentarianism” is long.

Why are they funny? Maybe short words with a hard “k” sound and a stressed first syllable are inherently funny to people. Maybe that’s only true for English speakers. Or maybe none of these explanations is any good.

In any case, computers don’t interpret “klaxon” as anything more or less than a sequence of characters, while we associate it with an image, a sound (awooga!) and much more.

When people are funny, they think about how a word is spelled, how it sounds, its denotation and connotation, where it fits best in a phrase, whether to include some Latin to sound more impressive, et cetera.

A computer doesn’t take these things into account. It just grabs a bunch of phrases someone marked as “humorous” somewhere and combines them with certain probabilities.
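
That caricature can even be sketched literally. Assuming a hypothetical list of phrases someone has tagged “humorous,” with made-up weights, the column’s strawman of a joke generator might look like this; needless to say, this is the caricature taken at face value, not how ChatGPT actually works.

```python
import random

# The column's caricature, taken literally: phrases someone, somewhere,
# tagged "humorous," each with an invented probability weight.
# (Hypothetical data; actual language models store no such phrase list.)
tagged_humorous = {
    "klaxon": 5,
    "whack": 4,
    "when life gives you lemons": 3,
    "antidisestablishmentarianism": 1,
}

def combine(n=3):
    """Glue together n 'funny' phrases, sampled with the given weights."""
    phrases, weights = zip(*tagged_humorous.items())
    return " ".join(random.choices(phrases, weights=weights, k=n))

print(combine())  # sometimes funny, other times completely unreadable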

The result should be funny. Sometimes it is. Other times it is completely unreadable. Humans tend not to have the latter problem.

And they don’t tend to throw words together randomly either. Not unless they’re on their first date.

Still, despite all that, I admit it is difficult to draw the line where human communication ends and computer-like communication begins.

This is especially true when we learn a language: we use words and phrases the way we have seen them used.

Language models work in a similar way. They just synthesize information much faster than we do. Maybe one day quantity will beat quality.

While humans are still better at language and humor, computers may catch up faster than we think. Or not. I haven’t synthesized enough data to guess.

Language, jokes, and the meanings we do or don’t exchange all serve that ancient goal: know thyself. A computer cannot know itself. And it can’t tell jokes… yet?

Alexandra Paskhaver’s column is distributed by the newspaper syndicate Cagle Cartoons.
