For computers discerning linguistic ambiguities, the struggle is real
Mr. Computer Head tries to understand: Jane had a picnic by the bank. Jane ate spaghetti with a fork.
"I'm in the doghouse with my editor."
There's nuance there: it could mean I'm physically in a doghouse, or that I'm in trouble with her.
Much of what people need is tied up in language, and having computers accurately interpret it is important in the age of technology. Context shapes our interpretations of language, but computers struggle with those same skills.
Researchers in the natural language processing field created an online game, "Madly Ambiguous," to test a computer's ability to understand linguistic ambiguities, according to their whitepaper. Through the game, Mr. Computer Head was born.
"People are quick to overestimate how well computers can understand human language because it's so easy to underestimate just how complex our day-to-day language is," said Ajda Gokcen, a doctoral student in the Department of Linguistics at the University of Washington and researcher for "Madly Ambiguous," in an emailed statement to CIO Dive.
To computers, words and sentences are just sequences of letters but "computers have to meet us where we live with language and not the other way around," Michael White, associate professor in The Ohio State University's Department of Linguistics and researcher for "Madly Ambiguous," told CIO Dive in an interview.
People are increasingly relying on AI-based and voice-enabled technologies, and while the progress is profound, it needs polishing, according to Gokcen. When a system achieves a better understanding of language, it "directly creates a better experience for people."
Humans are naturally inclined to "tolerate" ambiguities because we can, for the most part, correctly understand the true meaning of an ambiguity, according to White. It's been argued that ambiguity exists in language because language didn't evolve directly to support communication. Others argue that there is a trade-off between ambiguity and efficiency, therefore "language can only become completely unambiguous by becoming intolerably inefficient," according to White.
Either way, this makes ambiguities essential to making human language work, and computers need to adapt to them.
How computers process sentences
"Madly Ambiguous" defines lexical ambiguity as using a word, with multiple meanings, in a sentence in which both meanings make sense. The example the game gives is "Jane had a picnic by the bank." Bank can be defined as a financial institute or the edge of a river. Such ambiguity trips up a computer's understanding of a situation.
The game defines structural ambiguity as ambiguity in how a sentence is put together. The example is "Jane ate spaghetti with a fork," and the game asks the user: "would Jane likely be using the fork or eating the fork?"
As humans, we know the latter is not the case. However, if the sentence read "Jane ate spaghetti with meatballs," humans understand that her food contains meatballs, not that she is using one as a tool to eat with.
But it is human instinct to differentiate how Jane ate the spaghetti or what she ate it with. Computers do not possess the same instincts, at least not yet.
White, Gokcen and Ethan Hill created Mr. Computer Head to test a computer's ability to interpret the true meaning of a sentence and to interact with the user.
Mr. Computer Head is the game's opponent; it works to identify the different possible meanings of a sentence, including a word's semantic role. For example, if the sentence read "Jane ate spaghetti with gusto," the phrase "with gusto" would "describe the manner of the action."
Mr. Computer Head can correctly assume Jane is eating her spaghetti with some excitement. The computer can also recognize that Jane is eating with a friend, not using the friend as a utensil if the sentence read "Jane ate spaghetti with Mary," according to the game.
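The distinctions above can be illustrated with a toy sketch (this is not the actual "Madly Ambiguous" model, just a hand-built lookup for illustration): given the noun that fills "Jane ate spaghetti with ____," assign it a semantic role such as instrument, manner, company or topping.

```python
# Toy sketch (NOT the actual "Madly Ambiguous" system): classify the
# "with ___" phrase in "Jane ate spaghetti with X" into a semantic role
# using small hand-written word lists.

ROLE_LEXICON = {
    "instrument": {"fork", "spoon", "chopsticks"},
    "manner": {"gusto", "relish", "enthusiasm"},
    "company": {"mary", "a friend", "her dog"},
    "topping": {"meatballs", "sauce", "parmesan cheese"},
}

def classify_with_phrase(noun: str) -> str:
    """Return the semantic role of the 'with' complement, or 'unknown'."""
    noun = noun.lower()
    for role, words in ROLE_LEXICON.items():
        if noun in words:
            return role
    return "unknown"

print(classify_with_phrase("fork"))   # instrument
print(classify_with_phrase("gusto"))  # manner
print(classify_with_phrase("Mary"))   # company
```

A fixed lexicon like this breaks as soon as an unseen word appears ("dogs," say), which is exactly why the real system falls back on resources like WordNet and word embeddings, described below.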
Face off against Mr. Computer Head
Users are pitted against the Mr. Potato Head-style character, which keeps a tally of its wins and losses. Since its demo last year, Mr. Computer Head has scored 64% accuracy in basic mode and 70% in advanced mode, according to the research.
So now, after learning what ambiguities lie in language, users get the chance to create a sentence that Mr. Computer Head can't correctly interpret. And I obliged.
Given the prompt "Jane ate spaghetti with ____," I entered "dogs." Unfortunately, Mr. Computer Head's interpretation was not accurate. It incorrectly translated my sentence to mean "Jane had spaghetti and dogs."
That was "basic mode," which uses "part-of-speech tags and lemmas" to decide "what the most important word of the input is," according to the research. Mr. Computer Head then looks up the most important word in WordNet, which is a database in which linguistics group together words that mean similar things organized in a hierarchy of abstractness and specifics, according to White. Mr. Computer Head incorrectly assumed that "dog" was short for "hot dog."
But WordNet also doesn't really accommodate new words, or adjust meanings that have become "archaic," and can therefore misconstrue the meaning or intent of a word.
I then switched to "advanced mode" and instead of "dogs," I entered "Parmesan cheese." The computer correctly guessed that Jane was enjoying her spaghetti with Parmesan on it.
Advanced mode uses word embeddings "trained on" Google's word2vec tool, which maps each word to a point in a meaning space and compares it to words that are already known, according to White.
Words with similar meanings are clustered together. For example, if dogs and cats tend to appear in the same semantic space in a sentence, they have a similar meaning to Mr. Computer Head, said White.
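That notion of "semantic space" can be made concrete with a small sketch. The three-dimensional vectors below are hand-made for illustration (real word2vec vectors have hundreds of dimensions learned from large corpora); similarity between words is measured by the cosine of the angle between their vectors.

```python
import math

# Hand-made toy word vectors, 3-dimensional for readability. In real
# word2vec, these are learned from corpora and have ~300 dimensions.
VEC = {
    "dog":  [0.90, 0.80, 0.10],
    "cat":  [0.85, 0.75, 0.15],
    "fork": [0.10, 0.20, 0.90],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for similar words, lower otherwise."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(VEC["dog"], VEC["cat"]))   # high: dog and cat keep similar company
print(cosine(VEC["dog"], VEC["fork"]))  # lower: utensils appear in different contexts
```

Because "dog" and "cat" point in nearly the same direction, the model treats them as near-synonyms, while "fork" sits far away — a geometric version of knowing a word "by the company it keeps."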
Then, again in advanced mode, I entered "Jane ate spaghetti with her dog," instead of just an undefined dog. Mr. Computer Head correctly paraphrased the sentence to mean Jane was having her dinner "in the presence" of her dog. The computer was able to follow the expression. "You'll know the meaning of the word by the company it keeps," said White.
Still, technology, as highlighted by Mr. Computer Head, has a long way to go and so do the people creating it. "People making these technologies can't really fix society," said Gokcen, "but we are still responsible for how our systems can both help and harm people even if the issues originate outside our control."
Implicitly, this means the experts behind intelligent systems need to be extraordinarily careful about how data is used and what it is used for.
Follow Samantha Ann Schwartz on Twitter