To strengthen Alexa's ability to overcome language barriers, Amazon made a "machine-learned multilingual named-entity transliteration system," according to an Alexa blog. The company defines named-entity transliteration as "the process of converting a name from one language script to another."
To assemble a large enough dataset of "name pairs," researchers created a Wikidata page that hosts different versions of a single name across languages. So far the researchers have paired English names with their Japanese, Hebrew, Arabic, and Russian counterparts, applying methodical approaches to weed out "noisy pairs."
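As a rough illustration of what such a dataset and a noisy-pair filter might look like, here is a minimal Python sketch. The example tuples and the length-ratio heuristic are assumptions made for illustration, not Amazon's actual data or filtering method.

```python
# Illustrative name pairs of the kind that can be harvested from Wikidata.
# The tuples and the length-ratio heuristic below are assumptions for this
# sketch, not the researchers' actual data or method.
name_pairs = [
    ("Alexander", "Александр", "ru"),              # plausible English/Russian pair
    ("Maria", "מריה", "he"),                        # plausible English/Hebrew pair
    ("Bob", "アレクサンダーワンダーランド", "ja"),  # noisy pair: names don't match
]

def filter_noisy_pairs(pairs, max_len_ratio=3.0):
    """Drop pairs whose character-length ratio is implausibly large,
    a simple proxy for mismatched (noisy) name pairs."""
    clean = []
    for src, tgt, lang in pairs:
        ratio = max(len(src), len(tgt)) / min(len(src), len(tgt))
        if ratio <= max_len_ratio:
            clean.append((src, tgt, lang))
    return clean

clean_pairs = filter_noisy_pairs(name_pairs)
```

A real pipeline would combine several such signals (length, character-class checks, frequency), but the idea is the same: keep pairs that plausibly name the same entity in two scripts.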
The dataset was used to train machine learning systems using "traditional" and "more recent neural" approaches, according to the post. The best results came from the Transformer, a neural-network architecture "that dispenses with some of the complexities of convolutional or recurrent networks and instead relies on attention mechanisms." Still, other factors, such as the language pairing, could have played a part in performance.
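For readers unfamiliar with the attention mechanisms the post mentions, here is a minimal NumPy sketch of scaled dot-product attention, the core operation the Transformer uses in place of recurrence or convolution. The toy inputs are assumptions for illustration only.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: each query attends to every key,
    and the output is a weighted mix of the values."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V                             # weighted mix of values

# Toy example: 3 "characters" of a name, each embedded in 4 dimensions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
```

Because attention looks at the whole input at once, a transliteration model can weigh every character of a name when producing each character of the output, rather than processing the name strictly left to right.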
Computers can't match humans' ability to understand the nuances of human language. As computers and voice assistants move into more operational roles in a business, it's important that they address and communicate the correct information.
Alexa may boast some 45,000 skills, but people still shouldn't overestimate how much, and how well, a computer can understand human language. To a computer, a stream of words is not a sentence but merely a sequence of letters.
Humans' natural tolerance for linguistic ambiguity is foreign to computers, which strains their ability to adapt. Care is needed when training computers to handle language ambiguities and barriers.
But Amazon has made strides in maturing Alexa's abilities. In July the company announced an expansion of its partnership with Accenture to help brands build software that works with Alexa.