In the wake of Huawei’s new chipset, aspiring tech firms should start thinking very, very carefully about which language they program in.
Huawei’s innovative new mobile personal assistant, a feature of the Kirin 970 chipset, was designed to systematically answer three questions: Where is the user? Who are they? What are they doing?
“Smartphones are smart, but they are not intelligent enough,”
says Richard Yu, Huawei’s Consumer Business Group chief executive.
For years, tech giants have poured significant money and effort into developing software that enables machines to think more like people. As Edgy Labs researchers have noted in the past, the development of more sophisticated digital assistants has been deemed one of the most promising consumer trends.
How Do AI Learn Language?
When humans read in a second language, they’ve often spent hundreds of hours practicing beforehand.

Since AI can be programmed with the common words of a language, and since they rely on processing power for speed, the single greatest challenge is building an AI that actually comprehends the context of our speech. The better an AI comprehends the meaning of new or undefined words, the better it can conduct searches that reflect the user’s intent.
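To make the idea concrete, here is a minimal Python sketch of how an assistant might guess what an unknown word means from the words around it. The vocabulary, context profiles, and scoring below are invented purely for illustration; this is not Huawei’s implementation, just the general bag-of-words intuition.

```python
# Minimal sketch of context-based guessing for an unknown word.
# Illustrative toy only: the profiles and scoring are invented for demonstration.

from collections import Counter

# Context "profiles": words that typically co-occur with each known term.
KNOWN_PROFILES = {
    "restaurant": Counter({"book": 3, "table": 4, "dinner": 5, "menu": 2}),
    "flight":     Counter({"book": 4, "ticket": 5, "airport": 3, "gate": 2}),
}

def guess_meaning(unknown_word, context_words):
    """Pick the known term whose typical context best overlaps the
    words surrounding the unknown word (simple bag-of-words overlap)."""
    context = Counter(context_words)
    scores = {
        term: sum(min(context[w], profile[w]) for w in context)
        for term, profile in KNOWN_PROFILES.items()
    }
    return max(scores, key=scores.get), scores

best, scores = guess_meaning(
    "bistro",                                   # a word the assistant has never seen
    ["book", "a", "table", "for", "dinner"],    # the words around it
)
print(best, scores)  # -> 'restaurant' scores higher than 'flight'
```

Even this crude overlap score nudges the assistant toward the user’s intent; real systems replace it with statistical models trained on far more context.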
Do AI Have Language Bias?
Anyone who has ever had to learn a foreign language knows how difficult a task it can be. As much as you memorize, it can feel like some things just pass you by.
After all, as Stanford reminds us, ambiguity is an inherent part of any language. But are there some languages that AI could respond better to?
In theory, yes. The less complex a language is (syntactically and lexically), the fewer rules AI have to learn.
Ultimately, the central function of language is the representation of information, normally for communicative purposes. This is true of formal languages (see: programming languages like C++, or mathematics) as well as natural languages.
As Stanford University researchers explain, “in the case of computer languages, the communication is with a computer, rather than between people, but the language is still serving as a medium for conveying information from the programmer to the machine”.
Communication is successful when the interpreter (or, in this case, the computer) is able to assign a unique denotation to each signal (or character). The clearer the denotation, the easier and more accurate the interpreter’s work is.
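A toy example makes the point: interpretation is straightforward when each signal maps to exactly one denotation, and stalls when a signal is ambiguous. The signals and meanings below are invented for illustration only.

```python
# Toy illustration of the "unique denotation" idea.
# The signals and meanings below are invented for demonstration.

signal_to_denotations = {
    "int":  ["a signed integer type"],                            # unambiguous, like a C++ keyword
    "bank": ["a financial institution", "the side of a river"],   # ambiguous natural-language word
}

def interpret(signal):
    meanings = signal_to_denotations.get(signal, [])
    if len(meanings) == 1:
        return meanings[0]  # communication succeeds: one signal, one denotation
    raise ValueError(f"'{signal}' has {len(meanings)} possible denotations; "
                     "more context is needed to disambiguate")

print(interpret("int"))       # unique denotation -> succeeds
try:
    print(interpret("bank"))  # the interpreter cannot pick one meaning
except ValueError as err:
    print(err)
```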
Why Mandarin AI Is Poised to Outpace English Rivals
When it comes to grammatical complexity, Chinese is actually one of the simplest languages. Unlike most European languages, Chinese does not use grammatical cases or genders.
It also handles tense in a very simple way. Furthermore, unlike other East Asian languages (see: Korean, Japanese), there is no complicated honorific grammar to follow.
There is no such thing as spacing in Chinese, meaning no explicit boundaries between words. But a method of Chinese word segmentation, the “first step” for any Chinese information processing system, was developed as far back as 1995.
In a joint study, researchers at Tsinghua University and the City University of Hong Kong developed Conditional F&BMM (Forward and Backward Maximal Matching). Using bigram statistics and linguistic rules for ambiguity resolution, they were able to produce an algorithm reported to be 99.046% effective.
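For readers curious what maximal matching looks like in practice, here is a minimal Python sketch of the forward and backward passes. The toy lexicon and example sentence are illustrative only; the actual Conditional F&BMM system additionally uses bigram statistics and linguistic rules to resolve the cases where the two passes disagree, which this sketch omits.

```python
# Minimal sketch of forward and backward maximal matching (FMM/BMM)
# for Chinese word segmentation. Toy lexicon for demonstration only.

LEXICON = {"研究", "研究生", "生命", "命", "的", "起源"}  # tiny illustrative word list
MAX_LEN = max(len(w) for w in LEXICON)

def fmm(text):
    """Greedily take the longest dictionary word starting from the left."""
    words, i = [], 0
    while i < len(text):
        for size in range(min(MAX_LEN, len(text) - i), 0, -1):
            chunk = text[i:i + size]
            if size == 1 or chunk in LEXICON:
                words.append(chunk)
                i += size
                break
    return words

def bmm(text):
    """Same greedy idea, but scanning from the right edge of the sentence."""
    words, j = [], len(text)
    while j > 0:
        for size in range(min(MAX_LEN, j), 0, -1):
            chunk = text[j - size:j]
            if size == 1 or chunk in LEXICON:
                words.insert(0, chunk)
                j -= size
                break
    return words

sentence = "研究生命的起源"   # "to research the origin of life"
print(fmm(sentence))  # ['研究生', '命', '的', '起源']  greedy left-to-right misreads "graduate student"
print(bmm(sentence))  # ['研究', '生命', '的', '起源']  the backward pass recovers the intended words
```

When the two passes disagree, as they do here, a combined system needs extra evidence (such as bigram statistics) to choose the correct segmentation; that is the gap the Conditional F&BMM work addresses.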