Why does Siri not understand me? AI and language comprehension
Anna Corts


One of the challenges that researchers face in Artificial Intelligence (AI) is being able to understand human language and simulate it through technology.

Think of the following situation: you ask Google Assistant, Siri, Alexa or any other virtual assistant for help, and the result you get is far from what you wanted to express. Depending on the way we speak or the words we write, our requests can be very difficult for personal assistants to understand, so it is essential that AI becomes more skillful at understanding human language.
Google has always been concerned with closing that comprehension gap in its searches, so today we want to talk about how intelligent systems work through the example of SyntaxNet and its parser, Parsey McParseface.

Can you imagine being able to build an application that understands human language? That is the promise of the project Google presented in 2016: an open-source neural-network model that any developer can use to build applications that work with natural language.

Wow...! But how does it work?
SyntaxNet, with its parser Parsey McParseface, combines machine learning and search techniques to analyze English sentences with an accuracy of 94%, according to Google.

For us humans, speaking is very easy. In fact, we do not think about it, and we only begin to notice that our linguistic system must be quite complicated when we face the task of learning a second or third language.
Humans learn to speak from childhood, increasing their vocabulary, refining the meanings, restrictions and intentions of words, and adapting which words are relevant in different contexts. We learn from those who are close to us (from the context), and we absorb more and more vocabulary (semantics) and structure (syntax) almost effortlessly.

Something similar happens with AI systems, although it is much more difficult to encode all the complexity of human reasoning artificially: every rule is only approximately true, and there are many obstacles along the way; for example, the large number of meanings a word can have, or the intention behind a sentence in a certain context.

Let's see how SyntaxNet works in the following example:
Given a sentence as input, the parser labels each word with a part-of-speech tag, which describes the word's syntactic function, and determines the syntactic relationships between the words in the sentence.

These syntactic relationships are directly related to the meaning of the sentence:

[Image: dependency parse of the sentence "Alice saw Bob"]

This structure encodes that Alice and Bob are nouns and saw is a verb. The main verb saw is the root of the sentence; Alice is the subject (nsubj) of saw, while Bob is its direct object (dobj).
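For illustration, the parse above can be represented as a simple table mapping each word to its part-of-speech tag, its head word and its dependency label. This is just a toy sketch of the idea, not SyntaxNet's actual output format:

```python
# A toy representation of the dependency parse for "Alice saw Bob".
# Each word maps to (part-of-speech tag, head word, dependency label);
# the root of the sentence has no head.
parse = {
    "Alice": ("NOUN", "saw", "nsubj"),  # subject of the main verb
    "saw":   ("VERB", None,  "root"),   # main verb, root of the sentence
    "Bob":   ("NOUN", "saw", "dobj"),   # direct object of the main verb
}

def subject_of(parse, verb):
    """Return the word attached to `verb` with the nsubj relation."""
    for word, (pos, head, dep) in parse.items():
        if head == verb and dep == "nsubj":
            return word
    return None

print(subject_of(parse, "saw"))  # Alice
```

Once a sentence is in this form, questions like "who did the seeing?" become simple lookups over the structure, which is exactly why the syntactic relationships matter for meaning.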

As you can see, Parsey McParseface analyzes this sentence correctly, but let's see what happens in a more complex example:

[Images: two alternative dependency parses of a longer sentence, in which Alice drives down the street in her car]

Any human can understand this phrase and imagine Alice driving her car down the street. But an AI system may settle on a reading that a human would find absurd: that Alice is driving with the street inside her car. This is one of the main problems that makes parsing so challenging: human languages show remarkable levels of ambiguity.

Human beings are remarkably good at dealing with ambiguity, almost to the point where the problem is imperceptible. The challenge is creating computers that are capable of doing the same.
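The ambiguity above is a classic attachment problem: the phrase "in her car" can attach to two different heads, and only world knowledge tells us which reading is sensible. A minimal sketch (the sentence wording and the `is_plausible` check are illustrative assumptions, not part of any real parser):

```python
# Two candidate attachments for the phrase "in her car" in a sentence
# like "Alice drove down the street in her car".
candidate_parses = [
    {"in her car": "drove"},   # plausible: the car is what Alice drove in
    {"in her car": "street"},  # absurd: the street sits inside the car
]

def is_plausible(parse):
    """Hypothetical world knowledge: streets are not located in cars."""
    return parse["in her car"] != "street"

plausible = [p for p in candidate_parses if is_plausible(p)]
print(plausible)  # [{'in her car': 'drove'}]
```

Humans apply this kind of filter instantly and unconsciously; a parser has to learn it from data, which is why such sentences remain a major source of errors.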

SyntaxNet applies neural networks to the problem of ambiguity. Instead of simply committing to the first-best decision at each point, it maintains multiple partial hypotheses at each step, and a hypothesis is discarded only when several higher-ranked hypotheses are under consideration.

The main source of errors at this point is sentences like the previous one, which require knowledge of the real world (for example, that a street is not likely to be located inside a car) as well as deep contextual reasoning.
Machine learning (and, in particular, neural networks) has made significant progress in resolving these ambiguities, and while the accuracy is not perfect, it is certainly high enough to be useful in many applications.

This is just one example of how AI research is trying to improve intelligent systems: understanding meanings and ranking better hypotheses each time. A great goal, right? And, thinking of how voice search will affect digital marketing, the challenge only grows. Here’s a look at 6 ways voice search will impact digital marketing:

  1. Complexity will increase: queries are much more complex than they were even 3 years ago, due largely to changes in search engine algorithms, which have become less attuned to keywords and more capable of inferring meaning.

  2. Interactions will be shorter: people are usually short on time, which is why quick interactions about immediate needs and answers to common questions will dominate the communication between humans and assistants.

  3. Local search will rise: related to the previous point, shorter interactions tend to be about local things, such as finding the closest restaurant or the fastest way to get somewhere. Many voice searches happen on the go.

  4. Audio-only interactions will increase: the way information is presented will change, focusing more on the meaning and value of the action than on images, infographics and other visual media.

  5. Top rankings will be harder to attain: digital assistants often return only the top result for a voice-search query, so lower-ranked possibilities may never be seen.

  6. Conversational content will be important: semantics, pragmatics and the impact of content, situations and intentions. Users will expect a conversational, human answer in return.

Surely we are getting closer to the day when it will be easy to have an informal, natural conversation with our personal assistants. What is beyond doubt is that, with these small but significant steps, we have freely accessible tools at our service that bring us closer to an increasingly human Artificial Intelligence.

What about you? How do you experience conversations with chatbots, personal assistants, answerphones...?

More info: Announcing SyntaxNet: The World’s Most Accurate Parser Goes Open Source

More info: How does Google use natural language processing to reduce the time to insights?



Published Jan 03, 2019