Can computers understand the different meanings of words like humans?

Words have many meanings. We humans can grasp the meanings and connotations of words very well. But as we enter the age of artificial intelligence, can computers understand the different and complex meanings behind each word?

In the sequel to Alice’s Adventures in Wonderland – Through the Looking-Glass, and What Alice Found There – Humpty Dumpty scoffs: “When I use a word, it means just what I choose it to mean, neither more nor less.” Alice replies, “The question is whether you can make words mean so many different things.”

One meaning or different meanings?

Many words carry multiple meanings, a phenomenon known as “semantic ambiguity”. To work out which meaning is intended, the human mind must analyze a complex web of contextual information and rely on well-honed intuition.

Today’s search engines, translation applications and voice assistants are able to grasp what we mean thanks to language processing programs that assign meaning to an astonishing number of words, without us ever explicitly telling them what those words mean. These programs derive meaning from statistics and algorithms.

Search engines and translation programs can understand what we mean (Pixabay)

But we are now entering a new era of artificial intelligence, in which machines can understand and analyze complex data and predict its future outcomes. This is where another difficult question arises about the meanings of words as AI understands them: can it recognize the different meanings a word may carry?

This is why scientists are studying whether artificial intelligence can imitate the human brain and understand words the way humans do.

This was the subject of a research study conducted by researchers from the University of California, Los Angeles and the Massachusetts Institute of Technology, published on April 14 in the journal Nature Human Behaviour.

Artificial intelligence that imitates humans

According to a press release published by the University of California, the study reported that AI systems can indeed learn very complex meanings. The study also indicated that the artificial intelligence system examined was able to encode the meanings of words in a way that closely tracks human judgments of those words’ semantics.

This approach can therefore recover as much information about each individual word as the human mind associates with it, according to the MIT press release.

Language models gain meaning by analyzing how often pairs of words co-occur in texts (Pixabay)

Language models gain meaning by analyzing how often pairs of words co-occur across many texts. The models then use those co-occurrence patterns to assess how similar words are in meaning.

For example, these models conclude that the words “bread” and “apple” are more similar to each other than either is to the word “notebook”. This is because “bread” and “apple” often appear alongside words such as “eat” or “snack”, while “notebook” rarely does.
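To make the idea concrete, here is a minimal sketch of co-occurrence-based similarity. The tiny corpus, the chosen vocabulary and the helper functions are all illustrative assumptions, not the study’s actual model:

```python
from collections import Counter
from itertools import combinations
import math

# Tiny made-up corpus; real models learn from billions of words.
corpus = [
    "people eat bread as a snack",
    "people eat an apple as a snack",
    "she wrote notes in a notebook",
    "he left the notebook on the desk",
]

def cooccurrence_vectors(sentences, vocab):
    # Count how often each target word appears in a sentence with every other word.
    vecs = {w: Counter() for w in vocab}
    for sentence in sentences:
        tokens = sentence.split()
        for a, b in combinations(tokens, 2):
            if a in vecs:
                vecs[a][b] += 1
            if b in vecs:
                vecs[b][a] += 1
    return vecs

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

vecs = cooccurrence_vectors(corpus, {"bread", "apple", "notebook"})
print(cosine(vecs["bread"], vecs["apple"]))     # higher: shared contexts like "eat", "snack"
print(cosine(vecs["bread"], vecs["notebook"]))  # lower: almost no shared contexts
```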

Testing language models’ comprehension of words

The models were remarkably good at measuring the overall similarity of words to one another. But most words carry many kinds of information, and two words can be similar along one dimension while differing along another.

“People can come up with different mental scales that help them organize their understanding of words. For example, dolphins and crocodiles can be similar in size, but one is much more dangerous than the other,” says Gabriel Grand, the study’s lead author from the Massachusetts Institute of Technology.

The team set out to see whether the models could pick up on those nuances the way humans do, and if so, how the models organize the information.

Language processing models use co-occurrence statistics to organize words into a large multidimensional array (Pixabay)

To see how the words in the model correlate with human understanding, the team asked volunteers to rank words along different semantic scales: were the concepts conveyed by the words “big or small”, “safe or dangerous”, “wet or dry”, and so on? After the volunteers had placed each word on those scales, the researchers checked whether the language processing models did the same.

Grand notes that language processing models use co-occurrence statistics to organize words into a large multidimensional array. The more similar two words are, the closer together they sit within that matrix.
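A toy illustration of that idea follows, with made-up four-dimensional vectors standing in for the hundreds of dimensions a real model learns; the numbers are invented purely to show how distance in the matrix reflects similarity:

```python
import numpy as np

# Made-up 4-dimensional embeddings for illustration; real models place each
# word in a space with hundreds of dimensions learned from co-occurrence data.
embeddings = {
    "bread":    np.array([0.90, 0.10, 0.30, 0.05]),
    "apple":    np.array([0.85, 0.15, 0.35, 0.10]),
    "notebook": np.array([0.10, 0.90, 0.05, 0.40]),
}

def distance(word_a, word_b):
    # Euclidean distance in the embedding space: smaller means more similar.
    return float(np.linalg.norm(embeddings[word_a] - embeddings[word_b]))

print(distance("bread", "apple"))     # small: the two words sit close together
print(distance("bread", "notebook"))  # large: far apart in the matrix
```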

Large multidimensional arrays

He says that this matrix has a great many dimensions, and that its structure carries no inherent meaning on its own. Grand adds that there are “hundreds of dimensions to some of the words embedded in the matrix, and we have no idea what those dimensions mean.”

The scientists took the semantic scales the volunteers had used to rate words and asked whether those scales were also represented in the language processing models. For example, the team examined where dolphins and tigers fall on the “size” scale, then compared the distance between them there to the distance between them on the “danger” scale.
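One way to run such a comparison, a technique known as semantic projection, is to score each word along an axis drawn between two opposite poles of a scale. The sketch below uses invented three-dimensional vectors purely to illustrate the idea; it is not the study’s actual data or code:

```python
import numpy as np

# Invented embeddings for illustration; in practice the vectors would come
# from a trained language model with far more dimensions.
emb = {
    "big":       np.array([ 1.0, 0.1,  0.0]),
    "small":     np.array([-1.0, 0.1,  0.0]),
    "safe":      np.array([ 0.0, 0.1, -1.0]),
    "dangerous": np.array([ 0.0, 0.1,  1.0]),
    "dolphin":   np.array([ 0.5, 0.3, -0.6]),
    "tiger":     np.array([ 0.6, 0.2,  0.8]),
}

def project(word, pos, neg):
    # Semantic projection: score a word along the axis running from the
    # "negative" pole (e.g. "small") to the "positive" pole (e.g. "big").
    axis = emb[pos] - emb[neg]
    return float(np.dot(emb[word], axis) / np.linalg.norm(axis))

# Dolphins and tigers land near each other on the size axis...
print(project("dolphin", "big", "small"), project("tiger", "big", "small"))
# ...but far apart on the danger axis.
print(project("dolphin", "dangerous", "safe"), project("tiger", "dangerous", "safe"))
```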

The language processing model organizes words in a way that gives them different kinds of meaning (Pixabay)

Across more than 50 combinations of word categories and semantic scales, the researchers found that the language processing models arranged words much as humans do. The models ranked dolphins and tigers as similar on the “size” scale, while placing them far apart on the “danger” and “wetness” scales. The models organized words in a way that captured these different kinds of meaning, based entirely on how often words co-occurred in the texts they learned from.

Interestingly, the language processing model rated the names “Betty” and “George” as similar on the “age” scale, while placing them far apart on the “gender” scale. The model also rated “weightlifting” and “fencing” as similar in that both are typically indoor sports, but different in how much intelligence they require.

The team notes that this demonstrates the power of language: from simple co-occurrence statistics alone, we can recover remarkably rich semantic information, a powerful source of knowledge about things we may have no direct perceptual experience of.
