
Challenging People's Belief Systems Impacts Deep Learning

Image credit: Mtec2 | Shutterstock.com

New research suggests that changing a particular belief can shake one’s confidence in other firmly held beliefs. As we learn more about how humans learn, how will that change the way we approach machine learning?

A recent paper published in the journal Science presents a new model for how we process information that contradicts our existing belief systems. The Friedkin-Johnsen model, as it is known, was developed by scientists from the U.S., the Netherlands, Russia, and Italy. It offers a way to understand human attitudes and beliefs about empirical propositions.
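
For the mathematically inclined, the core of the classic Friedkin-Johnsen update can be stated compactly. The form below is a common textbook summary, not necessarily the paper's exact notation:

\[
x_i(k+1) = \lambda_i \sum_{j=1}^{n} w_{ij}\, x_j(k) + (1 - \lambda_i)\, x_i(0)
\]

Here \(x_i(k)\) is person \(i\)'s belief at step \(k\), \(w_{ij}\) is the weight person \(i\) gives to person \(j\)'s opinion, and \(\lambda_i \in [0, 1]\) measures openness to social influence; a fully stubborn individual (\(\lambda_i = 0\)) never moves from their initial belief \(x_i(0)\). The new work extends this picture to whole belief systems by coupling multiple beliefs through a matrix of logical interdependencies.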

Digging Into the Human Mind and Its Belief Systems

According to Carter Butts of the University of California, Irvine, the new model offers a perspective on how accepting one idea or set of ideas can affect belief in other, less tangible concepts. For example, many once believed that the Earth was the center of the universe. Even after Copernicus and Galileo, many supporting ideas had to be broken down before a heliocentric model of the solar system could be accepted.

Many of our beliefs are stacked on top of one another, forming a network of interdependent beliefs. Understanding this network is key to understanding how one group can accept ideas that another finds absurd. It may also shed light on the factors that entrench such beliefs or allow them to change.
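
To make the "network of interdependent beliefs" concrete, here is a minimal sketch in Python of Friedkin-Johnsen-style dynamics with a logic-coupling matrix. This is not the paper's code, and every number (the influence weights, susceptibilities, and couplings) is hypothetical, chosen purely for illustration:

```python
import numpy as np

n_steps = 100

# W[i, j]: how much person i weighs person j's beliefs (rows sum to 1).
W = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3],
              [0.1, 0.1, 0.8]])

# lam[i]: person i's openness to social influence (0 = fully stubborn).
lam = np.array([[0.9], [0.5], [0.1]])

# C[a, b]: how much belief b logically supports belief a within one mind,
# e.g. belief 1 ("heliocentrism") leaning on belief 0 ("telescopes work").
C = np.array([[1.0, 0.0],
              [0.5, 0.5]])

# X0[i, a]: person i's initial certainty in belief a, in [0, 1].
X0 = np.array([[0.9, 0.8],
               [0.5, 0.5],
               [0.1, 0.2]])

X = X0.copy()
for _ in range(n_steps):
    social = W @ X           # average beliefs over the social network
    coupled = social @ C.T   # let each belief pull on the beliefs it supports
    X = lam * coupled + (1 - lam) * X0

print(np.round(X, 3))
```

Running this to convergence shows the stubborn third person (low lambda) anchoring the group near their starting position. With C set to the identity matrix, the two beliefs evolve independently; the off-diagonal coupling is what lets a shift in one belief propagate into another.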

The research seeks to understand human belief systems and how those beliefs can be changed. In essence, it seeks a key to dislodging the beliefs that hold us back from learning more.

Human minds aren’t the only ones worth understanding. As artificial intelligence grows more advanced, it more closely resembles the human mind. As a result, research like this may help us break down some of the barriers that arise in machine learning technology.

Scratching the Surface of the AI Mind

According to Butts, there are two questions to consider when groups don’t accept what others see as ‘common knowledge:’

1. Can you identify the factors that prevent groups from accepting what others see as true?

2. How can you use that information to help them see what is actually true?

Many concepts that we take for granted are part of complex belief systems. Consider how one interacts with the real world: if medical consensus held that meat was absolutely necessary for good health, vegetarians would not switch to meat overnight. Many would fight the notion until presented with evidence clear enough to overturn their conviction that a meat-free diet is healthier and more humane.

AI faces a similar predicament. Imagine an AI system, like a child, taught using only simulated pictures and videos. Its developers then push it to interpret real-world images. To tell the difference between simulation and reality, the system needs the same kind of built-up contextual evidence supporting the notion that one picture is fake while the other is real.
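
This gap between the training world and the real world is easy to demonstrate. The toy sketch below, with data and numbers invented purely for illustration, trains a scikit-learn classifier on "simulated" samples, shows it stumbling on "real" samples drawn from a shifted distribution, and then shows how a little real-world evidence closes the gap:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy sketch of the simulation-to-reality gap. All data and numbers
# here are invented for illustration; this is not from the article.
rng = np.random.default_rng(0)

def make_data(n, shift):
    # Two classes separated along both axes; `shift` stands in for the
    # systematic difference between simulated and real-world imagery.
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 4.0 + shift, scale=1.0, size=(n, 2))
    return X, y

X_sim, y_sim = make_data(1000, shift=0.0)    # the "simulated" world
X_real, y_real = make_data(1000, shift=2.0)  # the "real" world

# Hold out most real samples for testing; keep a few for adaptation.
X_tune, y_tune = X_real[:200], y_real[:200]
X_test, y_test = X_real[200:], y_real[200:]

# Trained only on simulation, the model misreads many real samples.
model = LogisticRegression().fit(X_sim, y_sim)
print("real-world accuracy, sim-only training:", model.score(X_test, y_test))

# Mixing in a little real "contextual evidence" closes most of the gap.
X_mix = np.vstack([X_sim, X_tune])
y_mix = np.concatenate([y_sim, y_tune])
model = LogisticRegression().fit(X_mix, y_mix)
print("real-world accuracy, with some real examples:", model.score(X_test, y_test))
```

The same decision boundary that looks perfect inside the simulation lands in the wrong place once the distribution shifts, which is the toy analogue of an AI confidently misreading a real photograph it was never trained to see.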

Understanding the inner workings of the human mind gives us a window into teaching AI. It remains to be seen whether AI will develop illogical beliefs the way some humans do (or so we think), but perhaps unlocking the secrets of our own brains will help us avoid that outcome.

William McKinney

William is an English teacher, a card-carrying nerd, and he may run for president in 2020. #truefact #voteforedgy
