The brain develops an intuitive understanding of the surrounding world in order to interpret incoming sensory information. How exactly does this happen? Many scientists believe the brain may use a process similar to "self-supervised learning" in machine learning. This approach allows models to learn from the similarities and differences between images, without any labels.
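One common form of self-supervised learning is contrastive learning: two augmented views of the same image are pulled together in an embedding space, while views of different images are pushed apart. Below is a minimal sketch of such a contrastive (InfoNCE-style) loss in Python with NumPy; the function and variable names are illustrative and not taken from the MIT studies.

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss.

    Rows i of z1 and z2 are embeddings of two augmented views of the
    same image (a "positive pair"); all other rows act as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature                   # pairwise similarities
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    exp = np.exp(logits)
    # For row i the positive is column i; maximize its softmax probability.
    prob_pos = np.diag(exp) / exp.sum(axis=1)
    return -np.log(prob_pos).mean()

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))                     # 8 "images", 16-dim embeddings
view1 = base + 0.01 * rng.normal(size=base.shape)   # two noisy "augmentations"
view2 = base + 0.01 * rng.normal(size=base.shape)
aligned = contrastive_loss(view1, view2)            # correctly matched pairs
shuffled = contrastive_loss(view1, view2[::-1])     # mismatched pairs
```

When the pairs are correctly matched, the loss is low; shuffling the second set of views destroys the correspondence and drives the loss up, which is exactly the signal a model can learn from without labels.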
Two new studies from the Massachusetts Institute of Technology provide evidence for this hypothesis. The researchers trained neural networks with a particular type of self-supervised learning. The resulting models produced activity patterns very similar to those observed in the brains of animals performing the same tasks.
The results show that such models can learn representations of the physical world and use them to make accurate predictions. The mammalian brain may rely on the same strategy, the researchers say.
"Our results seem to suggest an organizing principle that operates across different brain areas and scales," says Aran Nayebi, co-author of one of the studies.
In the second study, the scientists focused on specialized neurons called grid cells, which are involved in navigation. They trained a self-supervised machine learning model to perform path integration, tracking position from movement, and to represent space efficiently. After training, the models formed lattice-like patterns very similar to those produced by grid cells in the brain.
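Path integration means recovering one's position by accumulating self-motion signals over time. The sketch below (Python with NumPy; all names are illustrative and not from the papers) shows the integration task itself, plus an idealized hexagonal firing-rate map, the sum of three plane waves at 60° offsets, which is the classic mathematical description of a grid cell's lattice pattern, not the trained model from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Path integration: position is the running sum of 2-D velocity inputs.
# This is the task the navigation model was trained to perform.
velocities = rng.normal(scale=0.1, size=(200, 2))   # simulated movement steps
positions = np.cumsum(velocities, axis=0)           # integrated trajectory

def grid_response(pos, spacing=0.5):
    """Idealized grid-cell firing rate: sum of three cosine plane
    waves whose wave vectors are rotated 60 degrees apart, producing
    a hexagonal lattice over 2-D space."""
    angles = np.deg2rad([0.0, 60.0, 120.0])
    ks = (4 * np.pi / (np.sqrt(3) * spacing)) * np.stack(
        [np.cos(angles), np.sin(angles)], axis=1)   # three wave vectors
    return np.cos(pos @ ks.T).sum(axis=-1)

rates = grid_response(positions)                    # firing rate along the path
```

Evaluating the lattice function along the integrated trajectory gives a spatially periodic firing rate, which is the kind of structure the trained models developed on their own.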
These studies demonstrate similarities between self-supervised learning models and the way the mammalian brain performs cognitive tasks. The authors expect the findings to contribute to a better understanding of the principles of brain function.