Have you ever thought of restabilizing “artificial brains” by exposing them to the equivalent of a good night’s rest? Researchers at Los Alamos National Laboratory (LANL) recently found that their spiking neural network simulations became unstable after continuous periods of unsupervised learning.
Incredible as it may sound, they were able to restore stability after exposing the networks to states that are analogous to the waves that living brains experience during sleep.
“We study spiking neural networks, which are systems that learn much as living brains do,” said Los Alamos National Laboratory computer scientist Yijing Watkins.
Watkins added that the research team was fascinated by the prospect of “training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”
In the early stages, the group grappled with the issue of stabilizing simulated neural networks undergoing unsupervised dictionary training, which involves classifying objects without having prior examples to compare them to.
As Los Alamos computer scientist and study coauthor Garrett Kenyon put it, “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”
To stabilize the networks, the researchers decided, almost as a last-ditch effort, to expose them to an artificial analog of sleep. They got the best results when they used waves of so-called Gaussian noise, which spans a wide range of frequencies and amplitudes.
They hypothesize that the noise mimics the input received by biological neurons during slow-wave sleep. The results suggest that slow-wave sleep may act, in part, to ensure that cortical neurons maintain their stability and do not hallucinate.
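To make the idea concrete, here is a minimal sketch of noise-driven stabilization: a population of leaky integrate-and-fire neurons receiving only broadband Gaussian noise as input, loosely analogous to the drive cortical neurons receive during slow-wave sleep. The model, its parameters, and the `simulate_lif` function are illustrative assumptions for this sketch, not the LANL team's actual neuromorphic simulation.

```python
import numpy as np

def simulate_lif(n_neurons=100, steps=1000, dt=1e-3,
                 tau=0.02, v_thresh=1.0, v_reset=0.0,
                 noise_std=5.0, seed=0):
    """Drive a population of leaky integrate-and-fire neurons with
    Gaussian noise alone -- a toy analog of the 'sleep' input.

    NOTE: all parameter values here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    v = np.zeros(n_neurons)            # membrane potentials
    spike_counts = np.zeros(n_neurons)
    for _ in range(steps):
        # Gaussian noise contains a broad mix of amplitudes,
        # loosely mimicking slow-wave-sleep input
        noise = rng.normal(0.0, noise_std, n_neurons)
        # leaky integration: decay toward rest plus noisy drive
        v += (dt / tau) * (-v) + noise * np.sqrt(dt)
        fired = v >= v_thresh
        spike_counts += fired
        v[fired] = v_reset             # reset neurons that spiked
    return spike_counts

counts = simulate_lif()
```

Because the leak term pulls every membrane potential back toward rest while the noise keeps neurons intermittently active, firing stays bounded rather than running away, which is the qualitative behavior the sleep-like noise is meant to restore.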