Scientists are helping A.I. learn by making it take naps
Even computers need a break every now and again.
Growing up can be surreal. That foundational truth is taken to its extreme in the 2015 coming-of-age movie Girl Asleep (now streaming on Peacock!). After moving to a new school and struggling to make friends, 14-year-old Greta Driscoll is thrust into a fantasy world when her mother invites the entire school to her 15th birthday party. As is often the case with portal fantasy stories, it’s up to the viewer to decide if what Greta experienced was real or the result of a particularly vivid dream. Whatever the reality, Greta comes out the other side with a clearer understanding of herself, her peers, and her new environment.
That’s a common and well-documented benefit of sleep. Every day, we’re awash in a constant stream of new information. Yet, we’re somehow able to take it all in and weave together a cohesive narrative of the world around us. Scientists believe that sleep is a critical component of that process, allowing us to reinforce and consolidate new information and new memories without losing anything already in storage.
That ability to take in a constant stream of information and incorporate it all into a worldview is one of the primary ways in which humans have artificial intelligences beat. They’re better than us at crunching numbers and making calculations. They’re more precise and more consistent. Yet, there’s no denying that we’re smarter, because we can see the big picture.
When neural networks are trained sequentially — meaning they are trained on one task after another — they often overwrite what they had previously learned in order to accommodate the new task. This phenomenon is known as catastrophic forgetting and is tantamount to you forgetting how to tie your shoes every time you learn to microwave popcorn, and vice versa. Neural networks can run calculations in seconds that would take humans years to work out, but they can’t remember what they had for lunch, so to speak.
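To make that concrete, here’s a toy sketch in Python of catastrophic forgetting in action. To be clear, this is not the researchers’ actual model, just a minimal stand-in: a tiny classifier learns one task, gets retrained on a second, and its performance on the first collapses.

```python
# A toy illustration of catastrophic forgetting (not the paper's model):
# a tiny logistic-regression "network" trained with plain NumPy.
import numpy as np

rng = np.random.default_rng(0)

def make_task(c0, c1, n=200):
    """Two Gaussian blobs of points, labeled 0 and 1."""
    X = np.vstack([rng.normal(c0, 0.5, (n, 2)), rng.normal(c1, 0.5, (n, 2))])
    y = np.hstack([np.zeros(n), np.ones(n)])
    return X, y

def train(w, b, X, y, lr=0.5, steps=2000):
    """Plain gradient descent on the logistic loss."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

# Task A separates blobs along the x-axis; task B along the y-axis.
XA, yA = make_task([-2, 0], [2, 0])
XB, yB = make_task([0, -2], [0, 2])

w, b = train(np.zeros(2), 0.0, XA, yA)
print("task A accuracy after learning A:", accuracy(w, b, XA, yA))  # near 1.0

w, b = train(w, b, XB, yB)  # now train sequentially on task B...
print("task A accuracy after learning B:", accuracy(w, b, XA, yA))  # drops sharply, toward chance
```

The second task’s gradients repurpose the very weights that encoded the first task, which is the overwriting the researchers set out to prevent.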
Researchers from the University of California, San Diego and the Institute of Computer Science of the Czech Academy of Sciences wondered if the solution to catastrophic forgetting might have something to do with sleep, and if a good nap might rescue their A.I. from its perpetual lobotomy. That’s according to a recent paper published in the journal PLOS Computational Biology.
The team set up their neural network to mimic natural neural systems like the brain. Instead of receiving a constant stream of data, the network took in information as spikes delivered at specific times. When we learn new information, our neurons fire in a specific pattern, strengthening the connections between them. Those connections are believed to play a crucial role in the development and maintenance of memories.
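For flavor, here’s one common way a static input gets turned into spike timing in spiking networks: Poisson rate coding, where a stronger input simply fires more often. Treat this as a hedged sketch of the general idea; the paper’s exact encoding scheme may differ.

```python
# A minimal sketch of rate coding: converting an input intensity into a
# spike train. This is one common scheme for spiking networks, not
# necessarily the encoding used in the paper.
import numpy as np

rng = np.random.default_rng(42)

def poisson_spike_train(intensity, duration_ms=100, max_rate_hz=100):
    """Each millisecond, spike with probability proportional to the input."""
    p_spike = intensity * max_rate_hz / 1000.0  # per-millisecond spike probability
    return rng.random(duration_ms) < p_spike

weak = poisson_spike_train(0.2)     # a weak input
strong = poisson_spike_train(0.9)   # a strong input
print("weak input:  ", weak.sum(), "spikes in 100 ms")
print("strong input:", strong.sum(), "spikes in 100 ms")
```

The upshot is that what the network knows lives in the timing and frequency of spikes, which is what makes a sleep-like replay phase possible at all.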
When we sleep, those patterns fire spontaneously in a process known as reactivation or replay, further reinforcing new memories. Scientists have also observed the transfer of prior knowledge from old tasks to newly learned ones during sleep. In essence, sleeping allows us to revisit and reaffirm new memories while also incorporating them into our existing model of the world. Sleep weaves new information into our existing mental tapestry, but machines don’t sleep. So instead of weaving a narrative, it’s more like a constant stream of consciousness with nothing retained.
If sleep is the key to retaining information and more complex learning, then all the researchers needed to do was put their A.I. down for a nap. When they introduced periods of offline time mimicking sleep in between learning tasks, catastrophic forgetting was significantly reduced.
Before the neural network was given a nap, each new task shifted the synaptic weights representing the old task and reallocated them for the new one. In short, the old memory was erased to make room for the new one. When the researchers introduced sleep, however, those synaptic weights were constrained: instead of being wiped when a new task was taught, they converged toward a point between the old and new tasks. The network appeared better able to incorporate knowledge of multiple tasks when it was allowed to rest. It didn’t totally forget.
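Here’s that “converge between tasks” effect, illustrated with the same toy classifier from earlier. One loud caveat: the paper’s sleep phase is spontaneous spike replay governed by local plasticity rules, while this sketch stands in for it with simple alternating rehearsal of the two tasks, which likewise pulls the weights toward a joint solution that serves both.

```python
# Illustrating weights converging toward a point between two tasks, using
# the same toy setup as before. NOTE: the paper's sleep phase is spontaneous
# spike replay with local plasticity; the alternating rehearsal below is only
# a crude stand-in that produces a similar joint weight solution.
import numpy as np

rng = np.random.default_rng(0)

def make_task(c0, c1, n=200):
    X = np.vstack([rng.normal(c0, 0.5, (n, 2)), rng.normal(c1, 0.5, (n, 2))])
    y = np.hstack([np.zeros(n), np.ones(n)])
    return X, y

def train(w, b, X, y, lr=0.5, steps=2000):
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

XA, yA = make_task([-2, 0], [2, 0])  # task A: boundary along the x-axis
XB, yB = make_task([0, -2], [0, 2])  # task B: boundary along the y-axis

# Alternate short bouts of each task instead of training them back to back.
w, b = np.zeros(2), 0.0
for _ in range(40):
    w, b = train(w, b, XA, yA, steps=50)
    w, b = train(w, b, XB, yB, steps=50)

print("task A accuracy:", accuracy(w, b, XA, yA))  # stays high
print("task B accuracy:", accuracy(w, b, XB, yB))  # stays high
print("weights:", w)  # both components nonzero: a boundary serving both tasks
```

Instead of the weights being dragged all the way to whichever task came last, they settle in between, which is exactly the kind of compromise the sleep phase produced in the real experiment.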
Not only could this help engineers develop smarter and more capable artificial intelligences (that much seems obvious), but it also provides a model through which we can study our own minds and better understand how we learn and retain memories.
In the meantime, we’re glad to know that the A.I.s of the future might have some leisure time to themselves. We ask so much of them; it’s the least we can do.