Scientists Reconstructed Videos Straight From a Mouse’s Brain and Learned Something About Cognition in the Process
New research turns neural signals into movies and hints at how brains edit sight.
by Tudor Tarita · ZME Science

Neuroscientists have successfully reconstructed moving images directly from the brain activity of mice, marking a major shift from broad brain scans to tracking thousands of individual neurons in real time.
While previous studies could rebuild only still images, or worked from low-resolution human fMRI data, this research uses single-cell recordings to capture how an animal's mind processes a continuous visual stream. By monitoring roughly 8,000 neurons in the visual cortex, the team is moving beyond "mind reading" and toward understanding how the brain warps and filters reality into a subjective experience.
Rebuilding the Videos
To do it, the researchers used two-photon calcium imaging, a microscopy method that infers neural activity by tracking calcium-related signals inside cells. Working with 10 mice, they monitored roughly 8,000 neurons in the primary visual cortex of each animal while the mice watched short video clips.
They then paired those recordings with a dynamic neural encoding model, a machine-learning system trained to predict how neurons should respond to particular parts of a movie. The model also included behavioral data such as pupil size, eye position, and running speed, since those factors can shape how visual information is processed.
Then the team reversed the process.
They started with a blank gray movie and gradually changed its pixels until the model’s predicted neural activity matched the activity actually recorded in the mouse. By repeating that process frame by frame, they produced a reconstructed version of the clip the animal had seen.
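The loop described above is, conceptually, gradient descent on the pixels themselves. Here is a toy sketch of the idea, with a random linear map standing in for the paper's trained deep encoding model (the matrix `W`, neuron counts, and learning rate are all illustrative assumptions, not details from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the trained encoding model: predicted activity = W @ frame.
# In the real study this is a deep network fit to thousands of neurons.
n_neurons, n_pixels = 200, 64
W = rng.normal(size=(n_neurons, n_pixels))

true_frame = rng.uniform(size=n_pixels)   # the frame the "mouse" actually saw
recorded = W @ true_frame                 # the recorded neural activity

# Start from a blank gray frame and nudge pixels until the model's
# predicted activity matches the recorded activity.
frame = np.full(n_pixels, 0.5)
lr = 1e-3
for _ in range(2000):
    predicted = W @ frame
    grad = W.T @ (predicted - recorded)   # gradient of 0.5 * ||pred - rec||^2
    frame -= lr * grad

error = np.abs(frame - true_frame).max()  # shrinks toward zero as the loop runs
```

Repeating this optimization for every frame yields a reconstructed movie; the real model is nonlinear and also conditions on behavior, but the inversion logic is the same.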
The videos are far from sharp, but they preserve enough motion and timing to show that the system captured important features of the original scenes.
The paper found strong similarity between the original movies and the reconstructed ones across both space and time. Adding more neurons improved the quality of the reconstructions. When the model had access to signals from thousands of neurons, and when the researchers averaged outputs from several models, the results became more accurate.
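The benefit of averaging several models' outputs is a standard variance-reduction effect: independent errors partially cancel. A minimal numerical illustration (the noise level and ensemble size here are invented for the demo, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "model" reconstructs the same frame with independent noise;
# averaging the reconstructions cancels much of that noise.
true_frame = rng.uniform(size=64)
reconstructions = [true_frame + rng.normal(scale=0.2, size=64) for _ in range(8)]

single_err = np.abs(reconstructions[0] - true_frame).mean()
ensemble_err = np.abs(np.mean(reconstructions, axis=0) - true_frame).mean()
# ensemble_err is substantially smaller than single_err
```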
Where Perception Diverges From Reality
There’s another important point: the reconstructions weren’t perfect copies of the videos. That imperfection may be the most revealing part of the study.
According to the authors, the gaps between the original videos and the reconstructed ones could help reveal how the brain edits incoming information rather than simply recording it. In other words, perception may be less like a camera and more like an interpretation.
“We don’t have a perfect representation of the world in our heads,” said Dr. Joel Bauer of the Sainsbury Wellcome Centre at University College London, the study’s lead author. “The visual processing pipeline skews and warps our representation in a way that modifies information. This deviation between reality and representations in the brain is not necessarily an error but a feature, reflecting how our minds interpret and augment sensory information. We want to explore how this happens in the brain.”
The team suggests that future work could use movie reconstruction to study phenomena such as predictive coding, selective attention, and perceptual learning—all processes that can change what the brain emphasizes or downplays.
There are still clear limits. The reconstructions cover only part of the mouse’s visual field, and the resolution remains low. The authors say future work should aim for sharper images and wider coverage.
Why This Matters for Animal Vision
The work also speaks to a deeper scientific problem: animals cannot describe their experiences. That makes it difficult for researchers to study questions about dreams, illusions, or hallucinations outside humans.
“The nice thing with humans is you can just ask someone, what did you dream about? What did you see? What are you hallucinating?” Bauer told The Guardian. “We don’t have that access with animals in the same way.”
For now the clips are short, the field of view is narrow, and the images are rough. The study does not amount to mind reading in any rich or complete sense. But it marks a step toward a way to compare the outside world with the version of it constructed inside the brain.
In mice, at least, researchers can now begin to do that with moving images rather than isolated still frames.
The results have been published in eLife.