A new study using made-for-canine movies provides a glimpse of how dogs look at the world, and what captures their attention.
Scientists used short films and brain scans to get an idea of how dogs’ minds reconstruct what they see.
Results of the small study, conducted by Emory University and published in the Journal of Visualized Experiments, suggested that dogs focus on actions rather than on who or what is performing them. This differs from what similar studies suggest about humans, who focus on the object and the action at the same time.
Scientists recorded functional magnetic resonance imaging (fMRI) neural data from two dogs as the animals watched videos in three 30-minute sessions, for a total of 90 minutes.
To make these short movies, researchers used a video camera on a stabilizer and shot everything at waist level, in an effort to provide a dog’s-eye view. The videos showed people petting dogs, as well as pets playing, cuddling, eating and walking on a leash. They also showed actions a dog might witness, such as a human sitting down, a deer crossing a path and a scooter passing by on a road.
They then analyzed the patterns in the neural data using a machine-learning algorithm.
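To give a sense of what "decoding" brain activity means here, the toy sketch below trains a simple nearest-centroid classifier on synthetic "voxel" patterns for two hypothetical action categories and measures how often it labels held-out patterns correctly. This is only an illustration of the general idea; the templates, noise level and classifier are invented for the example and are not the study's actual pipeline or data.

```python
import random

random.seed(0)
N_VOXELS = 50  # size of each synthetic "brain activity" vector

def pattern(base, noise=0.3):
    # A synthetic activity pattern: class template plus Gaussian noise.
    return [b + random.gauss(0, noise) for b in base]

# Hypothetical templates for two action categories a dog might watch.
templates = {
    "playing": [1.0 if i % 2 == 0 else 0.0 for i in range(N_VOXELS)],
    "eating":  [0.0 if i % 2 == 0 else 1.0 for i in range(N_VOXELS)],
}

def centroid(samples):
    # Mean pattern across all training samples of one class.
    return [sum(col) / len(samples) for col in zip(*samples)]

# "Training": 20 noisy samples per class, averaged into one centroid each.
train = {label: [pattern(t) for _ in range(20)] for label, t in templates.items()}
centroids = {label: centroid(s) for label, s in train.items()}

def decode(x):
    # Assign the label of the nearest centroid (squared Euclidean distance).
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# "Testing": 10 fresh held-out samples per class.
test = [(label, pattern(t)) for label, t in templates.items() for _ in range(10)]
correct = sum(decode(x) == label for label, x in test)
print(f"decoding accuracy: {correct / len(test):.0%}")
```

With clearly separated templates the toy classifier decodes every test pattern; real fMRI data is far noisier, which is why the study's reported accuracies vary by species and by classifier type.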
“We showed that we can monitor the activity in a dog’s brain while it is watching a video and, to at least a limited degree, reconstruct what it is looking at,” said Gregory Berns, Emory professor of psychology and corresponding author of the paper, in a statement on the university’s website. “The fact that we are able to do that is remarkable.”
Scientists said the technology used in the experiment previously enabled them to decode some human brain-activity patterns and read minds.
“I began to wonder, ‘Can we apply similar techniques to dogs?’” Berns recalled.
The experiment was also conducted on two humans, who lay in an fMRI scanner and watched the same 30-minute videos in three separate sessions.
For the human participants, the results showed 99 per cent accuracy in mapping the brain data onto both the object-based and action-based classifiers.
The dogs’ results were different. Researchers said the model did not work for the object-based classifiers, but decoding the action classifiers for the dogs was 75 to 88 per cent accurate.
The study suggests that humans’ brains and dogs’ brains work differently.
“We humans are very object oriented,” Berns said. “There are 10 times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects. Dogs appear to be less concerned with who or what they are seeing and more concerned with the action itself.”
Berns said dogs’ and humans’ visual systems are also different: dogs see only in shades of blue and yellow but have a higher density of vision receptors attuned to motion, so in his mind, it makes “perfect sense” that dogs’ brains would focus primarily on action.
“Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount.”
Those behind the study say they hope the research will further the understanding of how other animals perceive the world.
“While our work is based on just two dogs, it offers proof of concept that these methods work on canines,” said Erin Phillips, co-author of the paper, who did the work as a research specialist in Berns’ Canine Cognitive Neuroscience Lab. “I hope this paper helps pave the way for other researchers to apply these methods on dogs, as well as on other species, so we can get more data and bigger insights into how the minds of different animals work.”