Tuesday, October 08, 2024

Study Says Your Dog Cares What You’re Doing, Not Who You Are



The next time you glance over at your dog, your mind might jump to the affection you feel for him or her, your dog’s importance to your family, perhaps even how you raised your young puppy into the canine they are today.

But, according to a new study published in The Journal of Visualized Experiments, when Fido or Rover or Spot looks at you, he or she may be less interested in who you are than in what you do next.

Scientists who believe they have decoded visual images from a dog’s brain suggest dogs are far more attuned to the actions in their environment than to who or what is performing them.

Researchers from Emory University in Atlanta, Georgia, used an fMRI, which scans metabolic processes rather than anatomical structures, to record neural data for two awake, unrestrained dogs as they watched videos in three 30-minute sessions, for a total of 90 minutes. The dogs’ responses were then fed into a machine-learning algorithm to analyze the patterns in the neural data.
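In broad strokes, “decoding” here means fitting a model that maps patterns of brain activity to what was onscreen at the time. The sketch below illustrates only that general idea, not the study’s actual pipeline: the data are random, the labels are invented, and a scikit-learn logistic regression stands in for the neural network the researchers used.

```python
# Minimal decoding sketch on synthetic data; shapes, labels and the
# classifier choice are all assumptions, not the study's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One row per fMRI volume (time point), one column per voxel; each
# volume is labeled with the video category onscreen when it was taken.
X = rng.standard_normal((300, 2000))
y = rng.integers(0, 3, size=300)  # invented labels: 0=dog, 1=human, 2=car

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the decoder on most of the session, then "reconstruct" what was
# onscreen during the held-out volumes from brain activity alone.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", (decoder.predict(X_test) == y_test).mean())
```

On random data, accuracy hovers near chance (about 33% for three classes); with real recordings, accuracy reliably above chance is the evidence that brain activity carries information about the video.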

“We showed that we can monitor the activity in a dog’s brain while it is watching a video and, to at least a limited degree, reconstruct what it is looking at,” said Gregory Berns, Emory professor of psychology and corresponding author of the paper. “The fact that we are able to do that is remarkable.”

The research was prompted by recent advances in using machine learning and fMRI to decode visual stimuli from the human brain, work that has provided new insights into the nature of perception. Beyond humans, and now the two dogs, the technique had been applied to only a handful of other species, including some primates.

“While our work is based on just two dogs, it offers proof of concept that these methods work on canines,” said Erin Phillips, first author of the paper, who served as a research specialist in Berns’ Canine Cognitive Neuroscience Lab. “I hope this paper helps pave the way for other researchers to apply these methods on dogs, as well as on other species, so we can get more data and bigger insights into how the minds of different animals work.”

Berns and colleagues pioneered techniques for training dogs to walk into an fMRI scanner and hold completely still, unrestrained, while their neural activity was measured. His team published the first fMRI brain images of a fully awake, unrestrained dog a decade ago. That work led to what Berns calls The Dog Project, a series of experiments exploring the mind of the oldest domesticated species.
Since then, Berns’ lab has published several studies on how the canine brain processes vision, words, smells and rewards such as praise or food.

At the same time, machine-learning algorithms have improved to the point where scientists can now decode human brain-activity patterns. The algorithms identify patterns in brain data recorded while an individual watches different objects or actions in a video.

A primary challenge in applying the technique to dogs was developing video content that a dog would find interesting enough to watch for an extended period. The Emory researchers mounted a video recorder on a gimbal (a stabilizer that holds the camera steady while still letting it be re-aimed) and a selfie stick, allowing them to shoot steady footage from a dog’s-eye view, about waist high to a human or a bit lower.

The research team created a half-hour video of scenes relating to the lives of most dogs: dogs being petted and receiving treats from people; dogs sniffing, playing, eating or walking on a leash; cars, bikes and scooters driving by on a road; a cat walking in a house; a deer crossing a path; people sitting, hugging, eating or kissing; and people offering a rubber bone or a ball to the camera.

The video data was segmented by time stamps, and each segment was classified by the specific objects or actions it contained.
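As a toy illustration of that labeling step (the segment boundaries and label names below are invented, not taken from the study), each time-stamped segment can carry separate object and action tags, which later serve as the targets for the two kinds of classifiers:

```python
# Toy labeling scheme: segments and labels are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Segment:
    start_s: float                       # segment start, in seconds
    end_s: float                         # segment end, in seconds
    objects: list = field(default_factory=list)
    actions: list = field(default_factory=list)

segments = [
    Segment(0.0, 12.5, objects=["dog", "human"], actions=["petting"]),
    Segment(12.5, 20.0, objects=["dog"], actions=["sniffing"]),
    Segment(20.0, 31.0, objects=["car"], actions=["driving"]),
]

def labels_at(t_s: float, key: str) -> list:
    """Return the object or action labels onscreen at time t_s."""
    for seg in segments:
        if seg.start_s <= t_s < seg.end_s:
            return getattr(seg, key)
    return []

# Each fMRI volume can then be matched to whatever was onscreen when it
# was acquired, giving one label sequence per classifier type.
print(labels_at(15.0, "actions"))   # ['sniffing']
```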

Only two dogs that had been trained for experiments in an fMRI had the focus and temperament to lie perfectly still and watch the 30-minute video without a break in each of the three sessions: Daisy, a mixed breed who may be part Boston terrier, and Bhubo, a mixed breed who may be part boxer.

“They didn’t even need treats,” said Phillips, who monitored the animals during the fMRI sessions and watched their eyes tracking on the video. “It was amusing because it’s serious science, and a lot of time and effort went into it, but it came down to these dogs watching videos of other dogs and humans acting kind of silly.”

Two humans underwent the same experiment, watching the same 30-minute video in three separate sessions, while lying in an fMRI.
For the two human subjects, the model developed with the neural network mapped the brain data to both the object- and action-based classifiers with 99% accuracy. For the dogs, the model failed to decode the object classifiers but achieved 75% to 88% accuracy on the action classifiers.
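A rough sketch of that comparison, under the same caveats as before (synthetic data, invented label counts, logistic regression standing in for the study’s neural network): the same recordings are decoded twice, once against object labels and once against action labels, and the cross-validated accuracies are compared.

```python
# Sketch of the object-vs-action comparison; data and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 2000))       # hypothetical fMRI volumes
y_objects = rng.integers(0, 5, size=300)   # one object label per volume
y_actions = rng.integers(0, 4, size=300)   # one action label per volume

for name, y in [("object", y_objects), ("action", y_actions)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name}-based decoding accuracy: {acc:.2f}")
```

In the real experiment, the telling result was the gap: near-perfect decoding of both label types in humans, but above-chance decoding only for actions in dogs.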

“We humans are very object oriented,” Berns explained. “There are 10 times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects. Dogs appear to be less concerned with who or what they are seeing and more concerned with the action itself.”

Dogs and humans also have major differences in their visual systems, Berns added. Dogs see only in shades of blue and yellow but have a slightly higher density of vision receptors designed to detect motion.

“It makes perfect sense that dogs’ brains are going to be highly attuned to actions first and foremost,” he said. “Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount.”