A recent study shows… research proves… science tells us…
What? What does science prove about dogs? (It applies equally to cats, horses, rabbits and all the other companion animals too, but let’s keep it simple).
Or to put it another way, as behaviourists, dog trainers and pet owners, should we bother with reading any scientific studies? Isn’t owning and training pets an intuitive skill? We all know the magic old guy or gal that’s been doing it brilliantly all their life and they’ve never read a scientific paper, “self-taught, me – never read a book”, worn as a badge of pride.
And what do scientists know about dog training anyway – all white coats and randomised double-blinded placebo-controlled trials? But science is the only way we have of progressing our knowledge (unless you count witchcraft, astrology or “a bloke down the pub said…”). It is the enquiring mind that asks questions on our behalf. Necessary because only by challenging the status quo can we ever hope to improve. But how accurate, truthful and provable is it in relation to our pets?
Hmmm. Let’s start with what we know… We know that scientific study starts with a hypothesis. Somebody comes up with an idea that something may or may not be the case. They next test the hypothesis until it either holds or breaks. Then they publish the results in a peer-reviewed journal and we read them. We thereby know something new; a new “thing” is deemed to be true.
Except… (as any student of Ben Goldacre knows) there are a few flaws with the process. Firstly, why was the question asked? Why was the hypothesis put forward? Now, in cancer research the answer to that is obvious, but in dog behaviour?
Maybe (and only maybe) it is being funded by someone who hopes the result will come out in their favour and show their product to be worthy of purchase or investment. This doesn’t just mean big pharma flogging unnecessary nutritional supplements, or a new “thing” to stop “that behaviour”; it could be as subtle as a Uni research department publishing findings that are interesting or controversial, or that they would like to be true in the current political climate, so that their name becomes better known in the field and they attract more students. After all, “University Study Finds Nothing New” isn’t a great advert.
So most studies start from a position of bias, however unbiased the participants think they are trying to be.
Next there is the method; the way the hypothesis is tested. Trials and samples should be extremely large and extremely random, but that is very expensive, so often a cheap and convenient shortcut is taken, such as a telephone survey or approaching people walking their dogs in a park. This samples a portion of the population under study, but necessarily introduces more bias. You may think dog behaviour is being studied, but what is really being studied is the behaviour of dogs whose owners are interested enough to take part in a telephone survey, or of dogs that are walked in the park at a particular time of day (don’t get me started on internet studies where people are paid to take part!)
Other methods conduct complicated experiments on only a few subjects, but a few subjects allow one or two strange dogs to skew the result far more than they should. Bigger studies should be more accurate.
Still other methods use opinion. These suffer hugely from contemporary heuristic biases – rules of thumb “we all use because we know them to be right” – but are they?
If I say, “I met a dog today that was highly strung and yapped at me constantly from behind the legs of its little old lady owner”, was it more likely to be a Labrador or a Miniature Smooth Haired Dachshund?
What the heuristic latches onto is the “highly strung”, the “yapped” and the “little old lady” – and comes up with “Dachshund”, completely ignoring the fact that in the UK there are approximately 36½ thousand Labradors and fewer than 3 thousand Smooth Mini Dachsies. On numbers alone it was far more likely to have been a Labrador.
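If you like to see the numbers, here is a rough sketch of that base-rate argument. The breed counts are the ones quoted above; the chance of each breed fitting the “yappy, highly strung” description is invented purely for illustration (I’ve given the Dachshund an eight-to-one edge on temperament, which is generous).

```python
# A rough sketch of the base-rate argument. Breed counts are the ones quoted
# in the text; the "fits the description" percentages are invented for illustration.

populations = {"Labrador": 36_500, "Mini Smooth Dachshund": 3_000}

# Hypothetical: say the yappy, highly-strung description fits 40% of Mini
# Dachshunds but only 5% of Labradors (made-up figures, not data).
fits_description = {"Labrador": 0.05, "Mini Smooth Dachshund": 0.40}

matching = {breed: populations[breed] * fits_description[breed]
            for breed in populations}
total = sum(matching.values())

for breed, count in matching.items():
    print(f"{breed}: ~{count:.0f} matching dogs ({100 * count / total:.0f}% of matches)")
```

Even with the description favouring the Dachshund eight to one, the sheer number of Labradors means the yappy dog behind the little old lady’s legs still comes out as more likely to be a Labrador.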
Which brings me on to statistics (or, lies and damned lies). Stats is hard; I can’t do it. But I know people who can make the best fist of a tenuous argument by, shall we say, “using” statistics. The thing to look out for is the p value, and the magic threshold of 0.05 – that’s 5%. A result with a p value above that is generally regarded as something that could just be random variation, and therefore meaningless. If the p value comes in below 0.05, authors claim “statistical significance”, meaning that their results are unlikely to be down to chance alone.
But really? Am I really convinced that the opinion of 55% of a group of people carries that much more weight than the 45%? Hmmm… 60-40? 75-25? Maybe at 90-10 I might take notice. If we return to the earlier comments we can see why these often tenuous (even though statistically significant) claims get made.
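To see why a “significant” 55-45 split can still leave me unconvinced, here is a little sketch. It uses a simple normal approximation and sample sizes I have invented for the purpose; the helper function is mine, not anything from a stats package.

```python
import math

def two_sided_p(agree: int, n: int) -> float:
    """Normal-approximation p value for 'is this split really different from 50-50?'"""
    std = math.sqrt(n * 0.5 * 0.5)        # standard deviation under a 50-50 null
    z = abs(agree - n / 2) / std
    return math.erfc(z / math.sqrt(2))    # two-sided tail probability

# The very same 55-45 split of opinion, at three invented sample sizes:
for n in (40, 100, 400):
    agree = round(0.55 * n)
    print(f"{agree}/{n} agree: p ≈ {two_sided_p(agree, n):.3f}")
# 22/40  -> p ≈ 0.53  (not "significant")
# 55/100 -> p ≈ 0.32  (not "significant")
# 220/400 -> p ≈ 0.05 (now "significant")
```

The split itself hasn’t changed at all; only the number of people asked has. “Statistically significant” tells you the split probably isn’t pure chance – it says nothing about whether 55-45 actually matters.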
And then they write up the results to show themselves to be at the cutting edge of science, to attract more funding or students, to become famous, or simply because they don’t want to be seen to have wasted their time and effort… then get them published, after they have been sent out to peer reviewers.
Of course the papers go to review anonymously. But of course, if they are sent to a true peer, that peer should already know what research is being done, and by whom, within their specialism (or they’re not much of a peer!). And of course the reviewer would like their own next study to be looked on favourably too… Very little chance of any bias there, then.
Then come the press releases and the blogs and the facetwits – which don’t have to be scientific but do have to be eye-catching. A sort of sanitised, dramatised version of how the authors want to portray the work, dressed in its Sunday best. So you can’t expect them to be accurate representations of the findings (nor any other blogs that are based on opinion, including, ironically enough, this one!). Unfortunately they are often read, and passed on as gospel, by lots of people who have never read the whole study. The popular press and the internet are a huge source of misinformation and half-truths.
So is that it? Is all science about dogs (and other animals) worthless? Actually, no, I don’t think it is. It is all we have and I think there is some worth in every study, but you have to look for it. Forget the press release and the blogtwit and go to the study. And don’t just swallow it because “Professor XYZ” says it is so. Critically evaluate it. Rip it apart and put it back together again. If you don’t get the stats, look at the numbers – do they convince you? Who funded it and why? How many took part? Enough or not? What kind of biases could the method have introduced? And finally, read between the lines – is there an axe to grind? Who gets what out of this result being published?
What I usually find is that there are some interesting points to be taken from many studies, but the take-home message is rarely the one on the tin. And the headline almost always has to be taken with a very large pinch of salt.
So that’s it then. Intuition wins the day. The self-taught are the holders of all knowledge. Except…
Not quite. Intuition can be useful, but mostly for reacting. Psychologists pretty much agree that we make decisions about half a second before we are consciously aware of them. But we make those decisions based on the same heuristic biases I referred to earlier. Labradors aren’t yappy!
Most of the time they hold true and provide what Daniel Kahneman calls “quick and dirty” solutions, but many times they are just plain wrong. In a population of over 30 thousand, some Labradors are yappy!
But some people are undeniably better than others at dog “intuition”. That’s because it is about practice. Kahneman’s fast-processing decision-making happens without conscious effort and it can be incredibly precise. Examples are the cricket batsman who hits the ball with such precision, even though its trajectory cannot consciously be calculated in the time available, or the American football quarterback who in a split second marks the position of every player on the field and calculates where to throw the ball for best effect. When asked how they do it, the answer is invariably, “I don’t know how I know, I just do.”
It’s the same when working with dogs, but it isn’t magic, it is practice. Malcolm Gladwell suggests that any task that uses mental effort requires 10,000 hours of practical experience to react unconsciously and correctly; to become expert enough in it to fast-process it subconsciously.
Aha! Anyone who has owned a dog for about 3½ years has had 10,000 hours of practice. But that only makes them expert in that dog, not dogs in general. This is the reason that absolutely everyone thinks they know about dog behaviour. Most owners don’t actually know about dog behaviour; they know about one dog’s (or at best about half a dozen dogs’) behaviour.
You would need 10,000 hours with a range of dogs to start to understand dog behaviour intuitively (and then still be open to the suggestion that you don’t know it all!). That’s eight-hour days, Monday to Friday, 50 weeks a year, for five years, training a range of dogs. Phew.
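For anyone who wants to check the arithmetic, here’s the back-of-an-envelope version, assuming roughly eight hours a day spent in the company of dogs (my round number, not Gladwell’s):

```python
# Back-of-an-envelope check of the 10,000-hour figure, assuming roughly
# eight hours a day spent around dogs (an illustrative assumption).

TARGET_HOURS = 10_000
HOURS_PER_DAY = 8

# One pet dog, every day of the year:
years_one_dog = TARGET_HOURS / (HOURS_PER_DAY * 365)
print(f"One dog, every day: about {years_one_dog:.1f} years")   # ~3.4 years

# A range of dogs, Monday to Friday, 50 weeks a year:
working_days_per_year = 5 * 50
years_professional = TARGET_HOURS / (HOURS_PER_DAY * working_days_per_year)
print(f"A range of dogs, working weeks: {years_professional:.0f} years")  # 5 years
```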
So dog (or insert animal of choice) owning, training and behaviourism is a bit strange. It is scientifically researched, but not well, because no one wants to pay for it. You have to critically evaluate each and every study to realise its worth, and not everyone does that – it is considered a skill at level 7 of the educational framework (Master’s degree). But if you have the ability and inclination, there is a wealth of information that can be used (and even better, a whole lot of myths that can be busted).
And the intuition, which everybody thinks they have because they are intuitively right about the limited number of dogs they have experienced, is actually quite rare.
So what to do for the best? Read every scientific study you can get hold of, but don’t believe anything until you’ve ripped it apart and picked over the bones – especially things that are very intuitive or counter-intuitive (and only start to trust your intuition when you have at least five years with a range of dogs under your belt).
That’s what every dog owner who needs help should be looking for, and every dog trainer and behaviourist should be aiming for.
3 replies on “The Science of Dog Training”
Well done Dave, excellent read. What more can I say, a true expert. So-called “experts” should stop and read your comments. It was always a pleasure, although at times stressful, to work with you.
Cheers and good luck in the future
That which does not kill us makes us stronger (Friedrich Nietzsche) – I hope we both emerged a little stronger, Rod
I thoroughly enjoyed your article because you put into words what I always felt. There is an intuitiveness to dog training, and this skill can only be acquired after considerable time working with dogs. Some people call this intuitiveness “a feel.”
Also, I do believe that not all aggression is dominance-based. However, I don’t buy into the theory that dominance has no relevance in dog behavior issues; at least not from my experience.
Another good point you made, which I will be doing more of than I have in the past, is really researching what is put out as gospel truth when it comes to dog behavior.
Again, thank you for an excellent article and I hope to be reading more of your ideas in the future.