Monday, August 27, 2018

Dogs and humans respond to emotionally competent stimuli by producing different facial actions

The commonality of facial expressions of emotion has been studied in different species since Darwin, with most of the research focusing on closely related primate species. However, it is unclear to what extent common facial expressions exist in species that are more phylogenetically distant but share a need for interspecific emotional understanding. Here we used objective, anatomically based tools, FACS and DogFACS (Facial Action Coding Systems), to quantify and compare human and domestic dog facial expressions in response to emotionally competent stimuli associated with different categories of emotional arousal. We sought to answer two questions: first, do dogs display specific, discriminatory facial movements in response to different categories of emotional stimuli? Second, do dogs display facial movements similar to those of humans when reacting in emotionally comparable contexts? We found that dogs displayed distinctive facial actions depending on the category of stimulus. However, dogs produced facial movements different from those of humans in comparable states of emotional arousal. These results refute the commonality of emotional expression across mammals, since dogs do not display human-like facial expressions. Given the unique interspecific relationship between dogs and humans, two highly social but evolutionarily distant species sharing a common environment, these findings give new insight into the origins of emotion expression.


The common origin of emotions has long been a subject of scientific interest, with different emotional responses producing a diverse range of communicative elements, especially through the face. Facial expressions are also correlates of internal state in both humans and other animals, and so may be used, in part, to infer emotion alongside other component processes such as physiological activation and behavioural tendencies.
Many studies use a holistic approach (i.e. categorizing the whole face as angry, happy, etc.) to classify target facial expressions. This reflects the way the human brain processes faces, but can be problematic when examining the underlying mechanisms of emotion perception across species. For instance, humans produce a diverse range of smiling faces with different visual characteristics and different emotional meanings. As a classic example, the Duchenne smile (felt, true enjoyment) differs by one muscle contraction from the non-Duchenne smile (unfelt, usually produced in formal greetings). Moreover, during laughter, and depending on the context, a blend of both Duchenne and non-Duchenne smiles is often observed. Hence, simply classifying a facial expression as “happy” is too simplistic and of limited value for cross-species comparison. Furthermore, the same ‘holistic’ facial morphological configuration can have different functional meanings (i.e. result in distinctly different behavioural consequences) depending on the species. For example, the Play Face (PF) and the Full Play Face (FPF) are variants of the same facial expression, where the former presents an open mouth with lower teeth exposed and the latter also incorporates visible upper teeth. Both the PF and the FPF represent different degrees of playful expression in great apes (humans included). Conversely, in crested macaques, mandrills and geladas, the FPF is not just a more intense version of the PF, but is instead derived from convergence between the PF and the silent bared-teeth display (SBT), a facial expression observed in affiliative settings such as grooming. Additionally, the SBT indicates submission and appeasement in Barbary macaques, signals affinity and benign intentions in humans, and, in chimpanzees, is present in a range of situations from responses to aggression to affiliative contexts.
As an alternative to a holistic descriptive approach, the decomposition and objective description of distinct anatomical regions of the face, as implemented in the Facial Action Coding System (FACS), has been the gold standard for studying human facial expressions of emotion across individuals of different races and cultures for several decades. Each of the discrete facial movements identified (Action Units, AUs) is the result of an independent facial muscle contraction that produces characteristic changes in facial appearance, which in turn are used to identify which AUs are activated. Thus, FACS codes facial movements on a purely anatomical basis, avoiding circular reasoning or a priori assumptions about emotional meaning. Recently, FACS has been adapted to several non-human species, such as chimpanzees and orangutans, following the original methodology, and has proven to be a successful tool for objectively investigating and comparing the facial expressions of closely related species. For example, chimpanzees and humans share a near-identical facial muscle plan (differing by only one muscle), but chimpanzees display both homologous expressions (e.g. the play face and the human laugh) and species-specific ones (e.g. the pant-hoot).
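To illustrate how this kind of AU-based coding supports quantitative, species-neutral comparison, below is a minimal sketch (our own illustration, not the authors' coding pipeline); the clip data, AU labels and helper names are hypothetical placeholders.

from collections import Counter
from dataclasses import dataclass

# Hypothetical coding of short video clips into FACS/DogFACS Action Units (AUs).
# The AU labels and example clips below are illustrative, not data from the study.

@dataclass
class CodedClip:
    species: str          # "human" or "dog"
    emotion: str          # e.g. "fear", "happiness"
    action_units: list    # AUs observed in the clip, e.g. ["AU6", "AU12"]

clips = [
    CodedClip("human", "happiness", ["AU6", "AU12"]),
    CodedClip("human", "happiness", ["AU12", "AU25"]),
    CodedClip("dog",   "happiness", ["AD19", "AU26"]),
    CodedClip("dog",   "fear",      ["AU101", "EAD102"]),
]

def au_frequencies(clips, species, emotion):
    """Count in how many clips of one species/emotion each AU appears."""
    counts = Counter()
    for clip in clips:
        if clip.species == species and clip.emotion == emotion:
            counts.update(set(clip.action_units))  # presence/absence per clip
    return counts

print(au_frequencies(clips, "human", "happiness"))
# e.g. Counter({'AU12': 2, 'AU6': 1, 'AU25': 1})

Tallies of this kind can then be compared across species without first labelling whole faces as “happy” or “fearful”.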
While the human prototypical facial expressions of emotion are well established, little is known about the quantitative, empirical nature of the emotional facial displays of the domestic dog, an evolutionarily remote but socially complex species that often shares the human social environment and frequently engages in interspecific communication with emotional content. To date, functional facial expressions in dogs have been discussed largely holistically in relation to their approach-avoidance value, for example the “threat gape” in fight-flight situations, and the PF or the Relaxed Open Mouth (ROM) as a social communicative signal for play solicitation and within play bouts. With the development of a FACS for the domestic dog (DogFACS), it becomes possible to apply a bottom-up technique to investigate the composition and meaning of dogs’ facial expressions and, more importantly, to establish possible analogies with humans, with whom they socially interact.
Dogs and humans, like other mammals, have a homologous facial anatomy plan even though they belong to phylogenetically distant groups. Additionally, both species share a common mammalian neuroanatomy for basic emotions such as fear and happiness, typically live in a common social and physical environment, are very facially expressive, and respond to the same or similar conspecific and heterospecific social cues. Consequently, the facial cues and expression of emotion in home-dwelling pet dogs provide a unique comparative model for the study of phylogenetic inertia (i.e. absence of expected change and/or adaptation to an optimal state given specific selection pressures in the current environment) versus evolutionary divergence (i.e. a set of changes brought about by selection pressures from a common ancestor, resulting in homologies) versus evolutionary convergence (i.e. a set of changes from selection pressures acting in independent lineages, creating similarity in the resulting analogies).

Here, we investigated the mechanistic basis of facial expressions in humans and dogs by objectively measuring their video-recorded facial actions during immediate reactions to emotionally competent stimuli. FACS and DogFACS were applied in a range of contexts associated with four categories of emotional response: a) happiness, b) positive anticipation, c) fear, and d) frustration. Instead of selecting the basic emotions known to produce universal facial signals in humans, we focused on emotions defined by evolutionarily and biologically consistent criteria:

1) they are essential for solving adaptive problems in mammals (e.g. fear of a threat prompts flight, increasing survival),

2) they arise from corresponding physiological markers, and

3) they correlate with specific neuroanatomical regions. This approach reduces anthropomorphic and anthropocentric bias in the selection and comparison of emotions: instead of trying to identify stereotypically human emotions in dogs, we focused on examining shared underlying mammalian homologies. Furthermore, for each category of emotion (e.g. fear), we used a range of contexts to generate the emotional response (thunderstorms, specifically avoided objects, etc.). This increased the likelihood of identifying general facial responses to each category of emotional stimulus (e.g. general facial actions of fear), rather than responses tied to specific behavioural motivations (e.g. facial actions displayed during thunderstorms but not in other fear contexts). We only analysed spontaneous emotional reactions, because posed responses can differ from spontaneous ones in duration, intensity, symmetry and form.
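As a rough sketch of this "generalise across contexts" logic (ours, with made-up contexts, AU labels and a purely illustrative threshold, not the study's actual analysis), one might keep only the facial actions that recur across most of the contexts used for a given emotion category:

from collections import defaultdict

# Hypothetical observations: (emotion category, eliciting context, AUs seen in one clip).
# Contexts and AU labels are placeholders, not data from the study.
observations = [
    ("fear", "thunderstorm",   {"AU101", "EAD102"}),
    ("fear", "avoided_object", {"AU101", "AU26"}),
    ("fear", "vet_visit",      {"AU101"}),
    ("happiness", "owner_return", {"AD19", "AU26"}),
    ("happiness", "play",         {"AU26"}),
]

def generalising_aus(observations, emotion, min_context_fraction=0.5):
    """Return AUs appearing in at least min_context_fraction of the distinct
    contexts used for one emotion category, i.e. facial actions tied to the
    emotion rather than to a single eliciting situation."""
    aus_to_contexts = defaultdict(set)   # AU -> contexts in which it appeared
    all_contexts = set()
    for emo, context, aus in observations:
        if emo != emotion:
            continue
        all_contexts.add(context)
        for au in aus:
            aus_to_contexts[au].add(context)
    threshold = min_context_fraction * len(all_contexts)
    return {au for au, contexts in aus_to_contexts.items() if len(contexts) >= threshold}

print(generalising_aus(observations, "fear"))   # {'AU101'}

The 0.5 cut-off is an arbitrary choice for the sketch; the point is only that pooling several contexts per emotion lets context-specific actions drop out while emotion-general ones remain.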