Materials and methods
We tested 48 cats (28 males and 19 females). Twenty-nine (17 males and 12 females, mean age 3.59 years, SD 2.71 years) lived in five “cat cafés” (mean number living together: 14.2, SD 10.01), in which people can freely interact with the cats. The other 19 (11 males and 8 females, mean age 8.16 years, SD 5.16 years) were household cats (mean number living together: 6.37, SD 4.27). We tested household cats living with at least two other cats because the experiment required two cats as models. The model cats were quasi-randomly selected from the cats living with the subject, on condition of a minimum period of 6 months of cohabitation and of having distinct coat colors so that their faces could be more easily discriminated. We did not ask the owner to make any changes to water or feeding schedules.
For each subject, visual stimuli consisted of two photographs of two cats, other than the subject, that lived together with the subject; auditory stimuli consisted of the voice of the owner calling those cats’ names. We asked the owner to call each cat’s name as s/he usually would, and recorded the call using a handheld digital audio recorder (SONY ICD-UX560F, Japan) in WAV format. The sampling rate was 44,100 Hz and the sampling resolution was 16-bit. Each call lasted about 1 s, depending on the length of the cat’s name (mean duration 1.04 s, SD 0.02). All sound files were adjusted to the same volume using version 2.3.0 of the Audacity(R) recording and editing software26. We took a digital, frontal, neutral-expression, color photograph of each cat’s face against a plain background (resolution range: x = 185 to 1039, y = 195 to 871), which was expanded or shrunk to fit the monitor size (12.3″ PixelSense™ built-in display).
We tested cats individually in a familiar room. The cat was softly restrained by Experimenter 1, 30 cm in front of the laptop computer (Surface Pro 6, Microsoft) that controlled the auditory and visual stimuli. Each cat was tested in one session consisting of two phases. First, in the name phase, the model cat’s name was played back from the laptop’s built-in speaker four times, each call separated by a 2.5-s inter-stimulus interval. During this phase, the monitor remained black. Immediately after the name phase, the face phase began, in which a cat’s face appeared on the monitor for 7 s. The face images were ca. 16.5 × 16 cm on the monitor. Experimenter 1 gently restrained the cat while looking down at its head; she never looked at the monitor and so was unaware of the test condition. When the cat was calm and oriented toward the monitor, Experimenter 1 started the name phase by pressing a key on the computer. She restrained the cat until the end of the name phase, and then released it. Some cats remained stationary, whereas others moved about and explored the photograph presented on the monitor. The trial ended after the 7-s face phase.
We conducted two congruent and two incongruent trials for each subject (Fig. 1), in pseudo-random order, with the restriction that the same vocalization was not repeated on consecutive trials. The inter-trial interval was at least 3 min. The subject’s behavior was recorded on three cameras (two GoPro HERO7 Black and one SONY FDR-X3000): one beside the monitor for a lateral view, one in front of the cat to measure time looking at the monitor, and one recording the whole trial from behind.
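The ordering constraint described above can be sketched in code. This is an illustrative reconstruction, not the authors' published randomization procedure; the trial tuples, placeholder names, and the rejection-sampling approach are all assumptions.

```python
import random

def make_trial_order(names=("A", "B"), seed=None):
    """Return a pseudo-random order of the four trials (two congruent,
    two incongruent) such that the same vocalization (the name played)
    never occurs on consecutive trials.

    Each trial is (name_played, face_shown, condition). With two model
    cats, the congruent trials pair each name with its own face, and
    the incongruent trials swap the faces.
    """
    rng = random.Random(seed)
    a, b = names
    trials = [
        (a, a, "congruent"),
        (b, b, "congruent"),
        (a, b, "incongruent"),
        (b, a, "incongruent"),
    ]
    # Rejection sampling: reshuffle until no name repeats back-to-back.
    # Valid orders exist (the names must simply alternate), so this
    # terminates quickly.
    while True:
        rng.shuffle(trials)
        if all(t1[0] != t2[0] for t1, t2 in zip(trials, trials[1:])):
            return trials
```

With two names and four trials, the constraint forces the played names to alternate (ABAB or BABA), which is why a simple reshuffle-and-check loop suffices.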
One cat completed only the first trial before escaping from the room and climbing out of reach. For the face phase we measured time attending to the monitor, defined as visual orientation toward, or sniffing of, the monitor. Trials in which the subject paid no attention to the monitor during the face phase were excluded from the analyses. In total, 34 congruent and 33 incongruent trials for café cats, and 26 congruent and 27 incongruent trials for house cats were analyzed (69 trials excluded overall). A coder who was blind to the conditions counted the number of frames (30 frames/s) in which the cat attended to the monitor. To check inter-observer reliability, an assistant who was blind to the conditions coded a randomly selected 20% of the videos. The correlation between the two coders was high and positive (Pearson’s r = 0.88, n = 24, p < 0.001).
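For reference, the inter-observer reliability statistic is the ordinary Pearson product-moment correlation between the two coders' per-trial frame counts. A minimal sketch (the function name and the toy data in the usage note are illustrative, not the study's data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length
    sequences, e.g. frame counts from two independent coders."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Perfectly proportional codings yield r = 1.0; the reported r = 0.88 over n = 24 trials indicates strong agreement between coders.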
We used R version 3.5.1 for all statistical analyses27. Time attending to the monitor was analyzed by a linear mixed model (LMM) using the lmer function in the lme4 package version 1.1.1028. We log-transformed attention time to approximate a normal distribution. Congruency (congruent/incongruent), living environment (cat café/house), and their interaction were entered as fixed factors, and subject identity was a random factor. We ran F tests using the Anova function in the car package29 to test whether the effect of each factor was significant. To test for differences between conditions, the emmeans function in the emmeans package30 was used, testing differences of least squares means. Degrees of freedom were adjusted by the Kenward–Roger procedure.
In addition to attention to the monitor, we calculated the Violation Index (VI), which indicates how much longer cats attended in the incongruent condition than in the congruent condition. VI was calculated by subtracting the mean congruent value from the mean incongruent value for each subject; greater VI values indicate longer looking in incongruent conditions. Note that we used data only from subjects with at least one congruent–incongruent pair. Thus, if a subject had only a single congruent or incongruent data point, we used that value for analysis instead of calculating a mean. Data from 14 household cats and 16 café cats were analyzed. We ran a linear model (LM) using the lmer function in the lme4 package version 1.1.1028. Living environment (café/house) was entered as a fixed factor. To examine whether VI was greater than 0, we also conducted a one-sample t-test for each group.
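The VI computation amounts to a simple difference of per-subject means, with a single trial standing in for the mean when only one trial of a condition survived exclusion. A minimal sketch, with an illustrative function name and inputs:

```python
def violation_index(congruent, incongruent):
    """Violation Index for one subject: mean attention time in the
    incongruent condition minus mean attention time in the congruent
    condition. Each argument is a list of per-trial attention times;
    at least one trial of each condition is required (a single trial
    simply acts as its own mean). Positive VI = longer looking at the
    incongruent (unexpected) face.
    """
    if not congruent or not incongruent:
        raise ValueError("need at least one congruent-incongruent pair")
    return (sum(incongruent) / len(incongruent)
            - sum(congruent) / len(congruent))
```

For example, a subject attending 2 s and 4 s on congruent trials but 5 s and 7 s on incongruent trials gets VI = 6 − 3 = 3 s.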
Results and discussion
Figure 2 shows time attending to the monitor for each group. As predicted, house cats attended for longer in the incongruent than in the congruent condition; however, café cats did not show this difference.
The LMM revealed a significant main effect of living environment (χ2(1) = 16.544, p < 0.001) and a congruency × living environment interaction (χ2(1) = 6.743, p = 0.009). The differences of least squares means test confirmed a significant difference between the congruent and incongruent conditions in house cats (t(86) = 2.027, p = 0.045), but not in café cats (t(97.4) = 1.604, p = 0.110).
Figure 3 shows the difference in VI between groups. House cats had a significantly greater VI than café cats (F (1,28) = 6.334, p = 0.017). A one-sample t-test revealed that house cats’ VI was greater than 0 (t(13) = 2.522, p = 0.025) whereas that of café cats was not (t(15) = 1.309, p = 0.210).
These results indicate that only household cats anticipated a specific cat face upon hearing that cat’s name, suggesting that they matched the stimulus cat’s name to the specific individual. Cats probably learn such name–face relationships by observing third-party interactions; a role for direct receipt of rewards or punishments seems highly unlikely. The ability to learn others’ names would involve a form of social learning. New behaviors or other knowledge can also be acquired by observing other cats31, and a recent study reported that cats learn new behaviors from humans32. However, we could not identify the mechanism of learning; it remains an open question how cats learn other cats’ names and faces.
Environmental differences between house cats and café cats include how often they observe other cats being called and reacting to those calls. In contrast to human infants, who are able to disambiguate the referent of a new word among many potential referents33, cats might not do so, at least in this study. Saito et al. showed, in a habituation–dishabituation procedure, that café cats did not distinguish their own name from the names of cohabiting cats, whereas household cats did25. We extend this finding by showing that café cats also do not appear to learn the association between another cat’s name and its face.
We also asked whether the ability to recall another individual’s face upon hearing its name was limited to conspecifics. What about human family members? In Exp. 2 we tested household cats, re-running the same experiment using a family member’s name and face.
A limitation of Exp. 1 was that we could not analyze the effect of the duration of cohabitation with the model cat, because this differed across cats and in some cases the information was lacking (i.e., it was hard to track the exact length of time the subject and model cats had lived together, as owners quarantined cats that did not get along with others). We predicted that the longer the cat and human had lived together, the stronger the association between name and face would be, owing to more opportunities to learn it.