We introduce an experimental paradigm for studying the cumulative cultural evolution of language. In doing so we provide the first experimental validation for the idea that cultural transmission can lead to the appearance of design without a designer. Our experiments involve the iterated learning of artificial languages by human participants. We show that languages transmitted culturally evolve in such a way as to maximize their own transmissibility: over time, the languages in our experiments become easier to learn and increasingly structured. Furthermore, this structure emerges purely as a consequence of the transmission of language over generations, without any intentional design on the part of individual language learners. Previous computational and mathematical models suggest that iterated learning provides an explanation for the structure of human language and link particular aspects of linguistic structure with particular constraints acting on language during its transmission. The experimental work presented here shows that the predictions of these models, and models of cultural evolution more generally, can be tested in the laboratory.
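The iterated-learning setup described above also lends itself to simulation. Below is a minimal sketch in Python; it is not the authors' experimental procedure or model. Meanings are paired with strings, and each generation a fresh learner sees only a random subset of the previous generation's output (the transmission bottleneck). The learner here simply memorises what it sees and invents random strings for unseen meanings, so the sketch illustrates the structure of a transmission chain rather than the emergence of linguistic structure, which in the experiments is driven by human generalisation. All names and parameter values (alphabet, bottleneck size, number of generations) are illustrative assumptions.

```python
# Minimal sketch of an iterated-learning transmission chain (illustrative only).
import random
import itertools

SHAPES = ["square", "circle", "triangle"]
COLOURS = ["red", "blue", "green"]
MEANINGS = list(itertools.product(SHAPES, COLOURS))
ALPHABET = "abcd"
BOTTLENECK = 5          # meaning-string pairs seen by each new learner
GENERATIONS = 10

def random_string(length=3):
    """Invent an arbitrary label from the toy alphabet."""
    return "".join(random.choice(ALPHABET) for _ in range(length))

def learn(training_data):
    """Rote learner: memorise observed pairs, invent labels for unseen meanings.
    (Human learners generalise instead, which is what drives the real effect.)"""
    lexicon = dict(training_data)
    for meaning in MEANINGS:
        lexicon.setdefault(meaning, random_string())
    return lexicon

def transmission_error(teacher, learner):
    """Fraction of meanings the learner labels differently from its teacher."""
    return sum(teacher[m] != learner[m] for m in MEANINGS) / len(MEANINGS)

# Generation 0: an entirely random, unstructured language.
language = {m: random_string() for m in MEANINGS}
for gen in range(1, GENERATIONS + 1):
    training = random.sample(list(language.items()), BOTTLENECK)
    new_language = learn(training)
    print(f"generation {gen}: transmission error = "
          f"{transmission_error(language, new_language):.2f}")
    language = new_language
```

With a rote learner the transmission error stays high, because unseen meanings are relabelled at random each generation; replacing learn() with a generalising learner is the step at which languages can start to become more learnable over generations.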
The majority of languages with a dominant word order use either SOV or SVO (Dryer, 2013). The improvised gesture paradigm, in which participants use only gesture to convey information, is increasingly being used to investigate this asymmetry. In one of the earliest studies of this kind, Goldin-Meadow et al. (2008) claimed that Agent-Patient-Action (here represented as APV, but typically equated with SOV) reflects the 'natural' order of elements in improvised gesture. Other authors argue that APV is the natural order only for some types of event, and that constituent order in improvised gesture reflects certain properties of an event, such as its temporal structure (Christensen et al., 2016) or the semantic relation between entities and actions (Schouwstra & de Swart, 2014). Meir et al. (2017) suggest that gesture order is conditioned on saliency: human entities are more cognitively salient than inanimate entities and are therefore expressed first. Here we investigate the role of saliency in more detail. We present evidence that manipulating the visual saliency of the agent can influence the relative order of the other constituents. Twenty-eight participants were shown pictures of scenes in which a human agent performed an action on an inanimate patient, for example, a man kicking a large potted plant (Fig. 1(a)). They were instructed to describe each scene using only improvised gesture and no speech. Participants were randomly assigned to one of two conditions: the 'generic' condition, in which agents represented generic humans such as a man or a woman, or the 'character' condition, in which more visually salient agents were presented, such as a pirate or a punk. Patients were inanimate objects of a similar size to the agents and were depicted in a state of falling as a result of the action. We found that in the subset of trials where the agent, patient and action were each expressed exactly once, the predominant order in the character condition was AVP, whereas in the generic condition the majority order was APV (Fig. 1(b)). However, looking across all trials, we found that participants were significantly more likely to omit the agent in the generic condition (62% of trials) than in the character condition (17%) (p < 0.001). This suggests that participants in the
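As an illustrative aside, a difference in omission rates like the one reported above (62% vs. 17%) can be checked with a simple test on a 2x2 contingency table. The sketch below uses SciPy's Fisher's exact test on hypothetical trial counts chosen only to match the reported percentages; these are not the study's data, and the study's actual analysis may well have used a different test (for example, a mixed-effects model).

```python
# Illustrative comparison of agent-omission rates across two conditions.
# The counts below are hypothetical placeholders consistent with the reported
# percentages (62% generic vs. 17% character), not the study's data.
from scipy.stats import fisher_exact

# rows: condition; columns: [agent omitted, agent expressed]
generic = [62, 38]
character = [17, 83]

odds_ratio, p_value = fisher_exact([generic, character])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2g}")
```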
Previous research has pointed at naturalness and communicative efficiency as possible constraints on language structure. Here, we investigated adjective position in American Sign Language (ASL), a language with relatively flexible word order, to test the incremental efficiency hypothesis, according to which both speakers and signers try to produce efficient referential expressions that are sensitive to the word order of their languages. The results of three experiments using a standard referential communication task confirmed that deaf ASL signers tend to produce absolute adjectives, such as color or material, in prenominal position, while scalar adjectives tend to be produced in prenominal position when expressed as lexical signs, but in postnominal position when expressed as classifiers. Age of ASL exposure also had an effect on referential choice, with early-exposed signers producing more classifiers than late-exposed signers. Overall, our results suggest that linguistic, pragmatic and developmental factors affect referential choice in ASL, supporting the hypothesis that communicative efficiency is an important factor in shaping language structure.