Ancestry-constrained phylogenetic analysis supports the Indo-European steppe hypothesis
Abstract
Discussion of Indo-European origins and dispersal focuses on two hypotheses. Qualitative evidence from reconstructed vocabulary and correlations with archaeological data suggest that Indo-European languages originated in the Pontic-Caspian steppe and spread together with cultural innovations associated with pastoralism, beginning c. 6500–5500 bp. An alternative hypothesis, according to which Indo-European languages spread with the diffusion of farming from Anatolia, beginning c. 9500–8000 bp, is supported by statistical phylogenetic and phylogeographic analyses of lexical traits. The time and place of the Indo-European ancestor language therefore remain disputed. Here we present a phylogenetic analysis in which ancestry constraints permit more accurate inference of rates of change, based on observed changes between ancient or medieval languages and their modern descendants, and we show that the result strongly supports the steppe hypothesis. Positing ancestry constraints also reveals that homoplasy is common in lexical traits, contrary to the assumptions of previous work. We show that lexical traits undergo recurrent evolution due to recurring patterns of semantic and morphological change.*
This article has three main goals. First, we show that statistical phylogenetic analysis supports the traditional steppe hypothesis about the origins and dispersal of the Indo-European language family. We explain why other similar analyses, some of them widely publicized, reached a different result. Second, for skeptics about phylogenetic methodology, we suggest that the agreement between our findings and the independent results of other lines of research confirms the reliability of statistical inference of reconstructed chronologies. Finally, for linguistic phylogenetic research, we argue that analyses grounded in the evolutionary properties of the traits under study yield more reliable results. Our discussion makes reference to ancestry relationships, for example between Old Irish and two modern languages descended from it, Irish and Scots Gaelic, and draws on what can be learned from direct observation of changes over historical time. In our phylogenetic analyses, we introduce ancestry constraints and show that they result in more realistic inferences of chronology.
Our article is organized as follows. We first give background information about the steppe and Anatolian hypotheses, and about earlier phylogenetic analyses (§1), and discuss lexical traits (§2) and linguistic ancestry relationships (§3). We then describe our data and some measurements made directly on the data (§4), explain our phylogenetic methods (§5), and summarize our experimental results (§6). Finally, we discuss the effects of advergence (§7) and ancestry constraints in phylogenetic modeling (§8), followed by conclusions (§9) and appendices with details about methods and results.1
1. Indo-European background
1.1. The steppe and Anatolian hypotheses
The relationships of Indo-European (IE) languages have been studied for over two centuries, but it is still disputed when and where their common ancestor Proto-Indo-European (PIE) was spoken, and how they spread before they first appeared in historical records about 3,700 years ago. Two hypotheses dominate discussion.
According to a traditional hypothesis (Gimbutas 1973, 1997, Mallory 1989) accepted by many linguists (Ringe 2006, Parpola 2008, Fortson 2010, Beekes 2011), PIE was spoken in the Pontic-Caspian steppe, north of the Black and Caspian Seas. The steppe hypothesis associates IE language spread with the diffusion of cultural innovations relating to pastoralism, including horse domestication, wheeled vehicles, and the weaving of wool from woolly sheep. Analyses of archaeological data from this point of view suggest a PIE dispersal date c. 6500–5500 bp, probably in the first half of that period.2 It is now also widely assumed that Anatolian was the first branch to separate from PIE.3 Within the framework of the steppe hypothesis the common ancestor of the non-Anatolian languages, Proto-Nuclear-Indo-European (PNIE), might then have been spoken c. 6000–5000 bp.
According to an alternative hypothesis proposed by Renfrew (1987), IE languages spread into Europe with the diffusion of agriculture from Anatolia; see also Renfrew 1999, 2000a,b, 2001, 2003. This mechanism is plausible, since clear cases of language dispersal with the spread of agriculture are known elsewhere in the world (Bellwood & Renfrew 2002, Diamond & Bellwood 2003, Bellwood 2004). Given that farming reached southeast Europe by the seventh millennium bce (van Andel & Runnels 1995, Perlès 2001, Bocquet-Appel et al. 2009), the Anatolian hypothesis implies a PIE dispersal date c. 9500–8000 bp.4
In principle, evidence bearing on IE origins and dispersal may come from archaeology, genetics, or linguistics. At present, genetic data is insufficient to resolve the matter, since ancient European DNA and comparison of ancient and modern DNA confirm not just immigration from the Near East at the time of the farming dispersal, but also later population movement from northern Eurasia that is consistent with the steppe hypothesis; see Brandt et al. 2013 and Lazaridis et al. 2014. In practice, discussion has mainly focused on archaeological and linguistic arguments.
Several arguments have been advanced in favor of the steppe hypothesis; a recent review is by Anthony and Ringe (2015). The first argument is from archaeological analysis. For example, based on correlations among archaeological data, documented cultural practices, and vocabulary, researchers have argued that Proto-Indo-Iranian was spoken c. 4300–3700 bp in an area of central Asia around the Aral Sea (Lubotsky 2001, Witzel 2003, Kuz’mina 2007).5 This includes the Sintashta culture of the steppe to the north, whose economy was pastoral and whose cemeteries contain horse sacrifices and chariots (Anthony 2009), as well as the more urbanized Bactria-Margiana Archaeological Complex to the south (Hiebert 1994). The latter may have been the staging ground for Indo-Iranian dispersal. If this argument is correct, then from what we can infer about cultural interactions in this region, Indo-Iranian speakers probably entered the area from the steppe. This line of reasoning locates speakers of an Indo-Iranian precursor north of the Caspian Sea c. 5000–4500 bp, close in time and place to PNIE if the latter was spoken in the steppe c. 6000–5000 bp. In other words, the diffusion of cultural traits that are observed in the archaeological record (and in some cases reported in later textual sources) correlates well with the chronology of the steppe hypothesis. Similar analyses link IE-speaking European populations with culture changes that can be identified as moving from the steppe and eastern Europe within the chronological framework of the steppe hypothesis.
A second argument is based on inferences about environment and material culture from reconstructed vocabulary. For example, wheeled-transport vocabulary is reconstructed for PIE (Mallory & Adams 1997, 2006, Parpola 2008) or PNIE (Darden 2001). Since wheeled transport was invented long after farming reached Europe, if PIE or PNIE had such vocabulary it cannot have been spoken by early farmers. It is fair to say that most of those writing from a linguistic perspective, though not all (Krell 1998, Clackson 2000), have been impressed by the extent of the evidence. The arguments are in any case based on an assemblage of individual points, each of which needs careful evaluation.
A third argument is based on linguists’ subjective impression that early IE languages are more similar grammatically and phonologically than would be expected from the Anatolian chronology; see Table 1. After 4,500 or more years of divergence on the Anatolian chronology, some grammatical patterns remain intact with only a few changes in each language; see, for example, Hittite [eːsmi, eːsi, eːstsi] = Sanskrit [ásmi, ási, ásti]. Similarly, only a few sound changes distinguish Hittite [χanti], Sanskrit [ánti], and Greek [ánti]; such examples can be replicated throughout the grammar and lexicon. The divergence time posited by the Anatolian hypothesis is roughly twice that of the present-day Germanic, Romance, or Slavic languages, but many linguists have a subjective impression that the differences in Table 1 are not twice as great. Yet we have no generally accepted way to quantify impressions of similarity in phonology or grammar, or to show that examples like those in Table 1 are representative, so this argument remains impressionistic.6
Two further arguments probably support only a weaker position, namely, that PNIE was spoken in the Pontic-Caspian steppe. One concerns evidence for early contact between IE and early western Uralic languages (Joki 1973, Koivulehto 2001, Janhunen 2009). Given the location of Uralic in northern Eurasia, such contact must have occurred north of the Black and Caspian Seas. The evidence for contact between early Uralic languages and the Indo-Iranian branch of IE is uncontroversial, supported by dozens of unambiguous loanwords, and accepted by specialists (Rédei 1986, Lubotsky 2001, Mallory 2002). There is similarly clear evidence for contact with Balto-Slavic (Kallio 2005, 2006, 2008) and Germanic (Hahmo et al. 1991–2012). The evidence for contact with PIE itself is weaker (Kallio 2009), perhaps because Uralic languages spread from the east into northern Europe and Proto-Uralic itself was not spoken in proximity to the steppe.7
Finally, some morphological evidence suggests that the Greek, Armenian, Balto-Slavic, and Indo-Iranian subfamilies form a clade within IE (Ringe et al. 2002). Since Greek is spoken to the west of Anatolia, and Armenian and Indo-Iranian to the east, it is hard to construct a diversification scenario consistent with the Anatolian hypothesis in which these languages remained in contact after PNIE, unless the latter was itself spoken on the steppe. The steppe hypothesis makes this easier: the four subfamilies in question remained in proximity, after the departure of Tocharian to the east and Italo-Celtic, Germanic, and others to the west.
Two main arguments support the Anatolian hypothesis. First, as originally noted by Renfrew (1987), the spread of agriculture provides a plausible mechanism for large-scale language dispersal, one with clear parallels elsewhere. The language dispersal mechanisms required in the steppe hypothesis are less well understood, partly because pastoral subsistence economies are not as common worldwide. Second, beginning with Gray & Atkinson 2003, the Anatolian hypothesis has been supported by research using statistical methods adapted from biological phylogenetics (Atkinson et al. 2005, Atkinson & Gray 2006, Nicholls & Gray 2008, Gray et al. 2011, Ryder & Nicholls 2011, Bouckaert et al. 2012, 2013). This is also the focus of our research.
Methodological differences between fields contribute to the present impasse. Arguments for the steppe hypothesis are mostly qualitative rather than quantitative, and come from traditional lines of reasoning in historical linguistics and archaeology. In contrast, a crucial argument for the Anatolian hypothesis is quantitative, relying on statistical methods that originated in another discipline. Thus some researchers have expressed skepticism about chronological inference with statistical methods (Clackson 2000, 2007, Evans et al. 2006, McMahon & McMahon 2006), while some advocates of such methods have written that historical linguistics methods ‘all … involve intuition, guesswork, and arguments from authority’ (Wheeler & Whiteley 2014). We hope our work can contribute to a rapprochement between the two research traditions.
1.2. Indo-European phylogenetics
IE linguistic phylogeny has been studied for many decades (Meillet 1922, Porzig 1954, Birnbaum & Puhvel 1966), but statistical phylogenetic research is relatively recent in IE (Tischler 1973). Dyen and colleagues (1992) used lexicostatistics to produce a classification of IE languages by analyzing a word list of eighty-four modern languages and 200 basic meanings compiled by Isidore Dyen. Their method assumed a similar overall rate of lexical change in all languages. An alternative approach to classification that dispenses with this assumption was employed by Ringe and colleagues (2002) to address the issue of higher-order structure in IE. They analyzed a data set created by Don Ringe and Ann Taylor, consisting of phonological, morphological, and lexical traits from twenty-four predominantly ancient and medieval languages. These two works yielded two word lists with cognate coding; one or both, or both combined, were used in all subsequent work.
In 2003, Gray and Atkinson presented the first Bayesian phylogenetic analysis of IE chronology; they used the Dyen word list, supplemented with Hittite, Tocharian A, and Tocharian B data. Historically attested events were used to date various linguistic splits, and rate smoothing over the branches of the inferred tree was used to relax the assumption of a constant rate of change. As in all subsequent analyses prior to our work, the inferred root age supports the Anatolian hypothesis. Nicholls and Gray (2008) reworked Gray and Atkinson’s analysis by replacing the trait model, which permitted multiple gains in the same lexical trait, with one that did not; they also performed a separate analysis on the lexical traits of a subset of the languages from the Ringe-Taylor data set. Ryder and Nicholls (2011) then added a model of lexicographic coverage that enabled them to work with all twenty-four languages in the Ringe-Taylor data set, many of which, like Oscan and Old Persian, are scantly attested. Bouckaert and colleagues (2012) performed a phylogeographic analysis: its goal was to infer the geographical location of PIE, but embedded in it was a phylogenetic analysis that superseded previous work in most respects.8 Most notably, the inference software supported many different trait models, including the single-gain trait model devised by Nicholls and Gray, and the data was based on a harmonization of the Ringe-Taylor and Dyen data sets. Bouckaert and colleagues (2013) addressed an error in the coding of the data without altering their general conclusions.
Figure 1.
Analysis A1 summary tree. Modern languages with no ancestors in the data set are excluded. This tree shows median posterior node heights, median posterior branch rate multipliers (width of horizontal lines), time constraints on ancient and medieval languages (bright red bars), clade constraints (vertical black bars), and posterior clade probabilities less than 98%.
We now briefly preview our results. Using the same model and data set as Bouckaert and colleagues (2012, 2013), but with incremental changes to both, we found a root age that strongly supports the steppe hypothesis. The key difference was that we constrained eight ancient and medieval languages to be ancestral to thirty-nine modern descendants. Using ancestry constraints is similar in spirit to stipulating the known dates of historical languages, or stipulating uncontroversial clades that are not the object of inquiry. The ancestor-descendant relationships we posit are uncontroversial, but could not be inferred by the model. Figure 1 shows the result of an analysis with ancestry constraints and other refinements, where the only modern languages included are those with documented ancestors. Figure 2 shows a similar analysis with modern languages from all IE subfamilies. Figure 3 shows the inferred IE root ages in selected studies, beginning with Gray & Atkinson 2003 and ending with our work.
Figure 2.
Analysis A2 summary tree. Modern languages are included from all IE subfamilies. See Fig. 1 caption to interpret graphical elements.
2. Lexical traits
Linguists infer relationships from morphological, phonological, and lexical traits. Morphological and phonological traits, however, are interdependent in ways that are poorly understood.9 For this reason, and because large lexical data sets are available, most statistical work on language relationships analyzes lexical traits. There are at least two types of lexical traits, which we call cognate traits and root-meaning traits. They have not been distinguished in previous phylogenetic research.
Figure 3.
Inferred IE root age distributions in selected studies. GA: Gray & Atkinson 2003; NG1, NG2: Nicholls & Gray 2008, using Dyen and Ringe-Taylor data sets; RN: Ryder & Nicholls 2011; B: Bouckaert et al. 2013; C: analysis A1 corrected root age from our work (§7.1). Plotted are the 95% highest-density interval (vertical lines), the mean (NG, RN) or median (B, C) if known, and intervals for the steppe and Anatolian hypotheses (dashed lines).
Languages share a cognate trait if they share cognate words, that is, words descended from the same ancestral word form. For example, English and German share a cognate trait because timber and Zimmer ‘room’ are descended from Germanic *timra- (derived from a PIE root *demh2- ‘build’); likewise, German Gast ‘guest’ and Latin hostis ‘stranger, enemy’ define a cognate trait because they are descended from a form *ghosti- (Bammesberger 1990, Ringe 2006). Cognate words need not have the same meaning. Cognate traits are widely studied in comparative and historical linguistics, but are only occasionally used in statistical phylogenetic studies (e.g. Gray & Jordan 2000).10
More often, the data consists of root-meaning (RM) traits, which encode whether the most semantically general and stylistically neutral word for a given meaning is based on a given ancestral root; meanings are often chosen from a ‘Swadesh’ list of one or two hundred basic meanings. Such traits are the basis for most IE analyses, including ours. For example, since English feather is derived from a PIE root *pet- ‘to fly’, English has a trait [*pet-, ‘feather’]; because Latin serpens ‘snake’ is derived from *serp- ‘to creep’, Latin has a trait [*serp-, ‘snake’]. Languages can share an RM trait based on forms with different derivations, like English feather < *pet-trā (or *pet-rā) and Latin penna ‘feather’ < *pet-nā. These words share a root but are not cognate words, since they were derived with unrelated suffixes *-trā (or *-rā) and *-nā and so cannot descend directly from the same ancestral word form. By contrast, because the cognate words timber and Zimmer have different meanings, they do not define a shared RM trait.
2.1. Homoplasy and drift
Cognate and RM traits evolve very differently, especially with respect to homoplasy or independent innovation.11 Except in borrowing between languages, cognate traits ordinarily come into existence only once; this is the basis of the comparative method (Meillet 1925, Weiss 2014). Therefore models of trait evolution that do not permit homoplasy are well suited to cognate traits. But because the mechanisms of change underlying RM traits include semantic change and the derivation of new words from existing forms, RM traits are subject to at least two distinctive kinds of homoplasy. In describing them, we adapt Sapir’s (1921) term drift, which refers to the predisposition to undergo certain changes given certain precursor traits.
First, RM traits arise not only when word forms come into existence, but also when they change meanings. For example, Old English (OE) timber probably originally meant ‘building’ (like Old Saxon timbar); to model its shift in meaning to ‘timber (material for building)’, the trait [*demh2-, ‘building’] would be said to be replaced by a trait [*demh2-, ‘timber’].12 Meaning changes fall into recurrent patterns across languages (Heine & Kuteva 2002, Traugott & Dasher 2002, Urban 2014). If the same meaning change affects the same root in related languages, a homoplastic RM trait results. For example, in a crosslinguistically common shift (Wilkins 1996), reflexes of PIE *pod- ‘foot’ came to mean ‘leg’ independently in Modern Greek and modern Indic and Iranian languages. Two other examples from our data set are given in 1.
- (1).
- a. Old Irish seinnid meant ‘play or strike an instrument, sound’ but has shifted in Modern Irish and Scots Gaelic to mean ‘sing’. The ancestral root *swenh2- referred to producing sound or music more generally, but the same semantic shift to ‘sing’ is seen in Persian xvāndan.
- b. Many languages distinguish a stative ‘sit’ verb (‘be in a sitting position’), from a change-of-state one (‘sit oneself down, take a seated position’), but shifts between the senses are common. In PNIE, the root *h1eh1s- expressed the stative sense, while *sed- expressed the change-of-state sense (Rix et al. 2001). Change-of-state *sed- came to express the stative sense independently in Armenian, Balto-Slavic, Celtic, Germanic, and Italic, a shift not shared by Greek or Indo-Iranian and therefore independent in at least some of the branches where it happened.
Recurrent meaning changes like these have been called ‘rampant’ in language (Ringe et al. 2002). For such changes we use the term semantic drift.
A second source of homoplasy in RM traits is derivational drift. This refers to change that occurs because certain roots are semantically well suited to provide certain derivatives. For example, constructions that mean ‘cause to die’ are a recurrent source of verbs for ‘kill’ (Buck 1949). Therefore, as descendants of PIE *gwhen- ‘kill’ fell out of use, causative derivatives of PIE *mer- ‘die’ were used in this meaning. This happened independently in Irish and in modern Indic and Iranian languages (Rix et al. 2001), yielding a homoplastic RM trait.13 Three other examples from our data set are given in 2.
- (2).
- a. Because words for ‘animal’ often evolve from expressions meaning ‘having breath’ and so forth, PIE *h2enh1- ‘breathe’ is seen in several otherwise unrelated ‘animal’ terms. Latin animal itself is a derivative of anima ‘spirit’, a derivative of *h2enh1-. In Indo-Iranian, Persian jānvār ‘animal’ and related forms descend from *wyāna-bāra- ‘having a spirit’, in which *wyāna- is a derivative of *h2enh1-. And though not the basic word for ‘animal’, Tocharian B onolme ‘living being’ is also a derivative of *h2enh1- ‘breathe’ (D. Adams 2013).
- b. Basic words for ‘live’ include derivatives of the PNIE root *gwyeh3-: Greek zṓō, Classical Armenian keam, Latin vīvō, and so forth, all of which are primary verbal formations. Basic Celtic words for ‘live’ in our data set are derivatives of an adjective *gwih3-wo- that was derived from the same root, that is, a construction ‘be alive’. It is natural to derive a stative verb ‘X’ from a construction ‘be X’ (with a stative adjective X).
- c. In the meaning ‘snake’, reflexes of the PIE noun *h1ógwhis are widespread (Ancient Greek óphis, Vedic Sanskrit áhi-, etc.). A verb *serp- ‘crawl’ also often refers specifically to the motion of a snake. Derivatives of *serp- came to be the general term for ‘snake’ in Albanian, in Latin (and modern Romance languages), and in modern Indo-Aryan (IA) languages, for example, Hindi sā̃p, Assamese xāp. The homoplastic nature of such cases is shown by the fact that the various word forms that acquire the new meaning are often formed with different derivations. For example, though Albanian (Tosk) gjarpër ‘snake’ and Latin serpens are both based on the PIE root *serp-, the Albanian noun is formed with a suffix *-ena- (Orel 1998) and the Latin noun with a different suffix *-ent-. The word forms themselves do not go back to a single ancestor.
In short, though some analysts erroneously assume that RM traits are homoplasy-free (Atkinson et al. 2005:204, Gray et al. 2011:1094), semantic and derivational drift are endemic in RM data sets. This claim is further supported in §3 below and quantified in §4.2.14
2.2. Precursor traits and advergence
A precondition for semantic and derivational drift is the presence of precursor traits: those that tend to give rise to semantic changes or new derivatives as described in §2.1. These are also heritable RM traits. For example, a language must have a precursor trait like [*pod-, ‘foot’] in order to innovate [*pod-, ‘leg’] via semantic drift; it must have a precursor like [*serp-, ‘crawl’] to innovate [*serp-, ‘snake’] via derivational drift. In traditional terms, Ancient Greek poús must have meant ‘foot’ (not ‘mouth’ or ‘stone’) in order to gain the meaning ‘leg’, and Latin serpō must have had a suitable meaning for a derivative to mean ‘snake’. Some precursor traits, like [*pod-, ‘foot’], are in the basic-vocabulary data set, but most, like [*serp-, ‘crawl’], are not. Basic words do not make a closed semantic system: the semantic sources of basic vocabulary are often elsewhere.
A precursor trait may cause a modern descendant of an ancestral language to gain a corollary trait in parallel with a nondescendant. For example, Latin homō ‘person’ and Old Irish duine ‘person’ are both derived from PIE *dhǵhom- ‘earth’ (Ernout & Meillet 1951, Matasović 2009); the derivation is like that of earthling. Thus Latin and Old Irish share a trait [*dhǵhom-, ‘person’].15 Semantic shift in both Romance and Modern Irish produced a corollary trait [*dhǵhom-, ‘man’]: reflexes of homō shifted from ‘person’ to ‘man (male person)’, as in French homme, and a parallel change gave Modern Irish duine ‘man’.
Interacting with precursor-driven drift is a further phenomenon, which Renfrew (2000a) called advergence: ‘the process of mutual influence when two separate languages, which are in fact genetically related through descent from a common ancestor, occupy adjacent territories and continue to interact’. In other words, diversifying languages that remain in contact in a dialect network tend to share innovations (Geraghty 1983, Ross 1988, Babel et al. 2013). Three examples in our data set are given in 3. In 3b–c, note that the West Scandinavian language Norwegian has had intensive and sustained contact with East Scandinavian.16
- (3).
- a. Latin edere ‘eat’ was replaced in that meaning by reflexes of mandūcāre ‘chew’ (e.g. French manger) in all Romance languages outside of Iberia. Since these languages are contiguous but paraphyletic, contact must have played a role in causing them to gain the trait [*mand-, ‘eat’], but this also occurred because Latin had a precursor trait [*mand-, ‘chew’]: words for ‘chew’ are a regular source of words for ‘eat’ (Buck 1949), and in colloquial speech mandūcāre was occasionally used to mean ‘eat’ (Glare 1982).
- b. Germanic languages generally have the cognate of English eat in the sense ‘eat’, including Old West Norse eta, preserved in Icelandic; but Danish and Norwegian use spise.
- c. For the sense ‘narrow’ the Old West Norse word was mjór ‘thin, narrow’ or þröngr ‘narrow, crowded’, but Danish, Norwegian, and Swedish have smal.
We use Renfrew’s term in a broader sense. When closely related languages accumulate the same innovations, not present in their common ancestor, it can be hard to determine how much of this is due to drift (because the languages share precursor traits) and how much is also caused by sociolinguistic interaction. We call all such developments advergent here.
2.3. Borrowing
While our analyses (like many phylogenetic studies) use lexical data, the full picture of a language family’s phylogeny is also based on other data. In fact, lexical data is usually judged less reliable than other linguistic data as an indicator of phylogeny (Campbell & Poser 2008, Ringe & Eska 2013). Any linguistic feature can be borrowed given sufficiently intense contact (Thomason 2001), but vocabulary is most readily borrowed. Loanwords are thus often excluded in analyses whose goal is to determine relationship, including those that seek to infer phylogeny and chronology together (Gray & Atkinson 2003, Bouckaert et al. 2012, 2013). But in analyzing chronology alone, loanword exclusion is not motivated. First, the replacement of one word by another in the same semantic slot is the same kind of change whether the new item is a native word or was borrowed at some previous time. Loanwords do not normally enter a language in the basic vocabulary; they may enter as stylistic alternants, only later settling into their eventual semantic slot. For example, the French borrowing animal took several hundred years to become the most general and neutral English word in the meaning ‘animal’ (Kurath & Kuhn 1952–2001).
Moreover, known borrowings are not evenly distributed across IE languages. For modern languages like English and French and medieval languages like Old Irish and Old Norse, we know most languages they have been in contact with and we can recognize borrowings. But for ancient languages like Vedic Sanskrit and Ancient Greek, given our poor knowledge of their linguistic milieux it is almost impossible to distinguish borrowings from distributionally limited inheritances. For example, the Ancient Greek word for ‘woods’ is húlē. This word has many derivatives within Greek but no secure cognates elsewhere in IE, nor any obvious Greek morphological source (Chantraine 1999, Beekes 2010). It might have been borrowed from a language that Greek came in contact with, or it might be an internal creation whose history is obscure; we cannot know. Such cases also arise at the subfamily level. For example, the basic Germanic word for ‘drink’ (English drink, Old Norse drekka, etc.) has no IE cognates and no known origin. The uneven distribution of known loanwords is quantified in §4.2.
If known loanwords are excluded from the data, then the measured rate of change, which is inferred largely from modern and medieval languages, will be lower (cf. Greenhill et al. 2009). Since unidentified ancient loanwords cannot be excluded, the unattenuated diversity in ancient languages, combined with the lower inferred rate of change, will cause the root of the tree to be placed farther in the past. Thus it is more principled not to exclude tagged loanwords, even though in practice the effect of doing so is quite small (see Fig. 5 below, analysis A4).
3. Ancestry relationships
Table 2 lists the eight ancient and medieval languages in our data set that are known to be the ancestors of thirty-nine modern languages in the data set, and for which we implemented ancestry constraints.
A logically possible alternative is that the later languages are descended not from the putative ancestor, but from another variety spoken at the same time. For example, perhaps Modern Irish and Scots Gaelic are descended not from Old Irish, but from an undocumented variety that had already significantly diverged from it.
This alternative is the only interpretation of the phylogenetic trees given by Bouckaert and colleagues (2012, 2013), whose analyses include ancestral and descendant languages but do not constrain their relationship except that each ancestral language forms a clade with its descendants. In each such clade their results show extensive lexical change on the branch leading from the common ancestor to the ancestral language, which is then crucially not ancestral to the later descendants. For example, their maximum clade credibility tree shows Old Irish evolving for over 500 years after its common ancestor with the other Goidelic languages (for a similar effect see our Fig. 6 below).
Table 2.
Ancestry relationships in our data. Sources summarize the relevant descent relationships.
The phylogeny implied by Bouckaert and colleagues’ results would mean either that the putative ancestral languages in Table 2 were highly diglossic, with undocumented colloquial varieties possessing many basic-vocabulary differences from documented varieties, or that similar basic-vocabulary differences characterized undocumented regional dialects of the languages in Table 2. As far as we know, these claims have not featured in the scholarly literature; there are several good arguments against them, which we summarize in this section.
First, where we know about regional dialects of the ancient and medieval languages in Table 2, there is no evidence for significant basic-vocabulary differences. For example, the Ancient Greek dialects were diverse and numerous and are well documented throughout the Greek-speaking world, since the written language was not standardized until the Hellenistic period. But even thorough descriptions of Ancient Greek dialect variation (Buck 1955, Colvin 2007) do not mention relevant differences in basic vocabulary; new lexical traits in Modern Greek are the result of historical change over two millennia. Admittedly, we do not know about the regional dialects of every language in Table 2, but it would be tendentious to propose the needed highly divergent dialects only in those cases where we are ignorant.
Second, where we know about social variation in the ancient and medieval languages in Table 2, there is no evidence for significant basic-vocabulary differences. The best-understood case is Latin, where evidence of the colloquial language is found in private letters, contracts, graffiti, and other documents, as well as literary depictions and grammarians’ comments (Väänänen 1981, Herman 2000, J. N. Adams 2007, 2013, Wright 2011). Colloquial speech in any language has distinctive properties, but the evidence does not show that literary and colloquial Latin would differ in a Swadesh list. As Clackson and Horrocks (2011:236) describe Latin in the first three centuries ce, ‘there was so much geographical and social mobility’ that ‘local differences in speech tended to become levelled, and long-term divisions in the language were kept to a minimum’.17
To confirm the implausibility of a Romance ancestor contemporary with Latin but lexically quite divergent from it, we examined all cases in our data set where a Latin word was replaced in its meaning category by another word in at least twelve of the fourteen Romance languages we analyzed. On the hypothesis of extreme diglossia, these would be cases where colloquial Latin speech was already lexically distinct. There are thirteen examples. Three are in grammatical (specifically deictic) vocabulary: Latin hīc ‘this’, ille ‘that’, and ibi ‘there’ were replaced by grammaticalized phrasal expressions in Romance languages. All three words cited are documented in Latin sources of all styles and time periods; even in colloquial Latin, the vocabulary replacements had not happened (Lodge 1924–1933, Väänänen 1981, Glare 1982). The remaining ten examples are in content vocabulary. As seen in Table 3, in each case the record is clear that while the semantic change that eventually led to replacement was sometimes nascent in Latin, even in colloquial sources the original word was the most general and stylistically neutral word in the relevant meaning category. Literary and colloquial Latin had the same basic vocabulary.
Table 3.
Romance advergence. Shown are all ten content-word meaning categories in our data set where a Latin (L) word was replaced in its meaning category by another word in at least twelve of the fourteen Romance languages we analyzed. Data from Väänänen (1981:75–84) and Glare (1982).
Two other arguments concern homoplastic traits, that is, traits shared by languages in more than one IE subfamily but not by a putative ancestor in one subfamily. In some cases we have textual data showing how word meanings shift. In such cases it is possible to show that the homoplastic traits arise independently through internal processes or borrowing. For example, English has a homoplastic trait in the word belly, since its Irish cognate bolg also means ‘belly’ but the OE word for ‘belly’ was wamb. The Bouckaert et al. phylogeny might explain this as an inheritance from the common ancestor of Celtic and Germanic, transmitted through an unattested OE sibling. But text evidence shows that OE belg meant only ‘bag, bellows’ (Cameron et al. 2007) and its ME successor belī ‘bellows’ came to mean ‘stomach’ (by 1225) and then ‘abdomen, belly’ (by 1395; Kurath & Kuhn 1952–2001). This pattern of metaphorical and then metonymic extension is semantically natural, so there is no reason to attribute the ‘belly’ sense to an otherwise undocumented variety of English. Similarly, English has a homoplastic trait in the word fat ‘(animal) fat, grease’ (cf. German Fett). From an original adjectival use ‘fattened, fat’ in both Old and Middle English (Cameron et al. 2007, Kurath & Kuhn 1952–2001), this has come to refer to animal fat only in Modern English through a metonymic shift (perhaps promoted by advergence involving other Germanic languages). There is no reason to posit an undocumented OE variety in which this word already had the modern use.
Finally, even where we may lack detailed text evidence for the key stages, the homoplastic traits in question have explanations involving well-established patterns such as derivational and semantic drift. For example, ignoring borrowed words, our data set includes twenty-five traits found in modern IA languages and in non-IA languages, but not in Vedic Sanskrit (the language of the Vedas, and the most archaic attested form of Sanskrit). If Sanskrit were not the ancestor of IA languages, we could assume that these traits were inherited from an earlier IE stage via an undocumented Vedic Sanskrit sibling. Certainly Vedic had unrecorded variation, inferred from later evidence, but this has crucially not been identified for basic-vocabulary traits.18 We examined all modern IA homoplastic traits in our data set, setting aside known borrowings. All twenty-five examples reflect derivational or semantic drift.19 If they were inherited from an unattested Vedic Sanskrit sibling, there would be no reason to expect Sanskrit to have a precursor for each trait. But in fact, each homoplastic trait has a plausible precursor in Sanskrit, and each historical change from precursor to corollary trait is plausible.
Eight traits in IA languages are homoplastic due to derivational drift. A good example involves the meaning ‘kill’ (also mentioned in §2.1). In this meaning the basic Vedic Sanskrit verb was han- < PIE *gwhen- ‘kill’. But in several modern IA languages (e.g. Hindi mārnā) the basic verb for ‘kill’ is derived from the PIE root *mer- ‘vanish, die’, as in some non-IA languages (e.g. Scots Gaelic marbh). The IA verbs are derived from a causative form of that root (‘cause to die’), which is not reconstructed for PIE (Rix et al. 2001) but did exist in Vedic Sanskrit; it meant ‘cause to die’ and was not the basic word for ‘kill’.20 The change is a natural one that is liable to occur independently. The remaining seven examples of IA homoplasy due to derivational drift are given in 4: in 4a is a verb derived from a noun; the other examples are nouns and adjectives derived from verbs, usually denoting prototypical agents, instruments, or undergoers.21
- (4). Indo-Aryan homoplastic traits: Derivational drift
- a. tooth → bite (Vedic dánt- ‘tooth’ → Romani W dandel; cf. Persian dandan giriftan, etc.; the basic Vedic verb for ‘bite’ was daṃś-)
- b. be hot → warm (Vedic tap- ‘be hot’ → Romani W tato, etc.; cf. Avestan tapta-; the basic Vedic word for ‘warm’ was uṣṇá-)
- c. crawl → snake (Vedic sarp- ‘crawl, creep’ → Hindi sā̃p, etc.; cf. Latin serpēns, etc.; the basic Vedic word for ‘snake’ was áhi-)
- d. decay → old (Vedic jar- ‘decay, wither’ → Gujarati jūnũ, etc.; cf. Modern Greek géros, etc.; the basic Vedic word for ‘old’ was sána-)
- e. die → man (Vedic mar- ‘die’ → Romani W murš; cf. Persian mard, etc.; the basic Vedic word for ‘man’ was nár-); or a combination of derivational drift die → person in 4f below and semantic drift person > man in 5c.iv below
- f. die → person (Vedic mar- ‘die’ → Kashmiri murth; cf. Avestan, etc.; the basic Vedic word for ‘person’ was púruṣa-)
- g. see → eye (Vedic kaś-/cakṣ- ‘see’ → Bengali cok(h); cf. Persian čašm, etc.; though infrequent in the Ṛg Veda, the basic Vedic word for ‘eye’ was ákṣi-22)
An additional seventeen traits in IA languages are homoplastic due to semantic drift. A good example involves the meaning ‘leg’ (also mentioned in §2.1). In this meaning the basic Vedic Sanskrit noun was jáṅghā-, but several modern IA languages have a word for ‘leg’ (e.g. Bengali pā) whose PIE ancestor is reconstructed with the meaning ‘foot’ because that is the only meaning any reflex has in any ancient IE language. Outside of IA, the meaning ‘leg’ is found in modern languages like Greek (pódi) and Persian (pā), but Sanskrit pád- means only ‘foot’. The trait [*pod-, ‘leg’] shared by several modern IE languages represents a crosslinguistically well-documented metonymic shift ‘foot’ > ‘leg’ (Wilkins 1996). In the Middle IA language Pāḷī, janghā means ‘leg’, and pada still exclusively means ‘foot’ (Davids & Stede 1921–1925, Turner 1962–1966). This shows that this semantic change postdates the Middle IA period, and that the trait [*pod-, ‘leg’] is not an archaism. The remaining sixteen examples of IA homoplasy due to semantic drift are given in 5. Where there is clear evidence, a Middle IA form (from Pāḷī or Prakrit) is given to confirm that the meaning attested in Vedic was preserved in a later period, and that the semantic innovations seen in one or more modern IA languages had not yet taken place.
- (5). Indo-Aryan homoplastic traits: Semantic drift
- a. Aktionsart shifts: Change-of-state and activity verbs become stative.
- i. tremble > fear (Vedic tras- ‘tremble’ > Romani W trašel; cf. Persian tarsīdan, etc.; the basic Vedic verb for ‘fear’ was bhay-, still continued by Pāḷī bhay- ‘fear’ vs. tasati ‘tremble, fear’)
- ii. perceive > know (Vedic jñā- ‘perceive, recognize’ > Hindi jānnā, etc.; cf. Ossetic (Digor) zon-, etc.; the basic Vedic verb for ‘know’ was ved-)
- iii. fall asleep > be sleeping (Vedic svap- ‘fall asleep’ > Hindi sonā, etc.; cf. Avestan xvafsa-, etc.; the basic Vedic verb for ‘be sleeping’ was sas-)
- b. Metonymic and metaphorical shifts
- i. hide > bark (Vedic cárman- ‘hide’ > Kashmiri; cf. Ossetic (Digor) c’arɐ; the basic Vedic word for ‘bark’ was tvác-, still continued by Pāḷī tacō ‘bark, skin’ vs. camma ‘leather, shield’)
- ii. hide > skin (Vedic cárman- ‘hide’ > Hindi cām, etc.; cf. Sogdian crm, etc.; the basic Vedic word for ‘skin’ was tvác-)
- iii. cloud > sky (Vedic nábhas- ‘cloud’ > Kashmiri nab; cf. Old Church Slavonic nebo, etc.; the basic Vedic word for ‘sky’ is dyáus-)
- iv. bathe > swim (Vedic snā- ‘bathe’ > Romani najol; cf. Latin nāre, etc.; the basic Vedic verb for ‘swim’ was plav-, still continued by Prakrit pavaï ‘swims’ vs. ṇhāi ‘bathes’)
- v. forest > tree (Vedic vána- ‘forest’ > Sindhi vaṇu; cf. Avestan vana, etc.; the basic Vedic word for ‘tree’ was vr̥kṣá-, still continued by Prakrit vakkha ‘tree’ vs. vaṇa ‘forest’)
- vi. heat > sun (Vedic gharmá- ‘heat, warmth’ > Romani W kham; cf. Old Irish grían, etc.; the basic Vedic word for ‘sun’ was súvar-)
- vii. cord > rope (Vedic raśmí- ‘cord’ > Panjabi rassī, etc. < Sanskrit raśmí; cf. Zazaki resen, with a different suffix, equivalent to Sanskrit raśanā́- (Mayrhofer 1989–2001); the basic Vedic word for ‘rope’ was rájju-)
- c. Semantic generalization (i–iii) and specialization (iv–vi)
- i. blow > breathe (Old Indic ‘blow’ > Romani W phurdel, etc.; cf. Classical Armenian p‘c‘em; the basic Vedic verb for ‘breathe’ was an-)
- ii. ancient > old (Vedic purāṇa- ‘ancient’ > Hindi purānā, etc.; cf. Old High German firni, etc.; the basic Vedic word for ‘old’ was sána-)
- iii. crush > rub (Old Indic *mar- ‘crush, rub’ (Turner 1962–1966) > Nepali malnu, etc.; cf. Old Irish con·meil, etc.; the basic Vedic verb for ‘rub’ was gharṣ-)
- iv. person > man (Vedic mánu- ‘person, (Primordial) Man’ > Assamese mānuh, etc.; cf. English man, etc.; the basic Vedic word for ‘man’ was nár-)
- v. cut > split (Vedic ched- ‘cut (off)’ > Romani U čhinel; cf. Latin scindit, etc.; the basic Vedic verb for ‘split’ was bhed-, still continued by Pāḷī bhindati ‘splits, breaks’ vs. chēdana ‘cutting’)
- vi. go > walk (Vedic yā- ‘go’ > Marwari jā; cf. Hittite iyatta, etc.; the basic Vedic verb for ‘walk’ was car-)
In short, the data supports our view that the ancient and medieval languages in Table 2 are the ancestors of their listed modern descendants. Contrary to the analysis of language relationships that is entailed by Bouckaert et al. 2012, 2013, it is not necessary or even plausible to assume undocumented colloquial or sibling language varieties with markedly distinct basic lexica.
4. Data and measurements
4.1. From word list to trait matrix
We drew our data from IELEX, a database created and curated by Michael Dunn. This began as a harmonization of the Dyen and Ringe-Taylor data sets (§1.2) and has since been edited and expanded. We analyzed a selection of IELEX data that we extracted on April 21, 2013, and edited to correct Indo-Iranian coding errors. This data set is available in the online supplemental materials (http://muse.jhu.edu/journals/language/v091/91.1.chang01.html).
IELEX contains words for 207 meaning classes in over 150 IE languages. Within each meaning class, words are grouped by their IE root, so it is easy to construct RM traits from the data. In preparation for analysis, we assembled trait matrices in which each column encodes an RM trait. Each cell indicates the presence (1) or absence (0) of a particular trait in a particular language (Table 4). An ideal database would provide exactly one word for each slot (each pairing of meaning class and language), but a slot often contains several words or none at all. When IELEX lists multiple words in a slot, the slot is overloaded, and we take each word to indicate a distinct trait (cf. Friulian in Table 4).23 When IELEX provides no words for a slot, the slot is empty, and we assume that the language in question could have any (none, one, or multiple) of the traits attested in the meaning class by any other language. The cells corresponding to an empty slot are marked hidden (written ‘?’, as for Hittite in Table 4).
Table 4.
Words from IELEX for ‘feather’ (left), coded as a trait matrix (right). The competing forms in Friulian result in an overloaded slot. The lack of a form in Hittite yields an empty slot; each corresponding cell, marked ‘?’, is a hidden cell.
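To make the coding concrete, the scheme just described can be sketched in a few lines of Python; the helper build_trait_matrix, the dictionary representation, and the root labels below are purely illustrative and are not the pipeline actually used to process IELEX.

```python
# Sketch of RM-trait coding: presence (1), absence (0), and hidden cells ('?').
# Illustrative only; real IELEX entries and root labels differ.

def build_trait_matrix(wordlist, languages):
    """wordlist maps (language, meaning) -> set of root labels for that slot."""
    # Every root attested in a meaning class, by any language, defines an RM trait.
    traits = sorted({(root, meaning)
                     for (lang, meaning), roots in wordlist.items()
                     for root in roots})
    matrix = {}
    for lang in languages:
        row = []
        for root, meaning in traits:
            slot = wordlist.get((lang, meaning))
            if slot is None:          # empty slot: every cell in this meaning class is hidden
                row.append('?')
            elif root in slot:        # overloaded slots yield several 1s for one language
                row.append(1)
            else:
                row.append(0)
        matrix[lang] = row
    return traits, matrix

# Toy example for the meaning 'feather' (cf. Table 4).
wordlist = {
    ('English',  'feather'): {'*pet-'},
    ('Friulian', 'feather'): {'*pet-', '*plum-'},   # overloaded slot: two competing forms
    # No entry for ('Hittite', 'feather'): the slot is empty, so its cells are hidden.
}
traits, matrix = build_trait_matrix(wordlist, ['English', 'Friulian', 'Hittite'])
print(traits)              # [('*pet-', 'feather'), ('*plum-', 'feather')]
print(matrix['Hittite'])   # ['?', '?']
```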
In IELEX, recognized loanwords are tagged, so we can choose to exclude them from or include them in the trait matrix. In principle there are many ways to implement loan exclusion; we follow Bouckaert et al. 2012 and put 0 in the cell of a tagged loanword (cf. German and Greek in Table 5, center). Note that this produces a different outcome from an empty slot (cf. Lycian). We implement loan inclusion by adhering literally to how RM traits are defined: if a language has a certain root in a certain meaning, then it has the trait, regardless of whether the word is a loan (Table 5, right).
Table 5.
Words from IELEX for ‘fruit’ (left), coded with loan exclusion (center) or loan inclusion (right). Loanwords are parenthesized.
Table 6.
Four data sets. An empty slot is when a language is unattested in a meaning class; an attestation is any cell in the trait matrix bearing 1; a hidden cell is any cell that corresponds to an empty slot; and B refers to Bouckaert et al. 2012, 2013.
Table 7.
Earlier languages in our analyses. Shown are language names we use, ISO 639-3 abbreviations, and in one case a language name used in IELEX. Cornish (*) is excluded from the narrow data set; languages with two asterisks (**) are excluded from the narrow and medium data sets; the broad data set includes all languages above (see text for explanation). Dates represent the earliest (or for Cornish latest) substantial attestation of basic vocabulary. Ancient Greek dates refer to the classical period, not earlier poetic dialects.
Table 8.
Contemporary languages in our analyses. Shown are language names we use, ISO 639-3 abbreviations, and in some cases language names used in IELEX. Languages with one asterisk (*) are excluded from the narrow data set; languages with two asterisks (**) are excluded from the narrow and medium data sets; the broad data set includes all languages above (see text for explanation). Because some IELEX word lists originated in the 1960s and some sources are from the mid-twentieth century or earlier, contemporary languages are constrained to 1950–2000 ce = 50–0 bp. The one exception is English, assigned a date of 2000 ce = 0 bp because some of the software that we use requires that at least one taxon be current.
From IELEX data we assembled three data sets (Table 6). The broad data set consists of ninety-four languages and 197 meaning classes. Of the 207 meaning classes in IELEX, three (‘blow’, ‘father’, ‘mother’) were excluded as being especially susceptible to sound symbolism, and seven others were excluded because they were unattested in more than 30% of the languages, probably because they are not in the Dyen data set, which was constructed around the 200 items proposed by Swadesh (1952). We also assembled two smaller data sets because of the possibility that the many sparsely attested languages in the broad data set could bias the analysis. Our medium data set uses a subset of eighty-two languages and 143 meaning classes from the broad data set, chosen so that no language would have too many hidden cells (see the last row in Table 6). We assembled a narrow data set by removing from the medium data set any modern language that lacks an ancestor in the data set, leaving fifty-two languages. Finally, to follow up on the analyses of Bouckaert and colleagues (2012, 2013), we reconstructed the data set featured in their analyses, which they derived from IELEX at an earlier date. The languages we analyze are listed in Tables 7 and 8, and the clade constraints assumed in our analyses are listed in Table 9.
4.2. Data set measurements
Using our data sets, independent of phylogenetic analysis, we quantify claims made earlier: that homoplastic traits are common (§2.1), and that known loanwords are unevenly distributed between ancient and nonancient languages (§2.3). We also show that overloading occurs in ancient and nonancient languages to similar degrees; this alleviates the concern that ancient language lists may be overloaded with words from too great a span of time, biasing our analyses. Finally we show that empty slots occur more often in less stable vocabulary; this is relevant to our argument that empty slots are to be avoided when assembling data sets for phylogenetic analysis (§6.1, analysis A3).
First, we measure homoplasy using the broad data set; results are shown in Table 10. In the data set, eight ancient or medieval languages are direct ancestors of thirty-nine modern languages. For each ancestral language, we find that 2–16% of the traits found in its descendants are absent in the ancestral language but present elsewhere in IE, even after excluding loans. Overall, 7% of traits in these modern languages are directly observed to be homoplastic.
Second, using the narrow data set, we divide the languages into two samples (ancient languages attested over 2,000 years ago, and others) and measure the frequency of tagged loanwords in each language (Table 11, left). As expected, loanwords are tagged at a significantly lower rate in ancient languages (p ≈ 0.014, Wilcoxon rank sum test). Borrowing rates in the later languages are consistent with those observed crosslinguistically in Bowern et al. 2011, implying that the numbers of tagged loanwords in ancient languages are indeed unrealistically small.
Table 10.
Directly observed homoplasy in the broad data set. Shown for each ancestral language are: (a) the number of traits present in both a descendant and a nondescendant, but not in the ancestral language itself; (b) the number of traits present in any descendant; (c) their ratio; (d–f) the same after excluding recognized loanwords.
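For concreteness, the quantities in columns (a)–(c) could be tabulated roughly as follows, assuming a trait matrix keyed by language as in §4.1; the function name and data representation are illustrative, not the scripts actually used to produce Table 10.

```python
# Sketch: directly observed homoplasy for one ancestral language (cf. Table 10).
# Assumes `matrix` maps each language to a dict {trait: 1, 0, or '?'}.

def homoplasy_counts(matrix, ancestor, descendants):
    nondescendants = [l for l in matrix if l != ancestor and l not in descendants]
    in_descendants = {t for d in descendants
                      for t, v in matrix[d].items() if v == 1}               # column (b)
    homoplastic = {t for t in in_descendants
                   if matrix[ancestor].get(t) == 0                           # absent in the ancestor
                   and any(matrix[l].get(t) == 1 for l in nondescendants)}   # present elsewhere in IE
    return len(homoplastic), len(in_descendants)   # columns (a) and (b); (c) is their ratio
```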
Table 11.
Loanword rates (percentage of forms tagged as loans) and average slot loads (average number of forms in nonempty slots) in the narrow data set, for five ancient (before 2000 bp) and forty-seven nonancient languages. The ancient languages are Hittite, Vedic Sanskrit, Avestan, Ancient Greek, and Latin.
Third, using the narrow data set, we measure for each language the average number of forms in nonempty slots (Table 11, right). The degree of slot overloading is not significantly different in ancient and nonancient languages (p ≈ 0.42, Wilcoxon rank sum test). This is at least partly because the ancient-language data in IELEX originated with Ringe and Taylor’s carefully curated word lists (Ringe et al. 2002).
Finally, we use the broad data set to show that empty slots tend to occur in less stable vocabulary. For each of the 197 meaning classes, we plot the number of roots attested in it against the number of languages not attesting it (Figure 4). The former is a proxy for the lexical replacement rate in the meaning class. The correlation is significant (p ≈ 3.9 × 10⁻⁶) and the effect size is considerable (the slope is 1.5).
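The two significance tests just cited are standard; as a sketch, using scipy and placeholder numbers rather than the actual counts from our data sets, they can be run as follows.

```python
# Sketch of the two statistical checks above, with placeholder data
# (the real inputs are the loanword rates and meaning-class counts described in the text).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Loanword rates: five ancient vs. forty-seven nonancient languages (Wilcoxon rank-sum test).
ancient_loan_pct = np.array([1.0, 0.5, 2.1, 1.4, 3.0])      # illustrative percentages
nonancient_loan_pct = rng.uniform(2, 20, size=47)            # illustrative percentages
statistic, p = stats.ranksums(ancient_loan_pct, nonancient_loan_pct)
print(f'rank-sum test: p = {p:.3g}')                          # the text reports p ≈ 0.014

# Stability vs. attestation: roots per meaning class vs. number of languages
# with an empty slot for that class (197 classes in the broad data set).
roots_per_class = rng.poisson(lam=20, size=197)               # placeholder counts
languages_missing = 0.5 * roots_per_class + rng.normal(0, 3, size=197)
fit = stats.linregress(languages_missing, roots_per_class)
print(f'slope = {fit.slope:.2f}, p = {fit.pvalue:.2g}')       # the text reports slope ≈ 1.5, p ≈ 3.9e-6
```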
5. Phylogenetic methods
5.1. Background
Inferring the dates of reconstructed languages has long been associated with glottochronology, which holds that basic vocabulary replacement occurs at a universal constant rate and can be exploited to estimate the evolutionary time span between related languages (Swadesh 1952, 1955, Lees 1953). Glottochronology fell into disfavor as it became apparent that lexical replacement rates vary greatly from one language to the next, and between meaning classes. Yet the methodological crux of our work, ancestry constraints, was an element in the earliest glottochronology, which used ancestor-descendant pairs like Latin/French and Old/Modern English to measure rates of lexical replacement. Our innovation is to use ancestry constraints with a phylogenetic model; this overcomes many of the problems of traditional glottochronology. We discuss three such problems.
Figure 4.
Fast-evolving meaning classes are less well attested. This is a scatterplot of all 197 meaning classes in the broad data set. Overlapping dots have been combined into larger dots.
Rate variation between languages
The central tenet of glottochronology, that the lexical replacement rate is a universal constant, has been refuted many times. Bergsland and Vogt (1962), for example, found that Riksmål Norwegian underwent lexical replacement at roughly five times the rate of Icelandic with respect to their common ancestor, Old Norse. Blust (2000) found a similar ratio for the fastest and slowest evolving languages in a survey of 224 Austronesian languages. However, it became apparent with the availability of large word lists that such rate variation could be characterized with statistical distributions. We follow previous researchers in assuming that the rates along each branch of the IE tree obey a log-normal distribution (Bouckaert et al. 2012). This choice is driven by mathematical convenience, but it is also motivated by the fact that a log-normal distribution can be prominently peaked (implying rate homogeneity) or fairly flat (implying substantial heterogeneity) depending on its parameterization, which is not set a priori, but is inferred from the data.
Meaning-class rate variation
In the ninety-four languages of our broad data set, there are over sixty roots in each of the meaning classes ‘near’, ‘hit’, ‘dirty’; and two or fewer in ‘two’, ‘three’, ‘four’. This implies a tremendous range in replacement rates across meaning classes. This has been confirmed by those working in the glottochronological tradition with Austronesian languages (Dyen et al. 1967) and those working with phylogenetic models and IE languages (Pagel et al. 2007). Both found that with lists of roughly 200 words, the lexical replacement rate was two orders of magnitude greater in the least stable meaning classes than in the most stable ones. Following a common practice in biology, we assume that rates of evolution vary from trait to trait according to a gamma distribution (Felsenstein 2004:217–20, Yang 1994). Like the log-normal distribution, the gamma distribution is mathematically convenient, and can be peaked or broad depending on its shape parameter, which can be inferred from the data.
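To make the two distributions concrete, the sketch below draws branch rates from a log-normal distribution (the clock model just discussed) and among-trait rates from a gamma distribution, using arbitrary parameter values rather than values inferred in any of our analyses, to show how each can be sharply peaked or widely spread.

```python
# Illustration of the two rate-heterogeneity distributions used in the clock and trait models.
# Parameter values are arbitrary, chosen only to contrast peaked and spread-out shapes.
import numpy as np

rng = np.random.default_rng(42)

# Log-normal branch rates with mean 1: small sigma implies near-homogeneous rates
# across languages, large sigma implies strong rate variation.
for sigma in (0.1, 1.0):
    rates = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=100_000)   # E[rate] = 1
    lo, hi = np.percentile(rates, [2.5, 97.5])
    print(f'log-normal sigma={sigma}: central 95% of rates in [{lo:.2f}, {hi:.2f}]')

# Gamma among-trait rates with mean 1: the shape parameter alpha controls how
# much replacement rates differ between meaning classes.
for alpha in (0.3, 10.0):
    rates = rng.gamma(shape=alpha, scale=1 / alpha, size=100_000)          # E[rate] = 1
    lo, hi = np.percentile(rates, [2.5, 97.5])
    print(f'gamma shape={alpha}: central 95% of rates in [{lo:.2f}, {hi:.2f}]')
```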
Nonindependent development
To infer the distributions that characterize rate variation, a model should be provided with as many ancestor-descendant pairs as possible. However, such pairs tend to overlap. For example, as Swadesh (1955) noted, if both Latin/French and Latin/Spanish were used to estimate the lexical replacement rate, the many centuries before the French-Iberian split would be counted twice. A phylogenetic model overcomes this problem by jointly inferring the tree topology, the split times, and the replacement rates along each branch. This makes it possible to aggregate information from, say, thirteen Romance languages without counting the span between Classical Latin and Proto-Romance thirteen times.
5.2. Phylogenetic model and inference
Our phylogenetic analyses were carried out with a customized version of BEAST (Drummond et al. 2012), a software program for Bayesian phylogenetic analysis via Markov chain Monte Carlo (MCMC) sampling of the posterior distribution.24 BEAST takes the following elements as input.
- • trait data, interpreted as the outcome of a diversification process. This data takes the form of a trait matrix, as described in §4.1.
- • phylogenetic model: A description of the mechanics of change whereby a single language diversifies into many. The phylogenetic model has many parts. The tree prior gives the probability of obtaining a tree with a specific topology and chronology, prior to seeing the data. The trait model describes how a trait evolves. The clock model describes how rates of evolution in different languages may vary.
- • constraints: Specific bits of prior knowledge that shape or narrow the set of trees generated by the tree prior. Clade constraints specify the subgroups that must be found in the inferred tree. Time constraints specify hard or soft time intervals for splits and leaf nodes. Ancestry constraints force some languages to be directly ancestral to other languages.25
Our analyses feature three kinds of trait models. The restriction site character (RSC) model describes traits that can transition an arbitrary number of times between presence and absence; the frequency with which a trait is present is a global parameter (see Appendix A).26 The covarion character model describes traits that can transition between being present and absent (like restriction site characters) but can also transition between being fast-evolving and slow-evolving (Appendix B). The stochastic dollo character (SDC) model describes traits that come into existence exactly once, which suits it to traits that cannot be homoplastic (Appendix C); it is ill-suited to modeling RM traits, especially when the data set contains ancestral languages. Regardless of the trait model used, we account for the fact that a trait may exist without being observed at any of the leaves of the tree (Appendix D).
BEAST outputs a trace of first-order parameters (Tables A1–A3), which includes tree topology and chronology. The first half of the trace is discarded, and the second half is [End Page 217] used as a posterior sample. From the trees in the sample we construct a summary tree (e.g. Figs. 1–2 above, Fig. 6 below).27
5.3. Evaluating the steppe and anatolian hypotheses
For each phylogenetic analysis, we run BEAST twice: once with trait data to produce a posterior sample, and once without trait data (removing it from the XML configuration file) to produce a prior sample, which is used in conjunction with the posterior sample to evaluate the steppe and Anatolian hypotheses.
To determine the degree to which the data D supports the steppe hypothesis over the Anatolian hypothesis, we calculate the following Bayes factor.
- (6) KS/A = Pr(D | tR ∈ ΩS) / Pr(D | tR ∈ ΩA)
This indicates the modeling improvement derived from constraining the root age tR to the steppe hypothesis interval ΩS = [5500, 6500] versus constraining it to the Anatolian hypothesis interval ΩA = [8000, 9500]. We interpret the Bayes factor conventionally: support for the steppe hypothesis is very strong if KS/A > 30, strong if 10 < KS/A < 30, substantial if 3 < KS/A < 10, and weak to negligible if 1 < KS/A < 3 (Jeffreys 1961). In order to compute KS/A we apply Bayes’s theorem to obtain the following.
- (7) KS/A = [Pr(tR ∈ ΩS | D) / Pr(tR ∈ ΩS)] / [Pr(tR ∈ ΩA | D) / Pr(tR ∈ ΩA)]
The two numerators are estimated by noting the fraction of the posterior sample for which tR ∈ ΩS and tR ∈ ΩA. The two denominators are estimated by noting the fraction of the prior sample for which tR ∈ ΩS and tR ∈ ΩA.
Note that the prior sample consists of trees drawn from the tree prior, subject to the constraints. To the extent that the prior sample favors the steppe hypothesis (i.e. if the root age in this sample often falls into ΩS), the steppe hypothesis will be penalized in the Bayes factor. This guards against concluding in favor of the steppe hypothesis on the basis of a priori assumptions about the age of the root that are implicit in the tree prior. As an additional precaution, most of our analyses use the generalized skyline coalescent tree prior (Strimmer & Pybus 2001), which gives approximately the same weight to both hypotheses.
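The following sketch illustrates, under stated assumptions, how the Bayes factor in 6 and 7 can be estimated from prior and posterior samples of the root age; the samples generated here are synthetic placeholders, not output from our BEAST analyses.

```python
import numpy as np

OMEGA_S = (5500.0, 6500.0)    # steppe hypothesis interval, years BP
OMEGA_A = (8000.0, 9500.0)    # Anatolian hypothesis interval, years BP

def fraction_in(interval, samples):
    lo, hi = interval
    samples = np.asarray(samples, dtype=float)
    return np.mean((samples >= lo) & (samples <= hi))

def bayes_factor_steppe_vs_anatolian(posterior_root_ages, prior_root_ages):
    """K_S/A: posterior odds of the steppe interval divided by its prior odds."""
    post_s = fraction_in(OMEGA_S, posterior_root_ages)
    post_a = fraction_in(OMEGA_A, posterior_root_ages)
    prior_s = fraction_in(OMEGA_S, prior_root_ages)
    prior_a = fraction_in(OMEGA_A, prior_root_ages)
    return (post_s / post_a) * (prior_a / prior_s)

# Synthetic samples standing in for BEAST output (illustration only):
rng = np.random.default_rng(1)
posterior = rng.normal(6500.0, 800.0, size=10_000)   # toy posterior of the root age
prior = rng.uniform(4000.0, 12000.0, size=10_000)    # toy prior of the root age
print(f"K_S/A (toy) = {bayes_factor_steppe_vs_anatolian(posterior, prior):.1f}")
```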
6. Experiments
Our phylogenetic analyses comprise four blocks:
- • Block A contains our main analyses A1–3 and other analyses that give similar results despite different experimental conditions (§6.1).
- • Block B contains analyses that replicate and follow up on the results of Bouckaert et al. 2012, 2013 (§6.2).
- • Block C contains analyses that quantify the effects of using a single-gain trait model, of lacking ancestry constraints, and of lacking a model of among-trait rate heterogeneity (§6.3).
- • Block D contains analyses that validate the time constraints that are placed on ancestral languages in our main analyses (§6.4). [End Page 218]
We perform a correction on the main analyses after discussing the effects of advergence on the inferred root age (§7.1).
6.1. Analyses A1–7: dating PIE
This section describes our main analyses A1–3 and supporting analyses A4–7, which give similar results despite different experimental conditions. They are summarized in Figure 5. The three plots for the corrected A1–3 analyses are discussed in §7.1.
In analyses A1–3, we estimate the date of PIE by running BEAST with ancestry constraints on our narrow, medium, and broad data sets. We use the generalized skyline coalescent tree prior and the RSC trait model.28 We assume log-normal distributed among-branch rate variation and gamma-distributed among-trait rate variation. We refrained from putting time constraints on unattested languages for reasons discussed in §7.2. All three data sets yield root age distributions whose median falls within the range for the steppe hypothesis, and whose 95% HPD interval does not overlap with the range for the Anatolian hypothesis. In each case the Bayes factor indicates strong or very strong support for the steppe hypothesis. The differences in the results for analyses A1–3 reflect differences in the construction of the data sets.
A1 differs from A2 in excluding modern languages that lack attested ancestors. Our intent was to minimize the effects of borrowing between, and advergent innovations within, IE clades that lack attested ancestors (British Celtic, Slavic, Albanian, Iranian, and Low German). The result is that the inferred root age for A1 is 460 years lower.29
A3 differs from A2 in having more languages and more meaning classes (197 rather than 143), which results in a trait matrix with many more empty slots. A language with many empty slots will seem more conservative than it actually is due to the fact that our model will underestimate the number of unique traits in the language. There are two unrelated reasons for this. First, as discussed in §4.2, empty slots tend to occur in unstable meaning classes, which contain more unique traits than stable meaning classes do. Second, as discussed in Appendix D, our model does not account for unique traits that are not directly observed. Hittite, Tocharian A, and Tocharian B are particularly sparsely attested, and are basal in the tree. Since they are treated as more conservative than they really are, the root age is driven down.30
Analyses A4–7 are all based on A1, but differ in some respect. In A4, loans are excluded as described in §4.1, which increases the root age slightly. A5 operates on just the ninety-two meaning classes that Swadesh (1955) deemed most suited to glottochronological [End Page 219]
Figure 5.
Root age prior distribution (tan) and posterior distribution (blue, rescaled by ½) with median posterior and 95% HPD interval (arrowheads); and root age ranges for Anatolian and steppe hypotheses (red). Root age statistics and Bayes factors are printed for each analysis.
[End Page 220]
analyses. A6 has a constant population coalescent tree prior (Kingman 1982). A7 uses a covarion trait model (see Appendix B) and is otherwise identical to A1. Bouckaert and colleagues (2013) found that the covarion model was more appropriate than RSC or SDC for IELEX data. Following them, we disabled the modeling of gamma-distributed among-site rate variation and fixed the covarion parameter δ1 to 0.5. The resulting median posterior root age is higher, and the distribution is wider.
To further validate our results, we removed the ancestry constraint for each of the eight ancestral languages in turn from analysis A1. The median root age fluctuated from 5950 to 6130 bp; the 95% HPD interval remained within the range [4850, 7480]; the Bayes factor KS/A fluctuated from 39 to 100.
6.2. Analyses B1–3: following up bouckaert et al. 2012, 2013
To ensure that our finding does not reflect differences in data sets used, we replicated work by Bouckaert and colleagues using their data set and experimental conditions. Their data set is described in §4.1. Tagged loans were excluded. Their main analysis had log-normal distributed among-branch rate variation and a covarion trait model. Among-site rate variation was not explicitly modeled, and the covarion parameter δ1 was fixed at 0.5. Using BEAST they found a root age posterior distribution with median 7580 bp and 95% HPD interval 9350–5970. In analysis B1 we removed the phylogeographical elements from the BEAST configuration file that was published with Bouckaert et al. 2013, and replicated the inferred chronology, which supports the Anatolian hypothesis (see Fig. 5).
In analysis B2 we reran B1 after discarding the six most sparsely attested languages. The resulting root age was low enough to give significant support to the steppe hypothesis. The six languages are listed here with the number of meaning classes (out of 207) attested in each: Lycian: 34, Oscan: 52, Umbrian: 57, Old Persian: 74, Luvian: 99, Kurdish: 100. One motivation for removing these languages is that poorly attested languages bias the inferred chronology (§6.1 above). We also found that at least thirty-three of the ninety-nine Luvian words in the data set were erroneously coded for cognacy: they should have been coded as having Hittite (or in a few cases Lycian) cognates, but were coded as unique traits.31 The miscodings made Luvian seem more innovative than it is and resulted in a higher root age in analysis B1.
In analysis B3 we reran B2 with ancestry constraints, and obtained a result that significantly favors the steppe chronology. Since the B3 trait data is the same as that of Bouckaert et al., the result cannot be attributed to IELEX contributions made by some of us. Nor, aside from the shedding of the sparsely attested languages, can it be attributed to a different choice of trait model, languages, clade constraints, or time constraints.
6.3. Analyses C1–5: single-gain trait model
Analyses C1–5 serve to quantify the effect of imposing the SDC trait model on the narrow data set under experimental conditions that are typical of previous work. However, a direct comparison between analysis A1 and an analysis with SDC is not possible, since (i) SDC is strictly incompatible with ancestry constraints, and (ii) as implemented in BEAST, the SDC model does not support the modeling of among-trait rate heterogeneity. [End Page 221]
Figure 6.
Analysis C3 summary tree. Modern languages with no ancestors in the data set are excluded, but ancestry constraints are not used. There are time constraints on splits. See Fig. 1 caption to interpret other graphical elements.
Thus we also toggle these modeling elements to control for their contributions. As in previous work, loans were excluded and some splits bear time constraints, as illustrated [End Page 222] in the summary tree for analysis C3 (Figure 6).32 The way in which C1–5 relate to each other, and to analyses A1 and A4, is as follows.
- (8).
The results of analyses C1–5 are summarized in Fig. 5 above. C1 finds a higher root age than A4; this is an effect of time constraints on splits, as discussed in §7.2. Comparing C1 with C3 (or C2 with C4) shows that lacking ancestry constraints adds 1,200 years (or 1,110 years) to the root age. The reason for this is discussed in §8. Comparing C4 and C5 shows that using SDC adds another 170 years to the root age. Since SDC cannot posit multiple gains, it reconstructs trait gains nearer to the root than RSC would. This corresponds to a lengthening of branches that are closer to the root, with the root being placed farther in the past. Branches in leafward parts of the tree change little in length since they are subject to time constraints at leaves and splits.
Finally, we note that modeling among-site rate heterogeneity affects the root age in a systematic way. Comparing C1 with C2 (or C3 with C4) shows that modeling rate heterogeneity adds 690 years (or 780 years) to the root age. The low root age in C2 and C4 is artifactual, and C5 would find a higher root age if rate heterogeneity were modeled. We conclude this section with a mathematical description of this artifact, followed by a proof.
Under RSC, a trait that is present at time 0 will be present at time t with probability
- (9) α + (1 − α)e^(−rt)
where r > 0 is the rate at which the trait evolves and α ∈ (0, 1) is the stationary probability of trait presence. (For convenience, we use a different parameterization of the model from what is found in Appendix A.) Assuming rate homogeneity, the inverse function
- (10) fr(R) = −(1/r) ln((R − α)/(1 − α))
serves as an estimator of the evolutionary time spanned by two languages. The retention rate R is the fraction of traits of one language found in the other. Under conditions of rate heterogeneity, however, the retention rate is different from what is given in 9. With p(r) as the probability distribution over rates, the retention rate between two languages separated by evolutionary time t is expected to be as in 11.
- (11) R(t) = ∫ p(r) (α + (1 − α)e^(−rt)) dr
Nonetheless, an analyst may choose to ignore rate heterogeneity and apply the estimator in 10 directly to such a retention rate in order to estimate the time spanned by two languages. Using a nominal rate of evolution c, the analyst obtains 12.
- (12) t̂ = fc(R(t)) = −(1/c) ln((R(t) − α)/(1 − α))
[End Page 223]
We claim that this results in progressively worse underestimates of t as t gets larger. In other words, as t grows, the ratio of the estimated to the true time span, fc(R(t))/t, will fall. To see the relevance of this to dating PIE, consider that most calibration points are closer in time to the present than to PIE. Not modeling rate heterogeneity will thus shorten the rootward branches of the tree, lowering the root age.
Claim. If Var(r) > 0 and t2 > t1 > 0, then fc(R(t2))/t2 < fc(R(t1))/t1.
Proof. Without loss of generality, let t1 = 1. Then the claim becomes
- (13) fc(R(t2)) < t2 · fc(R(1))
Since fc is strictly decreasing for c > 0, this is equivalent to the claim
- (14) R(t2) > fc^(−1)(t2 · fc(R(1)))
We define
- (15) φt(R) = fc^(−1)(t · fc(R)) = α + (1 − α)((R − α)/(1 − α))^t
and note that φt(R) does not depend on c. Thus 14 is equivalent to the claim
- (16) ∫ p(r) φt2(α + (1 − α)e^(−r)) dr > φt2(∫ p(r) (α + (1 − α)e^(−r)) dr), that is, R(t2) > φt2(R(1))
Since φt(R) is strictly convex over the domain R ∈ [α, 1] when t > 1, the validity of this claim follows from Jensen’s inequality (Jensen 1906).33
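A numerical illustration of the artifact, assuming gamma-distributed rates purely for concreteness (any rate distribution with positive variance behaves the same way): the estimated time divided by the true separation time falls as the separation grows.

```python
import numpy as np

alpha = 0.01                                        # stationary probability of presence
rng = np.random.default_rng(2)
rates = rng.gamma(0.3, 1.0 / 0.3, size=200_000)     # mean-one rates with high variance

def expected_retention(t):
    """R(t): expected fraction of traits retained after separation time t."""
    return np.mean(alpha + (1.0 - alpha) * np.exp(-rates * t))

def f(R, c=1.0):
    """Time implied by retention rate R under an assumed homogeneous rate c."""
    return -np.log((R - alpha) / (1.0 - alpha)) / c

for t in (0.5, 1.0, 2.0, 4.0, 8.0):
    t_hat = f(expected_retention(t))
    print(f"t = {t:4.1f}: t_hat = {t_hat:6.3f}, t_hat / t = {t_hat / t:.3f}")
```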
6.4. Analyses D1–8: ancestral language time constraints
Following the example of Ryder and Nicholls (2011), we perform a series of analyses to validate the time constraints on ancestral languages. Analyses D1–8 are identical to A1, except that in each one a different ancestral language is allowed to float: the time constraint is removed, but the ancestry constraint is left intact. The Bayes factor between the model in A1 and each of the models in D1–8 indicates the extent to which the additional time constraint in A1 improves modeling of the data. Formally, it is a ratio of marginal likelihoods.
- (17) Ki = Pr(D | model of A1) / Pr(D | model of Di)
This is equivalent to
- (18) Ki = Pr(ti ∈ Ωi | D, model of Di) / Pr(ti ∈ Ωi | model of Di)
where ti is the time of the floating ancestor in analysis Di, and Ωi is the interval to which it is constrained in A1. We obtain the numerator (respectively, denominator) by noting the fraction of the posterior (respectively, prior) sample from analysis Di for which ti ∈ Ωi. The resulting Bayes factors and the prior and posterior distributions of ti are shown in Figure 7. That they are all greater than one suggests that the time constraints (but not necessarily the ancestry constraints) are plausible in relation to the data. The median posterior root age fluctuates from 5770 bp (Latin floating) to 6120 bp (Vedic Sanskrit floating).
7. The effects of advergence
7.1. Correction for advergence at the root
Readers familiar with IE languages may notice that many splits in our summary trees (Figs. 1–2) are conspicuously [End Page 224] younger than historical evidence allows. In Table 12 we compare eight split dates derived from historically attested events to those inferred in analyses A1–2. A2 dates are younger by 330 ± 165 years. A1 has just three of these splits since it operates on a smaller set of languages, but the split dates are on average 100 years younger still. This bias is an artifact of the interaction between advergence and the RSC trait model. Under RSC, advergent parallel gains that occur immediately after a split tend to be treated as a single gain that predates the split, which causes the ages of splits to be underestimated; this is illustrated in Figure 8. We do not know the conditions under which PIE split apart, but by extrapolating from the eight observations in Table 12, we can statistically characterize and correct for the effects of advergence associated with the first split in the tree.
Figure 7.
Prior (tan) and posterior (blue) distributions of the time of the floating language, and the interval it is constrained to in A1 (red). The Bayes factor Ki indicates the extent to which time-constraining the floating language in analysis Di improves the model with respect to the data.
Immediately after a split, advergent developments occur relatively often, because newly diverging languages share precursor traits that generate the same drift-like changes, and because they may also remain in close contact. The frequency of advergent developments tapers off as the child languages grow farther apart. To a first approximation, the node age will be underestimated by a time proportional to the number of advergent developments following the node. We assume that this perturbation of the node age has little effect on the rest of the tree since the inferred rates along adjacent branches remain accurate. (This is no longer the case if a time constraint has been placed on the split; see §7.2.) Since we observe almost no correlation between the split date and the size of the discrepancy in Table 12, we assume that advergence has tapered [End Page 225] off for most of the splits in the table, and that the inferred PIE root age is underestimated by a similar amount.
Table 12.
Split dates for selected IE subgroups. Listed for each split is a date for the earliest evidence of linguistic differentiation, followed by dates from analyses A1–2, in years bp .
To account for advergence, we augment our phylogenetic model to generate a corrected root age tcor in the following way. After drawing a tree T with root age tR from the tree prior:
- (i). Pick one of the eight (in the case of A1, three) nodes in Table 12 uniformly at random.
- (ii). Compute the discrepancy between its historically attested age and its age in T. (All relevant clades in Table 12 have been constrained to exist in T.)
- (iii). Add this discrepancy to tR to obtain tcor.
T is then used to generate the data as in any other phylogenetic model, and tcor is not used for any purpose except hypothesis evaluation. Generating samples from this augmented model is simple in practice: we augment each tree in the prior and posterior samples produced by BEAST with tcor according to the procedure just described. To compute KS/A, we use the same formulas as in §5.3 after substituting tcor for tR. All three corrected analyses offer at least significant support for the steppe hypothesis (see Fig. 5).
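The correction can be sketched in code as follows; the procedure mirrors steps (i)–(iii) above, but the split ages used here are made-up placeholders rather than the values in Table 12.

```python
import numpy as np

rng = np.random.default_rng(3)

def corrected_root_ages(root_ages, attested_split_ages, inferred_split_ages):
    """
    root_ages:           sampled root ages t_R, one per tree in the sample
    attested_split_ages: historically attested ages of the calibration splits
    inferred_split_ages: per-tree inferred ages of the same splits,
                         shape (number of trees, number of splits)
    Returns one corrected root age per tree, following steps (i)-(iii).
    """
    root_ages = np.asarray(root_ages, dtype=float)
    attested = np.asarray(attested_split_ages, dtype=float)
    inferred = np.asarray(inferred_split_ages, dtype=float)
    n_trees, n_splits = inferred.shape
    chosen = rng.integers(n_splits, size=n_trees)                          # (i)
    discrepancy = attested[chosen] - inferred[np.arange(n_trees), chosen]  # (ii)
    return root_ages + discrepancy                                         # (iii)

# Two toy trees and three toy calibration splits (placeholder numbers):
print(corrected_root_ages(
    root_ages=[5900.0, 6050.0],
    attested_split_ages=[1750.0, 1650.0, 2500.0],
    inferred_split_ages=[[1500.0, 1400.0, 2200.0],
                         [1480.0, 1300.0, 2150.0]]))
```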
For IE, the uncorrected and corrected root age both have interpretations. The uncorrected root age corresponds to a time within the period during which pre-Proto-Anatolian and pre-PNIE were in contact, developing under the influence of advergence. The corrected root age is the analog of the historical dates in Table 12. These were derived [End Page 226]
Figure 8.
Advergence-induced inference artifact. In advergence, following a typical linguistic split (A) there are sets of homoplastic, advergent developments (linked blue dots), interspersed with nonhomoplastic developments (black dots). In the inferred tree (B), a more recent split, consistent with nonhomoplastic reconstructions of the homoplastic traits, is favored.
Figure 9.
Advergence interacts with time constraints on splits. Five nodes bear time constraints (red bars); the central node X is an unattested language and the other four are attested languages. In the actual tree (A), there are nonhomoplastic developments (black dots) and sets of homoplastic, advergent developments (linked blue dots). How the homoplastic traits tend to be reconstructed (B) corresponds to exaggerated among-branch rate variation.
Figure 10.
Jogging and inferred root age. In reality (A), Latin is ancestral to French and Old Irish is ancestral to Modern Irish, but in an erroneously inferred tree (B), the paths from Latin to French and from Old to Modern Irish jog rootward. An effect of jogging is that innovations between Latin and French, and between Old and Modern Irish (black dots), are reckoned as occurring over a greater evolutionary time interval; this results in a lower inferred rate of lexical change, and ultimately a higher inferred root age.
from either the earliest evidence of linguistic differences in the relevant diversifying populations, or the date when distinct subpopulations formed. Since both the steppe and the Anatolian hypotheses refer to the beginnings of PIE dispersal, it is better to evaluate the hypotheses using the corrected root age. The span between the corrected and uncorrected dates is analogous to the span between the first appearance of Latin regional [End Page 227] differences that were ancestral to later Romance isoglosses, around 250 ce (Table 12), and the date after which Romance varieties were not mutually comprehensible, late in the first millennium ce.34
7.2. Time constraints on splits
When there is advergence after a split, placing a time constraint on the split may distort the inferred rates of change on adjacent branches (Figure 9). Under RSC, parallel advergent gains resemble a single gain on the rootward side of the split, so the rates on leafward branches are underestimated, and the rate on the rootward branch is overestimated if there is an attested rootward language also constrained in time. This leads to exaggerated among-branch rate variation. If there is no attested rootward language, the overall rate of change will be underestimated, and the root will be placed farther in the past (Garrett 2006:148, n. 6). For these reasons we avoid placing time constraints on splits in most of our analyses. Analyses with predominantly modern languages (Gray & Atkinson 2003, Nicholls & Gray 2008) or predominantly ancient and medieval languages (Nicholls & Gray 2008) cannot avoid having time constraints on splits, and may thus produce inaccurate chronological results.
8. Discussion and follow-up analyses
8.1. The purpose of ancestry constraints
We have shown that the ancestry relationships in Table 2 are justified by the linguistic facts (§3), and that ancestry constraints dramatically affect the root age in phylogenetic analyses (§§6.2–6.3). Now we turn to the question of why they have this effect, and the related question of why phylogenetic models typically fail to infer ancestor-descendant relationships from the data.
The answer to the first question is straightforward. Models that lack ancestry constraints will systematically analyze each ancestral language as coordinate with its descendants. Thus, for each ancestor-descendant pair, the path from one to the other jogs: it goes rootward before going leafward (Figure 10B). This jogging increases the evolutionary time separating the two languages, which leads to a lower estimate of the rate of lexical change and, ultimately, to an elongated tree. As noted in §3, the amount of jogging can be considerable. For example, in an analysis of ours that lacks ancestry constraints, jogging almost triples the evolutionary time spanned by Old and Modern Irish (Fig. 6).
This brings us to the second question: why does the model fail to infer ancestorhood, and in some cases fail so badly? At least two aspects of the phylogenetic model are to blame: the tree prior and the trait model. A realistic tree prior must assign a nonzero probability to the set of trees where one language is ancestral to another. (If an ancient language is newly discovered, it might be directly ancestral to a known language.) However, most tree priors in common use, including the generalized skyline tree prior that we and Bouckaert and colleagues (2012, 2013) use, assign an infinitesimal probability to that set of trees. Due to our Bayesian methodology, even if the maximum likelihood tree were a tree in that set, we would not find it, since the trees sampled from the posterior distribution would almost surely not be from that set.35
If the tree prior explains why there is always some jogging in the inferred tree, the trait model explains why it can be so extensive. The three trait models in this work [End Page 228] (SDC, RSC, and covarion) are poorly suited to modeling drift. As shown in §4.2, our data sets contain many straddling traits. These are traits that appear in the modern descendant of an attested ancestor, and also in a nondescendant of the ancestor, but not in the ancestor itself. Under the SDC trait model, a straddling trait forces the phylogenetic model to posit jogging so that the trait can be lost in the ancestral language while it is retained in the modern descendants. But even with RSC, descent relationships are hard to infer. Since precursor traits, like all RM traits, tend to be localized to one part of the tree, their corollary traits, which include straddling traits, tend to occur in closely related languages. Two closely related languages (one a descendant of an ancestral language, the other a nondescendant) will exhibit more straddling traits than RSC can account for, unless jogging is introduced into the tree. This account of jogging is supported by two experiments: §8.2 shows that the number of straddling traits in closely related languages is higher than predicted when ancestry constraints are in place, and §8.3 shows that an elevated number of such straddling traits produces jogging when ancestry constraints are removed. We have not probed the behavior of the covarion trait model as thoroughly, but there are theoretical reasons to believe that it is also poorly suited to modeling drift, as discussed in §8.4.
The foregoing implies that ancestry might be inferable with an appropriate tree prior and trait model, though drift might still produce data with insufficient statistical information for inferring descent relationships. If descent relationships are known from sources outside the data, it is best to use ancestry constraints.
8.2. Straddling traits analysis
In this analysis and those of §8.3, we construct triplets of languages and analyze each triplet in isolation. Each row in Table 13 corresponds to a triplet, which consists of an ancestral language A, a descendant B, and a close nondescendant C.
Table 13.
Analysis of drift and jogging on triplets of languages. For each ancestral language A, B is a descendant and C is a close nondescendant. There are N traits attested in one or more of A–C (loanwords excluded); of these, n is the number of traits straddling A; ñ is the expected number of such traits based on parameters inferred in analysis A4; tA is the center of the time interval to which A is constrained; tJ(1) and tJ(2) are the values of the TMRCA of A and B under the two scenarios described in the text. Abbreviations: Arm: Classical Armenian, Av: Avestan, Grk: Ancient Greek, L: Latin, OE: Old English, OHG: Old High German, OIr: Old Irish, OWN: Old West Norse, Skt: Vedic Sanskrit.
[End Page 229]
In this analysis we investigate the discrepancy between the actual number of straddling traits and the expected number of traits under the phylogenetic model of analysis A4, in which ancestry constraints are used. If, as we theorize, precursor traits condition the gain of corollary traits, then parallel gains would tend to occur in closely related languages. We would expect the number of straddling traits in B and C to be higher than predicted by A4. In this analysis, as in A4, tagged loanwords are excluded so as to rule out borrowing as a confounding factor.
For each triplet A–C, we consider the N traits in the data set of A4 that appear in A, B, or C, after excluding meaning classes in which A, B, or C has an empty slot. Of these traits, we count the number of straddling traits: traits present in B and C, but not in A. In most cases this number n is greater than the expected number ñ. The latter is computed by constructing a rooted tree of the three languages, with the topology and chronology as they are in the summary tree of analysis A4. First-order parameters are set to their median posterior values: π1 = 0.0103, ρ0 = 7.66 × 10⁻⁶, σ = 0.351, α = 1.004. With branch and trait rate multipliers integrated out, we compute the fraction of observed traits that are expected to be straddling, which is scaled by N to obtain ñ. For ancestral languages with one or two descendants, Table 13 lists all triplets; otherwise it lists the two triplets with the lowest and highest n/N ratio.
Except when the ancestral language is Classical Armenian, we find that the number of straddling traits is greater than expected, which indicates an elevated number of parallel gains between closely related languages. This discrepancy is greatest of all for Germanic triplets, where A and C are especially close, and advergence between the descendants of A and C boosts the number of straddling traits.
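The counting itself is straightforward; the following sketch computes N and n for one triplet from a toy trait matrix. The 1/0/None coding and the language data are hypothetical, and the expected count ñ, which requires the phylogenetic model, is not computed here.

```python
def straddling_count(matrix, meaning_class, a, b, c):
    """
    matrix:        dict mapping language -> list of trait values (1, 0, or None)
    meaning_class: list giving the meaning class of each trait column
    Returns (N, n): N = traits attested in at least one of A, B, C within the
    usable meaning classes; n = traits straddling A (in B and C, not in A).
    """
    usable = {m for m in set(meaning_class)
              if all(matrix[lang][i] is not None
                     for lang in (a, b, c)
                     for i, cls in enumerate(meaning_class) if cls == m)}
    N = n = 0
    for i, cls in enumerate(meaning_class):
        if cls not in usable:
            continue
        va, vb, vc = matrix[a][i], matrix[b][i], matrix[c][i]
        if 1 in (va, vb, vc):
            N += 1
            if va == 0 and vb == 1 and vc == 1:
                n += 1
    return N, n

# Hypothetical toy data: five traits across three meaning classes.
mc = ["I", "I", "we", "we", "dog"]
mat = {
    "Latin":     [1, 0, 1, 0, 1],
    "French":    [0, 1, 0, 1, 1],
    "Old Irish": [0, 1, 1, 1, None],   # 'dog' unattested, so the class is excluded
}
print(straddling_count(mat, mc, "Latin", "French", "Old Irish"))   # -> (4, 2)
```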
8.3. Maximum likelihood analyses
In these analyses we continue to analyze the triplets of languages in Table 13 as we investigate the impact of straddling traits on jogging. We expect that jogging will increase with the number of straddling traits. We begin by constructing a rooted tree with the following topology.
- (19) ((A, B)J, C)R, that is, the root R has children J and C, and node J has children A and B
We set the times of the leaves to the center of the time interval to which they are constrained in analysis A4, and infer the split times via maximum likelihood. The data consists of seven counts, one for each pattern of attestation (three counts for traits attested exclusively in one language; three counts for traits attested in two languages; a count for traits attested in all languages), summing to N. Note that one such count denotes the number of straddling traits. Branch and trait rate multipliers are integrated out. This analysis is done twice.
- (i). First-order parameters are set to their median posterior values in analysis A4, as described in §8.2.
- (ii). The same analysis is repeated with the count for straddling traits increased by one.
In Table 13, tJ(i) refers to the time of node J in scenario i. The amount of jogging in scenario i, or the length by which the path between A and B increases, is thus 2(tJ(i) − tA).
Under the first scenario, we find that the number of traits straddling an ancestor correlates [End Page 230] poorly with the amount of jogging, which depends also on the other pattern counts. However, we find that adding a straddling trait, as in the second scenario, causes the amount of jogging to increase substantially unless the pattern counts are such that the amount of jogging is pinned at zero, as is the case for the Latin/Spanish/Old Irish triplet.
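For readers who want to experiment, the following is a simplified sketch of the triplet analysis: it fits the split times tJ and tR by maximum likelihood from the seven pattern counts under a plain two-state trait model with a single global rate and stationary presence frequency, without the branch and trait rate multipliers (or the A4 parameter values) used in our analysis; all numbers are illustrative.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

PI1, RATE = 0.3, 1e-4        # stationary presence frequency and base rate (toy values)
Q = RATE / 2.0 * np.array([[-1.0 / (1.0 - PI1), 1.0 / (1.0 - PI1)],
                           [1.0 / PI1, -1.0 / PI1]])

def pattern_probs(t_J, t_R, t_A, t_B, t_C):
    """Probability of each presence pattern (x_A, x_B, x_C), conditioned on
    the trait being attested in at least one of the three languages."""
    P_RJ, P_RC = expm(Q * (t_R - t_J)), expm(Q * (t_R - t_C))
    P_JA, P_JB = expm(Q * (t_J - t_A)), expm(Q * (t_J - t_B))
    root = np.array([1.0 - PI1, PI1])          # stationary root distribution
    probs = {}
    for xa in (0, 1):
        for xb in (0, 1):
            for xc in (0, 1):
                p = 0.0
                for xr in (0, 1):
                    for xj in (0, 1):
                        p += (root[xr] * P_RC[xr, xc] * P_RJ[xr, xj]
                              * P_JA[xj, xa] * P_JB[xj, xb])
                probs[(xa, xb, xc)] = p
    z = 1.0 - probs[(0, 0, 0)]
    return {k: v / z for k, v in probs.items() if k != (0, 0, 0)}

def fit_split_times(counts, t_A, t_B, t_C):
    """Maximum-likelihood (t_J, t_R) from pattern counts (times in years BP)."""
    lo = max(t_A, t_B)
    def neg_ll(x):
        t_J, t_R = x
        if not (t_J >= lo and t_R >= max(t_J, t_C)):
            return np.inf
        probs = pattern_probs(t_J, t_R, t_A, t_B, t_C)
        return -sum(c * np.log(probs[pat]) for pat, c in counts.items())
    start = [lo + 500.0, max(lo, t_C) + 1500.0]
    return minimize(neg_ll, start, method="Nelder-Mead").x

# Toy pattern counts, keyed by (x_A, x_B, x_C); these are not real data.
counts = {(1, 0, 0): 20, (0, 1, 0): 30, (0, 0, 1): 35,
          (1, 1, 0): 40, (1, 0, 1): 15, (0, 1, 1): 5, (1, 1, 1): 60}
print(fit_split_times(counts, t_A=2050.0, t_B=50.0, t_C=2450.0))
```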
Figure 11.
Jogging vs. number of straddling traits. For three selected triplets, we plot tR and tJ as the number of straddling traits is varied between zero and ten. Clear dots indicate the actual number of straddling traits in our data set. The dashed line indicates tA.
To enlarge on the relationship between jogging and the number of straddling traits, we vary the number of straddling traits and plot the inferred tJ and tR for three selected triplets (Figure 11). We observe that tJ rises with the number of straddling traits, unless it is pinned at zero, as is the case for the first several points in the third plot. The inferred tR is realistic in the first two plots, but not in the third. In all cases, however, the effect of straddling traits on tR is relatively small in the vicinity of the actual number of straddling traits observed.
8.4. Properties of covarion traits
When we compare the mathematical properties of covarion traits to the hypothesized behavior of RM traits, two incongruities stand out. First, the covarion model describes a time-reversible trait, whereas in the general case, a precursor trait can give rise to a corollary trait, but not vice versa. (Most types of semantic change are unidirectional.) For a further illustration of this point, consider the following unrooted tree with equal-length branches, labeled with observed values of an RM trait at languages A–D.
- (20).
When a time-reversible trait is made to evolve along a tree with a strict clock, the root can be placed anywhere on the tree without altering the tree’s likelihood (Felsenstein 2004:204–5). But our account of drift (and basic linguistic intuition) suggests that it is more likely for the tree to be rooted at A than at B. If rooted at A, the parallel gain in B and C can be explained as drift. If rooted at B, then its loss in D and reappearance in C require two separate explanations. [End Page 231]
A second mismatch between the covarion model and RM traits concerns the fact that the latter tend to be localized: generally a trait can be gained only where its precursor trait is present. For instance, consider traits that are absent in OE and present in Modern English. We would expect to see more such traits in Old High German than in Hittite. This is borne out in IELEX data: of the thirty-five traits that are absent in OE but present in Modern English as nonloanwords, three are also in Old High German (‘man’, ‘blow’, ‘fat’) and none are in Hittite. The covarion trait model, however, would predict the opposite: unless a language descends from OE, the farther it is from OE, the more such traits it is expected to have. We conclude this section by sketching a mathematical proof for a more general form of this claim.
Claim. Let (Xt)t≥0 be a continuous-time Markov chain (CTMC) that represents a covarion trait, with states C0, C1, H0, H1 as defined in Appendix B and a transition rate matrix
- (21).
where π0, π1, δ0, δ1, a, s > 0 and π0 + π1 = 1 and δ0 + δ1 = 1. If the trait is absent at time zero, the probability that the trait is present at time t grows monotonically with t. In other words, Pr{Xt ∈ {C1, H1} | X0 = C0} and Pr{Xt ∈ {C1, H1} | X0 = H0} are both nondecreasing with respect to t.
Sketch of proof. For convenience we define a second CTMC (Yt)t≥0 that indicates whether the trait is hot or cold: Yt = 1 if Xt ∈ {H0, H1}, or else Yt = 0. Note that Y is time-homogeneous with the transition rate matrix in 22.
- (22).
We also define a family of random variables (Zt)t≥0 that indicates whether the trait is present: Zt = 1 if Xt ∈ {C1, H1}, or else Zt = 0. Conditioned on Y, Z is a nonhomogeneous CTMC whose transition matrix is given in 23.
- (23).
Thus, conditioned on Y and on the trait being absent at time 0, the trait is present at time t with probability
- (24).
which is a nondecreasing function of t. Removing the conditioning on Y involves summing over the possible paths of Y in the time interval [0, t]. Regardless of how this is done, the summands are all nondecreasing functions of t, and thus the sum must also be a nondecreasing function of t.
Remark. We can apply this to the example of a trait that is missing in OE and present in Modern English by saying that X0 is the state of the trait in OE, that Xt is the state of the trait in some nondescendant of OE, and that t is the evolutionary time spanned by these two languages. Since this trait is present in Modern English, we know the distribution of X0. The probability that the trait is present in a nondescendant Pr{Zt = 1 | Z0 = 0} grows monotonically with t.
Remark. An RSC trait is equivalent to a covarion trait with a = 1. Thus the foregoing holds for RSC traits as well. [End Page 232]
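The claim can also be checked numerically. The sketch below uses an assumed covarion-style generator; its normalization differs from the BEAST parameterization in Appendix B, but the feature the argument relies on, presence-preserving switching between a fast and a slow class, is the same. It verifies that the presence probability is nondecreasing from both trait-absent starting states.

```python
import numpy as np
from scipy.linalg import expm

pi1, delta1, a, sw = 0.2, 0.4, 0.1, 0.5      # illustrative values, not from the paper
pi0, delta0 = 1.0 - pi1, 1.0 - delta1

# States ordered C0, C1, H0, H1. Gains and losses within the hot class occur at
# rates pi1 and pi0; the cold class is slower by the factor a; switching between
# cold and hot (C0<->H0, C1<->H1) preserves trait presence and keeps the
# expected hot fraction at delta1.
Q = np.array([
    [-(a * pi1 + sw * delta1), a * pi1, sw * delta1, 0.0],
    [a * pi0, -(a * pi0 + sw * delta1), 0.0, sw * delta1],
    [sw * delta0, 0.0, -(pi1 + sw * delta0), pi1],
    [0.0, sw * delta0, pi0, -(pi0 + sw * delta0)],
])

present = np.array([0.0, 1.0, 0.0, 1.0])     # indicator of states C1 and H1

for start, label in ((0, "C0"), (2, "H0")):  # both trait-absent starting states
    probs = [expm(Q * t)[start] @ present for t in np.linspace(0.0, 50.0, 200)]
    nondecreasing = bool(np.all(np.diff(probs) >= -1e-12))
    print(f"start = {label}: presence probability nondecreasing: {nondecreasing}")
```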
8.5. Implications outside of indo-european
For IE, with a long and rich linguistic history, we have shown that it is desirable to model known relationships between ancestor and descendant languages. This raises obvious questions for language families with few or no documented ancestor languages. Such families are the norm, and from the earliest days of lexicostatistics linguists have hoped to understand their prehistory through statistical analysis of linguistic data. We share the view that methods like those we and others use can answer questions about chronology. Such studies have been done elsewhere, in fact earlier for Austronesian (Gray & Jordan 2000) than for IE (Gray & Atkinson 2003). Using methods similar to those we criticize for IE, this Austronesian work (Greenhill & Gray 2005, 2009, Gray et al. 2009, Greenhill et al. 2010) has reached results that we understand to support the consensus of archaeologists and historical linguists. It is natural to wonder about the difference.
A variety of factors could be responsible for differences between Austronesian phylogenetic analyses (where we have no reason to doubt the inferred chronology) and previous IE analyses. First, differences both in the nature of Austronesian and IE documentation and in the history of scholarship over the last two centuries make it far less likely that related words in any two IE languages have escaped notice. For example, the PIE root *h2enh1- ‘breathe’ has derivatives in words like Persian jānvār and Scots Gaelic ainmhidh, both meaning ‘animal’; in each word the only trace of that root is in the first vowel and n. Without information about early Indo-Iranian and Celtic, an analyst might not recognize this shared RM trait. It seems possible that distant homoplasy obscured by phonological and morphological change is less often recorded in the Austronesian data set.
Second, Austronesian and IE ‘roots’ are not the same linguistically, and may generate different patterns of derivational or semantic drift. We have tried to understand the evolutionary properties of RM traits in IE; we do not know whether Austronesian traits have a significantly lower incidence of drift-induced homoplasy. Third, the time constraints on early Austronesian splits are tighter than on early IE splits; this may have constrained chronological inference in a beneficial way.36 Fourth, the Austronesian and IE data sets are different; even a small number of errors can make a significant difference. And finally, we would not exclude the possibility of fortuitous interactions between data gaps, trait models, and other components of phylogenetic analysis. We do not know enough to say which of these (or other) factors play a role. We simply note that the principles behind a crosslinguistically reliable toolkit for inferring relationships and chronology are underdeveloped, and we hope that future work can reconcile the differences noted here.
9. Conclusions
Our most important conclusion is that statistical phylogenetic analysis strongly supports the steppe hypothesis of IE origins, contrary to the claims of previous research. This in turn contributes to the study of Eurasian linguistic prehistory, indicating that IE language dispersal was not driven by the spread of agriculture.
Our work made crucial use of documented ancestor-descendant relationships to estimate the rate of lexical trait evolution. Descent relationships can be obscured by drift in [End Page 233] lexical traits, and are hard to infer with the trait models and tree priors commonly used in Bayesian analysis. We thus constrained the IE tree to reflect uncontroversial ancestry relationships. Closely related languages that remain in contact can evolve in parallel, biasing chronological inference, but even when we corrected for this effect, we found robust support for the steppe hypothesis.
To defend the Anatolian hypothesis in light of these results, it would be necessary to challenge the reality that underlies ancestry constraints. For example, given a homoplastic trait like [*pod-, ‘leg’] that is present in Modern Greek and modern IA languages, but not their ancient ancestors, one could claim that the PIE word for ‘foot’ also meant ‘leg’ in a common ancestor of Greek and IA, but that this meaning is undocumented in the copious textual record of Ancient Greek and Sanskrit. Such a claim is unsupported; it is more sensible to assume that a typologically common shift happened independently in languages with the ‘leg’ meaning. Our approach, which is based on the findings of diachronic semantic typology and IE lexicography, does not require positing unobserved dialects and word meanings in otherwise well-documented languages.
We have also made methodological contributions to the field of linguistic phylogenetics. We introduced ancestry constraints, which may be useful in analyzing other language families with a long written history (e.g. Afro-Asiatic and Sino-Tibetan). For analyzing lexical data in any family, we highlighted the distinction between cognate and root-meaning traits. The former correspond to traditional units of analysis in historical linguistics, while the latter (derived from basic vocabulary lists) are more often analyzed in statistical phylogenetics and have different evolutionary properties. We hope that many kinds of linguistic traits will be studied in future phylogenetic research, but it is important to understand the patterns of change that are characteristic of each of them.
Because previous statistical phylogenetic research supported the Anatolian hypothesis, linguists who find that hypothesis implausible for other reasons may dismiss statistical analyses that purport to determine ancestral chronology. For example, one recent IE textbook writes that ‘the jury is still out on whether phylogenetic dating can help solve the problem of how old the IE language family is’ (Clackson 2007:19); another does not mention phylogenetic arguments at all in discussing the Anatolian hypothesis (Fortson 2010). Both are excellent books whose aporia, we speculate, reflects uncertainty about a new method whose results may have seemed implausible, perhaps combined with the lingering bad odor of glottochronology. A final conclusion of our work is therefore that statistical phylogenetic analysis can yield reliable information about prehistoric chronology, at least where all of the available data is taken into consideration. Desiderata for future work include a stochastic model of lexical trait evolution that accounts for the effect of precursor traits and for advergence in closely related languages, to be evaluated against existing models. Meanwhile, we hope our results contribute to a synthesis of methods from linguistics and other disciplines seeking to understand the prehistoric spread of languages and their speakers.
[wchang@gmail.com] (Chang)
[chundra@berkeley.edu] (Cathcart)
[david.lw.hall@gmail.com] (Hall)
[garrett@berkeley.edu] (Garrett)
[Received 10 June 2014;
accepted 22 September 2014]
Appendix A
Full model
Our phylogenetic model is a patchwork of many elements. To save the reader the trouble of tracking down their mathematical descriptions, or of reading BEAST source code, we give an overview of the model behind analyses A1–3. Table A1 lists the first-order parameters of the model. A priori, these parameters are independent of one another.
Parameter T is a binary tree with L leaves, where L is the number of languages, both ancestral and nonancestral. Each node has a time value that is not greater (not older) than its parent’s time, unless the node is an ancestral language, in which case it has the same time value as its parent. T is drawn from a distribution with the unnormalized density in A1. [End Page 234]
- (A1) p(T) ∝ f0(T) · f1(T) · f2(T) ··· fC(T)
The factor f0(T) refers to a marginal distribution of an ordinary tree prior for L languages. In this marginal distribution, the time of each ancestral language is equal to the time of its parent (see n. 25 for a description of the implementation). The factors f1(T), f2(T), … , fC(T) represent constraints on the tree. Each constraint fc(T) involves two elements: a set of languages (call it Lc) and a function gc: (−∞, ∞) → [0, ∞), so that:
- (A2) fc(T) = gc(tMRCA(Lc, T))
where tMRCA(Lc, T) is the time of the most recent common ancestor of Lc in T. When gc is a positive constant, the constraint merely ensures that a clade for Lc is present in T. When gc is a nonconstant function, it can be used to ensure that the TMRCA of Lc falls in or near a specific time interval. When Lc contains just one language, the constraint acts to bound the time of a leaf node. Constraints are also used to ensure that ancestral languages are positioned correctly in the tree. Let A be an ancestral language and let DA be the set of A’s descendants. We use one constraint to ensure that DA is a clade, and a second constraint to ensure that DA ∪ {A} is a clade. The latter constraint can also be used to bound A in time.
In the universe of the model there are S traits, not all of which are observed. Each trait is present at the root with probability π1. As trait i goes along branch j, it evolves according to the instantaneous rate matrix
- (A3) (ρij/2) · [[−1/π0, 1/π0], [1/π1, −1/π1]], with rows and columns ordered (absent, present)
where π0 = 1 − π1 and ρij is the expected number of transitions per unit time for trait i on branch j. In a small amount of time ∆t, the probability of transitioning is ρij∆t/(2π0) if the trait is absent (0) and ρij∆t/(2π1) if the trait is present (1). The rate ρij is the product of three model parameters.
- (A4) ρij = ρ0 si bj
The base rate ρ0 is the mean rate at which traits evolve; it is a first-order parameter. Each trait multiplier si is drawn from a gamma distribution with a mean of one.
- (A5) si ~ Gamma(α, 1/α), that is, a gamma distribution with shape α and scale 1/α (mean one)
Each branch multiplier bj is drawn from a log-normal distribution with a mean of one.
- (A6) ln bj ~ Normal(−σ²/2, σ²), that is, a log-normal distribution with mean one
The trait multiplier si is different for each trait but the same for all branches, while the branch multiplier bj is different for each branch but the same for all traits.37
After all S traits have been evolved to the leaves, those that are 0 at all leaves are discarded. N traits remain. These are formed into an L × N binary matrix, y. The cell in the lth row and nth column (yln) is 1 if language l has trait n, or else 0. Hidden cells are generated by positing another L × N binary matrix q independently of the rest of the model. The prior distribution of q is unknown. The data x, which is an L × N matrix with three possible values in each cell, is generated deterministically from y and q: a cell xln is marked as hidden (‘?’) if qln = 1, or else xln = yln.
At this point, a careful reader will notice that this generative process could result in a column without ones in x, which implies that a trait can enter the data without being observed in any language. This issue is taken up in Appendix D.
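As a concrete illustration of A3–A6, the following sketch builds the per-branch transition probability matrix for one trait on one branch; the parameter values are illustrative only and are not the posterior estimates reported in the text.

```python
import numpy as np
from scipy.linalg import expm

def rsc_generator(pi1, rho_ij):
    """Instantaneous rate matrix A3, states ordered (0 = absent, 1 = present)."""
    pi0 = 1.0 - pi1
    return rho_ij / 2.0 * np.array([[-1.0 / pi0, 1.0 / pi0],
                                    [1.0 / pi1, -1.0 / pi1]])

def branch_transition_matrix(pi1, rho0, s_i, b_j, duration):
    """P(child state | parent state) for trait i on branch j, with rho_ij = rho0*s_i*b_j (A4)."""
    return expm(rsc_generator(pi1, rho0 * s_i * b_j) * duration)

rng = np.random.default_rng(4)
alpha_shape, sigma = 1.0, 0.35                   # spread parameters (illustrative)
s_i = rng.gamma(alpha_shape, 1.0 / alpha_shape)  # trait multiplier, mean one (cf. A5)
b_j = rng.lognormal(-0.5 * sigma ** 2, sigma)    # branch multiplier, mean one (cf. A6)
P = branch_transition_matrix(pi1=0.01, rho0=1e-5, s_i=s_i, b_j=b_j, duration=1000.0)
print(np.round(P, 4))    # rows: parent absent/present; columns: child absent/present
```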
Appendix B
Covarion traits
Some of our analyses use BEAST’s binary covarion trait model instead of restriction site characters. This adds three first-order parameters (Table A2). We write a for the parameter bcov.alpha, to avoid confusing it with the shape parameter for gamma-distributed rate variation. For convenience we define δ0 = 1 − δ1 and, as before, π0 = 1 − π1. [End Page 235]
Table A2.
Additional first-order parameters for a covarion trait model. See also Table A1.
In the binary covarion model, there are two states for trait presence, a ‘hot’ state (H1) and a ‘cold’ state (C1). Similarly, there are two states for trait absence, H0 and C0. The transition rates between cold states are a fraction a of the transition rates between hot states, as diagrammed in A7.
- (A7).
The transition rates are set so that, a priori, the expected fraction of traits that are hot at any point in the tree is δ1, and the expected fraction of traits that are present is π1. In BEAST, the rates are normalized so that the expected number of transitions between presence (of either kind) and absence (of either kind) is ρij, just as with restriction site characters. This results in the instantaneous rate matrix in A8.
- (A8).
Appendix C
Stochastic dollo characters
Some of our analyses use the SDC trait model instead of RSC. These analyses are parameterized differently, with first-order parameters as listed in Table A3.
The base rate ρ0 and the stationary frequency of trait presence π1 are replaced by a global birth rate λ and a per-capita death rate µ. Since SDC posits an infinite number of traits (only a finite subset of which are present at any of the leaves), the parameter S is no longer necessary. The shape parameter for the gamma distribution α is also gone because, in BEAST, among-site rate variation is not supported in the SDC trait model.
With SDC, the tree obtained from the tree prior is augmented with an extra branch that extends from the root to a node positioned at infinity (eternity past). The branch multiplier for this branch is one. For other branches it is drawn from a log-normal distribution, as described in Appendix A.
The global birth rate is the rate at which traits are born on the augmented tree. Over a duration ∆t on branch j, the mean number of traits born in that stretch of the tree is ∆tbjλ. After a trait is born, it dies at the instantaneous rate of bjµ, where bj is the branch multiplier of whatever branch the trait is on. Over a short duration ∆t, the probability of the trait dying on branch j is ∆tbjµ. Though an infinite number of traits are born in the augmented tree, only a finite number N survive to any of the leaves. These N traits are formed into the L × N matrix y. The data x is generated from y via the process described in Appendix A.
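A minimal simulation sketch of these mechanics, on a toy tree with made-up branch lengths, rate multipliers, and birth and death rates: the number of traits alive at the root is Poisson(λ/µ) (which follows from the eternal root branch with multiplier one), each trait survives a branch of length ∆t with probability exp(−bµ∆t), and the new traits born on that branch that survive to its end are Poisson((λ/µ)(1 − exp(−bµ∆t))).

```python
import numpy as np

rng = np.random.default_rng(5)
LAM, MU = 0.5, 0.001          # global birth rate and per-trait death rate (toy values)

def evolve_branch(traits, dt, b, counter):
    """Trait set at the child end of a branch of length dt with multiplier b."""
    survivors = {t for t in traits if rng.random() < np.exp(-b * MU * dt)}
    n_new = rng.poisson((LAM / MU) * (1.0 - np.exp(-b * MU * dt)))
    new = set(range(counter[0], counter[0] + n_new))
    counter[0] += n_new
    return survivors | new

def simulate(node, traits, counter, leaves):
    """node = (name, branch length, branch multiplier, children)."""
    name, dt, b, children = node
    traits = evolve_branch(traits, dt, b, counter)
    if not children:
        leaves[name] = traits
    for child in children:
        simulate(child, traits, counter, leaves)

root_traits = set(range(rng.poisson(LAM / MU)))   # stationary count on the root branch
counter = [len(root_traits)]
toy_tree = ("root", 0.0, 1.0,
            [("Hittite", 2000.0, 0.9, []),
             ("PNIE", 500.0, 1.1,
              [("Latin", 3500.0, 1.0, []),
               ("Vedic", 2500.0, 1.2, [])])])
leaves = {}
simulate(toy_tree, root_traits, counter, leaves)
observed = set().union(*leaves.values())
print({name: len(ts) for name, ts in leaves.items()}, "traits observed:", len(observed))
```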
The SDC model as implemented in BEAST hews closely to the original description of SDC by Nicholls and Gray (2008), except that λ is integrated out in order to speed up inference (Alekseyenko et al. 2008). It lacks the model of lexicographic coverage proposed by Ryder and Nicholls (2011).
Appendix D
Ascertainment bias correction
The following discussion applies to the RSC or covarion trait model. For an account of the ascertainment model used with SDC, see Nicholls & Gray 2008. Ascertainment bias correction is how BEAST accounts for the possibility that some traits are unobserved at all leaves in the tree, and thus unascertained, that is, censored in the data. [End Page 236]
A Bayesian account of BEAST’s ascertainment model requires us to posit S traits, of which N are observed. We write qn for the probability that a trait will have the outcome of the nth observed trait, conditioned on the first-order parameters given in Table A1 or A2, and on the branch rate multipliers b. Let q0 be the probability that a trait will not be observed at the leaves of the tree, conditioned on the same things. Then, the probability of the data D is as in A9.
- (A9) Pr(D) = C(S, N) · q0^(S−N) · q1 q2 ··· qN, where C(S, N) is the binomial coefficient
S is a nuisance parameter and can be summed out. Using an improper prior of p(S) = 1/S, one obtains an unnormalized likelihood function as in A10.
- (A10) p(D) ∝ Σ over S ≥ N of (1/S) C(S, N) q0^(S−N) q1 ··· qN = (1/N) · q1 q2 ··· qN / (1 − q0)^N
Conditioning on N then causes the factor of 1/N to fall away, giving a normalized likelihood function.
- (A11) p(D | N) = q1 q2 ··· qN / (1 − q0)^N
This is what was proposed by Felsenstein (1992) as a method to account for unascertained traits; it is also what is implemented in BEAST.
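A one-function sketch of this correction, with placeholder values for the per-trait probabilities: the log likelihood is the sum of log qn minus N times log(1 − q0).

```python
import numpy as np

def corrected_log_likelihood(q_observed, q0):
    """log of prod_n q_n / (1 - q0)^N for the N observed traits (cf. A11)."""
    q_observed = np.asarray(q_observed, dtype=float)
    return float(np.sum(np.log(q_observed)) - q_observed.size * np.log1p(-q0))

# Placeholder per-trait probabilities and unobservation probability:
print(corrected_log_likelihood([1e-3, 5e-4, 2e-3], q0=0.92))
```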
In BEAST, however, this method has not been extended to work with hidden cells. When a language is sparsely attested, the probability of an unascertained trait (q0 above) ought to grow, since there are now two reasons for a trait to be unascertained: it could be truly absent from all languages; or it could be unattested due to poor lexicographic coverage. Not accounting for the latter will result in sparsely attested languages being treated as more conservative than they are. As an illustration of this effect, suppose that there is a language A that is unattested in 100 out of a total of 200 meaning classes, and that A attests ten traits that no other language has. In reality, A must have roughly twenty unique traits, half unattested, but since this possibility is ignored, A is treated as having only ten unique traits, and thus as more conservative than it is.