A Defence of Mickey Mouse Linguistics, or: Some Frequent Misconceptions about Frequency

by Stefan

Today I’m writing in English for a change because I’d like to comment on a series of English blog posts by Pieter Seuren, the first of which you can find here. Although these posts were published a couple of months ago, my attention was drawn to them only now, in a discussion about the nature of the mental lexicon. Both in the post linked above and in a series of blog posts about John Taylor’s book „The Mental Corpus“, Seuren criticizes the „frequency craze“ in (parts of) present-day linguistics. In doing so, he falls prey to some common misconceptions about usage-based theories of language. Since my own work is situated in a decidedly usage-based framework, and since I have written a quite favorable review of Taylor’s monograph, I figured it made sense to dedicate a blog post to the question of why I practice „Mickey Mouse Linguistics“ (as Seuren calls it) and why I consider it an insightful way of studying language. I cannot provide a detailed analysis of all of Seuren’s arguments here. Instead, I will focus on some of his key points and show why I do not find them entirely convincing (to say the least).

Contested Notions

To set the stage (especially for the one or two readers who are not linguists themselves), let me elaborate on one point on which I concur with Seuren. He is absolutely right that linguistics is no longer a uniform discipline and that some key notions are heavily debated. He is also right that some developments, for better or worse, were driven more by personal animosity than by academic ambition. This, however, shall not concern us here. Although Seuren’s own argumentation tends to get personal at times as well, I will try to concentrate on the arguments themselves rather than on any personal quarrels. (For an informative account of the latter, see Harris 1993.)

„Lexicon“, „grammar“, „competence“, and „performance“ are only some of the hotly debated notions in present-day linguistics. And, even more importantly, there is „a bewildering diversity of opinions regarding the notion of language“, as Seuren puts it. In his view, this is regrettable. In my view, it is an essential part of coming to grips with the crucial question of what it actually is that we as linguists are investigating. Is it an innate rule system? Or is it a network of form-meaning pairings emerging from generalizations over actual usage events? Is it a combination of both?

Obviously, answering this question is by no means a trivial matter. Your (tentative) answer depends not only on your conceptualization of language – equally important is your notion of science (presupposing that, as a linguist, you want to work scientifically). I will not go into this discussion here but only roughly outline my own stance over the remainder of this blog post. To do so, however, I first have to discuss some of the aforementioned contested notions.

Let’s start with the notion of rule, which, to some extent, lies at the heart of this debate. For Seuren, abandoning the notion of rule is „absurd“:

Nowadays, one is free to say that there is no language system, only language use—a perverse form of neopositivism, which accepts only observable data and rejects any notion of a ‘formal’ or ‘abstract’ (i.e. non-observable) underlying system of rules or principles causing the observable data. (Among large groups of linguists, the words ‘abstract’ and ‘formal’ have become shibboleths for anything bad.)

To see how absurd this is, consider traffic. Nobody confuses actual traffic with the traffic system and its rules, and everyone understands the relation between the two. But some linguists hold that no distinction is to be made between forms of linguistic behaviour and a set of principles guiding that behaviour: linguistic behaviour is guided by frequency measures only —whatever that may mean. Applied to traffic, this would mean, presumably, that what guides actual traffic is the result of the statistics of collisions and casualties, which would supposedly tell us what form of behaviour minimizes the chances of an accident. As a matter of fact, that strategy would work well, as in the end the number of traffic participants would be so low that accidents would indeed be much less likely to happen. The same in language: miscommunication would be so frequent that language learners would soon stop using language altogether as a medium. In fact, of course, in speech as much as in traffic, there is overwhelming evidence for the existence of underlying formally precise systems of rules and principles guiding actual behaviour.

In my view, the traffic metaphor is flawed for several reasons. First, traffic is governed by laws, which exist in written form (and are enforced by state authority). In the case of language, the rough equivalent would be a grammar book – in the literal sense, not in any metaphorical reading. However, this is not what Seuren has in mind when talking about the linguistic rule system, which he calls „abstract“ and „non-observable“. That’s where the philosophy of science comes into play: in a falsificationist view of science, as adopted in usage-based linguistics, anything that is „non-observable“ cannot be subject to scientific inquiry. Note, however, the crucial distinction between „non-observable“ and „not directly observable“ (which is probably what Seuren means by ‚non-observable‘ in this context). Of course, we can draw conclusions about the abstract generalizations that language users make (call them „constructions“, „rules“, „schemas“, whatever you prefer) from observable data. I will return to this point in due course.

The second reason why I consider the traffic metaphor flawed is that traffic could, in principle, be a self-organizing system. I would venture that if there were no laws at all governing traffic, we would be confronted neither with hordes of traffic rambos trying to dominate the streets nor with lots of cars carefully crawling along at walking pace. Instead, certain conventions (perhaps very similar to those codified in our traffic laws) would emerge from the actual behavior and experience of the traffic participants. In fact, there are already a number of „unwritten rules“ in traffic that emerged as conventions and are not (yet) codified in laws.

Keller (1994) has famously used another traffic analogy to illustrate the nature of language, namely traffic jams. There are no laws that cause traffic jams to come into existence. Instead, they emerge as „phenomena of the third kind“, as Keller calls them. The idea is similar to the concept of „complex adaptive systems“ that has become popular in usage-based linguistics as a characterization of language (cf. Beckner et al. 2009): the system’s properties at the global level emerge from complex local interactions of multiple independent actors (so-called agents). No rule brings the global pattern into existence, although the system’s behavior might well seem rule-based to an observer.
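
To make this notion of emergence a little more tangible, here is a deliberately minimal toy simulation – my own illustration, not a model taken from Keller or Beckner et al. Each agent merely imitates whatever variant it last heard from a random interlocutor; no rule demands uniformity, yet the population reliably converges on a shared „convention“:

```python
import random

def simulate(n_agents=50, variants=("A", "B"), max_steps=50_000, seed=1):
    """Each agent adopts the variant it last heard from a random interlocutor.
    No global rule enforces uniformity, yet a shared convention emerges."""
    rng = random.Random(seed)
    population = [rng.choice(variants) for _ in range(n_agents)]
    for step in range(1, max_steps + 1):
        speaker, hearer = rng.sample(range(n_agents), 2)
        population[hearer] = population[speaker]  # purely local imitation
        if len(set(population)) == 1:             # global consensus reached
            return population[0], step
    return None, max_steps

variant, steps = simulate()
print(f"Converged on variant {variant!r} after {steps} local interactions.")
```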

E-Language and I-Language

As Taylor (2012) shows – in the very book criticized by Seuren –, the notion of rule is closely connected to the distinction between „competence“ and „performance“, or between „I-Language“ and „E-Language“, as Chomsky’s closely related, but not entirely equivalent, concepts are called. Usage-based linguistics denies this distinction – or, to put it more precisely, denies that this distinction can be drawn in a meaningful way. For one thing, all data we have are, of necessity, performance data. As soon as we try to figure out what exactly „competence“ or „I-Language“ amounts to, we are confronted with the problem of disentangling what is specific to „I-Language“ from what has been shaped by language experience, i.e. by „E-Language“. On a related note, this is also a hotly debated topic in the domain of language evolution, where the questions arise, again, of what „language“ actually is and which of the evolved traits and capabilities that distinguish modern humans from their ancestors and their closest living relatives are specifically linguistic. This debate notwithstanding, the crucial question for our discussion is how a language is acquired.

To make one thing clear in advance: even the fiercest proponent of a usage-based theory will concede that language learning draws on innate capabilities. However, usage-based linguists remain agnostic as to whether these capabilities are specifically linguistic. As Stefanowitsch (2011) points out, we cannot rule out the possibility that an innate „language faculty“ exists. But how could we ever falsify this hypothesis? Usage-based theories do not reject the hypothesis of an innate language faculty because they are convinced beyond any doubt that it is wrong. They reject it because it is not falsifiable. Hence, in a falsificationist view (see above), it is not a valid scientific hypothesis.

For Seuren, by contrast, the existence of innate and universal aspects of linguistic competence seems to be beyond any doubt. However, his argumentation seems quite awkward to me:

As a first example, take the following sentences:

(1)  a.  While John stood on the balcony, he watched the crowd.
     b.  While he stood on the balcony, John watched the crowd.
     c.  John watched the crowd, while he stood on the balcony.
     d.  He watched the crowd, while John stood on the balcony.

The remarkable thing is that in a, b and c the pronoun he can denote the same person as is denoted by the name John (its antecedent), while in d this is impossible. This is due to a constraint on sentence-internal anaphora resolution, which says, in principle, that a definite pronoun can be coreferential with an antecedent A in the same sentence (i) if A precedes the pronoun (cases a and c), or (ii) if A is in the main clause while the pronoun is in a subordinate clause (cases b and c). This means that if A follows the pronoun and is in a subordinate clause (case d), coreference is excluded. This constraint is probably universal. At last, I know of no counterexamples in any language. Anecdotal evidence comes from a little episode, a few years ago, when a young student from Bali happened to look at my laptop screen and saw these four sentences. Without my having put him on the track, he immediately jumped up at seeing sentence d. Apparently, therefore, this is something both innate and universal. The Balinese student had not been taught this, nor could he have learned this as a result of input frequency. And there is no reason to assume that other people have.
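
Stated procedurally, the constraint itself is compact. Here is a toy formalization in code – my own paraphrase of the rule as quoted, not anything Seuren provides – just to make explicit what is at stake: the dispute is not over whether such a generalization exists, but over where it comes from.

```python
def coreference_possible(antecedent_precedes: bool,
                         antecedent_in_main_clause: bool) -> bool:
    """A pronoun may corefer with antecedent A iff A precedes the pronoun,
    or A is in the main clause while the pronoun is in a subordinate clause."""
    return antecedent_precedes or antecedent_in_main_clause

print(coreference_possible(True,  False))  # (a): True
print(coreference_possible(False, True))   # (b): True
print(coreference_possible(True,  True))   # (c): True
print(coreference_possible(False, False))  # (d): False - coreference excluded
```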

It goes without saying that the anecdotal evidence Seuren cites is just that: an anecdotal piece of evidence, and a less than convincing one at that. There is no reason not to assume that the peculiarities Seuren points out are conventions that are acquired through generalizations over specific exemplars of the constructions in question. If the native language of the Balinese student shares these conventions, it is hardly surprising that he detects the deviance in the fourth sentence without explicitly having been made aware of it. To be sure, the constraints on anaphora resolution are part of the Balinese speaker’s linguistic competence – but why shouldn’t they be acquired via generalizations over performance data?

More Than Frequency

Denying the validity of the competence/performance distinction is not tantamount to claiming that „there is no language system“ – of course, there must be some kind of „language system“, since language must be represented in the mind and, ultimately, the brain in some way or another. The crucial debates, as pointed out above, revolve around the question of whether there is an encapsulated system exclusively dedicated to language or whether language draws on more general cognitive abilities. Over the past decades, cognitive linguists have been busy collecting evidence for the latter hypothesis. Frequency has played a major role in this enterprise, especially in corpus-based approaches to language acquisition and language change. But – and this is my major point – usage-based linguistics is not all about frequency, nor is „frequency linguistics“ practiced as an end in itself.

Especially in the fields of Cognitive Linguistics and Construction Grammar, there is a broad consensus that language is grounded in interaction as well as in specific social and cultural settings. This constellation alone entails that linguistic knowledge is influenced by more than frequency. For example, to use a notoriously vague term, utterances can differ in salience according to the context in which they are used. If someone recites a Vogon poem at a funeral, the words of the poem might stick in my memory more strongly than they would when uttered in a different context. But how can we operationalize this thought experiment and turn it into a valid scientific hypothesis? We could, for example, take a number of subjects and expose them to linguistic utterances in a variety of different, experimentally controlled situations. Then we could test how well they remember the utterances. What do we do when all our subjects have completed the task and left the lab? Right: we count.
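
The tail end of such a study might look like the following sketch – with hypothetical data and condition labels, purely for illustration: once the trials are in, the analysis boils down to tallying recall rates per context condition.

```python
from collections import Counter

# Hypothetical trial records: (context condition, recalled correctly?)
trials = [
    ("funeral", True), ("funeral", True), ("funeral", False),
    ("neutral", True), ("neutral", False), ("neutral", False),
    # ... one record per subject and utterance
]

recalled = Counter(condition for condition, correct in trials if correct)
presented = Counter(condition for condition, _ in trials)
for condition in presented:
    rate = recalled[condition] / presented[condition]
    print(f"{condition}: {recalled[condition]}/{presented[condition]} recalled ({rate:.0%})")
```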

By contrast, the evidence that Seuren cites draws on an entirely different understanding of science. He keeps citing invented examples to show that the existence of abstract rules and principles is obvious from linguistic structure itself. In most cases, however, the objections against his account are equally obvious. Let us consider just one example:

Take the following two dependent clauses, one German, the other Dutch. Both share the meaning ‘… that it would have been raining today’:

(6)  a.  … daß es heute hätte regnen sollen/*werden.
     b.  … dat het vandaag had zullen regenen.

[…]

Questions: why is werden ungrammatical in (6a) while zullen is fully grammatical in (6b), and why does the German verb cluster have a split branching directionality?

First of all, I am not sure whether the two sentences are semantically equivalent. I cannot judge the Dutch sentence, but as far as the German one is concerned, Seuren’s English translation seems wrong to me, as far as my intuitions go. sollen in (6a) is not a future marker, in contrast to werden. To me at least, it presupposes that someone (e.g. the weather forecast) asserted that it would rain. The German equivalent of ‚that it would have been raining today‘ would be dass es heute geregnet hätte, without an overt future marker. As Hilpert (2008: 134) shows, the use of werden as a future marker in German is constrained. It is not entirely clear how exactly these constraints can be described, but there is no reason not to assume that they emerge from language use – in a double sense. On the timescale of historical language change, they can be considered a ‚side effect‘ of grammaticalization. This is exactly what Seuren says in his answer: „German werden has undergone auxiliation (with loss of infinitives and participles), whereas Dutch zullen has not“. On the ontogenetic timescale of language acquisition, there is no reason not to assume that these constraints are acquired on the basis of linguistic input. If language users started to use constructions like …dass es heute hätte regnen werden with sufficient frequency, this ‚future conjunctive‘ construction would become grammaticalized and thus part of German grammar.

By the way, the split branching directionality exhibited in (6a) is not such a clear-cut matter. In fact, there are three possible orderings, which are subject to diachronic change and dialectal variation: …dass es heute hätte regnen sollen, …dass es heute regnen hätte sollen, and …dass es heute hätte sollen regnen. Under the heading of „Serialisierung im Verbalkomplex“ (serialization in the verbal complex), this is one of the most frequently studied topics in German historical linguistics (cf. e.g. Krasselt 2013).

Of course, one might argue that these remarks miss the point because for modelling the competence of a speaker, the concrete ordering of the constituents is irrelevant – what is relevant is that the grammar determines a specific ordering of constituents. But from a usage-based point of view, the question is how these ordering constraints come about. According to Seuren, „no amount of input frequency rhetoric will provide the answer. The correct answer simply has to involve a precise, formal, and certainly ‘abstract’, system of rules and principles.“

We can certainly conceive of such an abstract system of rules and principles, which undoubtedly gives a neat description of the facts, i.e. of the linguistic knowledge of a speaker of present-day standard German (as opposed to, say, Early New High German or a German dialect). But usage-based theory, once again, aims to account for the emergence of this abstract system. In my view, Construction Grammar provides a model of linguistic knowledge that accounts neatly for facts such as those Seuren mentions. Of course, the misunderstandings pertaining to Construction Grammar – and, most importantly, to the notion of construction – are as manifold as those pertaining to the role of frequency in usage-based linguistics. Therefore, I will now concentrate on the latter and perhaps come back to the former in a future post.

Seuren repeatedly accuses „frequency worshippers“ of ignoring counterevidence:

The technique is simple. You show, for example, that input frequency plays some role in the language learning process of a young child: frequently heard words will be among the first to be produced. No problem (but so what?). Then you extrapolate from that to language in general: a native language is learned merely on the basis of input frequency. The vast body of evidence showing that this is not so is simply ignored. Try that in the physical sciences!

Unfortunately, he never tells us what this „vast body of evidence showing that this is not so“ actually consists of. Almost all the evidence he cites throughout his four blog posts is entirely introspective – try that in the physical sciences!

Minds and Machines

But on a less polemical note, I have to concede that the notions of „evidence“ and „data“, to which I promised to return in due course, have been used quite liberally in linguistics across the different „camps“. No matter whether you read Noam Chomsky or Ron Langacker, you will always find made-up example sentences presented as evidence. As Hurford (2012) points out, there is nothing inherently wrong with that. Talmy (2007) even presents an eloquent defence of introspection as a method in (cognitive) linguistics. However, the question remains whether we can trust our own intuitions – and there is overwhelming evidence that this is not always the case (cf. e.g. Gibbs 2006). To be sure, there are questions that can be fruitfully investigated using intuitive judgments and invented examples. But as soon as we try to draw generalizations from our introspective „data“, we are at risk of positing rules and constraints where there are none. Hence, the trend towards empirical (i.e., frequency-based) methods should be welcome to anyone who wants to study language as it is – as opposed to language as an idealized construct existing in one linguist’s mind.

Seuren further insinuates that the „frequency craze“ might be ideologically motivated:

A further point is to do with the ideological background to the stance taken by Taylor and his cognitivist friends. In order to account for phenomena like those shown above one needs fairly ‘abstract’ rule systems and formalisms—something the Taylors of this world abhor. Why they do so is unclear, at least when one restricts oneself to purely academic argument. It becomes clearer when one is prepared to consider reasons of a more ideological nature, such as an ideological wish to see the human mind as a simple device definable in terms of a physical counting apparatus, just as the information technologists of the 1950s and 1960s wanted to reduce everything to mechanically recordable statistical frequency measures. This means that, if this is the reason for the frequency fans to close their eyes for anything not reducible to frequency, the cognitive revolution of the 1960s has passed them by.

The cognitive revolution of the 1960s might have passed them by, but the so-called second cognitive revolution certainly did not. And to the extent that usage-based linguistics does reduce the human mind to a physical counting apparatus, it does so for reasons of theoretical parsimony (a.k.a. „Ockham’s razor“). Moreover, to the extent that the mind is conceptualized as a counting apparatus, it is, far from being reduced, seen as a quite powerful counting apparatus capable of complex categorization, conceptualization, schema abstraction, and the like.

To be sure, frequency plays a role on many levels when it comes to these capabilities. In the domain of categorization, depending on where we live, an apple or a fig will be a prototypical instance of the category „fruit“ for us, because we encounter one or the other more frequently. In the domain of pragmatics, depending on the people we frequently communicate with, we will encounter the f-word, or its equivalents in other languages, more or less frequently. But we do not only take note of its frequency; we are also aware of the social and discourse-pragmatic contexts in which its use is considered appropriate. This knowledge is abstracted from our experience of the word’s usage in concrete contexts.

One might object that this presupposes a „literally fabulous“ memory, as Seuren puts it. Well, let’s say that it requires an evolved capacity for „massive storage“ (e.g. Hurford 2012) and rich memory representations (Bybee 2010). But we do not need a „fabulous“ memory. No one – not even Taylor – holds that we remember each utterance verbatim, storing properties such as the context it occurs in on top of that. Instead, we form abstractions and generalizations – and we do this quite automatically, either due to some innate capacities (which is, as pointed out above, a non-falsifiable hypothesis) or because we have learned it right from the cradle. In fact, Construction Grammar is all about this capacity for abstraction and generalization. Constructions are defined as form-meaning pairings at various levels of abstraction. It is still a matter of debate at which levels of abstraction constructions can plausibly be posited (cf. Hilpert 2013), which critics see as a major flaw of the theory. From a usage-based point of view, by contrast, this vagueness is an advantage rather than a shortcoming: rather than stipulating top-down which units count as constructions, the question has to be decided bottom-up, on the basis of the actual data.
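
To give a rough idea of what „form-meaning pairings at various levels of abstraction“ could look like, here is a toy sketch in code – my own illustration, not any particular constructionist formalism. The more open slots the form contains, the more schematic the construction:

```python
from dataclasses import dataclass

@dataclass
class Construction:
    """A form-meaning pairing; open slots in the form (NP, V, Xer, ...)
    mark its level of abstraction."""
    form: tuple
    meaning: str

word         = Construction(("apple",), "APPLE")               # fully specific
idiom        = Construction(("kick", "the", "bucket"), "DIE")  # fixed multi-word unit
partial      = Construction(("the", "Xer", "the", "Yer"),
                            "correlated increase")             # partially schematic
ditransitive = Construction(("NP", "V", "NP", "NP"),
                            "X causes Y to receive Z")         # fully schematic
```

Whether a given pattern warrants an entry of its own in such an inventory is precisely the question that has to be decided bottom-up.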

You might call this „a perverse form of neopositivism, which accepts only observable data and rejects any notion of a ‘formal’ or ‘abstract’ (i.e. non-observable) underlying system of rules or principles causing the observable data“ (Seuren). As we have seen, the latter is not true: usage-based theories do assume an abstract system involving what you might call rules, although usage-based linguists are sceptical of that term for good reasons. It is true, however, that only observable data are accepted – anything else would be absurd. Any theory of language must be consistent with the „observable data“, whether they are obtained by means of introspection or by empirical studies.

To return once more to the metaphor of the counting apparatus: given our human vanity, it might be slightly disappointing for us to learn that the human mind is nothing more than that – call it a counting apparatus, call it a computer, whatever metaphor you prefer. After all, it’s just physical matter. But treating the human mind as what it most probably is does not mean „reducing“ it. And the fact that, polemically speaking, the study of the human mind would be much more exciting if we posited rules and abstractions that are not observable in the actual data is not a scientifically valid argument for positing them.

That said, I have to concede that usage-based theories do rely on reduction, idealization, and simplification, as does any scientific theory. For example, Blumenthal-Dramé (2012) points out some common idealizations in connection with the key notion of „entrenchment“. In my funeral example above, I have already implicitly mentioned one of these idealizations: frequency-based studies assume that all utterances have the same impact on language users. This idealization is problematic, but it is also necessary: which reliable criteria could we establish to categorize usage events according to their „relevance“ and the impact they have on other language users? As I said, every scientific theory has to work with idealizations and simplifications; it just has to be aware of them and of the problems they entail. At times, this might look like „reduc[ing] natural human language—with all its fascinating complexity, its unexpected depths and mysteries, its never ending surprises—to cartoon proportions“ – hence Seuren’s title „Mickey Mouse Linguistics“. But again, treating language as what it actually is is not the same as reducing it. Seuren’s theory involves simplifications as well – for example, the assumption of a binary grammaticality distinction in our example (6) above, which does not take variation within the language community into account.
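
What the equal-impact idealization amounts to is easiest to see in code. The sketch below – with hypothetical salience weights, purely for illustration – contrasts a raw token count with a weighted one; the summation is trivial, and the real problem is that nobody knows how to justify the weights:

```python
from collections import defaultdict

def weighted_frequency(tokens):
    """Sum per-token weights instead of counting every token as 1."""
    freq = defaultdict(float)
    for form, weight in tokens:
        freq[form] += weight
    return dict(freq)

# The standard idealization: every usage event weighs exactly 1.
raw = [("werden", 1.0), ("werden", 1.0), ("sollen", 1.0)]

# Hypothetical salience weights, which no one can measure reliably:
salient = [("werden", 1.0), ("werden", 0.3), ("sollen", 2.5)]

print(weighted_frequency(raw))      # {'werden': 2.0, 'sollen': 1.0}
print(weighted_frequency(salient))  # {'werden': 1.3, 'sollen': 2.5}
```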

The domains we are investigating as linguists and cognitive scientists are extremely fuzzy. To handle this fuzziness, we have to „tame“ our objects of study, as it were. Formal theories, as proposed by Seuren, are one way to do this – and I do not deny that such theories can prove highly insightful. However, I hold that usage-based theories can – and do – make substantial contributions to our understanding of language and cognition, as well.

Misconceptions about Frequency

I promised in the title to comment on frequent misconceptions about frequency. I have already mentioned one: many opponents of usage-based theories seem to assume that, for usage-based linguists, linguistics (or even language) reduces to frequencies. As I have tried to show, this is not true, and to the extent that it is true, the notion of frequency does not amount to simple counting, but to frequency detection at different levels of abstraction. However, testing assumptions that can be considered valid scientific hypotheses involves operationalizing them in empirical, i.e. quantitative, terms, which in turn involves – well, counting.

Another misconception is that „this makes the study of language look easy, since all you have to do is count“ (Seuren). In a similar vein, Leiss (2009) insinuates that Construction Grammar doesn’t attract intelligent people because it’s all about compiling „lists“ of constructions. But the opposite is the case: „counting“ is by no means an easy task. Everybody who has ever performed a corpus study knows how hard it is to decide which corpus might be the most representative one for, say, the time span and the register you want to investigate. Moreover, you have to make far-reaching decisions about how to annotate and categorize your data. For example, if you investigate whether the German future marker werden mentioned above occurs more often in sentences with human, animate, or inanimate subjects, you will run into borderline cases: is an orc or a zombie „human“ or merely „animate“? In actual corpus analyses, of course, the problems are much more subtle and complex than this constructed example might suggest. In any case, you have to make such decisions and justify them on theoretical grounds (a skeletal illustration follows below).

Therefore, any corpus linguist or „frequency linguist“ needs a solid background in theoretical linguistics. This does not mean that she has to adopt all the notions and concepts established over the history of linguistics. On the contrary – as scientists, we constantly have to question the theoretical notions we are dealing with, including traditional key concepts such as the competence/performance distinction or the idea of the mental lexicon. In addition, empirical studies allow us to keep our theoretical presuppositions to a minimum. We don’t have to waste time and effort fitting the data to our models; instead, we acquire our knowledge about language from the object of study itself, i.e. from actual language data. Thus, empirical studies feed back into linguistic theory.
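
Here is that skeletal illustration, with hypothetical annotations; in real work, every decision encoded in the animacy column is exactly where the theoretical load sits.

```python
from collections import Counter

# Hypothetical annotated clauses: (subject lemma, animacy class, contains werden?)
annotated = [
    ("Lehrerin", "human",     True),
    ("Hund",     "animate",   False),
    ("Zombie",   "animate",   True),   # borderline case: an annotation decision
    ("Regen",    "inanimate", True),
    # ... thousands of clauses in a real corpus study
]

counts = Counter((animacy, has_werden) for _, animacy, has_werden in annotated)
for (animacy, has_werden), n in sorted(counts.items()):
    print(f"{animacy:9} werden={has_werden}: {n}")
```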

Of course, this approach does not buy us absolute truths. A hypothesis confirmed by one set of data can be falsified by another study. But this is what science is all about, isn’t it? Seuren is right when he writes: „One should learn to be eager for counterevidence, not shun it.“ But this counterevidence should not rely on non-falsifiable theoretical preconceptions, nor on ideas that are endorsed just because they have become part and parcel of theoretical linguistics over the decades and are therefore perceived as „tried-and-true“ concepts.

I confess that this post is far more theoretical than a defence of usage-based linguistics should actually be. But for those interested in actual data, I recommend reading the relevant studies – e.g. those reviewed by Bybee (2010) and Taylor (2012) – in an unbiased way. Maybe you will find that there are worse role models for linguists than Mickey Mouse.

References

Beckner, Clay; Blythe, Richard; Bybee, Joan; Christiansen, Morten H.; Croft, William; Ellis, Nick C.; Holland, John; Ke, Jinyun; Larsen-Freeman, Diane; Schoenemann, Tom (2009): Language is a Complex Adaptive System. Position Paper. In: Language Learning 59 Suppl. 1, 1–26.

Blumenthal-Dramé, Alice (2012): Entrenchment in Usage-Based Theories. What Corpus Data Do and Do Not Reveal About the Mind. Berlin, New York: De Gruyter.

Bybee, Joan L. (2010): Language, Usage and Cognition. Cambridge: Cambridge University Press.

Gibbs, Raymond W., Jr. (2006): Introspection and Cognitive Linguistics. Should we Trust our own Intuitions? In: Annual Review of Cognitive Linguistics 4, 135–151.

Harris, Randy Allen (1993): The Linguistics Wars. Oxford: Oxford University Press.

Hilpert, Martin (2008): Germanic Future Constructions. A Usage-Based Approach to Language Change. Amsterdam, Philadelphia: John Benjamins (Constructional Approaches to Language, 7).

Hilpert, Martin (2013): Constructional Change in English. Developments in Allomorphy, Word Formation, and Syntax. Cambridge: Cambridge University Press.

Hurford, James R. (2012): The Origins of Grammar. Language in the Light of Evolution, Vol. 2. Oxford: Oxford University Press (Oxford Studies in the Evolution of Language, 15).

Keller, Rudi (1994): Sprachwandel. Von der unsichtbaren Hand in der Sprache. Tübingen, Basel: Francke.

Krasselt, Julia (2013): Zur Serialisierung im Verbalkomplex subordinierter Sätze. Gegenwartssprachliche und frühneuhochdeutsche Variation. In: Vogel, Petra M. (ed.): Sprachwandel im Neuhochdeutschen. Berlin, New York: De Gruyter (Jahrbuch für germanistische Sprachgeschichte, 4), 128–143.

Leiss, Elisabeth (2009): Konstruktionsgrammatik versus Universalgrammatik. In: Eins, Wieland; Schmöe, Friederike (eds.): Wie wir sprechen und schreiben. Festschrift für Helmut Glück zum 60. Geburtstag. Wiesbaden: Harrassowitz, 17–28.

Talmy, Leonard (2007): Foreword. In: Gonzales-Marquez, Monica; Mittelberg, Irene; Coulson, Seana; Spivey, Michael J. (eds.): Methods in Cognitive Linguistics. Amsterdam, Philadelphia: John Benjamins, xi–xxi.

Taylor, John R. (2012): The Mental Corpus. How Language is Represented in the Mind. Oxford: Oxford University Press.
