“The so-called ‘poverty of stimulus argument’ is quite unconvincing, for it simply presupposes that the child’s linguistic data is poor.” Discuss.



(1)    The so-called ‘poverty of stimulus argument’ is quite unconvincing, for it simply presupposes that the child’s linguistic data is poor.
What is the logical form of (1)?
The conclusion is that:
(2)    The so-called ‘poverty of stimulus argument’ is quite unconvincing.
The argument is that:
(3)    The so-called ‘poverty of stimulus’ argument... simply presupposes that the child’s linguistic data is poor.
As a hypothetical the argument can be expressed as a modus ponens:
(4)    If the so-called ‘poverty of stimulus’ argument... simply presupposes that the child’s linguistic data is poor, then the so-called ‘poverty of stimulus argument’ is quite unconvincing.
This can be expressed symbolically as:
(5)    (3)→(2)
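To make the logical gap explicit, here is a minimal formalisation of my own (it is not part of (1)): let P stand for (3) and Q for (2). Modus ponens licenses the move from P and P→Q to Q, but nothing licenses the reverse move from ¬P to ¬Q:

```latex
% Modus ponens is valid: from P -> Q and P, infer Q.
% Denying the antecedent is not: from P -> Q and not-P,
% nothing follows about Q.
\[
\frac{P \rightarrow Q \qquad P}{Q}
\qquad\text{valid; but}\qquad
P \rightarrow Q,\; \neg P \;\nvdash\; \neg Q
\]
```

Hence even a successful attack on (3) would leave (2) standing, as the following paragraph notes.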
If (3) is the case then (2) will also be the case; but (2) may be true while (3) is false, so merely disproving (3) is insufficient to disprove (2). This makes falsifying (2) rather difficult, since the falsification conditions are not adequately stipulated in (1). I contend, however, that both (2) and (3) are false. The poverty of stimulus argument is in fact very convincing in the case of a child’s language learning, and poverty of stimulus arguments do not ‘presuppose that the child’s linguistic data is poor’; rather, they observe that the linguistic understanding a child demonstrates in the first few years of life goes far beyond what has been discovered in centuries of study of human language. Arguments from the poverty of stimulus presuppose nothing about the ‘poverty’ of the data beyond what is demonstrated in any attempt to study it: the ‘linguistic data’ presented to the child is the same as that presented to the linguist. It is not that this data is poor in any objective sense, whatever that might be; if anything it is too ‘rich’, in that its formative complexities are not explicit in the data itself. In this essay I will discuss some examples of the complexity of natural language and argue that these features are best understood as resulting from the natural structure of the physical organ from which language emerges: the brain.


Contents:
I. Natural language grammars are not explicit.
II. Grammars are too complex for a child to discern from the phenomenal data alone.
III. Linear sentence structures encode complex semantic relations.
IV. Linguistic competence is the product of a specific language faculty.
V. ‘Knowledge of a language’ is a metaphor.
VI. No amount of time can tell.
VII. Natural languages are not objects of knowledge.
VIII. Language is a feature of human nature.
IX. Natural languages are not taught.
X. The language faculty is perfect.


Natural language grammars are not explicit.
“... the linguistic data normally available to the child... would be consistent with innumerable generalisations over and above the ones that speakers unerringly converge to.” [Chomsky, 2002, p. 6]
The problem of the ‘poverty of stimulus’ is that there is nothing in the data, where it is ‘consistent with innumerable generalisations’, which determines which of the potential generalisations is ‘correct’; there is no negative evidence, or at least there is insufficient negative evidence to entail the specific grammars that ‘speakers unerringly converge to’. One example Belletti and Rizzi use to illustrate this point [Chomsky, 2002, p. 6] is the inconsistency of reference of the pronoun ‘he’ in:
1.       John said that he was happy.
2.       He said that John was happy.
In 1, ‘he’ is co-referential with ‘John’: both the pronoun and the proper name refer to the same object. In 2, ‘he’ refers to someone else; while it is possible to construe the ‘he’ in 2 as referring to John, it is counter-intuitive and requires a fairly complex explanatory scenario. Exceptional circumstances aside, speakers ‘unerringly converge’ on the interpretation of ‘he’ in 1 as co-referential, i.e. they read 1 as featuring only one ‘object’; the opposite is true for 2. Why should this be the case? Chomsky’s view is that this intuition is not accidental, nor is it a feature of the language in itself; rather, it is the natural product of an underlying language faculty, an abstraction from the biological structure of the mind/brain to a set of common features or rules of language acquisition and use: the rules of ‘Universal Grammar’ (UG). Co-reference is just one of a number of features of language that all language learners appear to understand and adhere to without sufficient instances of experience from which to form such specific generalisations. In other words, the corpus of information presented to a child is insufficient in isolation to account for the complex, specific abstractions that would constitute ‘knowledge of a language’. To present this more clearly, compare the task of the child to the task of the linguist. Modern linguistics has hundreds of years’ worth of data and analyses to call upon in its attempts to construct a grammar, or theory, of a language, and yet a fully explanatory, generative grammar has yet to be settled upon for any natural language. The child, by comparison, has a relatively small set of linguistic experiences from which to form ‘linguistic competence’, yet intuitively develops competence with any natural language in an incredibly short space of time; further, it is reasonable to say that the child has a smaller conceptual intellect than the combined minds of all the linguists attempting to formulate such a grammar. It seems sensible, then, to assume that the child has an innate capacity to learn languages. Such a capacity is comparable to a child’s innate capacity to understand spatial relations: Newton was forty-four when he published the Principia, but I presume that at the age of four, had he seen an apple fall from a tree, he would not have expected it to float.
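The hierarchical character of the constraint on 1 and 2 can be made concrete. The sketch below is a toy model, not Chomsky’s formalism: it encodes the two sentences as phrase-structure trees (my own bracketing) and applies the binding-theoretic rule, known as Principle C, that a name cannot co-refer with a pronoun that c-commands it. All helper names and tree shapes are illustrative assumptions.

```python
# Toy illustration (not Chomsky's own formalism) of how hierarchy, not
# linear order, settles co-reference in sentences 1 and 2. Principle C:
# a name cannot co-refer with a pronoun that c-commands it.
# Trees are nested tuples: (label, child, ...); strings are words.

def paths(tree, path=()):
    """Yield (path, node) for every node; paths are child-index tuples."""
    yield path, tree
    if isinstance(tree, tuple):
        _label, *children = tree
        for i, child in enumerate(children):
            yield from paths(child, path + (i,))

def np_path(tree, word):
    """Path of the NP node immediately containing `word`."""
    for path, node in paths(tree):
        if node == word:
            return path[:-1]            # step up from the word to its NP
    raise ValueError(word)

def dominates(a, b):
    return b[:len(a)] == a and a != b

def c_commands(tree, a_word, b_word):
    """a's NP c-commands b's NP iff a's parent (assumed branching here)
    dominates b's NP while a's NP itself does not."""
    a, b = np_path(tree, a_word), np_path(tree, b_word)
    return dominates(a[:-1], b) and not dominates(a, b)

# 1. [S [NP John] [VP said [S [NP he] [VP was happy]]]]
s1 = ('S', ('NP', 'John'), ('VP', 'said',
           ('S', ('NP', 'he'), ('VP', 'was', 'happy'))))
# 2. [S [NP He] [VP said [S [NP John] [VP was happy]]]]
s2 = ('S', ('NP', 'He'), ('VP', 'said',
           ('S', ('NP', 'John'), ('VP', 'was', 'happy'))))

print(c_commands(s1, 'he', 'John'))   # False: co-reference permitted in 1
print(c_commands(s2, 'He', 'John'))   # True: co-reference blocked in 2
```

The point of the toy is only this: the relevant generalisation is stated over tree positions, which are nowhere visible in the linear string the child actually hears.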


Grammars are too complex for a child to discern from the phenomenal data alone.
In his 1957 work Syntactic Structures, Chomsky demonstrated the inadequacy of ‘left to right’ grammars, such as the finite state model or ‘Markov process’, by reference to the theorem ‘English is not a finite state language’ [1957, p. 21], proved in his 1956 essay ‘Three models for the description of language’. From this Chomsky developed a theory of a ‘phrase structure grammar’ complemented by ‘transformational rules’. This led to his conception of ‘surface structure’ and ‘deep structure’: surface structure is the apparent form of the language, the linear progression that is spoken, written, heard, read and so on; deep structure is the underlying form, that part of the syntactic description of a sentence that ‘determines its semantic interpretation’.
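The force of the theorem can be illustrated in miniature. The sketch below is my own toy reduction, not Chomsky’s 1956 proof: centre-embedded dependencies of the ‘if... then...’ kind contain the mirror pattern a^n b^n, which defeats any device with a fixed finite memory while yielding to the stack-like memory a phrase structure grammar provides.

```python
# Toy reduction (not Chomsky's 1956 proof) of 'English is not a finite
# state language'. Centre-embedded dependencies contain the pattern
# a^n b^n; a machine with any fixed finite memory must lose count at
# some depth, while a counter (a stack) suffices for every n.

def finite_state_recognise(s, states=5):
    """Stands in for an arbitrary finite-state machine: it can track at
    most `states` unclosed dependencies, however large `states` is."""
    depth = 0
    for ch in s:
        depth += 1 if ch == 'a' else -1
        if depth > states:              # memory exhausted: must guess
            return False
        if depth < 0:
            return False
    return depth == 0

def phrase_structure_recognise(s):
    """A recogniser with unbounded counting, as a phrase structure
    grammar implicitly provides."""
    depth = 0
    for ch in s:
        depth += 1 if ch == 'a' else -1
        if depth < 0:
            return False
    return depth == 0

for n in (3, 5, 8):
    mirror = 'a' * n + 'b' * n
    print(n, finite_state_recognise(mirror), phrase_structure_recognise(mirror))
# n = 8 exceeds the bounded device's memory; the unbounded one never fails.
```

Raising the bound only postpones the failure: for every finite bound there is a depth of embedding beyond it, which is the force of the theorem.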
We can characterise this process as beginning with a ‘thought’, or base component, which undergoes transformation such that it becomes a well-formed formula of the language we are speaking. To justify such a conception, consider sentences such as:
3.       Bill seems to be agitated.
4.       It seems that Bill is agitated.
The surface structure of 3 suggests that ‘Bill’ is the argument of ‘seems’ rather than of ‘agitated’, whereas if we rephrase 3 as 4 we see that ‘Bill’ is actually the argument of ‘agitated’, and that the grammatical subject of ‘seems’ is a dummy pronoun. We can further demonstrate that ‘Bill’ is not the argument of ‘seems’ by attempting to remove the apparent kernel of 3:
5.       Bill seems.
Plainly, 5 has no semantic value, although it follows the standard sentential construction of a noun phrase followed by a verb phrase. Compare this to the kernel of 4:
6.       Bill is agitated.
6 is a perfectly understandable sentence to anyone familiar with the language. We could express this relationship in terms of the scope of the smallest semantic units and represent the syntactic hierarchy in a phrase-structure tree diagram. Such a diagram would place ‘seems’ higher in the structure than both ‘Bill’ and ‘agitated’; one could say the verb phrase ‘seems’ scopes over the well-formed formula ‘Bill is agitated’. For the conjunction of ‘seems’ and 6 to be a well-formed formula, however, ‘seems’ requires a subject, so ‘Bill’ is displaced to the front of the sentence. While this operation gives ‘seems’ a subject, it takes one away from ‘agitated’, leaving a ‘gap’.
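As a rough sketch of this hierarchy (informal notation of my own, not a formal grammar), the two structures and the displacement can be rendered as follows:

```python
# Rough sketch (informal notation, not a formal grammar) of the deep
# and surface structures of sentence 3: 'seems' scopes over the clause
# containing 'Bill'; displacement fronts 'Bill', leaving a gap by
# 'agitated'. Trees are nested tuples; indentation marks scope.

deep = ('S',
        ('V', 'seems'),
        ('S', ('NP', 'Bill'), ('VP', 'to be agitated')))

surface = ('S',
           ('NP', 'Bill'),                      # displaced to the front
           ('V', 'seems'),
           ('S', ('NP', '_gap_'), ('VP', 'to be agitated')))

def show(tree, depth=0):
    """Print a tree, one node per line, indented by structural depth."""
    if isinstance(tree, tuple):
        label, *children = tree
        print('  ' * depth + label)
        for child in children:
            show(child, depth + 1)
    else:
        print('  ' * depth + tree)

show(deep)       # 'seems' sits higher than both 'Bill' and 'agitated'
show(surface)    # 'Bill' now precedes 'seems'; the gap remains below
```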


Linear sentence structures encode complex semantic relations.
Subject-verb agreement, like subject-object agreement, is not immediately obvious from the linear presentation of sentences:
7.       *The threat to the presidents of the company are serious.
8.       The threat to the presidents of the company is serious.
9.       *The threat to the president of the companies are serious.
10.   The threat to the president of the companies is serious.
The examples are taken from a study by Franck, Vigliocco and Nicol [2002], which “found that agreement errors were more frequent following an intermediate modifier [7]... than an immediately preverbal modifier [9]”. The main verb of a sentence must agree with the ‘subject head’ of the sentence; what is not explicitly presented to the would-be speaker is which part of the sentence is the subject head. The occurrence of this type of ‘error’ on the part of the language learner could be attributed to excessively complex sentence structures overloading the speaker’s memory: ‘if the verb comes long after the noun, there is no more mental energy left to remember what was the number of the subject’ (Jespersen [1924], p. 345). The pattern found by Franck et al., however, suggests that Jespersen’s answer is too simplistic and that these types of error “... provide further evidence for the psychological plausibility of assuming a stage during which hierarchical relationships among phrases are encoded. The position of a potentially interfering noun in this hierarchical structure would determine its influence over the agreement procedure: while such a noun situated high in the tree structure would interfere strongly with verb agreement, no interference would occur with nouns situated low in the structure.” [Franck, Vigliocco and Nicol, 2002]
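A toy rendering of this point (my own illustration, not Franck et al.’s model): the subject head alone should control agreement, and each starred form above corresponds to agreement being captured by one of the lower nouns.

```python
# Toy illustration (mine, not Franck et al.'s model) of examples 7-10.
# The subject head -- the hierarchically highest noun -- controls
# agreement; the starred forms arise when a lower noun 'attracts' it.

def verb(number):
    return 'are' if number == 'pl' else 'is'

# "The threat to the presidents of the company ..."  (7 / 8)
# hierarchy: threat[sg] > presidents[pl] > company[sg]
nouns_7_8 = [('threat', 'sg'), ('presidents', 'pl'), ('company', 'sg')]
# "The threat to the president of the companies ..."  (9 / 10)
nouns_9_10 = [('threat', 'sg'), ('president', 'sg'), ('companies', 'pl')]

for nouns in (nouns_7_8, nouns_9_10):
    head, intermediate, preverbal = nouns
    print('head:', verb(head[1]),
          '| intermediate attractor:', verb(intermediate[1]),
          '| preverbal attractor:', verb(preverbal[1]))
# Row 1: the intermediate attractor yields 'are' -- the starred form 7.
# Row 2: the preverbal attractor yields 'are' -- the starred form 9.
# Franck et al.'s finding is that the intermediate (structurally higher)
# attractor interferes more often, despite being linearly further away.
```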
Errors could also result from differences in the conjugational regularity of the languages comprising the speaker’s linguistic background: not all natural languages have the same specificity of subject-verb agreement. Such features appear to be somewhat contingent; the extent to which subject-verb conjugations vary in English has diminished over time. Old and Middle English had more conjugational variation across subjects than modern English: where modern English does not inflect ‘have’ differently across first and second person singular, older English used ‘hast’ for the second person singular. Furthermore, different languages require different agreements on their verbs: Spanish, for example, inflects for mood, tense and aspect; some languages, such as Basque and Georgian, have polypersonal agreement, where the verb must agree with the subject, the direct object and even the secondary object where applicable. While natural languages differ, however, it is always within a narrow spectrum of possible variation: certain structures cannot be found, or cannot be parsed, in any language.
We can begin to see the wealth and complexity of the information presented to the child, and consequently why we may be justified in assuming an innate framework in the child’s mind/brain that organises such input so that the child becomes ‘competent’ with the language: that is, an innate language faculty.

Linguistic competence is the product of a specific language faculty.
“Many, like Chomsky, have been further impressed by the seeming observation that the data available to the child from adult utterances seems to be inadequate to account for the knowledge acquired... There is now no doubt about this: children take the clues available to them and use these clues to construct their own grammatical rules, rules which grow in sophistication as acquisition proceeds.” [LaL p. 132]
The mind is not a tabula rasa; no one can argue coherently for such a position. But what kind of tools does the mind possess at birth such that it can, in this case, acquire language? We can characterise a set of responses to this question as ranging between two poles, not entirely unlike the two poles of the epistemological debate in the early enlightenment, empiricism and rationalism. All sober philosophers will acknowledge that a speaker’s ‘output’ is the product of an integration between their mind/brain and the information presented to them; this integrated product comprises the speaker’s competence with the language, the lexicon from which their sentences are formed. The debate concerns which of these two factors, the nature of the mind/brain or the information presented to the language learner, more strongly determines the nature of the acquired lexicon. Put this way the debate is superficially ridiculous: we may as well ask which came first, the chicken or the egg. Either pole of the debate reduces to a linguistic solipsism, whether by trapping the language in the head of every individual or by making it noumenal, the first incurring the problem of other minds and the second the original question of epistemology. The ‘epistemic’ debate is a red herring; the genuine question concerns the extent to which the ‘language faculty’ is special, i.e. does it function according to general laws of cognition, or is it a specialised ‘module’?


‘Knowledge of a language’ is a metaphor.
The epistemological metaphor above is not redundant: how do we ‘know’ a language? The phrasing of the question is leading here: it would seem to suggest that a language is something, an object, in the world that we, as learners, inquire into. This would lead us straight to the ‘noumenal’ conception of language, but then we are presuming our answer in our question. This accident of language has been seized upon by Fodor [1983]:
“... what Chomsky thinks is innate is primarily a certain body of information: the child is, so to speak, ‘born knowing’ certain facts about universal constraints on possible human languages. It is the integration of this innate knowledge with a corpus of ‘primary linguistic data’ (e.g., with the child’s observations of utterances produced by adult members of its speech community) that explains the eventual assimilation of mature linguistic capacities.”
It should be immediately evident to any rational thinker that such a conception of the language faculty is ridiculous. To use Chomsky’s own analogies, would we argue that the eye contains a ‘body of information’ that delimits its visual field, or that an apple falls to the ground from a tree because of the knowledge it possesses? Fodor’s misconception of Chomsky’s position rests on a misunderstanding of the scientific process so basic as to be embarrassing; in fact the mistake lies at so profound a level of conceptual confusion that we do not even need to bring science into the argument: Fodor has confused things with things that can be said about things. It is not as if, when Kepler formulated the mathematical principles underlying the movements of celestial bodies, what he had discovered were propositions ‘known’ to the planets. By Fodor’s own admission, “Chomsky is himself quite prepared to give up the claim that the universal linguistic principles are innately known.” Yet he insists that Chomsky’s position, one he calls ‘Neocartesian’, is based on the “...story that what is innately represented should constitute a bona fide object of propositional attitudes; what’s innate must be the sort of thing that can be the value of a propositional variable in such a schema as ‘x knows (/believes/cognizes) that P’.” This continuing insistence on the use of intensional verbs determines Fodor’s continuing misrepresentation of the nature of human inquiry. Chomsky is attempting to determine the nature of the language faculty by the reductive principle, so far incomparably successful, summarised by Einstein [1954, p. 282]:
“The grand aim of all science... is to cover the greatest possible number of empirical facts by logical deduction from the smallest possible number of hypotheses or axioms.”
Would Fodor accuse Einstein of presenting the universe as a ‘bona fide object of propositional attitudes’?

No amount of time can tell.
It has been contended [1] that Chomsky exaggerates the problem of the ‘poverty of stimulus’: a child spends a great amount of time in an active ‘language environment’ before it comes to speak. The implication of such arguments is that while it might be crazy to think that a child literally learns words at a rate of one an hour, learning an entire lexicon, canon, grammar and the notion of communication before they have mastered the seven times table is perfectly reasonable.
“A careful look at the interpretation of expressions reveals very quickly that from the earliest stages, the child knows vastly more than experience has provided... At peak periods of language growth, a child is acquiring words at a rate of about one an hour, with extremely limited exposure under highly ambiguous conditions. These words are understood in delicate and intricate ways that are far beyond the reach of any dictionary... Language acquisition seems much like the growth of organs generally; it is something that happens to the child, not that the child does.” [Chomsky 2000 pp. 7-8]
Chomsky uses the word ‘acquiring’, but it is impossible to tell at what rate, if at all, the child is acquiring the underlying concepts; what Chomsky is observing is that at these ‘peak periods’ the child is learning to use words ‘at a rate of about one an hour’. It may well be that the child has already become familiar with the words in hearing them spoken; she may even have begun to use them internally before venturing to speak them. However, even if we give the child an extra year, or two, or three, the ‘poverty of stimulus’ problem remains intact: the principles of human language are too complex to be discerned, apropos of nothing, from the basic general learning principles attributable to a child.
This argument is not entirely novel: in The Descent of Man, Charles Darwin argued that one-year-old humans and dogs have the same linguistic competence, but that man differs from beast in “his almost infinitely larger power of associating together the most diversified sounds and ideas” [Chomsky, 2002, p. 46]. Darwin is broadly right in his distinction, but wrong in his observed similarity: “It is now understood that the linguistic achievements of infants go far beyond what Darwin attributed to them, and that non-human organisms have nothing like the linguistic capacities he assumed.” [Chomsky, 2002, p. 48] It is easy to understand why Darwin may have thought along these lines, as he predated (modern) cognitive science and psycholinguistics by a century; from ordinary everyday experience such a claim seems reasonable, for the dog and the pre-linguistic infant display comparable language understanding (though in the case of the dog, or possibly even in both, it may be that no linguistic process is undertaken at all, and that all the information gleaned is discerned entirely from body language, tone of voice, context and so on). The very obvious difference is that the child, presuming it to have no serious pathological problems, will come to speak the language fluently within a few years, whereas the dog never will. If the difference were merely the result of the human’s ‘infinitely larger power of associating together the most diversified sounds and ideas’, one would expect the human to display a more complex variant of that which the dog speaks; but plainly this is not the case: the dog does not come to speak a basic human language, and the human does not speak an advanced form of ‘dog’. Further, much research was done in the twentieth century into the linguistic capacities of animals, and the discoveries have borne out the distinction between human and animal linguistic capacities. What has been demonstrated, in examples such as Kanzi, a bonobo, and Rico, a border collie, is that many larger animals can understand symbols as standing for objects and can memorise a great number of these; some, as demonstrated by Louis Herman’s research with bottlenose dolphins, can even understand nouns paired with verbs in short but complex arrangements. However interesting the similarities, though, the differences remain stark: no animal, no matter how much time and attention is put into its teaching, will ever be able to encode or decode some of the most common features of human language, such as inflections for tense or case. Animals simply do not have the resources to comprehend hypotheticals or the subjunctive mood. What is more, it is not merely that humans and animals share the same basic faculties, with those of the human more advanced: the vast majority of the syntactic nuances of human language have no comparable analogue in the animal kingdom or elsewhere.


Natural languages are not objects of knowledge.
In Knowledge of Language Chomsky argued that the traditional philosophical conceptions of language were fundamentally mistaken. Philosophers and linguists had worked from the underlying assumption that languages were real objects of knowledge of which their speakers had a partial and incomplete grasp: when a child came to learn a language, its ‘mission’ was to discern the characteristics of this body of knowledge, in the same way a child may learn a game such as chess or poker. There is a long-running joke, popular in introductions to linguistics, that a language is a dialect with an army and a navy. Any serious research into dialects will show that the contingencies of political borders do not correspond to dialect borders: some dialects of northern Germany, for example, are much more similar to those spoken in Belgium, Switzerland and Holland than to dialects spoken in southern Germany. This regionalisation in fact extends all the way down to individuals, for no two people will have precisely the same lexical canon or syntactic motifs. While most speakers in a ‘linguistic community’ will use a very similar language, there will be subtle differences between every speaker depending on background, environment, personality type and any other factors that determine the differences between individuals generally. To illustrate and clarify this point Chomsky made a distinction between I-languages and E-languages. I-languages are internal, individual and intensional; they are physical states of the mind/brain and as such may be genuine objects of empirical linguistic enquiry. E-languages, by contrast, are abstractions over collections of I-languages and as such are not genuine objects of empirical enquiry; they are at best ‘pseudo-objects’. When I speak to my friend in the pub, then, it is not that we communicate as a result of our both speaking the same E-language, for there is no such thing, but rather that our I-languages overlap, making communication possible.
“[Michael] Dummett (1986) argues against internalist approaches to language that they fail to provide an account of notions like ‘language of a community’ or ‘community norms’ in the sense presupposed by virtually all work in the philosophy of language and philosophical semantics. These notions, Dummett claims, are required to provide a notion of a common public language which ‘exists independently of any particular speakers’ and of which native speakers have a ‘partial, and partially erroneous, grasp’” [Hornstein §4]
Dummett’s accusations, a typical defence of an externalist approach to language, are frankly absurd; in fact they extend no further than decrying Chomsky’s approach for not actively recognising precisely those aspects of thought on language he sought to escape. Dummett is stuck in the past. (Implicitly) externalist conceptions of language worked up to a point, but Frege, Russell, Wittgenstein and so on all failed to give a strong account of semantics within this model. Had they sufficiently explained semantics within the externalist framework, this would have served as a strong argument against Chomsky’s position. There is no reason whatsoever, though, to preserve this model on the basis of some small progress, which in any case translates for the most part into the internalist perspective. It is not as if we would write off twentieth-century logic: the work done by Frege, Russell and others continues to inform both semantic and syntactic theorising, even if some of the more ‘metaphysical’ aspects of their thought are now considered ‘off the mark’. The ‘big questions’ explored in the early twentieth century have remained for the most part unanswered, in semantics particularly. What it is that allows humans to use symbols and sounds to denote objects, make observations and so on remains largely a mystery. The great advantage of Chomsky’s relocation of the problem, from an inquiry into a platonic object of which no one person can have a complete understanding to an inquiry into a feature of physical objects, brains, is that it allows questions in linguistics to be problems rather than mysteries: an inquiry into a physical object can yield genuine discovery through empirical observation; it gives linguists something to be right or wrong about, as their theories can potentially correspond to an object of empirical inquiry.


Language is a feature of human nature.
Chomsky has often made the analogy that just as birds have wings and can fly, and fish have fins and can swim, so humans have a language faculty and can communicate their thought in this advanced system joining sound (or symbol) and meaning. As a mentalist, and a proponent of the view that language is a feature of the mind, which is itself only an abstraction over a physical organ, the brain, Chomsky locates this linguistic capacity in “The language organ, or ‘faculty of language’ as we may call it, [which] is a common human possession, varying little across the species as far as we know, apart from very serious pathology.” [Chomsky, 2002, p. 47] This ‘faculty of language’, then, is a feature of human nature. The subject of human nature is somewhat controversial in the history of philosophy: one may contrast the ‘essentialist’ view of Aristotle with the ‘existentialist’ view of Sartre. A sensible approach, however, lies somewhere between the two: there are features, faculties, traits and so on common to all humans (excluding cases of ‘very serious pathology’), and there are features, faculties, traits and so on which are more pronounced in some than in others; humans are both alike and different. One may wish to describe this as a ‘family resemblance’ amongst humans. Some predicates will apply to all humans, some to almost all, some to many, some to few, and some to none. This is true also of human language: there are features which are true of all languages and features which are true only of some. Those principles which describe all human language will be the principles of Universal Grammar, the fundamental features of the language faculty. These principles will describe the nature of the ‘language organ’ and will be entailed by its physical structure.


Natural languages are not taught.
“Every speaker implicitly masters a very detailed and precise system of formal procedures to assemble and interpret linguistic expressions...How can it be that every child succeeds in acquiring such a rich system so early in life, in an apparently unintentional manner, without the need of an explicit teaching?” [Chomsky, 2002. p5]
Languages are not taught to children in the way that mathematics, science or the humanities are. If it were necessary to teach children natural language in this manner, the task would be impossible for two reasons. Firstly, the principles that govern any natural language are not known in their entirety (and if Chomsky’s distinction between E-languages and I-languages is correct, which seems almost indubitable, then there is no actual object whose principles could be known), making the teaching of them rather difficult. Secondly, even if these principles were known in their entirety, the system is so vast and complex that it is very doubtful a child taught in this way could master its intricacies before it had learnt to tie its shoelaces properly, as in fact it does.
Even in second-language learning the teaching is not explicit in the way that it is when studying history or economics or biology. The principles of the language to be learnt are never explicitly stated beyond a few incidental features, such as whether the language has regular or irregular verb conjugation or whether its nouns have genders. For the most part students are encouraged to listen for the ‘gist’ of what is said; then, through listening to examples of speech and practising the vocabulary regularly, they become ‘familiar’ with the language, which ultimately leads to their competence with it. This suggests that a linguistic framework is present in all humans, and that when a new language is learnt it is as if a new ‘dressing’ were acquired for this frame.


The language faculty is perfect.
In his recent work Chomsky has argued that, viewed from the minimalist perspective, language is ‘perfect’. This seemingly bizarre view stems from the minimalist program, which aims to reduce all empirical frameworks to the simplest explanatorily sufficient alternative: essentially the principle of Ockham’s razor. The simplest way to understand syntax is as a computational device interfacing between sound and meaning. Lexical items, atoms of meaning, are input to the syntactic module, which transforms them into phonic objects, phonemes, constituting the linguistic output. Language is ‘perfect’ in that, viewed from this perspective, we can understand the syntactic module as economical: it makes the minimum of syntactic operations necessary to convert semantic input into phonetic output. One of the developments within this minimalist framework has been a shift from trace theory to copy theory for sentences such as 3. The traditional account of 3 would be that ‘Bill’ is displaced from its verb, ‘agitated’, to the front of the sentence, preceding ‘seems’, leaving a ‘trace’ beside the second verb. This was a rather ungainly explanation, however, as it required an extra, invisible sentential component, present to both speaker and listener yet not explicit in the conveyed material. This violated the Inclusiveness Condition, which “... requires that an LF object be built from the features of the lexical items of the numeration.” [Bošković and Lasnik, 2007, p. 343] The solution offered was to replace trace theory with copy theory: the syntactic module has the verb phrase ‘(to be) agitated’ and its corresponding noun phrase ‘Bill’, but it also has a verb phrase, ‘seems’, lacking a subject. The syntactic module copies its initial item, ‘Bill’, and uses it as the argument of both verb phrases.
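An informal gloss on the contrast, under naming of my own rather than the formal theory: trace theory inserts a symbol drawn from outside the lexical items, violating Inclusiveness, while copy theory reuses an existing item and simply leaves the lower copy unpronounced.

```python
# Informal gloss (a sketch, not the formal theory) of trace vs copy for
# sentence 3. The Inclusiveness Condition demands that structures be
# built only from the lexical items of the numeration.

lexical_items = {'Bill', 'seems', 'to', 'be', 'agitated'}

# Trace theory: movement leaves a trace 't' -- a symbol from outside
# the numeration, violating Inclusiveness.
trace_structure = ['Bill', 'seems', 't', 'to', 'be', 'agitated']

# Copy theory: movement copies 'Bill'; both positions hold a lexical
# item, and only the highest copy is pronounced.
copy_structure = ['Bill', 'seems', 'Bill', 'to', 'be', 'agitated']

def inclusive(structure):
    """True iff the structure uses only items from the numeration."""
    return all(item in lexical_items for item in structure)

def pronounce(structure):
    """Spell out only the first (highest) copy of each item."""
    seen, out = set(), []
    for item in structure:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return ' '.join(out)

print(inclusive(trace_structure))   # False: 't' is not a lexical item
print(inclusive(copy_structure))    # True
print(pronounce(copy_structure))    # 'Bill seems to be agitated'
```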


Conclusion
The so-called ‘poverty of stimulus’ arguments are not unconvincing; quite the opposite. It is not that the child’s linguistic data is poor, but rather that it is insufficient on its own. The “‘gap’ between what in the way of information about the world is provided by sensory experience and what we end up knowing” [Cowie, 1999, p. 31] is best explained by the mechanism or faculty through which speakers of a language interact with the ‘linguistic data’ presented to them: the brain. Further, minimalist inquiry into the structure of this faculty has yielded the simplest explanatorily sufficient accounts of the empirical phenomena to date. Insofar as our aim ‘... is to cover the greatest possible number of empirical facts by logical deduction from the smallest possible number of hypotheses or axioms’, I advocate the continued naturalistic inquiry, along the lines of the minimalist framework, into those features of the faculty posited in response to the problem of the poverty of stimulus which, in conjunction with the ‘linguistic data’ presented to us, determine the nature of our linguistic competence.
Symbolically:
¬(3) & ¬(2) & ¬(1).


References.
Bošković, Ž. and Lasnik, H. [2007]. Minimalist Syntax: The Essential Readings. Blackwell Publishing.
Chomsky, N. [1957]. Syntactic Structures. Mouton de Gruyter, 2002.
Chomsky, N. [2000]. New Horizons in the Study of Language and Mind. Cambridge: Cambridge University Press, 2007.
Chomsky, N. [2002]. On Nature and Language. Cambridge: Cambridge University Press, 2007.
Cowie, F. [1999]. What’s Within? Nativism Reconsidered. Oxford: Oxford University Press.
Einstein, A. [1954]. Ideas and Opinions. New York: Bonanza Books.
Fodor, J. [1983]. The Modularity of Mind. Cambridge, MA: MIT Press.
Franck, J., Vigliocco, G. and Nicol, J. [2002]. Subject-verb agreement errors in French and English: the role of syntactic hierarchy. http://www.psychol.ucl.ac.uk/language/papers/fvn_langcogprocesses.pdf (accessed 9/3/08)
Jespersen, O. [1924]. The Philosophy of Grammar. London: Allen & Unwin.
LaL [1999]. Language and Linguistics: The Key Concepts (Second Edition). Routledge, 2007.


[1] Catherine Osborne, during an informal talk at philosophy society, 2008









