
LANGUAGE AS A COGNITIVE TECHNOLOGY*

Marcelo Dascal

Tel Aviv University

Ever since Descartes singled out the ability to use natural language appropriately in any given circumstance as the proof that humans – unlike animals and machines – have minds, an idea that Turing transformed into his well-known test to determine whether machines have intelligence, the close connection between language and cognition has been widely acknowledged, although it has been accounted for in quite different ways. Recent advances in natural language processing, as well as attempts to create “embodied conversational agents” which couple language processing with that of its natural bodily correlates (gestures, facial expression and gaze direction), in the hope of developing human-computer interfaces based on natural – rather than formal – language, have again brought to the fore the question of how far we can expect machines to master the cognitive abilities required for language use. In this paper, I approach this issue from a different angle, inquiring whether language can be viewed as a “cognitive technology”, employed by humans as a tool for the performance of certain cognitive tasks. I propose a definition of “cognitive technology” that encompasses both external (or “prosthetic”) and internal cognitive devices. A number of parameters in terms of which a typology of cognitive technologies of both kinds can be sketched is also set forth. It is then argued that inquiring about language’s role in cognition allows us to re-frame the traditional debate about the relationship between language and thought, by examining how specific aspects of language actually influence cognition – as an environment, a resource, or a tool. This perspective helps bring together the contributions of the philosophical “linguistic turn” in epistemology and the incipient “epistemology of cognitive technology”. It also permits a more precise and fruitful discussion of the question whether, to what extent, and which of the language-based cognitive technologies we naturally use can be emulated by the kinds of technologies available at present or in the foreseeable future.

KEYWORDS: language, cognitive resource, cognitive environment, cognitive tool, pragmatics, semantics, syntax, sequential ordering, formulaic expressions, information retrieval

Biosketch

Marcelo Dascal is a Professor of Philosophy and former Dean of Humanities at Tel Aviv University. He holds degrees in philosophy, electrical engineering and linguistics, and a Ph.D. in the philosophy of science (Hebrew University, Jerusalem). He has taught and conducted research in major universities in Europe, the Americas, Australia, and Israel, has been a fellow of the Netherlands Institute of Advanced Studies and the Institute of Advanced Studies at the Hebrew University, and will be the Leibniz Professor for Advanced Interdisciplinary Studies at the University of Leipzig (Winter 2002). His main research areas are the philosophy of language and mind, pragmatics, the history of modern philosophy, and the study of controversies. He is the President of the New Israeli Philosophical Association and of the International Association for the Study of Controversies. MD has written, edited and co-edited about 25 books and more than 200 articles, some of them available on his website at Tel Aviv University. He is the founder and editor of the journal Pragmatics & Cognition and of several book series. MD was awarded the Humboldt Research Prize for 2002-2003.


1. Introduction

In our time we live surrounded by objects and devices we have created that are able to perform complex tasks whose performance used to demand considerable concentration and mental effort on our part. One can say that we have managed to transfer to these objects and devices some of the capacities that were considered – until not long ago – typical of and exclusive to the human intellect. In this sense, we have created “cognitive artifacts” (Hutchins 1999) or, more generally, “cognitive technologies” (Gorayska & Mey 1996). These inventions save a considerable portion of our intellectual power, setting it free for the performance of higher cognitive tasks – those for which we have not yet been – and, as believed by some (e.g., Dreyfus 1971, 1992), will never be – able to invent mechanical, computational or other kinds of ersatz.

Behind every technology created by humankind – be it the wheel, agriculture, or the cellular telephone – there is, of course, a lot of cognitive effort. But this does not make it, as such, a cognitive technology in the sense in which I propose to use this expression. What I have in mind are the main uses of a technology, rather than the process of its creation or its effects. Primarily, the wheel serves for transportation; agriculture, for food production; the cellular phone, for communication. Secondarily, these technologies can also be useful for cognition: transportation allows us to participate in scientific conferences where we are supposed to learn something; nourishing ourselves also gives us mental energy; the cellular phone can occasionally serve to communicate cognitive content. Cognitive technologies, however, are those that either have been designed for cognitive uses or else have been appropriated for such uses. They can of course retain their original uses and have non-cognitive effects such as the production of jobs, the waging of war, or space travel.

By ‘cognitive technology’ (CT), I mean, thus, every systematic means – material or mental – created by humans that is significantly and routinely used for the performance of cognitive aims.1 By ‘cognitive aims’, I mean either mental states of a cognitive nature (e.g., knowledge, opinion, belief, intention, expectation, decision, plan of action) or cognitive processes (e.g., perception, memorization, conceptualization, classification, learning, anticipation, the formulation of hypotheses, demonstration, deliberation, evaluation, criticism, persuasion, discovery) that lead to cognitive states or help to reach them.2

Natural languages (NL), unlike formal languages created for specific purposes, can hardly be considered – as such – prototypical ‘artifacts’, for they have not been purposefully ‘designed’. Yet they evolved – genetically and culturally – in view of certain human needs, and some of their features may have been appropriated (deliberately or not) in order to satisfy needs other than those whose pressure caused them to emerge in the first place. Insofar as such needs are ‘cognitive’, it seems to me appropriate to view the corresponding features of natural languages and their use as ‘cognitive technologies’.

Researchers and developers in many fields of technology have been increasingly interested in natural languages. In an ad published in Time magazine a few years ago, the Daimler-Benz corporation asks “How will man and machine work together in the future?”, and replies: “With the spoken word”. The ad reveals that one of the corporation’s divisions “is currently researching and developing sophisticated new word-based systems”, which will bring to life the Biblical-sounding prophecy: “Man will talk to Machine and Machine will respond – for the benefit of Man”. Language-based technology also occupies a central position in MIT’s multimillion-dollar “Oxygen” project, which is heralded as the next revolution in information technology – comparable to those achieved by the introduction of the computer and the internet. Oxygen will “address people’s inherent need to communicate naturally: we are not born with keyboard and mouse sockets but rather with mouths, ears and eyes. In Oxygen, speech understanding is built-in – and all parts of the system and all the applications will use speech” (Dertouzos 1999: 39).

The intense current interest in NL-based technologies, however, is almost entirely focused on one of its functions – human-machine communication. Thus, in spite of its central position in the Oxygen system, the designers seem to confine NL to its communicative role in the human-computer interface.3 For example, the system is intended to be able to satisfy “people’s need to find useful information” by being able to understand and respond adequately to a user who simply says “Get me the big red document that came a month ago” (Dertouzos 1999: 39). This would no doubt be a great communicative achievement, for it would require endowing machines with advanced syntactic, semantic, as well as pragmatic processing abilities. But it is no less important to realize that satisfying the cognitive “need to find useful information” requires fully using the syntactic, semantic and pragmatic potential characteristic of NL – which is in fact what humans do in order to find useful information. The Daimler-Benz ad seems to come close to realizing the cognitive potential of NL, for its heading declares: “Language is the picture and counterpart of thought”. Nevertheless, the research it announces is primarily concerned with the interface issue: “And the machines will understand the words and respond. They will weld, or drive screws, or paint, or write – they will even understand different languages”.

Such a focus on the use of spoken language as the upcoming predominant mode of human-machine interface orients research toward important issues such as auditory pattern recognition and automated voice production, identification of possible sources of misunderstanding, elimination of artificial constraints in communication (naturalness), effortlessness (no need of special learning), comfort (user-friendliness), ubiquity (widespread use of miniaturized devices, with minimal energy requirements, literacy requirements, etc.), security (through voice recognition), and, more recently, human-computer conversation in specific environments, the incorporation of non-verbal channels in human-computer communication, and social agent technologies.4 Most of these applications rely upon sophisticated cognitive means, namely those that subtend the communicative use of language. But they are not, as such, “cognitive technologies” in the sense defined above. Cognitive technologies in that sense are concerned with the role of NL in the cognitive processes themselves, regardless of whether and how these processes may be communicated to other humans or machines. It is to this cognitive use of NL, so far overlooked by researchers and developers of new technologies, that the approach proposed here seeks to call attention. In my opinion, as long as this potential of NL remains untapped, the truly revolutionary effect of the technological appropriation of NL will not be achieved.

In terms of the above definition of CT, the question of whether certain aspects of NL are properly viewed as CTs is independent of the current or prospective state of technological development. In other words, this question is independent of the question whether a presumed CT aspect of language can be implemented by computational or other devices, enhanced by such devices, or used in interfaces with them. These questions depend upon the design and development of artifacts capable of simulating, enhancing or making use of the cognitive-technological features of NL, rather than upon the existence and use of such features themselves. Of course, the better we understand the nature of NL’s cognitive-technological functions, the better placed we will be to develop the corresponding artifacts. We may eventually reach the conclusion that not all of these functions are capable of being satisfactorily emulated by such artifacts. In this sense, the approach proposed here might provide some relevant input to the issue of “what computers can’t do”. I believe this approach will also be valuable for a better understanding of why a proper handling of the cognitive uses of language is crucial for the development of other, not necessarily linguistic, more efficient cognitive technologies, for the design of ‘humane’ interfaces, and – more generally – for the epistemology and philosophical anthropology of the “digital culture”. However, since my primary concern here is to show how several aspects of language and language use can be fruitfully conceptualized as cognitive technologies, the exploration of these further implications of such a conceptual shift will have to be left for another occasion.

In the next section, I present a number of parameters in terms of which a typology of cognitive technologies in general can be outlined. Section 3 summarizes the main antagonistic positions in the traditional debate about the primacy of thought over language or language over thought and proposes to re-frame this debate with the help of the notion of cognitive technology. Section 4 analyzes some examples of linguistic structures and language use as possible candidates of language-based cognitive technologies. In the Conclusion (Section 5), I point out some of the gains to be derived from viewing language as a cognitive-technological resource.

2. Towards a typology of cognitive technologies

In addition to their being directed at either cognitive states or processes – as pointed out in the Introduction – it is convenient to distinguish cognitive technologies according to other significant parameters.

2.1 ‘Strong’ and ‘weak’ cognitive technologies

Mental states can be distinguished (from each other) according to their ‘modal’ qualifications. For instance, an epistemic state can be certain or probable, intuitive or explicit, definitive or hypothetical, justified or accepted without justification, etc. Cognitive processes, in turn, can be oriented towards the achievement of mental states endowed with any of these modalities. A logical or mathematical demonstration, for instance, leads to an epistemic state of definitive certainty, whereas argumentation or deliberation may lead at most to a doxastic state of holding a belief which, although well grounded, is only provisional.5 Cognitive technologies vary according to the modal aims of their designers. When they choose those modalities one could call ‘strong’ (certainty, irrevocability of the conclusions, etc.), they usually seek to endow the proposed technology with an algorithmic decision procedure which is entirely error-free and therefore irrevocable in its results. When they content themselves with ‘weak’ modalities, they can employ less rigid algorithms (e.g., non-monotonic or probabilistic logics), which do not ensure that the results cannot be called into question and modified.
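To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not drawn from the sources cited): the same cognitive aim – deciding whether a number is prime – is pursued by a ‘strong’ technology, an exhaustive procedure whose verdict is error-free and irrevocable, and by a ‘weak’ one, a probabilistic test whose positive verdict remains revisable in the light of further evidence.

```python
import random

def is_prime_exact(n: int) -> bool:
    """A 'strong' procedure: exhaustive trial division; the verdict is certain and irrevocable."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_prime_probable(n: int, rounds: int = 20) -> bool:
    """A 'weak' procedure: random Fermat-style testing; a positive verdict is only
    probable and can in principle be revised by further evidence."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False   # a definite counter-witness: certainly composite
    return True            # no counter-witness found: 'probably prime'

print(is_prime_exact(104729), is_prime_probable(104729))  # both True; only the first is guaranteed
```

The second procedure is cheaper and usually adequate, but its conclusion has the ‘weak’ modal status described above: it can in principle be overturned by further testing.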

2.2 ‘Integral’ and ‘partial’ cognitive technologies

‘Integral’ technologies are those that provide for the full execution of a given cognitive aim, without requiring any human intervention. ‘Partial’ technologies are those that provide only ‘helps’ for the performance of a given cognitive aim. These helps make it easier for a human agent to perform the task, but cannot dispense with his or her intervention. Often the designers’ ambitions lead them to propose maximalist projects of the first kind; but quite often, when they realize the enormous difficulties involved, they are likely to become less ambitious and satisfy themselves with partial technologies. The failure of the projects of full ‘mechanical translation’ in the 1950s and 1960s thus led – not without first wasting hundreds of millions of dollars – to the more modest current projects, in which the technologies developed suggest alternative translations to the human translator. The translator not only has to choose the most appropriate one; s/he must also edit it quite heavily in order to finally produce an acceptable text.6

This ‘evolutionary’ pattern from more to less ambitious technologies for a certain cognitive aim is quite frequent. However, maximalist ambitions tend to reappear whenever new technological and scientific developments make the conditions seem ripe for achieving ‘integral’ aims.

2.3 ‘Complete’ and ‘incomplete’ cognitive technologies

One should further distinguish between the pragmatic notion of an ‘integral’ technology in the above sense and the notion of a syntactically and/or semantically ‘complete’ technology. The latter has to do with the ability of a technology to ‘cover’ completely a given domain or ensemble of ‘objects’ with respect to some desired property. For instance, if one creates an ‘alphabet of traffic signs’ in order to express, through the combinations of its signs, all the instructions to be given to drivers, and if the alphabetical system in question has no means to express one of these instructions, it is incomplete. It may be incomplete either due to the insufficiency of its formation rules or due to that of its transformation rules.7 In so-called non-standard logics, degrees of completeness are distinguished, so that one can talk of ‘weakly complete’ systems, ‘very weakly complete’ systems, and so on.8
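A toy illustration (the sign alphabet and the instructions are invented, hypothetical examples) of how a symbolic system can turn out to be incomplete because its formation rules cannot generate everything the domain requires:

```python
from itertools import product

# A hypothetical 'alphabet of traffic signs': messages are built by one formation
# rule, <action> + <object>. The sets and the required instructions are invented.
actions = {"stop", "yield"}
objects = {"pedestrians", "trains"}

expressible = {f"{a} for {o}" for a, o in product(actions, objects)}

required_instructions = {
    "stop for pedestrians",
    "yield for trains",
    "turn back",              # no combination of the available signs expresses this
}

missing = required_instructions - expressible
print("complete" if not missing else f"incomplete; cannot express: {missing}")
```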

2.4 ‘Constitutive’ and ‘non-constitutive’ cognitive technologies

Some technologies are ‘constitutive’ of certain cognitive states or processes, whereas others are not. The former are such that without them certain cognitive operations cannot be performed. The latter, although extremely useful for the facilitation of the achievement of certain cognitive aims, are not a sine qua non for that. An example of the first kind could be the alleged necessity of super-computers in order to generate very large numbers so as to be able to decide whether numbers endowed with certain arithmetical properties exist; or, more generally, the alleged necessary reliance on computational technologies in order to prove certain theorems. An example of the second kind is the dramatic increase in the efficacy of many of our cognitive performances thanks to computers, in spite of the fact that the latter have not become indispensable for the former. It is not easy to discern whether a given technology is constitutive or not. The endless debate about whether language is a necessary condition for thought, discussed in section 3 below, illustrates such a difficulty.

2.5 ‘External’ and ‘internal’ cognitive technologies

Cognitive technologies can be ‘external’ or ‘internal’. The former consist of physical devices or processes that are instrumental in achieving cognitive aims. The most ubiquitous example today of this kind of technology is the computer. But it is not the only one. Its predecessor, the abacus, as well as paper and pencil, graphs and diagrams, and even the book, can be included in this category. Discussions of cognitive technologies usually focus on such ‘external’ or ‘prosthetic’ – as they are often called – technologies. The significance of ‘internal’ technologies should not be overlooked, however.

‘Internal’ cognitive technologies, on the other hand, are mental procedures thanks to which we can improve our cognitive activity. In this category one should include, for instance, mnemonic techniques that improve our capacity to store and access information, formal methods of reasoning that permit us to draw correct conclusions from given premises, definitions that clarify and fix the meaning of concepts, and so on. Underlying these mental technologies there are – according to current belief – cerebral physical processes. But the mental has not yet been successfully reduced to its underlying neural layer. What characterizes the ‘internal’ technologies, even in cases where they employ external devices, is the fact that they are part and parcel of the cognitive processes themselves at the mental level, rather than attempts to reduce such processes to, or replace them by, devices or processes operating at another level.

3. Language and thought: Re-framing the classical debate

Language has been – and still is – conceived as having communication as its primary function. In this respect, it serves to convey thought or other forms of cognitive content, but need not play any role in the formation of the thoughts it conveys. Descartes, who considered the ability to use language appropriately in any context to be a distinctive trait of humans (as opposed to animals and machines) and insisted that this ability shows that humans have minds, categorically ruled out the possibility that language may be constitutive of thought processes such as reasoning, as suggested by Hobbes. In the same vein, Turing (1950) considered that success in linguistic communication is a test for determining the presence of intelligence in a machine, but did not claim that this would also show that intelligence consists in the ability to manipulate linguistic symbols.9 Both Descartes and Turing assumed that the capacity to use language appropriately for communication requires high cognitive abilities, and therefore can testify to the existence of “mind” or “intellect”. To argue that language itself has a crucial function in cognition would not only violate Descartes’s mind-body dualism (which perhaps wouldn’t bother Turing), but would also seem to involve a chicken-and-egg circularity that would certainly bother Turing, as it bothered many others earlier.10

As opposed to such a view of the relationship between language and mind as purely ‘external’, the former having only an indicative role vis-a-vis the latter, other thinkers have argued that language is much more intimately connected with mental life. For such thinkers, language plays an essential role in cognition. They argue that it is constitutive of fundamental cognitive processes (Hobbes), necessary for their performance (Leibniz), responsible for their historical emergence (Condillac), determinant of their structure and content (Whorf), required for their explanation (Sellars), the behavioral substrate of thinking and other mental processes (Watson), an essential component of the social, cultural or ontological context where thought and other aspects of mental life take place (Vygotsky, Mead, Geertz, Heidegger), and so on.

The centuries-old debate on the nature of the relationship between language and thought was mesmerized by these polar positions regarding which of the two is, in some sense, “dependent” upon the other.11 Under close scrutiny, however, both sides in the debate acknowledge the existence of language-thought interactions that do not fit the sweeping versions of their claims. For example, avowed “externalists” like Bacon and Locke undertake to criticize language as a dangerous source of cognitive mistakes and suggest methods (which gave rise to the attempt to elaborate “scientific” languages) to avoid such a danger. Yet, in so doing, they in fact admit that thought is not impervious to the influence of language. On the other side of the fence, Leibniz, who argued forcefully for the view that language and other semiotic systems are indispensable for even moderately complex forms of cognition, acknowledged the non-linguistic character of certain kinds of knowledge, such as “intuitive” and “clear but not distinct” knowledge.

As in many other debates in the history of ideas, the tendency to focus on mutually exclusive dichotomous positions renders such debates insoluble and to some extent sterile. I have suggested elsewhere that, instead of focusing exclusively on the “primacy” or “dependency” issue when discussing the relationship between language and thought, it might be more useful to envisage the details of how language is actually used in mental processes.12 The application to language of the notion of cognitive technology as defined above provides, I submit, a fruitful way of further exploring this suggestion.

4. Language as environment, resource and tool of cognition

Language’s presence in human life is overwhelming. Poets have excelled at evoking the subtle ways in which words penetrate every corner of our mind,13 and – as we have seen in the preceding section – some thinkers have seen in language an essential and inevitable component of mental processes. This fact is not necessarily “good” or “useful”, if evaluated from the point of view of specific cognitive and practical aims. Hence the recurrent attempts to identify those aspects of language that are deemed to be “pernicious” and to propose a variety of “therapies” to filter them out. However justified it may be, such a critique testifies to the importance of language’s influence on cognition.

Without going as far as Heidegger, who claimed that language is “the house of being”, I would say it is certainly a major component of the context of thinking.14 Without going as far as Geertz (1973: 83), who claimed that language, being one of our key “cultural resources”, is “ingredient, not accessory, to human thought”, I would rather emphasize that it is a ready-at-hand resource that thinking can easily make use of. Without suggesting, as does Watson, that thinking is nothing but sub-vocal speech,15 I would claim that certain linguistic resources do become sharp cognitive tools that afford the emergence and performance of certain types of cognition.

The label “cognitive technology” is, of course, more directly applicable to those aspects of language that were shaped into cognitive tools, both because of their specific cognitive function and because they involve an element of “design”. But one should not overlook the fact that such tools emerge from a background where language’s potential and actual role as a cognitive environment and resource is unquestionable. In fact, the relationship between these three levels is dynamic and multi-directional. Just as “environmental” properties of language (e.g., sequential ordering) can give rise to resources (e.g., narrative structure) and thence to tools (e.g., explanatory strategies), so too a tool (e.g., a successful metaphor created in order to understand a new concept) can become a resource (a frozen metaphor) and then recede into the “environmental” background (e.g., by becoming incorporated into the semantic system as a lexical polysemy).

4.1 Language as environment

As an environment of thought, language, through its sheer overwhelming presence in the mind, influences cognition independently of our awareness or will.

Perhaps the most important of the environmental contributions of language to cognition derives from its general structural properties. Languages are articulated systems; linguists describe them as consisting of a “double articulation” comprising two different sets of basic units and principles of organization: units of meaning (lexemes or morphemes) and, say, units of sound (phonemes).16 These units, in turn, can be combined and recursively recombined in rule-governed ways at different levels – a mechanism that accounts for language’s impressive “generative” potential. This elaborate analytic-combinatorial system provides a natural model for other cognitive activities in which complex wholes are segmented into parts, recurrent units and patterns are “abstracted” from their actual context of use, and then deployed in any number of other contexts. The application of such a model to cognitive needs other than strictly linguistic ones need not be deliberately undertaken, but the fact that we are familiar with it and master it perfectly in our daily language use certainly grants it a privileged position in our practice and conceptualization of how cognition in general does and should proceed. No wonder that Descartes, Leibniz, Locke and many other thinkers used this analytic-combinatorial model as the core of their epistemology, and that Condillac considered the availability of language a sine qua non for moving from “sensation” to the higher level of cognitive ability he calls “analysis”, without which humans would not be able to generate distinguishable and recurrently usable “ideas”.

Another fundamental influence of the linguistic environment on cognition derives from the fact that language is a rule-based system. The power of the notion of “rule” is apparent in the attraction it exerts on a child’s mind, as soon as the child gives up its “egotistic” privilege of creating its own communicative symbols and submits to the socially imposed linguistic rules. Not only does the child attempt to impose absolute exception-free regularity on the linguistic rules themselves through the well-attested phenomenon of “over-generalization” (e.g., by “regularizing” irregular verbs: “eated” instead of “ate”, “shaked” instead of “shook”). It also projects this strict rule model onto other activities such as games where no violations of the rules are tolerated. The strong appeal of the “computational model of the mind”, as well as of its earlier counterpart, the mind-machine analogy, may ultimately derive from our familiarity with the machine-like rules of grammar.17
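The over-generalization phenomenon can be sketched minimally as follows (the word list and the helper names are mine, for illustration only): the regular ‘-ed’ rule applied across the board yields the child’s “eated” and “shaked”, while the adult procedure consults a stored list of exceptions before falling back on the rule.

```python
IRREGULAR_PAST = {"eat": "ate", "shake": "shook", "go": "went"}  # a tiny invented exception list

def past_overgeneralized(verb: str) -> str:
    """Apply the regular '-ed' rule across the board, as children often do."""
    return verb + ("d" if verb.endswith("e") else "ed")   # shake -> 'shaked', eat -> 'eated'

def past_adult(verb: str) -> str:
    """Check the stored exceptions first, then fall back on the regular rule."""
    return IRREGULAR_PAST.get(verb, past_overgeneralized(verb))

for verb in ("walk", "eat", "shake"):
    print(verb, past_overgeneralized(verb), past_adult(verb))
```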

The sequential organization of speech – another structural characteristic of language – imposes upon oral communication a linear and unidirectional pattern. This pattern is imitated in cognitive processes, even when they are not communication-oriented. As a result, an ante-post, step by step ordering of thoughts acquires a privileged canonical status, where what comes “first” is assumed to be, in some sense, cognitively “prior” to what comes “after”. Such a priority may be interpreted as logical, epistemological, causal, psychological, or chronological, but the pattern is the same, and tends to be viewed as an indication that a cognitive process that follows it is “rational”. Obviously, this pattern does not fit all cognitive processes, some of which (e.g., associative thought) display rather a net-like structure.18 Speech permits deviations from linear and unidirectional thematic order (e.g., digressions, flashbacks), and writing and electronic text-processing provide further means for so doing (e.g., footnotes, hypertext). But the fact that such deviations are perceived as “exceptions” to the standard linear pattern implies that both linguistically and cognitively they must be sparingly used and their use needs to be especially justified. In this sense, the environmental influence of the linguistic sequential model obstructs, rather than helps, the performance of certain cognitive processes.19

The analytic-combinatorial, rule-based, and sequential models are not, however, the only ones the linguistic environment provides for cognitive imitation. Adam Smith observed that modern “analytic” languages stand to ancient “synthetic” languages, as far as their simplicity and systematicity are concerned, as early machines, which are “extremely complex in their principles”, stand to more advanced ones, which produce their effects “with fewer wheels and fewer principles of motion”. Similarly, “in language every case of every noun, and every tense of every verb, was originally expressed by a particular distinct word, which served for this purpose and for no other. But succeeding observations discovered, that one set of words was capable of supplying the place of all that infinite number, and that four or five prepositions, and half a dozen auxiliary verbs, were capable of answering the end of all the declensions, and of all the conjugations in the ancient languages” (Smith 1761, 249). But he immediately made clear that the language-machine analogy breaks down as soon as one goes beyond grammar: “The simplification of machines renders them more and more perfect, but this simplification of the rudiments of languages renders them more and more imperfect, and less proper for many of the purposes of language” (ibid.). Smith had in mind the expressive needs that language must provide for, such as eloquence and beauty or, more generally, its ability to express not only the “thought but also the spirit and the mind of the author” (Smith 1983, 19). It is for such purposes that the simplified machinery of “analytic” languages is inadequate due to their inherent “prolixness, constraint, and monotony” (Smith 1761, 251). Since even a paradigmatic analytic language such as English obviously overcomes these inadequacies and provides for the expressive needs mentioned (didn’t Shakespeare write in English?), it must do so – if we follow Smith’s argument – by evolving some “non-mechanical” means that compensate for its “mechanical” limitations.

Smith’s point can be generalized. First, the expressive needs for which more than the rules of grammar are needed comprise not only lofty literary-rhetorical ideals, but also down-to-earth everyday communicative needs. Second, there is no difference between analytic and synthetic languages in this respect; in fact, no known natural language can dispense with additional “wheels” and “principles of motion”, other than the syntactic and semantic ones, in order to fulfill its expressive and communicative duties. Such additions to the basic linguistic system range from syntactic rules that permit one to “transform” or adjust the output of the basic syntactic rules without substantial meaning change to devices that allow one to say one thing and mean another. The former can be compared to the addition of epicycles to the Ptolemaic system in order to cope with the observed phenomena, without modifying its methodological assumption about the kinds of “wheels” and “principles” that are supposed to account for the machine’s “competence”.20 The latter, studied mainly by pragmatics and rhetoric, are generally believed to obey different kinds of “rules” – of a heuristic, rather than an algorithmic nature.21

A particularly significant feature of the pragma-rhetorical component of the linguistic system is that it sometimes achieves its aims by resorting to explicit violations of the system’s rules – be they the algorithmic ones (as in metaphor, puns, and nonsense poetry) or the heuristic ones (as in conversational “implicatures”). A rule-based system that employs different kinds of rules and does not rule out, but rather permits and even exploits the violation of its own rules, is extremely valuable from a cognitive point of view. For it provides a living and effective model for many important cognitive processes that are open-ended, flexible, creative, and yet not aleatory. It also shows that there is an alternative to treating – as virtually all interfaces and applications to date do – rule violations as mistakes to be corrected (sometimes automatically, thereby irritating users).

Apart from its generic influence, the linguistic environment can have quite specific effects upon cognition, which should not be overlooked. An interesting case is the presumed role of language in causing deviations from logically valid forms of reasoning. For example, there is evidence that the evaluation of invalid syllogisms as valid has to do with an “atmosphere” effect, produced by the particular linguistic formulation of the premises. Thus, syllogisms whose premises were both affirmative and universal would tend to be viewed as having also an affirmative and universal conclusion, irrespective of whether the disposition of the subject and predicate terms in the premises logically warrants such a conclusion (Woodworth and Sells 1935; Evans 1982: 89-90). Similarly, a robust finding in studies of conditional reasoning using Wason’s well-known “selection task” is the linguistically-driven “matching bias”. The subjects in this task are given a conditional sentence referring to a set of four cards laid down on a table. Each card has a letter on one side and a number on the other. The subjects are asked to determine which cards they would have to turn over in order to tell whether the sentence is true or false. The matching bias consists in the fact that subjects tend to pick those cards that match the ones named in the sentence, regardless of whether they verify or falsify it.22 Further, allegedly pernicious, specific examples of linguistic influence on cognition will be mentioned in the next section.
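The selection task and the two response patterns can be sketched as follows (the particular rule and card faces are a standard textbook illustration, not taken from the studies cited):

```python
# Rule under test: "If a card has a vowel on one side, it has an even number on the other."
cards = ["E", "K", "4", "7"]   # the visible faces of the four cards

def is_vowel(face: str) -> bool:
    return face in "AEIOU"

def is_odd(face: str) -> bool:
    return face.isdigit() and int(face) % 2 == 1

def is_even(face: str) -> bool:
    return face.isdigit() and int(face) % 2 == 0

# Normatively required selection: the potential falsifiers, i.e. the vowel and the
# odd number, since only they could conceal a counterexample to the rule.
logical_choice = [c for c in cards if is_vowel(c) or is_odd(c)]

# Matching bias: pick the cards whose faces match the terms named in the rule
# ('vowel', 'even number'), irrespective of their evidential value.
matching_choice = [c for c in cards if is_vowel(c) or is_even(c)]

print("logically required:", logical_choice)   # ['E', '7']
print("matching choice:   ", matching_choice)  # ['E', '4']
```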

4.2 Language as resource

Under this rubric I include those aspects of language that are regularly and, for the most part, consciously put to use for cognitive purposes, with minimal elaboration. They deserve to be considered “technologies” insofar as the choice of a particular linguistic feature stands in a means-end relationship with the cognitive purpose in view.

An example of a linguistic resource widely employed for an extremely important cognitive purpose is the use of words for gathering, organizing, storing, and retrieving information. This has been done for so long that it is taken for granted and we are unaware of its linguistic-cognitive underpinnings, as well as of the fact that in its current uses – be it in printed indices or in computerized search engines – its potential is far from being fully exploited. For, whereas the value of words for tracing relevant information lies in their semantic content, most applications make use only of their graphic form in order to locate matching graphic strings that are assumed to lead to semantically relevant material. The cognitive burden of sorting out truly relevant information is for the most part left to the user. Few systems make use of the truly relevant linguistic resource for information storage and retrieval, the resource humans naturally and effortlessly use, namely the rich semantic structure of natural languages.23

The semantic network of language is based on a set of semantic relations that connect expressions in a variety of ways – as synonyms, near-synonyms, paraphrases, analytic, super-ordinate, subordinate, belonging or not to a semantic field, antonyms, contrary, contradictory, etc. By structuring the “mental lexicon”, such a network is an inescapable resource the mind constantly resorts to in most of its cognitive operations, which rely upon conceptual similarities and differences. The semantic network also comprises information – such as connotations, prototypes, and factual information – that belongs rather to the “mental encyclopedia” but, being standard, widely known and normally activated in the understanding of linguistic expressions, counts as “semantically relevant”.24 This extension makes the network an even more useful and constantly used cognitive resource, only minimally exploited to date by technologies that make use of thesauri.
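The contrast drawn above – retrieval by graphic form alone versus retrieval aided by a fragment of the semantic network – can be sketched minimally as follows (the documents and the thesaurus entries are invented for illustration):

```python
documents = {
    1: "the physician prescribed a new drug",
    2: "doctors in the clinic saw forty patients",
    3: "the committee discussed parking fees",
}

# A hand-made fragment of the semantic network: the query term and some of the
# expressions it is semantically related to (synonyms, inflected forms).
semantic_network = {
    "doctor": {"doctor", "doctors", "physician"},
}

def search_by_string(query: str) -> list:
    """Retrieval by graphic form only: match the query string literally."""
    return [i for i, text in documents.items() if query in text]

def search_by_semantics(query: str) -> list:
    """Retrieval helped by semantic relations: expand the query before matching."""
    terms = semantic_network.get(query, {query})
    return [i for i, text in documents.items()
            if any(term in text.split() for term in terms)]

print(search_by_string("doctor"))     # [2] -- found only because 'doctors' happens to contain 'doctor'
print(search_by_semantics("doctor"))  # [1, 2]
```

Even this crude expansion recovers the “physician” document that literal string matching misses, which is precisely the point of exploiting semantic rather than merely graphic relations.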

The possibility of precision afforded by natural languages should not make us overlook the wide variety of syntactic, semantic and pragmatic means they have for expressing indeterminacy – a cover term here used to refer to phenomena such as indefiniteness, ambiguity, polysemy, unspecificity, imprecision and vagueness. Although considered a hindrance from the point of view of certain cognitive needs, such linguistic means are, from the point of view of other cognitive needs, an asset. For instance, they are an invaluable – perhaps indispensable – resource for the cognitive processes that begin with a foggy initial intuition which they undertake to clarify in a stepwise way, or vice-versa, for those processes that seek to sum up the gist of a theory, an argument, or a story. They are also essential for conceptualizing those situations in which the mind hesitates between alternatives, none of which seem to fall clearly into well-defined categories. While we often wish everything could be clearly classified as either black or white, good or bad, true or false, we often stumble at borderline cases, which force our mind to abandon dichotomous thought and rather think in terms of gradual, continuous, and vague concepts (Gullvåg & Naess 1995).

Language also provides its users with a repertoire of ready-at-hand, more or less conventionalized patterns that can be put to use not only communicatively but also cognitively. This repertoire ranges from phrases and sentences to full-fledged discursive structures. It includes, among other things, formulaic expressions, conventional metaphors, proverbs, topoi,25 argumentative formulae, dialogue patterns, and literary forms. These resources are ready-at-hand for organizing thought. The existence of canonical argumentative formulae structured by conjunctions and adverbs such as if … then, but, either…or, therefore provides “directives” to the reasoner, which allow her to complete what is missing, to determine if something in her reasoning is irrelevant or redundant, etc. So too the current canonical form of the “scientific article”, say, in psychology, provides guidelines not only for the presentation of the author’s results, but also for the way in which his mental and practical steps leading to such results should be executed.26

Before concluding this sample of language-based cognitive resources, I want to mention a number of related linguistic devices that, I believe, are extremely important for cognition. Consider the parenthetical ‘I believe’ I have employed in the preceding sentence. Its position could have been occupied, instead, by ‘I know’, ‘I am sure’, ‘I have no doubt’, ‘I hypothesize’, ‘I submit’, ‘I argue’, ‘I contend’ or, with slight syntactical modifications, by ‘I wonder’, ‘I doubt’, ‘allegedly’, etc. Some of these expressions express propositional attitudes; others, the illocutionary force of speech acts. Both act as operators on propositional contents, reflecting the variety of degrees of commitment, epistemic status, intentions, etc. with which the mind may relate to such contents. They thus belong to a family of expressions which draw a distinction between two layers of “content” – the one referring to or modulating the other. The most familiar linguistic devices of this kind are metalinguistic operators such as quotation, thanks to which natural languages can act as their own metalanguage. As a whole, these linguistic resources correspond to and reveal the inherent reflexivity of human mental processes, i.e., the fact that cognition is conscious of itself, and therefore involves “metacognition”.27 It seems to me that this is not a one-way road, leading from metacognition to its linguistic expression, but at least a two-way road, in which the existence of metalinguistic resources should also be credited with enhancing metacognitive awareness and its development. The mechanism of joint attention (towards perceptual objects, towards each other in an interaction), for example, which is a necessary ingredient of intentional communication (Brinck 2001), involves the recognition of the other’s attentional state, as well as awareness of one’s own. Similarly, the mother’s attribution of intentions to the infant has been suggested to play a decisive role in the infant’s development of her self-perception as an intentional agent (De Gelder 1981; discussed in Dascal 1983: 99ff.).

Let us now turn to the alleged negative effects of language as a resource. The careless cognitive use of linguistic resources has been blamed, throughout the centuries, for inducing all sorts of cognitive mistakes. The indiscriminate use of linguistically productive (and legitimate) patterns of word generation (e.g., white → whiteness) has been held responsible for yielding in fact vacuous terms (e.g., nothingness) which are the source of conceptual confusion and pointless dispute (Locke). The existence in language of general terms was blamed for inducing the false belief that there are general ideas and general objects (Berkeley). Natural language categorization, based on “vulgar” knowledge, was considered to be the most dangerous of the “idols” that threaten scientific thinking (Bacon). Vagueness was considered incompatible with logic and therefore utterly unreliable (Russell). Reliance on grammatical analogy was blamed for causing “category mistakes” and “systematically misleading” the understanding (Ryle). A list of “pseudo-problems” in which generations of metaphysicians were entangled was added to language’s long list of cognitive deficits (Carnap). Uncritical linguistic practice was singled out as the most dangerous cause – whether deliberate or not – of cognitive distortion, manipulation, and ultimately “un-sanity” (Count Korzybski and the General Semantics movement).28 And so on. This small sample of criticism certainly shows that language’s influence on cognition can indeed be pernicious. But it also highlights the extent and variety of this influence. The lesson to be drawn, as in many other cases, is simply that we must be aware of this variety, extent, and sometimes insidious nature, so as to rely upon the linguistic environment of thought only judiciously.

4.3 Language as tool

A language-based cognitive technology can be viewed as a tool when it is the result of the engineering of linguistic resources for a specific cognitive task. Let us consider some examples.

The linguistic resource of explaining the meaning of one term by correlating it with a string of other terms that “define” the former has been sharpened into the powerful cognitive tool of formal definition. This tool permits the creation of special terminology (new terms for new concepts, or redefinition of existing terms) or of new notational systems.29 Usually the model of definition adopted in these cases is the “classical” one, i.e., the specification of necessary and sufficient conditions. But natural language semantics also provides other models of capturing concepts, e.g., in terms of similarity to a prototypical member of the denoted class or in terms of clusters of properties which are hierarchically organized in terms of their centrality or weight, although none of them is per se necessary or sufficient. Such “non-classical” models are characteristic of so-called “natural kind” terms (Achinstein 1968). The elaboration of each of these kinds of “definition” yields different types of linguistic tools or technologies that are fit for different cognitive purposes.
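The difference between the two models of definition can be sketched as follows (the features, weights, and threshold are invented, hypothetical values):

```python
def is_bird_classical(features: set) -> bool:
    """'Classical' definition: each condition is necessary, and jointly they are sufficient."""
    required = {"has_feathers", "lays_eggs", "has_beak"}
    return required <= features

PROTOTYPE_WEIGHTS = {"has_feathers": 1.0, "lays_eggs": 0.8, "has_beak": 0.9, "flies": 0.7}

def is_bird_prototype(features: set, threshold: float = 0.6) -> bool:
    """Prototype model: no single feature is necessary; membership is graded
    similarity (here, a weighted overlap) to a prototypical exemplar."""
    score = sum(w for f, w in PROTOTYPE_WEIGHTS.items() if f in features)
    return score / sum(PROTOTYPE_WEIGHTS.values()) >= threshold

odd_bird = {"lays_eggs", "has_beak", "flies"}   # imagine a bird that has lost its feathers
print(is_bird_classical(odd_bird), is_bird_prototype(odd_bird))  # False True
```

The classical procedure rejects the borderline case because one “necessary” condition fails, while the prototype procedure still classifies it, which is the kind of flexibility the “non-classical” models are meant to capture.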

The various forms of indeterminacy available in natural languages can be shaped into cognitive tools. For example, the linguistic possibility of generating scales of quantifiers, making them as subtle as desired (e.g., everyone, virtually everyone, almost everyone, most of the people, the majority of the people, some people, nearly nobody, virtually nobody, nobody, etc.), can give rise to rigorous systems of quantification other than the standard one. The same is true of linguistic tense systems, which can be elaborated into a variety of temporal logics. And vagueness has been elaborated semantically into “fuzzy logic”, which permits one to reason rigorously with vague concepts (Zadeh 1975; see also Black 1963), as well as pragmatically, into a dynamic interpretive tool for gradually increasing precision until what appears to be an agreement or disagreement is shown to be in fact a pseudo-agreement or a pseudo-disagreement (Naess 1966).
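A minimal sketch of how a Zadeh-style fuzzy predicate and a graded quantifier scale permit one to reason systematically with vague notions (the membership function and the cut-off points are invented for illustration):

```python
def tallness(height_cm: float) -> float:
    """Fuzzy membership in 'tall': 0 below 160 cm, 1 above 190 cm, linear in between."""
    return max(0.0, min(1.0, (height_cm - 160.0) / 30.0))

def quantifier_label(proportion: float) -> str:
    """Map a proportion onto a graded quantifier scale of the kind mentioned above."""
    scale = [(0.99, "virtually everyone"), (0.8, "almost everyone"),
             (0.5, "most of the people"), (0.2, "some people"),
             (0.01, "nearly nobody"), (0.0, "nobody")]
    for cutoff, label in scale:
        if proportion >= cutoff:
            return label
    return "nobody"

heights = [158, 165, 172, 181, 188, 193]
degrees = [tallness(h) for h in heights]          # graded, not all-or-nothing, membership
proportion_tall = sum(degrees) / len(degrees)     # a simple fuzzy 'relative count'
print([round(d, 2) for d in degrees])             # [0.0, 0.17, 0.4, 0.7, 0.93, 1.0]
print(quantifier_label(proportion_tall))          # 'most of the people'
```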

Formulaic expressions can become powerful cognitive tools. A remarkable example, analyzed by Reviel Netz (1999), is the role of linguistic formulae in ancient Greek mathematics. Netz shows that, in contrast to deduction in modern mathematics, where one resorts to typographic symbols, thus opting for exploiting a visual resource, Greek mathematics made use of formulaic expressions – a linguistic resource presumably of oral origin. He analyzes in detail Book II of Euclid’s Elements, identifying and sorting out the 71 such formulaic expressions, i.e. highly repetitive and standardized phrases and sentences, which make up most of the text. He argues that deduction as a cognitive tool may have been made possible by the systematic use of such formulae: “The constant re-shuffling of objects in substitutions may be securely followed, since it is no more than the re-fitting of well-known verbal elements into well-known verbal structures. It is a game of decomposition and re-composition of phrases, indeed similar to the jigsaw puzzle of putting one heroic epithet after another, which is to a certain extent what epic [Homeric] singers did” (Netz 1999: 161). If we jump from mathematics to religion, we may find in the Hindu mantra a similar phenomenon. The nature and role of mantras is quite controversial, as is patent from the papers in Alper’s (Ed., 1989) collection. Some scholars even doubt their linguistic nature, and most view them as belonging to the religious ritual, where – according to some – they are akin to prayer. Nevertheless, there is no doubt that, at least in some of their variants, they are self-addressed linguistic or quasi-linguistic tools whose main purpose is to play a definite role in their user’s mental life. This is the case, for instance, in Yogi meditation; and a classical text presumably of the 3rd or 4th century, the Arthaśāstra, goes as far as attributing to the mantra impressive intellectual effects: “a mantra accomplishes the apprehension of what is not or cannot be seen; imparts the strength of a definite conclusion to what is apprehended, removes doubt when two courses are possible [and] leads to inference of an entire matter when only a part is seen” (quoted by Alper 1989: 2).

Literary resources can also develop into cognitive tools par excellence. Tsur & Benari (2001) have shown how a specific poetic device – ‘composition of place’ – employed in meditative poetry, is designed to overcome the linear and conceptual character of language so as to convey “such non-conceptual experiences as meditation, ecstasy or mystic insights” and thus to “express the ‘ineffable’” (Tsur & Benari 2001: 231). In the same vein, Zamir (Forthcoming) shows, through a close reading of Hamlet, how literature is able to create awareness of “ineffable” content.30 In both cases, I would suggest, literary tools not only express or induce certain mental states, but also in a sense create the conditions for the very existence of these states in the first place.

As a last example of a linguistic resource that gives rise to a cognitive tool, I would like to mention the dialectical use of dialogue structures. Ever since Plato, philosophers have developed what can be seen as a genre – the “philosophical dialogue” – in order to expound their ideas. Its formal structure, however, carries with it cognitive requirements quite different from those of other forms of exposition. To expound one’s ideas for a specific interlocutor and to defend them against her specific objections – even if both the character and the objections are fictional creations of the writer – requires techniques of persuasion, argumentation and justification other than those used in a linear text that addresses a generalized, non-present, and unknown reader. Other dialectical techniques developed independently, in oral rather than in written form. In the Middle Ages, codified forms of debate such as the disputatio and the obligatio evolved, and success in them became part of the academic requirements for obtaining a university degree. But the cognitive implications of these practices transcended both pedagogical needs and the Middle Ages. For the basic idea that a rational debate should obey a set of principles that define the duties of the “defendant” and the “opponent”, the types of moves they are allowed to perform, and what will count as a “victory”, in fact remains in force up to this day – even though the content of such principles has changed. There is no space here to trace the development of dialectical techniques, which involves an interesting interplay between logic and rhetoric, culminating with “dialogical logic” on the one hand and “the new rhetoric” on the other. What is important to realize, for the purposes of this paper, is that the ensemble of techniques thus developed transcends pedagogical, expository, or communicative ends, for it becomes a powerful tool for actually implementing the idea that at the core of rationality lies the practice of critical thought.31 In this sense, a system of “electronic argumentation” should be designed not only to improve one’s ability to express oneself (Sillince 1996), but also as a tool to improve one’s ability to think rationally.

5. Concluding remarks

In this paper I have proposed to look at language not only as a communicative interface between cognitive agents, but as a technology involved in cognition itself. I surveyed instances of how language functions as an environment, a resource, and a tool of cognition. Some of these examples are more easily acknowledged as “cognitive technologies” than others, but all of them share the main characteristics I have attributed to this notion. They contribute systematically and directly to cognitive processes and to the achievement of cognitive aims. And all of them are clearly language-based.

In terms of the parameters presented in section 2, most of the examples of language-based cognitive technologies discussed are “internal”, and await the eventual development of “external” counterparts; in spite of the optimistically exaggerated claims of some designers, virtually all of the extant developments of this kind are “partial” rather than “integral”; some of the language-based cognitive technologies are useful for “strong” cognition, others for “weak” cognition, and still others for both; very few purport to be “complete”; and only a few of them have been suggested to be “constitutive”.

By emphasizing the direct contribution of language-based technologies to cognition, I want to stress that they are not mediated by the communicative use of language – the kind of use that monopolizes the attention of designers of human-computer interfaces. I obviously do not deny the importance of the latter, but I think the justified desire to develop humane interfaces and, in general, humane technologies, requires a better understanding of how the human mind makes use of and is affected by naturally evolved or designed technologies. In this respect, this paper should be seen as a contribution to the incipient field of an “epistemology of cognitive technology” (Gorayska & Marsh 1996). By focusing on language, it connects this field with one of the main philosophical achievements of 20th century thought, the “linguistic turn”, which transformed language into the fulcrum of research in philosophy, psychology, the social sciences, and the cognitive sciences.

In his intriguing book Meaning in Technology, Arnold Pacey defends a worldview “in which human relationships and human purposes may have a closer connection with technological progress than sometimes seems possible” (Pacey 2001: 38). He distinguishes between the prevalent detached approach to science and technology and a participatory approach, in which we “feel ourselves to be involved in the system on which we are working” (p. 12). According to him, it is the latter that endows technology with meaning. Pacey might have found support for his insights in the present paper. Not only because we have an intimate participatory relationship with language in general and language-based cognitive technologies in particular, but also because such technologies are, ultimately, the technologies of meaning par excellence.

Notes

* I have presented some of the ideas put together in this paper, in one way or another, in the following forums: “Dialogue Analysis 2000” (International Association for Dialogue Analysis, Bologna, June 2000); “Limited Cognitive Resources: Inference and Reference” (Center on Resource Adaptive Cognitive Processes, Saarbrücken, October 2000); “IV Encontro Brasileiro Internacional de Ciência Cognitiva” (Marília, Brazil, December 2000); and “Ciencia, Tecnología y Bien Común: La Actualidad de Leibniz” (Universidad Politécnica de Valencia, Spain, March 2001). I thank the organizers as well as the participants who enlightened me with their comments and criticism.

**To appear in the first issue of the International Journal of Cognition and Technology.

  1. Notice that my definition is substantially narrower than those attributed to this term by other researchers (e.g., Dautenhahn 2000), including the definition used in the announcement of this journal.
  2. It should be noticed that some of the expressions in these two lists of illustrations – e.g., ‘demonstration’, ‘persuasion’, ‘decision’, etc. – display the well-known process/product ambiguity. This is why they can belong both to the list of states and to that of processes.
  3. See Zue (1999).
  4. On the three last items, see for example the papers presented in Proceedings (2000), as well as those collected in Cassell et al. (Eds., 2000) and Dautenhahn (Ed., 2000).
  5. I have proposed a distinction between ‘demonstration’ and ‘argumentation’ as preferred moves in different types of polemics in Dascal (1998a).
  6. For a critique of the initial projects of mechanical translation, which pointed out the insufficiency of linguistic theory to support them, see Bar-Hillel (1964: chapters 10-14).
  7. Usually, in the first case it is said that it is syntactically incomplete, while in the second it is said to be semantically incomplete. In both cases, however, semantics – in the broad sense of correspondence between a symbolic system and the properties it purports to represent – is involved. The formation rules in fact select a set of well-formed formulae or combinations of symbols according to some criterion of well-formedness that is supposed to correspond to some property (e.g., ‘grammaticality’ in a linguistic system or ‘propositionality’ in the propositional calculus), whereas the transformation rules select a set of derivation relations between formulae that is supposed to correspond to another property (e.g., ‘meaning invariance’ in the standard model of generative grammar or ‘validity’ in the propositional calculus).
  8. See Anderson & Belnap (1975: 403ff.).
  9. Some passages in Turing’s paper may suggest that he took success in playing the “imitation game” (i.e., what I called the “test”) as an operational definition of intelligence, and thus – from the point of view of behaviorism – as equivalent to it, rather than a sign of it. See, for example, Block (1981) and Richardson (1982). Eli Dresner tried to persuade me that this is the case, but he concedes that “Turing definitely does not describe himself as a behaviorist” (personal communication).
  10. Among them Rousseau and Adam Smith (cf. Dascal 1978 and Forthcoming).
  11. For an analysis of this debate, see Dascal (1995), where several of the authors mentioned in this and the preceding paragraphs are discussed. Those interested particularly in Leibniz and Hobbes should consult Dascal (1998b and 1998c, respectively). On the implications of this debate for AI and current work in the philosophy of mind and of language, see Dascal (1992b, 1997a) and Dresner & Dascal (2001).
  12. I coined the term ‘psychopragmatics’ for the branch of pragmatics that deals not with the social uses of language such as communication (a task reserved for ‘sociopragmatics’) but with the mental uses of language. See Dascal (1983) and references therein.
  13. For example: “We thought a day and night of steady rain / was plenty, but it’s falling again, downright tireless … / …Much like words / But words don’t fall exactly; they hang in there / In the heaven of language, immune to gravity / If not to time, entering your mind / From no direction, travelling no distance at all, / And with rainy persistence tease from the spread earth / So many wonderful scents …” (Robert Mezey, “Words”; quoted in Aitchison 1994, p. v). The images employed in this poem capture several of the “environmental” properties of language described in section 4.1.
  14. In this respect, I am much more moderate than Winograd & Flores, who interpret Heidegger’s dictum as claiming that “nothing exists except through language” (Winograd & Flores 1986: 68).
  15. Watson later rejected this reductionist claim (see Watson & McDougall 1928).
  16. In fact, linguistic articulation goes well beyond this, since one can identify sub-phonemic features out of which phonemes are formed, as well as supra-lexical meaningful compounds such as idioms, whose meaning cannot be accounted for in terms of lexical-syntactic composition.
  17. “Grammar itself is a machine / Which, from innumerable sequences / selects the strings of words for intercourse …/ When the words have vanished, grammar’s left, / And it’s a machine Meaning what? / A totally foreign language” (Lars Gustafsson, “The machines”, quoted by Haberland 1996, p. 92).
  18. The philosopher Gilles Deleuze, who describes this kind of structure using the botanical model of the rhizome, rather than the now popular neural net model, has highlighted its centrality for understanding the multi-layered complexity of human thought and its expression. See Deleuze & Guattari (1976, 1980).
  19. A striking example of the sheer linguistic difficulty in overcoming this obstacle is exemplified by Alejo Carpentier’s story “Viaje a la semilla” (in Carpentier 1979, pp. 63-94). The story moves backwards from a current event to the “seed” whence it derives. In spite of the author’s ingenious efforts, however, it becomes apparent that it is virtually impossible to neutralize the temporal order embedded in various levels of linguistic structure.
  20. For example, Smith pointed out one of the expressive devices used to circumvent the basic syntactic order in English (subject-verb-object), namely the anteposition of “whatever is most interesting in the sentence” (Smith 1983, 18), which is accounted for in modern syntactic theory in terms of an “epicyclic” rule.
  21. For example, the “maxims” that govern conversation according to Grice. For a critique of the view that, since conversation is not ruled by constitutive rules of a grammatical kind, it is not, properly speaking, a rule-governed phenomenon, see Dascal (1992a).
  22. For discussion and interpretation of the “matching bias” phenomenon, see Evans (1982: 140-144) and Dascal (1987).
  23. Leibniz devoted much thought, in his projects for an encyclopedia and its role in the “art of discovery”, to the cognitive role of a variety of types of indexing. See Dascal (In press).
  24. On the notion of “semantic relevance”, see Achinstein (1968). On the difficulty of establishing a clear distinction between “dictionary” and “encyclopedia”, see Peeters (2000) and Cabrera (2001).
  25. Topoi, loci communes, or "commonplaces" occupied a central place in humanist education in the Renaissance and the early modern period. Dozens of "Commonplace-Books" were printed at the time, and students were required to write and use their own commonplace lists. Such a practice not only established shared forms of expression, but also shared conceptual tools, which thus constituted a background of "mental structures" guiding the thought and understanding of educated persons throughout Europe for at least two centuries. For a study of this language-based cognitive resource, see Moss (1996).
  26. Some of these resources have been put to use in computer applications. Chinese word-processors, taking advantage of the Chinese habit of systematically using proverbs (mainly four-character ones), “propose” to the writer possible proverbial continuations once the first two characters of the proverb are typed. Attempts to simulate and exploit the dialogical resources of natural language for human-computer interfaces are now proliferating. The pioneering classic ELIZA employed a number of phrasal structures routinely occurring in nondirective psychotherapy in order to create the impression of a real dialogue between therapist and patient (Weizenbaum 1966). The MUD robot-agent JULIA, like ELIZA, employs lists of common queries and a matching procedure in order to generate natural-looking “conversation” with users (cf. Foner 2000). More recent rule-based systems of dialogue and conversation (e.g., Kreutel & Matheson 2000; Webb 2000) are no doubt much more sophisticated and useful tools than ELIZA, but they still remain excessively subordinated, in my opinion, to the rule-following model. A schematic sketch of such matching procedures is given at the end of these notes.
  27. For a rhetorical analysis of the scientific paper and its evolution from the 17th century onwards, see Gross (1990) and Gross et al. (2002).
  28. For a sample of research on metacognition, see Metcalfe & Shimamura (1994). For the relationship between metacognition and consciousness, see Newton (1995), and for its relationship with conversation, see Hayashi (1999).
  29. A striking example of the use of language for alleged "scientific" purposes is Scientology. This religious movement, based on the "science" of "Dianetics", claims to provide its followers with a "cognitive technology" that allows them to achieve the status of "Clears", essentially through linguistic manipulation. For an analysis of this phenomenon, see Mishori & Dascal (2000).
  30. Lavoisier, who was in this respect a follower of Condillac, viewed his new chemical nomenclature as having cognitive implications far beyond those of a mere terminological reform (cf. Bensaude-Vincent 1993).
  31. Zamir (2002) also proposes an epistemological account of how literature can express and eventually generate cognitive content that the resources of philosophical discourse are unable to capture.
  32. See Astroh (1995), Barth (1992), Dascal (1997b, 1998a, 2000) and references therein.
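
Note 7 describes formation rules as a purely syntactic criterion that is meant to track a property such as ‘propositionality’. The following sketch is merely illustrative: the toy grammar of a propositional calculus it implements (atomic letters, a negation sign, and fully parenthesized binary connectives) is my own assumption, not a system discussed in this paper.

    # Illustrative sketch: formation rules as a mechanical check on symbol strings.
    # The grammar is an assumed toy propositional calculus:
    #   formula ::= lowercase letter | "~" formula | "(" formula op formula ")"
    #   op      ::= "&" | "v" | "->"

    def is_wff(tokens):
        """Return True if the token list is a well-formed formula of the toy calculus."""
        ok, rest = _parse(tokens)
        return ok and rest == []

    def _parse(tokens):
        if not tokens:
            return False, tokens
        head, *rest = tokens
        if head.isalpha() and head.islower():          # atomic proposition: p, q, r, ...
            return True, rest
        if head == "~":                                # negation applies to one formula
            return _parse(rest)
        if head == "(":                                # binary formula: ( A op B )
            ok1, rest = _parse(rest)
            if not ok1 or not rest or rest[0] not in ("&", "v", "->"):
                return False, tokens
            ok2, rest = _parse(rest[1:])
            if not ok2 or not rest or rest[0] != ")":
                return False, tokens
            return True, rest[1:]
        return False, tokens

    print(is_wff(["(", "p", "&", "~", "q", ")"]))      # True: well formed by these rules
    print(is_wff(["p", "&", "q"]))                     # False: lacks the required parentheses

The point of the sketch is only that the checker operates on the shapes of symbol strings; whether the accepted strings indeed correspond to propositions is a further, semantic question, as note 7 argues.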

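Note 26 mentions ELIZA’s matching procedure and the proverb-proposing facility of some Chinese word processors. The following sketch is not Weizenbaum’s program but a minimal schematic stand-in, with hypothetical rules and phrasings of my own; it shows how keyword patterns and canned templates can yield the appearance of dialogue without any analysis of meaning, and how stored formulae can be proposed by simple prefix matching.

    # Minimal ELIZA-style sketch (hypothetical rules, not Weizenbaum's original code):
    # keyword patterns select canned response templates in a nondirective style.
    import random
    import re

    RULES = [
        (re.compile(r"\bI need (.+)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"\bI am (.+)", re.I),
         ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (re.compile(r"\bmy (mother|father)\b", re.I),
         ["Tell me more about your {0}."]),
    ]
    FALLBACKS = ["Please go on.", "I see.", "Can you elaborate on that?"]

    def respond(utterance):
        """Return a canned, pattern-matched reply; no meaning is analyzed."""
        for pattern, templates in RULES:
            match = pattern.search(utterance)
            if match:
                fragment = match.group(1).strip(" .!?")
                return random.choice(templates).format(fragment)
        return random.choice(FALLBACKS)

    # The proverb-proposing word processors mentioned in the same note can be
    # sketched analogously: stored formulae are offered by simple prefix matching.
    def complete(prefix, formulae):
        return [f for f in formulae if f.startswith(prefix)]

    print(respond("I am tired of rules"))   # e.g. "Why do you think you are tired of rules?"
    print(complete("A bird", ["A bird in the hand is worth two in the bush"]))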

References

Achinstein, Peter (1968). Concepts of science: A philosophical analysis. Baltimore: The Johns Hopkins Press.

Aitchison, Jean (1994). Words in the mind (2nd ed.). Oxford: Blackwell.

Alper, Harvey P. (1989). Introduction. In H. P. Alper (Ed.) (pp. 1-14).

Alper, Harvey P. (Ed.) (1989). Mantra. Albany: State University of New York Press.

Anderson, Alan R. & Nuel D. Belnap, Jr. (1975). Entailment: The logic of relevance and necessity, vol. 1. Princeton: Princeton University Press.

Astroh, Michael (1995). Sprachphilosophie und Rhetorik. In Dascal et al. (Eds.), pp. 1622-1643.

Bar-Hillel, Yehoshua (1964). Language and information. Reading, MA & Jerusalem: Addison-Wesley & Magnes Press.

Barth, Else M. (1992). Dialogical approaches. In Dascal et al. (Eds.), pp. 663-676.

Bensaude-Vincent, Bernadette (1993). Lavoisier: Mémoires d’une révolution. Paris: Flammarion.

Black, Max (1963). Reasoning with loose concepts. Dialogue, 2, 1-12.

Block, Ned (1981). Psychologism and behaviorism. The Philosophical Review, 80, 5-43.

Brinck, Ingar (2001). Attention and evolution of intentional communication. Pragmatics & Cognition, 9(2), 255-272.

Cabrera, Julio (2001). Words, worlds, words. Pragmatics & Cognition, 9(2), 313-327.

Carpentier, Alejo (1979). Cuentos completos. Barcelona: Bruguera.

Cassell, Justine, Joseph Sullivan, Scott Prevost, & Elizabeth Churchill (Eds.) (2000). Embodied conversational agents. Cambridge, MA: The MIT Press.

Dascal, Marcelo (1978). Aporia and theoria: Rousseau on language and thought. Revue Internationale de Philosophie, 124/125, 214-237.

Dascal, Marcelo (1983). Pragmatics and the philosophy of mind, vol. 1: Thought in Language. Amsterdam & Philadelphia: Benjamins.

Dascal, Marcelo (1987). Language and reasoning: Sorting out sociopragmatic and psychopragmatic factors. In B. W. Hamill, R. C. Jernigan & J. C. Bourdreaux (Eds.), The role of language in problem solving II (pp. 183-197). Amsterdam: North Holland.

Dascal, Marcelo (1992a). On the pragmatic structure of conversation. In H. Parret and J. Verschueren (Eds.), (On) Searle on Conversation (pp. 35-56). Amsterdam: Benjamins.

Dascal, Marcelo (1992b). Why does language matter to artificial intelligence? Minds and Machines, 2, 145-174.

Dascal, Marcelo (1995). The dispute on the primacy of thinking or speaking. In Dascal et al. (Eds.), pp. 1024-1041.

Dascal, Marcelo (1997a). The language of thought and the games of language. In M. Astroh, D. Gerhardus, and G. Heinzman (Eds.), Dialogisches Handeln: Ein Festschrift für Kuno Lorenz (pp. 183-19). Heidelberg: Spektrum Akademischer Verlag.

Dascal, Marcelo (1997b). Critique without critics? Science in Context, 10(1), 39-62.

Dascal, Marcelo (1998a). Types of polemics and types of polemical moves. In Světla Čmejrková, Jana Hoffmannová, Olga Müllerová, & Jindra Světlá (Eds.), Dialogue analysis VI, vol. 1 (pp. 15-33). Tübingen: Max Niemeyer.

Dascal, Marcelo (1998b). Language in the mind's house. Leibniz Society Review, 8, 1-24.

Dascal, Marcelo (1998c). Desafio de Hobbes. In Leonel Ribeiro dos Santos, Pedro M. S. Alves & Adelino Cardoso (Eds.), Descartes, Leibniz e a Modernidade (pp. 369-398). Lisboa: Colibri.

Dascal, Marcelo (2000). Controversies and epistemology. In Tian Yu Cao (Ed.), Philosophy of science (= Vol. 10 of Proceedings of the Twentieth World Congress of Philosophy) (pp. 159-192). Philadelphia: Philosophers Index Inc.

Dascal, Marcelo (In press). Leibniz y las tecnologías cognitivas. In Agustín Andreu, Javier Echeverría, & Concha Roldán (Eds.), Ciencia, tecnología y el bien común: La actualidad de Leibniz. Valencia: Universidad Politécnica de Valencia.

Dascal, Marcelo (Forthcoming). Adam Smith's theory of language. In K. Haakonssen (Ed.), The Cambridge Companion to Adam Smith. Cambridge: Cambridge University Press.

Dascal, Marcelo, Dietrich Gerhardus, Kuno Lorenz, & Georg Meggle (Eds.) (1992-5). Philosophy of Language – A handbook of contemporary research, vols. 1 & 2. Berlin & New York: Walter de Gruyter.

Dautenhahn, Kerstin (2000). Living with intelligent agents: A cognitive technology view. In K. Dautenhahn (Ed.), Human cognition and social agent technology (pp. 415-426). Amsterdam: Benjamins.

De Gelder, Bea (1981). Attributing mental states: A second look at mother-child interaction. In H. Parret, M. Sbisà & J. Verschueren (Eds.), Possibilities and limitations of pragmatics (pp. 237-250). Amsterdam: Benjamins.

Deleuze, Gilles & Félix Guattari (1976). Rhizome. Paris: Minuit.

Deleuze, Gilles & Félix Guattari (1980). Mille plateaux. Paris: Minuit.

Dertouzos, Michael L. (1999). The future of computing. Scientific American, 281(2), 36-39.

Dresner, Eli & Marcelo Dascal (2001). Semantics, pragmatics, and the digital information age. Studies in Communication Sciences, 1(2), 1-22.

Dreyfus, Hubert (1971). What computers can’t do. New York: Harper & Row.

Dreyfus, Hubert (1992). What computers still can’t do. Cambridge, MA: The MIT Press.

Dror, Itiel E. & Marcelo Dascal (1997). Can Wittgenstein help free the mind from rules? The philosophical foundations of connectionism. In David M. Johnson & Christina E. Erneling (Eds.), The future of the cognitive revolution (pp. 217-226). New York: Oxford University Press.

Evans, Jonathan St. B. T. (1982). The psychology of deductive reasoning. London: Routledge & Kegan Paul.

Foner, Leonard (2000). Are we having fun yet? Using social agents in social domains. In K. Dautenhahn (Ed.), Human cognition and social agent technology (pp. 323-348). Amsterdam: Benjamins.

Geertz, Clifford (1973). The interpretation of cultures. New York: Basic Books.

Gorayska, Barbara & Jacob Mey (Eds.) (1996). Cognitive technology: In search of a humane interface. Amsterdam: Elsevier.

Gorayska, Barbara & Jonathon Marsh (1996). Epistemic technology and relevance analysis: Rethinking cognitive technology. In Gorayska & Mey (Eds.) (pp. 27-39).

Gross, Alan G. (1990). The rhetoric of science. Cambridge, MA: Harvard University Press.

Gross, Alan G., Joseph E. Harmon & Michael Reidy (2002). Communicating science: The scientific article from the 17th century to the present. Oxford: Oxford University Press.

Gullvåg, Ingemund & Arne Naess (1995). Vagueness and ambiguity. In Dascal et al. (Eds.), pp. 1407-1417.

Haberland, Hartmut (1996). “And we shall be as machines” – or should machines be as us? On the modeling of matter and mind. In Gorayska & Mey (Eds.) (pp. 89-98).

Hayashi, Takuo (1999). A metacognitive model of conversational planning. Pragmatics & Cognition, 7(1), 93-145.

Hutchins, Edwin (1999). Cognitive artifacts. In Robert A. Wilson & Frank C. Keil (Eds.), The MIT encyclopedia of the cognitive sciences (pp. 126-128). Cambridge, MA: The MIT Press.

Kreutel, Jörn & Colin Matheson (2000). Information states, obligations and intentional structure in dialogue modelling. In Proceedings, pp. 80-86.

Metcalfe, Janet & Arthur P. Shimamura (Eds.) (1994). Metacognition: Knowing about knowing. Cambridge, MA: The MIT Press.

Mishori, Daniel & Marcelo Dascal (2000). Language change as a rhetorical strategy. In Harish Narang (Ed.), Semiotics of language, literature and cinema (pp. 51-67). New Delhi: Books Plus.

Moss, Ann (1996). Printed commonplace-books and the structuring of renaissance thought. Oxford: Oxford University Press.

Naess, Arne (1966). Communication and argument: Elements of applied semantics. Oslo: Universitetsforlaget.

Netz, Reviel (1999). Linguistic formulae as cognitive tools. Pragmatics & Cognition, 7, 147-176.

Newton, Natika (1995). Metacognition and consciousness. Pragmatics & Cognition, 3(2), 285-297.

Pacey, Arnold (2001). Meaning in technology. Cambridge, MA: The MIT Press.

Peeters, Bert (Ed.) (2000). The lexicon-encyclopedia interface. Amsterdam: Elsevier.

Proceedings (2000). Proceedings of the 3rd International Workshop on Human-Computer Conversation (Bellagio, July 2000).

Richardson, R. (1982). Turing tests for intelligence: Ned Block’s defense of psychologism. Philosophical Studies, 41, 421-426.

Sillince, John A. A. (1996). Would electronic argumentation improve your ability to express yourself? In Gorayska & Mey (Eds.) (pp. 375-387).

Smith, Adam (1761). Considerations concerning the first formation of languages and the different genius of original and compounded languages (= FoL). In The Early Works of Adam Smith, ed. J. Ralph Lindgren (pp. 225-251). New York: Augustus M. Kelley Publishers, 1967.

Smith, Adam (1983). Lectures on rhetoric and belles lettres (= LRBL), ed. J. C. Bryce & A. S. Skinner. Oxford: Clarendon Press.

Tsur, Reuven & Motti Benari (2001). ‘Composition of place’, experiential set, and the meditative poem. Pragmatics & Cognition, 9(2), 201-234.

Turing, Alan M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

Watson, John B. & William McDougall (1928). Battle of behaviorism: An exposition and an exposure. London: K. Paul, Trench, Trubner & Co.

Webb, Nick (2000). Rule-based dialogue management systems. In Proceedings, pp. 164-169.

Weizenbaum, Joseph (1966). ELIZA – A computer program for the study of natural language communication between man and machine. CACM, 9, 36-45.

Winograd, Terry & Fernando Flores (1986). Understanding computers and cognition: A new foundation for design. Reading, MA: Addison-Wesley.

Woodworth, R. S. & S. B. Sells (1935). An atmosphere effect in syllogistic reasoning. Journal of Experimental Psychology, 18, 451-460.

Zadeh, Lotfi A. (1975). Fuzzy logic and approximate reasoning. Synthese, 30, 407-428.

Zamir, Tzachi (2002). An epistemological basis for linking philosophy and literature. Metaphilosophy, 33(3), 321-336.

Zamir, Tzachi (Forthcoming). Doing nothing. Mosaic: A journal for the interdisciplinary study of literature.

Zue, Victor (1999). Talking to your computer. Scientific American, 281(2), 40-41.