psychology

  • 1 Psychology

       We come therefore now to that knowledge whereunto the ancient oracle directeth us, which is the knowledge of ourselves; which deserveth the more accurate handling, by how much it toucheth us more nearly. This knowledge, as it is the end and term of natural philosophy in the intention of man, so notwithstanding it is but a portion of natural philosophy in the continent of nature.... [W]e proceed to human philosophy or Humanity, which hath two parts: the one considereth man segregate, or distributively; the other congregate, or in society. So as Human philosophy is either Simple and Particular, or Conjugate and Civil. Humanity Particular consisteth of the same parts whereof man consisteth; that is, of knowledges which respect the Body, and of knowledges that respect the Mind... how the one discloseth the other and how the one worketh upon the other... [:] the one is honored with the inquiry of Aristotle, and the other of Hippocrates. (Bacon, 1878, pp. 236-237)
       The claims of Psychology to rank as a distinct science are... not smaller but greater than those of any other science. If its phenomena are contemplated objectively, merely as nervo-muscular adjustments by which the higher organisms from moment to moment adapt their actions to environing co-existences and sequences, its degree of specialty, even then, entitles it to a separate place. The moment the element of feeling, or consciousness, is used to interpret nervo-muscular adjustments as thus exhibited in the living beings around, objective Psychology acquires an additional, and quite exceptional, distinction. (Spencer, 1896, p. 141)
       Kant once declared that psychology was incapable of ever raising itself to the rank of an exact natural science. The reasons that he gives... have often been repeated in later times. In the first place, Kant says, psychology cannot become an exact science because mathematics is inapplicable to the phenomena of the internal sense; the pure internal perception, in which mental phenomena must be constructed - time - has but one dimension. In the second place, however, it cannot even become an experimental science, because in it the manifold of internal observation cannot be arbitrarily varied - still less, another thinking subject be submitted to one's experiments, conformably to the end in view; moreover, the very fact of observation means alteration of the observed object. (Wundt, 1904, p. 6)
       It is [Gustav] Fechner's service to have found and followed the true way; to have shown us how a "mathematical psychology" may, within certain limits, be realized in practice.... He was the first to show how Herbart's idea of an "exact psychology" might be turned to practical account. (Wundt, 1904, pp. 6-7)
       "Mind," "intellect," "reason," "understanding," etc. are concepts... that existed before the advent of any scientific psychology. The fact that the naive consciousness always and everywhere points to internal experience as a special source of knowledge, may, therefore, be accepted for the moment as sufficient testimony to the rights of psychology as science.... "Mind," will accordingly be the subject, to which we attribute all the separate facts of internal observation as predicates. The subject itself is determined p. 17) wholly and exclusively by its predicates. (Wundt, 1904,
       The study of animal psychology may be approached from two different points of view. We may set out from the notion of a kind of comparative physiology of mind, a universal history of the development of mental life in the organic world. Or we may make human psychology the principal object of investigation. Then, the expressions of mental life in animals will be taken into account only so far as they throw light upon the evolution of consciousness in man.... Human psychology... may confine itself altogether to man, and generally has done so to far too great an extent. There are plenty of psychological text-books from which you would hardly gather that there was any other conscious life than the human. (Wundt, 1907, pp. 340-341)
       The Behaviorist began his own formulation of the problem of psychology by sweeping aside all medieval conceptions. He dropped from his scientific vocabulary all subjective terms such as sensation, perception, image, desire, purpose, and even thinking and emotion as they were subjectively defined. (Watson, 1930, pp. 5-6)
       According to the medieval classification of the sciences, psychology is merely a chapter of special physics, although the most important chapter; for man is a microcosm; he is the central figure of the universe. (deWulf, 1956, p. 125)
       At the beginning of this century the prevailing thesis in psychology was Associationism.... Behavior proceeded by the stream of associations: each association produced its successors, and acquired new attachments with the sensations arriving from the environment.
       In the first decade of the century a reaction developed to this doctrine through the work of the Würzburg school. Rejecting the notion of a completely self-determining stream of associations, it introduced the task (Aufgabe) as a necessary factor in describing the process of thinking. The task gave direction to thought. A noteworthy innovation of the Würzburg school was the use of systematic introspection to shed light on the thinking process and the contents of consciousness. The result was a blend of mechanics and phenomenalism, which gave rise in turn to two divergent antitheses, Behaviorism and the Gestalt movement. The behavioristic reaction insisted that introspection was a highly unstable, subjective procedure.... Behaviorism reformulated the task of psychology as one of explaining the response of organisms as a function of the stimuli impinging upon them and measuring both objectively. However, Behaviorism accepted, and indeed reinforced, the mechanistic assumption that the connections between stimulus and response were formed and maintained as simple, determinate functions of the environment.
       The Gestalt reaction took an opposite turn. It rejected the mechanistic nature of the associationist doctrine but maintained the value of phenomenal observation. In many ways it continued the Würzburg school's insistence that thinking was more than association - thinking has direction given to it by the task or by the set of the subject. Gestalt psychology elaborated this doctrine in genuinely new ways in terms of holistic principles of organization.
       Today psychology lives in a state of relatively stable tension between the poles of Behaviorism and Gestalt psychology.... (Newell & Simon, 1963, pp. 279-280)
       As I examine the fate of our oppositions, looking at those already in existence as guide to how they fare and shape the course of science, it seems to me that clarity is never achieved. Matters simply become muddier and muddier as we go down through time. Thus, far from providing the rungs of a ladder by which psychology gradually climbs to clarity, this form of conceptual structure leads rather to an ever increasing pile of issues, which we weary of or become diverted from, but never really settle. (Newell, 1973b, pp. 288-289)
       The subject matter of psychology is as old as reflection. Its broad practical aims are as dated as human societies. Human beings, in any period, have not been indifferent to the validity of their knowledge, unconcerned with the causes of their behavior or that of their prey and predators. Our distant ancestors, no less than we, wrestled with the problems of social organization, child rearing, competition, authority, individual differences, personal safety. Solving these problems required insights - no matter how untutored - into the psychological dimensions of life. Thus, if we are to follow the convention of treating psychology as a young discipline, we must have in mind something other than its subject matter. We must mean that it is young in the sense that physics was young at the time of Archimedes or in the sense that geometry was "founded" by Euclid and "fathered" by Thales. Sailing vessels were launched long before Archimedes discovered the laws of bouyancy [sic], and pillars of identical circumference were constructed before anyone knew that C = πD. We do not consider the ship builders and stone cutters of antiquity physicists and geometers. Nor were the ancient cave dwellers psychologists merely because they rewarded the good conduct of their children. The archives of folk wisdom contain a remarkable collection of achievements, but craft - no matter how perfected - is not science, nor is a litany of successful accidents a discipline. If psychology is young, it is young as a scientific discipline, but it is far from clear that psychology has attained this status. (Robinson, 1986, p. 12)

    Historical dictionary of quotations in cognitive science > Psychology

  • 2 Cognitive Psychology

       The basic reason for studying cognitive processes has become as clear as the reason for studying anything else: because they are there. Our knowledge of the world must be somehow developed from stimulus input.... Cognitive processes surely exist, so it can hardly be unscientific to study them. (Neisser, 1967, p. 5)
       The task of the cognitive psychologist is a highly inferential one. The cognitive psychologist must proceed from observations of the behavior of humans performing intellectual tasks to conclusions about the abstract mechanisms underlying the behavior. Developing a theory in cognitive psychology is much like developing a model for the working of the engine of a strange new vehicle by driving the vehicle, being unable to open it up to inspect the engine itself....
       It is well understood from the automata theory... that many different mechanisms can generate the same external behavior. (Anderson, 1980, pp. 12, 17)
       [Cognitive psychology does not] deal with whole people but with a very special and bizarre-almost Frankensteinian-preparation, which consists of a brain attached to two eyes, two ears, and two index fingers. This preparation is only to be found inside small, gloomy cubicles, outside which red lights burn to warn ordinary people away.... It does not feel hungry or tired or inquisitive; it does not think extraneous thoughts or try to understand what is going on. It is, in short, a computer, made in the image of the larger electronic organism that sends it stimuli and records its responses. (Claxton, 1980, p. 13)
       Cognitive Psychology Has Not Succeeded in Making a Significant Contribution to the Understanding of the Human Mind
       Cognitive psychology is not getting anywhere; that in spite of our sophisticated methodology, we have not succeeded in making a substantial contribution toward the understanding of the human mind.... A short time ago, the information processing approach to cognition was just beginning. Hopes were high that the analysis of information processing into a series of discrete stages would offer profound insights into human cognition. But in only a few short years the vigor of this approach was spent. It was only natural that hopes that had been so high should sink low. (Glass, Holyoak & Santa, 1979, p. ix)
       Cognitive psychology attempts to understand the nature of human intelligence and how people think. (Anderson, 1980, p. 3)
       The past few years have witnessed a noticeable increase in interest in an investigation of the cognitive processes.... It has resulted from a recognition of the complex processes that mediate between the classical "stimuli" and "responses" out of which stimulus-response learning theories hoped to fashion a psychology that would by-pass anything smacking of the "mental." The impeccable peripheralism of such theories could not last. One might do well to have a closer look at these intervening "cognitive maps." (Bruner, Goodnow & Austin, 1956, p. vii)

    Historical dictionary of quotations in cognitive science > Cognitive Psychology

  • 3 occupational psychology

    HR
    the branch of psychology concerned with the assessment of the well-being of employees within their work environment in order to improve performance and efficiency, job satisfaction, and occupational health. The main areas of occupational psychology include: human-machine interaction; design of the working environment; health and safety; personnel recruitment and assessment; performance appraisal and career development; counseling and personal development; training; motivation; industrial relations; and organizational change and development.

    The ultimate business dictionary > occupational psychology

  • 4 industrial psychology

    The ultimate business dictionary > industrial psychology

  • 5 Educational Psychology

       No aptitude-treatment interactions [ATIs] are so well confirmed that they can be used directly as guides to instruction.... Aptitude-treatment interactions exist. To assert the opposite is to assert that whichever educational procedure is best for Johnny is best for everyone else in Johnny's school. Even the most commonplace adaptation of instruction, such as choosing different books for more and less capable readers of a given age, rests on an assumption of ATI that it seems foolish to challenge. It becomes clear that the problem of characterizing, understanding, and using... interactions poses the major challenge to educational and psychological science today. (Cronbach & Snow, 1977, pp. vii, 492)

    Historical dictionary of quotations in cognitive science > Educational Psychology

  • 6 Gestalt Psychology

       The Gestaltists Demonstrate How Symbolic Reasoning Follows Their Principles of Perception
       The Gestaltists look for simple and fundamental principles about how perception is organized, and then attempt to show how symbolic reasoning can be seen as following the same principles, while we construct a complex theory of how knowledge is applied to solve intellectual problems and then attempt to show how the symbolic description that is what one "sees" is constructed according to similar processes. (Minsky & Papert, 1973, p. 34)

    Historical dictionary of quotations in cognitive science > Gestalt Psychology

  • 7 Bibliography

     ■ Aitchison, J. (1987). Noam Chomsky: Consensus and controversy. New York: Falmer Press.
     ■ Anderson, J. R. (1980). Cognitive psychology and its implications. San Francisco: W. H. Freeman.
     ■ Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
     ■ Anderson, J. R. (1995). Cognitive psychology and its implications (4th ed.). New York: W. H. Freeman.
     ■ Archilochus (1971). In M. L. West (Ed.), Iambi et elegi graeci (Vol. 1). Oxford: Oxford University Press.
     ■ Armstrong, D. M. (1990). The causal theory of the mind. In W. G. Lycan (Ed.), Mind and cognition: A reader (pp. 37-47). Cambridge, MA: Basil Blackwell. (Originally published in 1981 in The nature of mind and other essays, Ithaca, NY: Cornell University Press).
     ■ Atkins, P. W. (1992). Creation revisited. Oxford: W. H. Freeman & Company.
     ■ Austin, J. L. (1962). How to do things with words. Cambridge, MA: Harvard University Press.
     ■ Bacon, F. (1878). Of the proficience and advancement of learning divine and human. In The works of Francis Bacon (Vol. 1). Cambridge, MA: Hurd & Houghton.
     ■ Bacon, R. (1928). Opus majus (Vol. 2). R. B. Burke (Trans.). Philadelphia, PA: University of Pennsylvania Press.
     ■ Bar-Hillel, Y. (1960). The present status of automatic translation of languages. In F. L. Alt (Ed.), Advances in computers (Vol. 1). New York: Academic Press.
     ■ Barr, A., & E. A. Feigenbaum (Eds.) (1981). The handbook of artificial intelligence (Vol. 1). Reading, MA: Addison-Wesley.
     ■ Barr, A., & E. A. Feigenbaum (Eds.) (1982). The handbook of artificial intelligence (Vol. 2). Los Altos, CA: William Kaufman.
     ■ Barron, F. X. (1963). The needs for order and for disorder as motives in creative activity. In C. W. Taylor & F. X. Barron (Eds.), Scientific creativity: Its recognition and development (pp. 153-160). New York: Wiley.
     ■ Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge: Cambridge University Press.
     ■ Bartley, S. H. (1969). Principles of perception. London: Harper & Row.
     ■ Barzun, J. (1959). The house of intellect. New York: Harper & Row.
     ■ Beach, F. A., D. O. Hebb, C. T. Morgan & H. W. Nissen (Eds.) (1960). The neuropsychology of Lashley. New York: McGraw-Hill.
     ■ Berkeley, G. (1996). Principles of human knowledge: Three Dialogues. Oxford: Oxford University Press. (Originally published in 1710.)
     ■ Berlin, I. (1953). The hedgehog and the fox: An essay on Tolstoy's view of history. NY: Simon & Schuster.
     ■ Bierwisch, J. (1970). Semantics. In J. Lyons (Ed.), New horizons in linguistics. Baltimore: Penguin Books.
     ■ Black, H. C. (1951). Black's law dictionary. St. Paul, MN: West Publishing.
     ■ Bobrow, D. G., & D. A. Norman (1975). Some principles of memory schemata. In D. G. Bobrow & A. Collins (Eds.), Representation and understanding: Studies in Cognitive Science (pp. 131-149). New York: Academic Press.
     ■ Boden, M. A. (1977). Artificial intelligence and natural man. New York: Basic Books.
     ■ Boden, M. A. (1981). Minds and mechanisms. Ithaca, NY: Cornell University Press.
     ■ Boden, M. A. (1990a). The creative mind: Myths and mechanisms. London: Cardinal.
     ■ Boden, M. A. (1990b). The philosophy of artificial intelligence. Oxford: Oxford University Press.
     ■ Boden, M. A. (1994). Precis of The creative mind: Myths and mechanisms. Behavioral and brain sciences 17, 519-570.
     ■ Boden, M. (1996). Creativity. In M. Boden (Ed.), Artificial Intelligence (2nd ed.). San Diego: Academic Press.
     ■ Bolter, J. D. (1984). Turing's man: Western culture in the computer age. Chapel Hill, NC: University of North Carolina Press.
     ■ Bolton, N. (1972). The psychology of thinking. London: Methuen.
     ■ Bourne, L. E. (1973). Some forms of cognition: A critical analysis of several papers. In R. Solso (Ed.), Contemporary issues in cognitive psychology (pp. 313-324). Loyola Symposium on Cognitive Psychology (Chicago 1972). Washington, DC: Winston.
     ■ Bransford, J. D., N. S. McCarrell, J. J. Franks & K. E. Nitsch (1977). Toward unexplaining memory. In R. Shaw & J. D. Bransford (Eds.), Perceiving, acting, and knowing (pp. 431-466). Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Breger, L. (1981). Freud's unfinished journey. London: Routledge & Kegan Paul.
     ■ Brehmer, B. (1986). In one word: Not from experience. In H. R. Arkes & K. Hammond (Eds.), Judgment and decision making: An interdisciplinary reader (pp. 705-719). Cambridge: Cambridge University Press.
     ■ Bresnan, J. (1978). A realistic transformational grammar. In M. Halle, J. Bresnan & G. A. Miller (Eds.), Linguistic theory and psychological reality (pp. 1-59). Cambridge, MA: MIT Press.
     ■ Brislin, R. W., W. J. Lonner & R. M. Thorndike (Eds.) (1973). Cross-cultural research methods. New York: Wiley.
     ■ Bronowski, J. (1977). A sense of the future: Essays in natural philosophy. P. E. Ariotti with R. Bronowski (Eds.). Cambridge, MA: MIT Press.
     ■ Bronowski, J. (1978). The origins of knowledge and imagination. New Haven, CT: Yale University Press.
     ■ Brown, R. O. (1973). A first language: The early stages. Cambridge, MA: Harvard University Press.
     ■ Brown, T. (1970). Lectures on the philosophy of the human mind. In R. Brown (Ed.), Between Hume and Mill: An anthology of British philosophy, 1749-1843 (pp. 330-387). New York: Random House/Modern Library.
     ■ Bruner, J. S., J. Goodnow & G. Austin (1956). A study of thinking. New York: Wiley.
     ■ Campbell, J. (1982). Grammatical man: Information, entropy, language, and life. New York: Simon & Schuster.
     ■ Campbell, J. (1989). The improbable machine. New York: Simon & Schuster.
     ■ Carlyle, T. (1966). On heroes, hero- worship and the heroic in history. Lincoln: University of Nebraska Press. (Originally published in 1841.)
     ■ Carnap, R. (1959). The elimination of metaphysics through logical analysis of language [Ueberwindung der Metaphysik durch logische Analyse der Sprache]. In A. J. Ayer (Ed.), Logical positivism (pp. 60-81). A. Pap (Trans.). New York: Free Press. (Originally published in 1932.)
     ■ Cassirer, E. (1946). Language and myth. New York: Harper and Brothers. Reprinted. New York: Dover Publications, 1953.
     ■ Cattell, R. B., & H. J. Butcher (1970). Creativity and personality. In P. E. Vernon (Ed.), Creativity. Harmondsworth, England: Penguin Books.
     ■ Caudill, M., & C. Butler (1990). Naturally intelligent systems. Cambridge, MA: MIT Press/Bradford Books.
     ■ Chandrasekaran, B. (1990). What kind of information processing is intelligence? A perspective on AI paradigms and a proposal. In D. Partridge & R. Wilks (Eds.), The foundations of artificial intelligence: A sourcebook (pp. 14-46). Cambridge: Cambridge University Press.
     ■ Charniak, E., & McDermott, D. (1985). Introduction to artificial intelligence. Reading, MA: Addison-Wesley.
     ■ Chase, W. G., & H. A. Simon (1988). The mind's eye in chess. In A. Collins & E. E. Smith (Eds.), Readings in cognitive science: A perspective from psychology and artificial intelligence (pp. 461-493). San Mateo, CA: Kaufmann.
     ■ Cheney, D. L., & R. M. Seyfarth (1990). How monkeys see the world: Inside the mind of another species. Chicago: University of Chicago Press.
     ■ Chi, M.T.H., R. Glaser & E. Rees (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (pp. 7-73). Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Chomsky, N. (1957). Syntactic structures. The Hague: Mouton. Janua Linguarum.
     ■ Chomsky, N. (1964). A transformational approach to syntax. In J. A. Fodor & J. J. Katz (Eds.), The structure of language: Readings in the philosophy of language (pp. 211-245). Englewood Cliffs, NJ: Prentice-Hall.
     ■ Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
     ■ Chomsky, N. (1972). Language and mind (enlarged ed.). New York: Harcourt Brace Jovanovich.
     ■ Chomsky, N. (1979). Language and responsibility. New York: Pantheon.
     ■ Chomsky, N. (1986). Knowledge of language: Its nature, origin and use. New York: Praeger Special Studies.
     ■ Churchland, P. (1979). Scientific realism and the plasticity of mind. New York: Cambridge University Press.
     ■ Churchland, P. M. (1989). A neurocomputational perspective: The nature of mind and the structure of science. Cambridge, MA: MIT Press.
     ■ Churchland, P. S. (1986). Neurophilosophy. Cambridge, MA: MIT Press/Bradford Books.
     ■ Clark, A. (1996). Philosophical Foundations. In M. A. Boden (Ed.), Artificial intelligence (2nd ed.). San Diego: Academic Press.
     ■ Clark, H. H., & T. B. Carlson (1981). Context for comprehension. In J. Long & A. Baddeley (Eds.), Attention and performance (Vol. 9, pp. 313-330). Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Clarke, A. C. (1984). Profiles of the future: An inquiry into the limits of the possible. New York: Holt, Rinehart & Winston.
     ■ Claxton, G. (1980). Cognitive psychology: A suitable case for what sort of treatment? In G. Claxton (Ed.), Cognitive psychology: New directions (pp. 1-25). London: Routledge & Kegan Paul.
     ■ Code, M. (1985). Order and organism. Albany, NY: State University of New York Press.
     ■ Collingwood, R. G. (1972). The idea of history. New York: Oxford University Press.
     ■ Coopersmith, S. (1967). The antecedents of self-esteem. San Francisco: W. H. Freeman.
     ■ Copland, A. (1952). Music and imagination. London: Oxford University Press.
     ■ Coren, S. (1994). The intelligence of dogs. New York: Bantam Books.
     ■ Cottingham, J. (Ed.) (1996). Western philosophy: An anthology. Oxford: Blackwell Publishers.
     ■ Cox, C. (1926). The early mental traits of three hundred geniuses. Stanford, CA: Stanford University Press.
     ■ Craik, K.J.W. (1943). The nature of explanation. Cambridge: Cambridge University Press.
     ■ Cronbach, L. J. (1990). Essentials of psychological testing (5th ed.). New York: HarperCollins.
     ■ Cronbach, L. J., & R. E. Snow (1977). Aptitudes and instructional methods. New York: Irvington. Paperback edition, 1981.
     ■ Csikszentmihalyi, M. (1993). The evolving self. New York: Harper Perennial.
     ■ Culler, J. (1976). Ferdinand de Saussure. New York: Penguin Books.
     ■ Curtius, E. R. (1973). European literature and the Latin Middle Ages. W. R. Trask (Trans.). Princeton, NJ: Princeton University Press.
     ■ D'Alembert, J.L.R. (1963). Preliminary discourse to the encyclopedia of Diderot. R. N. Schwab (Trans.). Indianapolis: Bobbs-Merrill.
     ■ Damasio, A. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Avon.
     ■ Dampier, W. C. (1966). A history of modern science. Cambridge: Cambridge University Press.
     ■ Darwin, C. (1911). The life and letters of Charles Darwin (Vol. 1). Francis Darwin (Ed.). New York: Appleton.
     ■ Davidson, D. (1970). Mental events. In L. Foster & J. W. Swanson (Eds.), Experience and theory (pp. 79-101). Amherst: University of Massachusetts Press.
     ■ Davies, P. (1995). About time: Einstein's unfinished revolution. New York: Simon & Schuster/Touchstone.
     ■ Davis, R., & J. J. King (1977). An overview of production systems. In E. Elcock & D. Michie (Eds.), Machine intelligence 8. Chichester, England: Ellis Horwood.
     ■ Davis, R., & D. B. Lenat (1982). Knowledge-based systems in artificial intelligence. New York: McGraw-Hill.
     ■ Dawkins, R. (1982). The extended phenotype: The gene as the unit of selection. Oxford: W. H. Freeman.
     ■ deKleer, J., & J. S. Brown (1983). Assumptions and ambiguities in mechanistic mental models. In D. Gentner & A. L. Stevens (Eds.), Mental models (pp. 155-190). Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Dennett, D. C. (1978a). Brainstorms: Philosophical essays on mind and psychology. Montgomery, VT: Bradford Books.
     ■ Dennett, D. C. (1978b). Toward a cognitive theory of consciousness. In D. C. Dennett, Brainstorms: Philosophical Essays on Mind and Psychology. Montgomery, VT: Bradford Books.
     ■ Dennett, D. C. (1995). Darwin's dangerous idea: Evolution and the meanings of life. New York: Simon & Schuster/Touchstone.
     ■ Descartes, R. (1897-1910). Traite de l'homme. In Oeuvres de Descartes (Vol. 11, pp. 119-215). Paris: Charles Adam & Paul Tannery. (Originally published in 1634.)
     ■ Descartes, R. (1950). Discourse on method. L. J. Lafleur (Trans.). New York: Liberal Arts Press. (Originally published in 1637.)
     ■ Descartes, R. (1951). Meditation on first philosophy. L. J. Lafleur (Trans.). New York: Liberal Arts Press. (Originally published in 1641.)
     ■ Descartes, R. (1955). The philosophical works of Descartes. E. S. Haldane and G.R.T. Ross (Trans.). New York: Dover. (Originally published in 1911 by Cambridge University Press.)
     ■ Descartes, R. (1967). Discourse on method (Pt. V). In E. S. Haldane and G.R.T. Ross (Eds.), The philosophical works of Descartes (Vol. 1, pp. 106-118). Cambridge: Cambridge University Press. (Originally published in 1637.)
     ■ Descartes, R. (1970a). Discourse on method. In E. S. Haldane & G.R.T. Ross (Eds.), The philosophical works of Descartes (Vol. 1, pp. 181-200). Cambridge: Cambridge University Press. (Originally published in 1637.)
     ■ Descartes, R. (1970b). Principles of philosophy. In E. S. Haldane & G.R.T. Ross (Eds.), The philosophical works of Descartes (Vol. 1, pp. 178-291). Cambridge: Cambridge University Press. (Originally published in 1644.)
     ■ Descartes, R. (1984). Meditations on first philosophy. In J. Cottingham, R. Stoothoff & D. Murduch (Trans.), The philosophical works of Descartes (Vol. 2). Cambridge: Cambridge University Press. (Originally published in 1641.)
     ■ Descartes, R. (1986). Meditations on first philosophy. J. Cottingham (Trans.). Cambridge: Cambridge University Press. (Originally published in 1641 as Meditationes de prima philosophia.)
     ■ deWulf, M. (1956). An introduction to scholastic philosophy. Mineola, NY: Dover Books.
     ■ Dixon, N. F. (1981). Preconscious processing. London: Wiley.
     ■ Doyle, A. C. (1986). The Boscombe Valley mystery. In Sherlock Holmes: The complete novels and stories (Vol. 1). New York: Bantam.
     ■ Dreyfus, H., & S. Dreyfus (1986). Mind over machine. New York: Free Press.
     ■ Dreyfus, H. L. (1972). What computers can't do: The limits of artificial intelligence (revised ed.). New York: Harper & Row.
     ■ Dreyfus, H. L., & S. E. Dreyfus (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. New York: Free Press.
     ■ Edelman, G. M. (1992). Bright air, brilliant fire: On the matter of the mind. New York: Basic Books.
     ■ Ehrenzweig, A. (1967). The hidden order of art. London: Weidenfeld & Nicolson.
     ■ Einstein, A., & L. Infeld (1938). The evolution of physics. New York: Simon & Schuster.
     ■ Eisenstein, S. (1947). Film sense. New York: Harcourt, Brace & World.
     ■ Everdell, W. R. (1997). The first moderns. Chicago: University of Chicago Press.
     ■ Eysenck, M. W. (1977). Human memory: Theory, research and individual difference. Oxford: Pergamon.
     ■ Eysenck, M. W. (1982). Attention and arousal: Cognition and performance. Berlin: Springer.
     ■ Eysenck, M. W. (1984). A handbook of cognitive psychology. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Fancher, R. E. (1979). Pioneers of psychology. New York: W. W. Norton.
     ■ Farrell, B. A. (1981). The standing of psychoanalysis. New York: Oxford University Press.
     ■ Feldman, D. H. (1980). Beyond universals in cognitive development. Norwood, NJ: Ablex.
     ■ Fetzer, J. H. (1996). Philosophy and cognitive science (2nd ed.). New York: Paragon House.
     ■ Finke, R. A. (1990). Creative imagery: Discoveries and inventions in visualization. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Flanagan, O. (1991). The science of the mind. Cambridge, MA: MIT Press/Bradford Books.
     ■ Fodor, J. (1983). The modularity of mind. Cambridge, MA: MIT Press/Bradford Books.
     ■ Frege, G. (1972). Conceptual notation. T. W. Bynum (Trans.). Oxford: Clarendon Press. (Originally published in 1879.)
     ■ Frege, G. (1979). Logic. In H. Hermes, F. Kambartel & F. Kaulbach (Eds.), Gottlob Frege: Posthumous writings. Chicago: University of Chicago Press. (Originally published in 1879-1891.)
     ■ Freud, S. (1959). Creative writers and day-dreaming. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 9, pp. 143-153). London: Hogarth Press.
     ■ Freud, S. (1966). Project for a scientific psychology. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 1, pp. 295-398). London: Hogarth Press. (Originally published in 1950 as Aus den Anfängen der Psychoanalyse, in London by Imago Publishing.)
     ■ Freud, S. (1976). Lecture 18-Fixation to traumas-the unconscious. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 16, p. 285). London: Hogarth Press.
     ■ Galileo, G. (1990). Il saggiatore [The assayer]. In S. Drake (Ed.), Discoveries and opinions of Galileo. New York: Anchor Books. (Originally published in 1623.)
     ■ Gassendi, P. (1970). Letter to Descartes. In "Objections and replies." In E. S. Haldane & G.R.T. Ross (Eds.), The philosophical works of Descartes (Vol. 2, pp. 179-240). Cambridge: Cambridge University Press. (Originally published in 1641.)
     ■ Gazzaniga, M. S. (1988). Mind matters: How mind and brain interact to create our conscious lives. Boston: Houghton Mifflin in association with MIT Press/Bradford Books.
     ■ Genesereth, M. R., & N. J. Nilsson (1987). Logical foundations of artificial intelligence. Palo Alto, CA: Morgan Kaufmann.
     ■ Ghiselin, B. (1952). The creative process. New York: Mentor.
     ■ Ghiselin, B. (1985). The creative process. Berkeley, CA: University of California Press. (Originally published in 1952.)
     ■ Gilhooly, K. J. (1996). Thinking: Directed, undirected and creative (3rd ed.). London: Academic Press.
     ■ Glass, A. L., K. J. Holyoak & J. L. Santa (1979). Cognition. Reading, MA: Addison-Wesley.
     ■ Goody, J. (1977). The domestication of the savage mind. Cambridge: Cambridge University Press.
     ■ Gruber, H. E. (1980). Darwin on man: A psychological study of scientific creativity (2nd ed.). Chicago: University of Chicago Press.
     ■ Gruber, H. E., & S. Davis (1988). Inching our way up Mount Olympus: The evolving systems approach to creative thinking. In R. J. Sternberg (Ed.), The nature of creativity: Contemporary psychological perspectives. Cambridge: Cambridge University Press.
     ■ Guthrie, E. R. (1972). The psychology of learning. New York: Harper. (Originally published in 1935.)
     ■ Habermas, J. (1972). Knowledge and human interests. Boston: Beacon Press.
     ■ Hadamard, J. (1945). The psychology of invention in the mathematical field. Princeton, NJ: Princeton University Press.
     ■ Hand, D. J. (1985). Artificial intelligence and psychiatry. Cambridge: Cambridge University Press.
     ■ Harris, M. (1981). The language myth. London: Duckworth.
     ■ Haugeland, J. (Ed.) (1981). Mind design: Philosophy, psychology, artificial intelligence. Cambridge, MA: MIT Press/Bradford Books.
     ■ Haugeland, J. (1981a). The nature and plausibility of cognitivism. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, artificial intelligence (pp. 243-281). Cambridge, MA: MIT Press.
     ■ Haugeland, J. (1981b). Semantic engines: An introduction to mind design. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, artificial intelligence (pp. 1-34). Cambridge, MA: MIT Press/Bradford Books.
     ■ Haugeland, J. (1985). Artificial intelligence: The very idea. Cambridge, MA: MIT Press.
     ■ Hawkes, T. (1977). Structuralism and semiotics. Berkeley: University of California Press.
     ■ Hebb, D. O. (1949). The organisation of behaviour. New York: Wiley.
     ■ Hebb, D. O. (1958). A textbook of psychology. Philadelphia: Saunders.
     ■ Hegel, G.W.F. (1910). The phenomenology of mind. J. B. Baillie (Trans.). London: Sonnenschein. (Originally published as Phaenomenologie des Geistes, 1807.)
     ■ Heisenberg, W. (1958). Physics and philosophy. New York: Harper & Row.
     ■ Hempel, C. G. (1966). Philosophy of natural science. Englewood Cliffs, NJ: Prentice-Hall.
     ■ Herman, A. (1997). The idea of decline in Western history. New York: Free Press.
     ■ Herrnstein, R. J., & E. G. Boring (Eds.) (1965). A source book in the history of psychology. Cambridge, MA: Harvard University Press.
     ■ Herzmann, E. (1964). Mozart's creative process. In P. H. Lang (Ed.), The creative world of Mozart (pp. 17-30). London: Oldbourne Press.
     ■ Hilgard, E. R. (1957). Introduction to psychology. London: Methuen.
     ■ Hobbes, T. (1651). Leviathan. London: Crooke.
     ■ Hofstadter, D. R. (1979). Gödel, Escher, Bach: An eternal golden braid. New York: Basic Books.
     ■ Holliday, S. G., & M. J. Chandler (1986). Wisdom: Explorations in adult competence. Basel, Switzerland: Karger.
     ■ Horn, J. L. (1986). In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 3). Hillsdale, NJ: Erlbaum.
     ■ Hull, C. (1943). Principles of behavior. New York: Appleton-Century-Crofts.
     ■ Hume, D. (1955). An inquiry concerning human understanding. New York: Liberal Arts Press. (Originally published in 1748.)
     ■ Hume, D. (1975). An enquiry concerning human understanding. In L. A. Selby-Bigge (Ed.), Hume's enquiries (3rd ed., revised P. H. Nidditch). Oxford: Clarendon. (Spelling and punctuation revised.) (Originally published in 1748.)
     ■ Hume, D. (1978). A treatise of human nature. L. A. Selby-Bigge (Ed.), Hume's enquiries (3rd ed., revised P. H. Nidditch). Oxford: Clarendon. (With some modifications of spelling and punctuation.) (Originally published in 1739-1740.)
     ■ Hunt, E. (1973). The memory we must have. In R. C. Schank & K. M. Colby (Eds.), Computer models of thought and language (pp. 343-371). San Francisco: W. H. Freeman.
     ■ Husserl, E. (1960). Cartesian meditations. The Hague: Martinus Nijhoff.
     ■ Inhelder, B., & J. Piaget (1958). The growth of logical thinking from childhood to adolescence. New York: Basic Books. (Originally published in 1955 as De la logique de l'enfant à la logique de l'adolescent. [Paris: Presses Universitaires de France])
     ■ James, W. (1890a). The principles of psychology (Vol. 1). New York: Dover Books.
     ■ James, W. (1890b). The principles of psychology. New York: Henry Holt.
     ■ Jevons, W. S. (1900). The principles of science (2nd ed.). London: Macmillan.
     ■ Johnson, G. (1986). Machinery of the mind: Inside the new science of artificial intelligence. New York: Random House.
     ■ Johnson, M. L. (1988). Mind, language, machine. New York: St. Martin's Press.
     ■ Johnson-Laird, P. N. (1983). Mental models: Toward a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.
     ■ Johnson-Laird, P. N. (1988). The computer and the mind: An introduction to cognitive science. Cambridge, MA: Harvard University Press.
     ■ Jones, E. (1961). The life and work of Sigmund Freud. L. Trilling & S. Marcus (Eds.). London: Hogarth.
     ■ Jones, R. V. (1985). Complementarity as a way of life. In A. P. French & P. J. Kennedy (Eds.), Niels Bohr: A centenary volume. Cambridge, MA: Harvard University Press.
     ■ Kant, I. (1933). Critique of Pure Reason (2nd ed.). N. K. Smith (Trans.). London: Macmillan. (Originally published in 1781 as Kritik der reinen Vernunft.)
     ■ Kant, I. (1891). Solution of the general problems of the Prolegomena. In E. Belfort (Trans.), Kant's Prolegomena. London: Bell. (With minor modifications.) (Originally published in 1783.)
     ■ Katona, G. (1940). Organizing and memorizing: Studies in the psychology of learning and teaching. New York: Columbia University Press.
     ■ Kaufman, A. S. (1979). Intelligent testing with the WISC-R. New York: Wiley.
     ■ Koestler, A. (1964). The act of creation. New York: Arkana (Penguin).
     ■ Kohlberg, L. (1971). From is to ought. In T. Mischel (Ed.), Cognitive development and epistemology (pp. 151-235). New York: Academic Press.
     ■ Köhler, W. (1925). The mentality of apes. New York: Liveright.
     ■ Köhler, W. (1927). The mentality of apes (2nd ed.). Ella Winter (Trans.). London: Routledge & Kegan Paul.
     ■ Köhler, W. (1930). Gestalt psychology. London: G. Bell.
     ■ Köhler, W. (1947). Gestalt psychology. New York: Liveright.
     ■ Köhler, W. (1969). The task of Gestalt psychology. Princeton, NJ: Princeton University Press.
     ■ Kuhn, T. (1970). The structure of scientific revolutions (2nd ed.). Chicago: University of Chicago Press.
     ■ Langer, E. J. (1989). Mindfulness. Reading, MA: Addison-Wesley.
     ■ Langer, S. (1962). Philosophical sketches. Baltimore: Johns Hopkins University Press.
     ■ Langley, P., H. A. Simon, G. L. Bradshaw & J. M. Zytkow (1987). Scientific discovery: Computational explorations of the creative process. Cambridge, MA: MIT Press.
     ■ Lashley, K. S. (1951). The problem of serial order in behavior. In L. A. Jeffress (Ed.), Cerebral mechanisms in behavior, the Hixon Symposium (pp. 112-146). New York: Wiley.
     ■ LeDoux, J. E., & W. Hirst (1986). Mind and brain: Dialogues in cognitive neuroscience. Cambridge: Cambridge University Press.
     ■ Lehnert, W. (1978). The process of question answering. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Leiber, J. (1991). Invitation to cognitive science. Oxford: Blackwell.
     ■ Lenat, D. B., & G. Harris (1978). Designing a rule system that searches for scientific discoveries. In D. A. Waterman & F. Hayes-Roth (Eds.), Pattern-directed inference systems (pp. 25-52). New York: Academic Press.
     ■ Levenson, T. (1995). Measure for measure: A musical history of science. New York: Touchstone. (Originally published in 1994.)
     ■ Lévi-Strauss, C. (1963). Structural anthropology. C. Jacobson & B. Grundfest Schoepf (Trans.). New York: Basic Books. (Originally published in 1958.)
     ■ Levine, M. W., & J. M. Schefner (1981). Fundamentals of sensation and perception. London: Addison-Wesley.
     ■ Lewis, C. I. (1946). An analysis of knowledge and valuation. LaSalle, IL: Open Court.
     ■ Lighthill, J. (1972). A report on artificial intelligence. Unpublished manuscript, Science Research Council.
     ■ Lipman, M., A. M. Sharp & F. S. Oscanyan (1980). Philosophy in the classroom. Philadelphia: Temple University Press.
     ■ Lippmann, W. (1965). Public opinion. New York: Free Press. (Originally published in 1922.)
     ■ Locke, J. (1956). An essay concerning human understanding. Chicago: Henry Regnery Co. (Originally published in 1690.)
     ■ Locke, J. (1975). An essay concerning human understanding. P. H. Nidditch (Ed.). Oxford: Clarendon. (Originally published in 1690.) (With spelling and punctuation modernized and some minor modifications of phrasing.)
     ■ Lopate, P. (1994). The art of the personal essay. New York: Doubleday/Anchor Books.
     ■ Lorimer, F. (1929). The growth of reason. London: Kegan Paul.
     ■ Machlup, F., & U. Mansfield (Eds.) (1983). The study of information. New York: Wiley.
     ■ Manguel, A. (1996). A history of reading. New York: Viking.
     ■ Margolis, H. (1987). Patterns, thinking, and cognition. Chicago: University of Chicago Press.
     ■ Markey, J. F. (1928). The symbolic process. London: Kegan Paul.
     ■ Martin, R. M. (1969). On Ziff's "Natural and formal languages." In S. Hook (Ed.), Language and philosophy: A symposium (pp. 249-263). New York: New York University Press.
     ■ Mazlish, B. (1993). The fourth discontinuity: The co-evolution of humans and machines. New Haven, CT: Yale University Press.
     ■ McCarthy, J., & P. J. Hayes (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer & D. Michie (Eds.), Machine intelligence 4. Edinburgh: Edinburgh University Press.
     ■ McClelland, J. L., D. E. Rumelhart & G. E. Hinton (1986). The appeal of parallel distributed processing. In D. E. Rumelhart, J. L. McClelland & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1, pp. 3-40). Cambridge, MA: MIT Press/Bradford Books.
     ■ McCorduck, P. (1979). Machines who think. San Francisco: W. H. Freeman.
     ■ McLaughlin, T. (1970). Music and communication. London: Faber & Faber.
     ■ Mednick, S. A. (1962). The associative basis of the creative process. Psychological Review 69, 431-436.
     ■ Meehl, P. E., & C. J. Golden (1982). Taxometric methods. In Kendall, P. C., & Butcher, J. N. (Eds.), Handbook of research methods in clinical psychology (pp. 127-182). New York: Wiley.
     ■ Mehler, J., E.C.T. Walker & M. Garrett (Eds.) (1982). Perspectives on mental representation: Experimental and theoretical studies of cognitive processes and capacities. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Mill, J. S. (1900). A system of logic, ratiocinative and inductive: Being a connected view of the principles of evidence and the methods of scientific investigation. London: Longmans, Green.
     ■ Miller, G. A. (1979, June). A very personal history. Talk to the Cognitive Science Workshop, Cambridge, MA.
     ■ Miller, J. (1983). States of mind. New York: Pantheon Books.
     ■ Minsky, M. (1975). A framework for representing knowledge. In P. H. Winston (Ed.), The psychology of computer vision (pp. 211-277). New York: McGraw-Hill.
     ■ Minsky, M., & S. Papert (1973). Artificial intelligence. Condon Lectures, Oregon State System of Higher Education, Eugene, Oregon.
     ■ Minsky, M. L. (1986). The society of mind. New York: Simon & Schuster.
     ■ Mischel, T. (1976). Psychological explanations and their vicissitudes. In J. K. Cole & W. J. Arnold (Eds.), Nebraska Symposium on motivation (Vol. 23). Lincoln, NE: University of Nebraska Press.
     ■ Morford, M.P.O., & R. J. Lenardon (1995). Classical mythology (5th ed.). New York: Longman.
     ■ Murdoch, I. (1954). Under the net. New York: Penguin.
     ■ Nagel, E. (1959). Methodological issues in psychoanalytic theory. In S. Hook (Ed.), Psychoanalysis, scientific method, and philosophy: A symposium. New York: New York University Press.
     ■ Nagel, T. (1979). Mortal questions. London: Cambridge University Press.
     ■ Nagel, T. (1986). The view from nowhere. Oxford: Oxford University Press.
     ■ Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.
     ■ Neisser, U. (1972). Changing conceptions of imagery. In P. W. Sheehan (Ed.), The function and nature of imagery (pp. 233-251). London: Academic Press.
     ■ Neisser, U. (1976). Cognition and reality. San Francisco: W. H. Freeman.
     ■ Neisser, U. (1978). Memory: What are the important questions? In M. M. Gruneberg, P. E. Morris & R. N. Sykes (Eds.), Practical aspects of memory (pp. 3-24). London: Academic Press.
     ■ Neisser, U. (1979). The concept of intelligence. In R. J. Sternberg & D. K. Detterman (Eds.), Human intelligence: Perspectives on its theory and measurement (pp. 179-190). Norwood, NJ: Ablex.
     ■ Nersessian, N. (1992). How do scientists think? Capturing the dynamics of conceptual change in science. In R. N. Giere (Ed.), Cognitive models of science (pp. 3-44). Minneapolis: University of Minnesota Press.
     ■ Newell, A. (1973a). Artificial intelligence and the concept of mind. In R. C. Schank & K. M. Colby (Eds.), Computer models of thought and language (pp. 1-60). San Francisco: W. H. Freeman.
     ■ Newell, A. (1973b). You can't play 20 questions with nature and win. In W. G. Chase (Ed.), Visual information processing (pp. 283-310). New York: Academic Press.
     ■ Newell, A., & H. A. Simon (1963). GPS: A program that simulates human thought. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and thought (pp. 279-293). New York: McGraw-Hill.
     ■ Newell, A., & H. A. Simon (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
     ■ Nietzsche, F. (1966). Beyond good and evil. W. Kaufmann (Trans.). New York: Vintage. (Originally published in 1885.)
     ■ Nilsson, N. J. (1971). Problem- solving methods in artificial intelligence. New York: McGraw-Hill.
     ■ Nussbaum, M. C. (1978). Aristotle's De Motu Animalium. Princeton, NJ: Princeton University Press.
     ■ Oersted, H. C. (1920). Thermo-electricity. In Kirstine Meyer (Ed.), H. C. Oersted, Natuurvidenskabelige Skrifter (Vol. 2). Copenhagen: n.p. (Originally published in 1830 in The Edinburgh encyclopaedia.)
     ■ Ong, W. J. (1982). Orality and literacy: The technologizing of the word. London: Methuen.
     ■ Onians, R. B. (1954). The origins of European thought. Cambridge: Cambridge University Press.
     ■ Osgood, C. E. (1960). Method and theory in experimental psychology. New York: Oxford University Press. (Originally published in 1953.)
     ■ Osgood, C. E. (1966). Language universals and psycholinguistics. In J. H. Greenberg (Ed.), Universals of language (2nd ed., pp. 299-322). Cambridge, MA: MIT Press.
     ■ Palmer, R. E. (1969). Hermeneutics. Evanston, IL: Northwestern University Press.
     ■ Peirce, C. S. (1934). Some consequences of four incapacities - Man, a sign. In C. Hartshorne & P. Weiss (Eds.), Collected papers of Charles Sanders Peirce (Vol. 5, pp. 185-189). Cambridge, MA: Harvard University Press.
     ■ Penfield, W. (1959). In W. Penfield & L. Roberts, Speech and brain mechanisms. Princeton, NJ: Princeton University Press.
     ■ Penrose, R. (1994). Shadows of the mind: A search for the missing science of consciousness. Oxford: Oxford University Press.
     ■ Perkins, D. N. (1981). The mind's best work. Cambridge, MA: Harvard University Press.
     ■ Peterfreund, E. (1986). The heuristic approach to psychoanalytic therapy. In J. Reppen (Ed.), Analysts at work (pp. 127-144). Hillsdale, NJ: Analytic Press.
     ■ Piaget, J. (1952). The origin of intelligence in children. New York: International Universities Press. (Originally published in 1936.)
     ■ Piaget, J. (1954). Le langage et les opérations intellectuelles. Problèmes de psycholinguistique. Symposium de l'Association de Psychologie Scientifique de Langue Française. Paris: Presses Universitaires de France.
     ■ Piaget, J. (1977). Problems of equilibration. In H. E. Gruber & J. J. Voneche (Eds.), The essential Piaget (pp. 838-841). London: Routledge & Kegan Paul. (Originally published in 1975 as L'équilibration des structures cognitives [Paris: Presses Universitaires de France].)
     ■ Piaget, J., & B. Inhelder. (1973). Memory and intelligence. New York: Basic Books.
     ■ Pinker, S. (1994). The language instinct. New York: Morrow.
     ■ Pinker, S. (1996). Facts about human language relevant to its evolution. In J.-P. Changeux & J. Chavaillon (Eds.), Origins of the human brain. A symposium of the Fyssen Foundation (pp. 262-283). Oxford: Clarendon Press.
     ■ Planck, M. (1949). Scientific autobiography and other papers. F. Gaynor (Trans.). New York: Philosophical Library.
     ■ Planck, M. (1990). Wissenschaftliche Selbstbiographie. W. Berg (Ed.). Halle, Germany: Deutsche Akademie der Naturforscher Leopoldina.
     ■ Plato (1892). Meno. In The Dialogues of Plato (B. Jowett, Trans.; Vol. 2). New York: Clarendon. (Originally published circa 380 B.C.)
     ■ Poincaré, H. (1913). Mathematical creation. In The foundations of science. G. B. Halsted (Trans.). New York: Science Press.
     ■ Poincaré, H. (1921). The foundations of science: Science and hypothesis, the value of science, science and method. G. B. Halsted (Trans.). New York: Science Press.
     ■ Poincaré, H. (1929). The foundations of science: Science and hypothesis, the value of science, science and method. New York: Science Press.
     ■ Poincaré, H. (1952). Science and method. F. Maitland (Trans.). New York: Dover.
     ■ Polya, G. (1945). How to solve it. Princeton, NJ: Princeton University Press.
     ■ Polanyi, M. (1958). Personal knowledge. London: Routledge & Kegan Paul.
     ■ Popper, K. (1968). Conjectures and refutations: The growth of scientific knowledge. New York: Harper & Row/Basic Books.
     ■ Popper, K., & J. Eccles (1977). The self and its brain. New York: Springer-Verlag.
     ■ Popper, K. R. (1959). The logic of scientific discovery. London: Hutchinson.
     ■ Putnam, H. (1975). Mind, language and reality: Philosophical papers (Vol. 2). Cambridge: Cambridge University Press.
     ■ Putnam, H. (1987). The faces of realism. LaSalle, IL: Open Court.
     ■ Pylyshyn, Z. W. (1981). The imagery debate: Analog media versus tacit knowledge. In N. Block (Ed.), Imagery (pp. 151-206). Cambridge, MA: MIT Press.
     ■ Pylyshyn, Z. W. (1984). Computation and cognition: Towards a foundation for cognitive science. Cambridge, MA: MIT Press/Bradford Books.
     ■ Quillian, M. R. (1968). Semantic memory. In M. Minsky (Ed.), Semantic information processing (pp. 216-260). Cambridge, MA: MIT Press.
     ■ Quine, W.V.O. (1960). Word and object. Cambridge, MA: Harvard University Press.
     ■ Rabbitt, P.M.A., & S. Dornic (Eds.). Attention and performance (Vol. 5). London: Academic Press.
     ■ Rawlins, G.J.E. (1997). Slaves of the Machine: The quickening of computer technology. Cambridge, MA: MIT Press/Bradford Books.
     ■ Reid, T. (1970). An inquiry into the human mind on the principles of common sense. In R. Brown (Ed.), Between Hume and Mill: An anthology of British philosophy, 1749-1843 (pp. 151-178). New York: Random House/Modern Library.
     ■ Reitman, W. (1970). What does it take to remember? In D. A. Norman (Ed.), Models of human memory (pp. 470-510). London: Academic Press.
     ■ Ricoeur, P. (1974). Structure and hermeneutics. In D. I. Ihde (Ed.), The conflict of interpretations: Essays in hermeneutics (pp. 27-61). Evanston, IL: Northwestern University Press.
     ■ Robinson, D. N. (1986). An intellectual history of psychology. Madison: University of Wisconsin Press.
     ■ Rorty, R. (1979). Philosophy and the mirror of nature. Princeton, NJ: Princeton University Press.
     ■ Rosch, E. (1977). Human categorization. In N. Warren (Ed.), Studies in cross-cultural psychology (Vol. 1, pp. 1-49). London: Academic Press.
     ■ Rosch, E. (1978). Principles of categorization. In E. Rosch & B. B. Lloyd (Eds.), Cognition and categorization (pp. 27-48). Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Rosch, E., & B. B. Lloyd (1978). Principles of categorization. In E. Rosch & B. B. Lloyd (Eds.), Cognition and categorization. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Rose, S. (1970). The chemistry of life. Baltimore: Penguin Books.
     ■ Rose, S. (1976). The conscious brain (updated ed.). New York: Random House.
     ■ Rose, S. (1993). The making of memory: From molecules to mind. New York: Anchor Books. (Originally published in 1992.)
     ■ Roszak, T. (1994). The cult of information: A neo-Luddite treatise on high-tech, artificial intelligence, and the true art of thinking (2nd ed.). Berkeley: University of California Press.
     ■ Royce, J. R., & W. W. Rozeboom (Eds.) (1972). The psychology of knowing. New York: Gordon & Breach.
     ■ Rumelhart, D. E. (1977). Introduction to human information processing. New York: Wiley.
     ■ Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. J. Spiro, B. Bruce & W. F. Brewer (Eds.), Theoretical issues in reading comprehension. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Rumelhart, D. E., & J. L. McClelland (1986). On learning the past tenses of English verbs. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2). Cambridge, MA: MIT Press.
     ■ Rumelhart, D. E., P. Smolensky, J. L. McClelland & G. E. Hinton (1986). Schemata and sequential thought processes in PDP models. In J. L. McClelland, D. E. Rumelhart & the PDP Research Group (Eds.), Parallel Distributed Processing (Vol. 2, pp. 7-57). Cambridge, MA: MIT Press.
     ■ Russell, B. (1927). An outline of philosophy. London: G. Allen & Unwin.
     ■ Russell, B. (1961). History of Western philosophy. London: George Allen & Unwin.
     ■ Russell, B. (1965). How I write. In Portraits from memory and other essays. London: Allen & Unwin.
     ■ Russell, B. (1992). In N. Griffin (Ed.), The selected letters of Bertrand Russell (Vol. 1), The private years, 1884-1914. Boston: Houghton Mifflin.
     ■ Rycroft, C. (1966). Psychoanalysis observed. London: Constable.
     ■ Sagan, C. (1978). The dragons of Eden: Speculations on the evolution of human intelligence. New York: Ballantine Books.
     ■ Salthouse, T. A. (1992). Expertise as the circumvention of human processing limitations. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 172-194). Cambridge: Cambridge University Press.
     ■ Sanford, A. J. (1987). The mind of man: Models of human understanding. New Haven, CT: Yale University Press.
     ■ Sapir, E. (1921). Language. New York: Harcourt, Brace, and World.
     ■ Sapir, E. (1964). Culture, language, and personality. Berkeley: University of California Press. (Originally published in 1941.)
     ■ Sapir, E. (1985). The status of linguistics as a science. In D. G. Mandelbaum (Ed.), Selected writings of Edward Sapir in language, culture and personality (pp. 160-166). Berkeley: University of California Press. (Originally published in 1929.)
     ■ Scardamalia, M., & C. Bereiter (1992). Literate expertise. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 172-194). Cambridge: Cambridge University Press.
     ■ Schafer, R. (1954). Psychoanalytic interpretation in Rorschach testing. New York: Grune & Stratton.
     ■ Schank, R. C. (1973). Identification of conceptualizations underlying natural language. In R. C. Schank & K. M. Colby (Eds.), Computer models of thought and language (pp. 187-248). San Francisco: W. H. Freeman.
     ■ Schank, R. C. (1976). The role of memory in language processing. In C. N. Cofer (Ed.), The structure of human memory (pp. 162-189). San Francisco: W. H. Freeman.
     ■ Schank, R. C. (1986). Explanation patterns: Understanding mechanically and creatively. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Schank, R. C., & R. P. Abelson (1977). Scripts, plans, goals, and understanding. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Schrödinger, E. (1951). Science and humanism. Cambridge: Cambridge University Press.
     ■ Searle, J. R. (1981a). Minds, brains, and programs. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, artificial intelligence (pp. 282-306). Cambridge, MA: MIT Press.
     ■ Searle, J. R. (1981b). Minds, brains and programs. In D. Hofstadter & D. Dennett (Eds.), The mind's I (pp. 353-373). New York: Basic Books.
     ■ Searle, J. R. (1983). Intentionality. New York: Cambridge University Press.
     ■ Serres, M. (1982). The origin of language: Biology, information theory, and thermodynamics. M. Anderson (Trans.). In J. V. Harari & D. F. Bell (Eds.), Hermes: Literature, science, philosophy (pp. 71-83). Baltimore: Johns Hopkins University Press.
     ■ Simon, H. A. (1966). Scientific discovery and the psychology of problem solving. In R. G. Colodny (Ed.), Mind and cosmos: Essays in contemporary science and philosophy (pp. 22-40). Pittsburgh: University of Pittsburgh Press.
     ■ Simon, H. A. (1979). Models of thought. New Haven, CT: Yale University Press.
     ■ Simon, H. A. (1989). The scientist as a problem solver. In D. Klahr & K. Kotovsky (Eds.), Complex information processing: The impact of Herbert Simon. Hillsdale, N.J.: Lawrence Erlbaum Associates.
     ■ Simon, H. A., & C. Kaplan (1989). Foundations of cognitive science. In M. Posner (Ed.), Foundations of cognitive science (pp. 1-47). Cambridge, MA: MIT Press.
     ■ Simonton, D. K. (1988). Creativity, leadership and chance. In R. J. Sternberg (Ed.), The nature of creativity. Cambridge: Cambridge University Press.
     ■ Skinner, B. F. (1974). About behaviorism. New York: Knopf.
     ■ Smith, E. E. (1988). Concepts and thought. In J. Sternberg & E. E. Smith (Eds.), The psychology of human thought (pp. 19-49). Cambridge: Cambridge University Press.
     ■ Smith, E. E. (1990). Thinking: Introduction. In D. N. Osherson & E. E. Smith (Eds.), Thinking. An invitation to cognitive science. (Vol. 3, pp. 1-2). Cambridge, MA: MIT Press.
     ■ Socrates. (1958). Meno. In E. H. Warmington & P. O. Rouse (Eds.), Great dialogues of Plato (W. H. D. Rouse, Trans.). New York: New American Library. (Original publication date unknown.)
     ■ Solso, R. L. (1974). Theories of retrieval. In R. L. Solso (Ed.), Theories in cognitive psychology. Potomac, MD: Lawrence Erlbaum Associates.
     ■ Spencer, H. (1896). The principles of psychology. New York: Appleton-Century-Crofts.
     ■ Steiner, G. (1975). After Babel: Aspects of language and translation. New York: Oxford University Press.
     ■ Sternberg, R. J. (1977). Intelligence, information processing, and analogical reasoning. Hillsdale, NJ: Lawrence Erlbaum Associates.
     ■ Sternberg, R. J. (1994). Intelligence. In R. J. Sternberg (Ed.), Thinking and problem solving. San Diego: Academic Press.
     ■ Sternberg, R. J., & J. E. Davidson (1985). Cognitive development in gifted and talented. In F. D. Horowitz & M. O'Brien (Eds.), The gifted and talented (pp. 103-135). Washington, DC: American Psychological Association.
     ■ Storr, A. (1993). The dynamics of creation. New York: Ballantine Books. (Originally published in 1972.)
     ■ Stumpf, S. E. (1994). Philosophy: History and problems (5th ed.). New York: McGraw-Hill.
     ■ Sulloway, F. J. (1996). Born to rebel: Birth order, family dynamics, and creative lives. New York: Random House/Vintage Books.
     ■ Thorndike, E. L. (1906). Principles of teaching. New York: A. G. Seiler.
     ■ Thorndike, E. L. (1970). Animal intelligence: Experimental studies. Darien, CT: Hafner Publishing Co. (Originally published in 1911.)
     ■ Titchener, E. B. (1910). A textbook of psychology. New York: Macmillan.
     ■ Titchener, E. B. (1914). A primer of psychology. New York: Macmillan.
     ■ Toulmin, S. (1957). The philosophy of science. London: Hutchinson.
     ■ Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organisation of memory. London: Academic Press.
     ■ Turing, A. (1946). In B. E. Carpenter & R. W. Doran (Eds.), ACE reports of 1946 and other papers. Cambridge, MA: MIT Press.
     ■ Turkle, S. (1984). Computers and the second self: Computers and the human spirit. New York: Simon & Schuster.
     ■ Tyler, S. A. (1978). The said and the unsaid: Mind, meaning, and culture. New York: Academic Press.
     ■ van Heijenoort, J. (Ed.) (1967). From Frege to Gödel. Cambridge: Harvard University Press.
     ■ Varela, F. J. (1984). The creative circle: Sketches on the natural history of circularity. In P. Watzlawick (Ed.), The invented reality (pp. 309-324). New York: W. W. Norton.
     ■ Voltaire (1961). On the Pensées of M. Pascal. In Philosophical letters (pp. 119-146). E. Dilworth (Trans.). Indianapolis: Bobbs-Merrill.
     ■ Wagman, M. (1997a). Cognitive science and the symbolic operations of human and artificial intelligence: Theory and research into the intellective processes. Westport, CT: Praeger.
     ■ Wagman, M. (1997b). The general unified theory of intelligence: Central conceptions and specific application to domains of cognitive science. Westport, CT: Praeger.
     ■ Wagman, M. (1998a). Cognitive science and the mind-body problem: From philosophy to psychology to artificial intelligence to imaging of the brain. Westport, CT: Praeger.
     ■ Wagman, M. (1999). The human mind according to artificial intelligence: Theory, research, and implications. Westport, CT: Praeger.
     ■ Wall, R. (1972). Introduction to mathematical linguistics. Englewood Cliffs, NJ: Prentice-Hall.
     ■ Wallas, G. (1926). The art of thought. New York: Harcourt, Brace & Co.
     ■ Wason, P. (1977). Self contradictions. In P. Johnson-Laird & P. Wason (Eds.), Thinking: Readings in cognitive science. Cambridge: Cambridge University Press.
     ■ Wason, P. C., & P. N. Johnson-Laird. (1972). Psychology of reasoning: Structure and content. Cambridge, MA: Harvard University Press.
     ■ Watson, J. (1930). Behaviorism. New York: W. W. Norton.
     ■ Watzlawick, P. (1984). Epilogue. In P. Watzlawick (Ed.), The invented reality. New York: W. W. Norton, 1984.
     ■ Weinberg, S. (1977). The first three minutes: A modern view of the origin of the universe. New York: Basic Books.
     ■ Weisberg, R. W. (1986). Creativity: Genius and other myths. New York: W. H. Freeman.
     ■ Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. San Francisco: W. H. Freeman.
     ■ Wertheimer, M. (1945). Productive thinking. New York: Harper & Bros.
     ■ Whitehead, A. N. (1925). Science and the modern world. New York: Macmillan.
     ■ Whorf, B. L. (1956). In J. B. Carroll (Ed.), Language, thought and reality: Selected writings of Benjamin Lee Whorf. Cambridge, MA: MIT Press.
     ■ Whyte, L. L. (1962). The unconscious before Freud. New York: Anchor Books.
     ■ Wiener, N. (1954). The human use of human beings. Boston: Houghton Mifflin.
     ■ Wiener, N. (1964). God & Golem, Inc.: A comment on certain points where cybernetics impinges on religion. Cambridge, MA: MIT Press.
     ■ Winograd, T. (1972). Understanding natural language. New York: Academic Press.
     ■ Winston, P. H. (1987). Artificial intelligence: A perspective. In E. L. Grimson & R. S. Patil (Eds.), AI in the 1980s and beyond (pp. 1-12). Cambridge, MA: MIT Press.
     ■ Winston, P. H. (Ed.) (1975). The psychology of computer vision. New York: McGraw-Hill.
     ■ Wittgenstein, L. (1953). Philosophical investigations. Oxford: Basil Blackwell.
     ■ Wittgenstein, L. (1958). The blue and brown books. New York: Harper Colophon.
     ■ Woods, W. A. (1975). What's in a link: Foundations for semantic networks. In D. G. Bobrow & A. Collins (Eds.), Representations and understanding: Studies in cognitive science (pp. 35-84). New York: Academic Press.
     ■ Woodworth, R. S. (1938). Experimental psychology. New York: Holt; London: Methuen (1939).
     ■ Wundt, W. (1904). Principles of physiological psychology (Vol. 1). E. B. Titchener (Trans.). New York: Macmillan.
     ■ Wundt, W. (1907). Lectures on human and animal psychology. J. E. Creighton & E. B. Titchener (Trans.). New York: Macmillan.
     ■ Young, J. Z. (1978). Programs of the brain. New York: Oxford University Press.
     ■ Ziman, J. (1978). Reliable knowledge: An exploration of the grounds for belief in science. Cambridge: Cambridge University Press.

    Historical dictionary of quotations in cognitive science > Bibliography

  • 8 Cognitive Science

       The basic idea of cognitive science is that intelligent beings are semantic engines-in other words, automatic formal systems with interpretations under which they consistently make sense.... [P]eople and intelligent computers turn out to be merely different manifestations of the same underlying phenomenon. (Haugeland, 1981b, p. 31)
       2) Experimental Psychology, Theoretical Linguistics, and Computational Simulation of Cognitive Processes Are All Components of Cognitive Science
       I went away from the Symposium with a strong conviction, more intuitive than rational, that human experimental psychology, theoretical linguistics, and computer simulation of cognitive processes were all pieces of a larger whole, and that the future would see progressive elaboration and coordination of their shared concerns.... I have been working toward a cognitive science for about twenty years beginning before I knew what to call it. (G. A. Miller, 1979, p. 9)
        Cognitive Science studies the nature of cognition in human beings, other animals, and inanimate machines (if such a thing is possible). While computers are helpful within cognitive science, they are not essential to its being. A science of cognition could still be pursued even without these machines.
        Computer Science studies various kinds of problems and the use of computers to solve them, without concern for the means by which we humans might otherwise resolve them. There could be no computer science if there were no machines of this kind, because they are indispensable to its being. Artificial Intelligence is a special branch of computer science that investigates the extent to which the mental powers of human beings can be captured by means of machines.
       There could be cognitive science without artificial intelligence but there could be no artificial intelligence without cognitive science. One final caveat: In the case of an emerging new discipline such as cognitive science there is an almost irresistible temptation to identify the discipline itself (as a field of inquiry) with one of the theories that inspired it (such as the computational conception...). This, however, is a mistake. The field of inquiry (or "domain") stands to specific theories as questions stand to possible answers. The computational conception should properly be viewed as a research program in cognitive science, where "research programs" are answers that continue to attract followers. (Fetzer, 1996, pp. xvi-xvii)
       What is the nature of knowledge and how is this knowledge used? These questions lie at the core of both psychology and artificial intelligence.
       The psychologist who studies "knowledge systems" wants to know how concepts are structured in the human mind, how such concepts develop, and how they are used in understanding and behavior. The artificial intelligence researcher wants to know how to program a computer so that it can understand and interact with the outside world. The two orientations intersect when the psychologist and the computer scientist agree that the best way to approach the problem of building an intelligent machine is to emulate the human conceptual mechanisms that deal with language.... The name "cognitive science" has been used to refer to this convergence of interests in psychology and artificial intelligence....
       This working partnership in "cognitive science" does not mean that psychologists and computer scientists are developing a single comprehensive theory in which people are no different from machines. Psychology and artificial intelligence have many points of difference in methods and goals.... We simply want to work on an important area of overlapping interest, namely a theory of knowledge systems. As it turns out, this overlap is substantial. For both people and machines, each in their own way, there is a serious problem in common of making sense out of what they hear, see, or are told about the world. The conceptual apparatus necessary to perform even a partial feat of understanding is formidable and fascinating. (Schank & Abelson, 1977, pp. 1-2)
       Within the last dozen years a general change in scientific outlook has occurred, consonant with the point of view represented here. One can date the change roughly from 1956: in psychology, by the appearance of Bruner, Goodnow, and Austin's Study of Thinking and George Miller's "The Magical Number Seven"; in linguistics, by Noam Chomsky's "Three Models of Language"; and in computer science, by our own paper on the Logic Theory Machine. (Newell & Simon, 1972, p. 4)

    Historical dictionary of quotations in cognitive science > Cognitive Science

  • 9 Artificial Intelligence

       In my opinion, none of [these programs] does even remote justice to the complexity of human mental processes. Unlike men, "artificially intelligent" programs tend to be single minded, undistractable, and unemotional. (Neisser, 1967, p. 9)
       Future progress in [artificial intelligence] will depend on the development of both practical and theoretical knowledge.... As regards theoretical knowledge, some have sought a unified theory of artificial intelligence. My view is that artificial intelligence is (or soon will be) an engineering discipline since its primary goal is to build things. (Nilsson, 1971, pp. vii-viii)
       Most workers in AI [artificial intelligence] research and in related fields confess to a pronounced feeling of disappointment in what has been achieved in the last 25 years. Workers entered the field around 1950, and even around 1960, with high hopes that are very far from being realized in 1972. In no part of the field have the discoveries made so far produced the major impact that was then promised.... In the meantime, claims and predictions regarding the potential results of AI research had been publicized which went even farther than the expectations of the majority of workers in the field, whose embarrassments have been added to by the lamentable failure of such inflated predictions....
       When able and respected scientists write in letters to the present author that AI, the major goal of computing science, represents "another step in the general process of evolution"; that possibilities in the 1980s include an all-purpose intelligence on a human-scale knowledge base; that awe-inspiring possibilities suggest themselves based on machine intelligence exceeding human intelligence by the year 2000 [one has the right to be skeptical]. (Lighthill, 1972, p. 17)
       4) Just as Astronomy Succeeded Astrology, the Discovery of Intellectual Processes in Machines Should Lead to a Science, Eventually
       Just as astronomy succeeded astrology, following Kepler's discovery of planetary regularities, the discoveries of these many principles in empirical explorations on intellectual processes in machines should lead to a science, eventually. (Minsky & Papert, 1973, p. 11)
       Many problems arise in experiments on machine intelligence because things obvious to any person are not represented in any program. One can pull with a string, but one cannot push with one.... Simple facts like these caused serious problems when Charniak attempted to extend Bobrow's "Student" program to more realistic applications, and they have not been faced up to until now. (Minsky & Papert, 1973, p. 77)
       What do we mean by [a symbolic] "description"? We do not mean to suggest that our descriptions must be made of strings of ordinary language words (although they might be). The simplest kind of description is a structure in which some features of a situation are represented by single ("primitive") symbols, and relations between those features are represented by other symbols-or by other features of the way the description is put together. (Minsky & Papert, 1973, p. 11)
       [AI is] the use of computer programs and programming techniques to cast light on the principles of intelligence in general and human thought in particular. (Boden, 1977, p. 5)
       The word you look for and hardly ever see in the early AI literature is the word knowledge. They didn't believe you have to know anything, you could always rework it all.... In fact 1967 is the turning point in my mind when there was enough feeling that the old ideas of general principles had to go.... I came up with an argument for what I called the primacy of expertise, and at the time I called the other guys the generalists. (Moses, quoted in McCorduck, 1979, pp. 228-229)
       9) Artificial Intelligence Is Psychology in a Particularly Pure and Abstract Form
       The basic idea of cognitive science is that intelligent beings are semantic engines-in other words, automatic formal systems with interpretations under which they consistently make sense. We can now see why this includes psychology and artificial intelligence on a more or less equal footing: people and intelligent computers (if and when there are any) turn out to be merely different manifestations of the same underlying phenomenon. Moreover, with universal hardware, any semantic engine can in principle be formally imitated by a computer if only the right program can be found. And that will guarantee semantic imitation as well, since (given the appropriate formal behavior) the semantics is "taking care of itself" anyway. Thus we also see why, from this perspective, artificial intelligence can be regarded as psychology in a particularly pure and abstract form. The same fundamental structures are under investigation, but in AI, all the relevant parameters are under direct experimental control (in the programming), without any messy physiology or ethics to get in the way. (Haugeland, 1981b, p. 31)
       There are many different kinds of reasoning one might imagine:
         Formal reasoning involves the syntactic manipulation of data structures to deduce new ones following prespecified rules of inference. Mathematical logic is the archetypical formal representation. Procedural reasoning uses simulation to answer questions and solve problems. When we use a program to answer "What is the sum of 3 and 4?" it uses, or "runs," a procedural model of arithmetic. Reasoning by analogy seems to be a very natural mode of thought for humans but, so far, difficult to accomplish in AI programs. The idea is that when you ask the question "Can robins fly?" the system might reason that "robins are like sparrows, and I know that sparrows can fly, so robins probably can fly."
         Generalization and abstraction are also natural reasoning processes for humans that are difficult to pin down well enough to implement in a program. If one knows that Robins have wings, that Sparrows have wings, and that Blue jays have wings, eventually one will believe that All birds have wings. This capability may be at the core of most human learning, but it has not yet become a useful technique in AI.... Meta-level reasoning is demonstrated by the way one answers the question "What is Paul Newman's telephone number?" You might reason that "if I knew Paul Newman's number, I would know that I knew it, because it is a notable fact." This involves using "knowledge about what you know," in particular, about the extent of your knowledge and about the importance of certain facts. Recent research in psychology and AI indicates that meta-level reasoning may play a central role in human cognitive processing. (Barr & Feigenbaum, 1981, pp. 146-147)
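        As a rough illustration, not drawn from Barr & Feigenbaum, the formal and analogical styles described above can be sketched in a few lines of Python; the facts, the single has-wings rule, and all helper names below are invented for the example.

FACTS = {("has_wings", "sparrow"), ("can_fly", "sparrow"), ("has_wings", "robin")}

# Formal reasoning: apply a prespecified inference rule to derive new facts.
# Assumed rule, for illustration only: has_wings(x) -> can_fly(x).
def forward_chain(facts):
    derived = set(facts)
    for pred, arg in facts:
        if pred == "has_wings":
            derived.add(("can_fly", arg))
    return derived

# Reasoning by analogy: "robins are like sparrows, sparrows can fly,
# so robins probably can fly."
def by_analogy(pred, target, analogue, facts):
    if (pred, analogue) in facts:
        return f"{target} probably satisfies {pred}, because {analogue} does"
    return "no analogical support"

print(("can_fly", "robin") in forward_chain(FACTS))      # True
print(by_analogy("can_fly", "robin", "sparrow", FACTS))  # robin probably satisfies can_fly ...
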
       Suffice it to say that programs already exist that can do things-or, at the very least, appear to be beginning to do things-which ill-informed critics have asserted a priori to be impossible. Examples include: perceiving in a holistic as opposed to an atomistic way; using language creatively; translating sensibly from one language to another by way of a language-neutral semantic representation; planning acts in a broad and sketchy fashion, the details being decided only in execution; distinguishing between different species of emotional reaction according to the psychological context of the subject. (Boden, 1981, p. 33)
       Can the synthesis of Man and Machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens-and I have... good reasons for thinking that it must-we have nothing to regret and certainly nothing to fear. (Clarke, 1984, p. 243)
       The thesis of GOFAI... is not that the processes underlying intelligence can be described symbolically... but that they are symbolic. (Haugeland, 1985, p. 113)
        14) Artificial Intelligence Provides a Useful Approach to Psychological and Psychiatric Theory Formation
       It is all very well formulating psychological and psychiatric theories verbally but, when using natural language (even technical jargon), it is difficult to recognise when a theory is complete; oversights are all too easily made, gaps too readily left. This is a point which is generally recognised to be true and it is for precisely this reason that the behavioural sciences attempt to follow the natural sciences in using "classical" mathematics as a more rigorous descriptive language. However, it is an unfortunate fact that, with a few notable exceptions, there has been a marked lack of success in this application. It is my belief that a different approach-a different mathematics-is needed, and that AI provides just this approach. (Hand, quoted in Hand, 1985, pp. 6-7)
       We might distinguish among four kinds of AI.
       Research of this kind involves building and programming computers to perform tasks which, to paraphrase Marvin Minsky, would require intelligence if they were done by us. Researchers in nonpsychological AI make no claims whatsoever about the psychological realism of their programs or the devices they build, that is, about whether or not computers perform tasks as humans do.
       Research here is guided by the view that the computer is a useful tool in the study of mind. In particular, we can write computer programs or build devices that simulate alleged psychological processes in humans and then test our predictions about how the alleged processes work. We can weave these programs and devices together with other programs and devices that simulate different alleged mental processes and thereby test the degree to which the AI system as a whole simulates human mentality. According to weak psychological AI, working with computer models is a way of refining and testing hypotheses about processes that are allegedly realized in human minds.
    ... According to this view, our minds are computers and therefore can be duplicated by other computers. Sherry Turkle writes that the "real ambition is of mythic proportions, making a general purpose intelligence, a mind." (Turkle, 1984, p. 240) The authors of a major text announce that "the ultimate goal of AI research is to build a person or, more humbly, an animal." (Charniak & McDermott, 1985, p. 7)
        Research in this field, like strong psychological AI, takes seriously the functionalist view that mentality can be realized in many different types of physical devices. Suprapsychological AI, however, accuses strong psychological AI of being chauvinistic-of being only interested in human intelligence! Suprapsychological AI claims to be interested in all the conceivable ways intelligence can be realized. (Flanagan, 1991, pp. 241-242)
        16) Determination of Relevance of Rules in Particular Contexts
        Even if the [rules] were stored in a context-free form the computer still couldn't use them. To do that the computer requires rules enabling it to draw on just those [rules] which are relevant in each particular context. Determination of relevance will have to be based on further facts and rules, but the question will again arise as to which facts and rules are relevant for making each particular determination. One could always invoke further facts and rules to answer this question, but of course these must be only the relevant ones. And so it goes. It seems that AI workers will never be able to get started here unless they can settle the problem of relevance beforehand by cataloguing types of context and listing just those facts which are relevant in each. (Dreyfus & Dreyfus, 1986, p. 80)
       Perhaps the single most important idea to artificial intelligence is that there is no fundamental difference between form and content, that meaning can be captured in a set of symbols such as a semantic net. (G. Johnson, 1986, p. 250)
        18) The Assumption That the Mind Is a Formal System
       Artificial intelligence is based on the assumption that the mind can be described as some kind of formal system manipulating symbols that stand for things in the world. Thus it doesn't matter what the brain is made of, or what it uses for tokens in the great game of thinking. Using an equivalent set of tokens and rules, we can do thinking with a digital computer, just as we can play chess using cups, salt and pepper shakers, knives, forks, and spoons. Using the right software, one system (the mind) can be mapped into the other (the computer). (G. Johnson, 1986, p. 250)
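        To make the token-indifference point concrete, here is a tiny made-up formal system (a sketch of my own, not Johnson's): the rewrite rules are stated over arbitrary tokens, named after cutlery purely for illustration, and nothing in the computation depends on what the tokens stand for.

# Rules over uninterpreted tokens: replace an adjacent pair with a new token.
RULES = {("fork", "knife"): "spoon",
         ("spoon", "spoon"): "cup"}

def rewrite(tokens):
    """Repeatedly rewrite the leftmost adjacent pair that matches a rule."""
    tokens = list(tokens)
    changed = True
    while changed:
        changed = False
        for i in range(len(tokens) - 1):
            pair = (tokens[i], tokens[i + 1])
            if pair in RULES:
                tokens[i:i + 2] = [RULES[pair]]
                changed = True
                break
    return tokens

print(rewrite(["fork", "knife", "fork", "knife"]))  # ['cup']
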
        19) A Statement of the Primary and Secondary Purposes of Artificial Intelligence
       The primary goal of Artificial Intelligence is to make machines smarter.
       The secondary goals of Artificial Intelligence are to understand what intelligence is (the Nobel laureate purpose) and to make machines more useful (the entrepreneurial purpose). (Winston, 1987, p. 1)
       The theoretical ideas of older branches of engineering are captured in the language of mathematics. We contend that mathematical logic provides the basis for theory in AI. Although many computer scientists already count logic as fundamental to computer science in general, we put forward an even stronger form of the logic-is-important argument....
       AI deals mainly with the problem of representing and using declarative (as opposed to procedural) knowledge. Declarative knowledge is the kind that is expressed as sentences, and AI needs a language in which to state these sentences. Because the languages in which this knowledge usually is originally captured (natural languages such as English) are not suitable for computer representations, some other language with the appropriate properties must be used. It turns out, we think, that the appropriate properties include at least those that have been uppermost in the minds of logicians in their development of logical languages such as the predicate calculus. Thus, we think that any language for expressing knowledge in AI systems must be at least as expressive as the first-order predicate calculus. (Genesereth & Nilsson, 1987, p. viii)
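        As a minimal sketch, not taken from Genesereth & Nilsson, declarative knowledge of this sort can be held as predicate-calculus-style sentences: a couple of ground facts plus one universally quantified Horn rule, Parent(x, y) and Parent(y, z) imply Grandparent(x, z); the predicate and constant names are invented.

# Ground atomic sentences, written as (predicate, argument, ...) tuples.
facts = {("Parent", "ann", "bob"), ("Parent", "bob", "cid")}

# One quantified rule: for all x, y, z: Parent(x, y) & Parent(y, z) -> Grandparent(x, z).
def apply_grandparent_rule(facts):
    derived = set()
    for (p1, x, y) in facts:
        for (p2, y2, z) in facts:
            if p1 == "Parent" and p2 == "Parent" and y == y2:
                derived.add(("Grandparent", x, z))
    return derived

facts |= apply_grandparent_rule(facts)
print(("Grandparent", "ann", "cid") in facts)  # True
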
        21) Perceptual Structures Can Be Represented as Lists of Elementary Propositions
       In artificial intelligence studies, perceptual structures are represented as assemblages of description lists, the elementary components of which are propositions asserting that certain relations hold among elements. (Chase & Simon, 1988, p. 490)
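        Read literally, such a description list is just a collection of elementary propositions; the toy scene below (the element names and relations are mine, not Chase and Simon's) shows the idea and a simple query over it.

# A perceptual structure as a list of elementary propositions,
# each asserting that a relation holds among scene elements.
description = [
    ("is_a",    "e1", "square"),
    ("is_a",    "e2", "circle"),
    ("left_of", "e1", "e2"),
    ("larger",  "e2", "e1"),
]

# Query: which elements stand in the left_of relation?
left_of_pairs = [(a, b) for rel, a, b in description if rel == "left_of"]
print(left_of_pairs)  # [('e1', 'e2')]
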
       Artificial intelligence (AI) is sometimes defined as the study of how to build and/or program computers to enable them to do the sorts of things that minds can do. Some of these things are commonly regarded as requiring intelligence: offering a medical diagnosis and/or prescription, giving legal or scientific advice, proving theorems in logic or mathematics. Others are not, because they can be done by all normal adults irrespective of educational background (and sometimes by non-human animals too), and typically involve no conscious control: seeing things in sunlight and shadows, finding a path through cluttered terrain, fitting pegs into holes, speaking one's own native tongue, and using one's common sense. Because it covers AI research dealing with both these classes of mental capacity, this definition is preferable to one describing AI as making computers do "things that would require intelligence if done by people." However, it presupposes that computers could do what minds can do, that they might really diagnose, advise, infer, and understand. One could avoid this problematic assumption (and also side-step questions about whether computers do things in the same way as we do) by defining AI instead as "the development of computers whose observable performance has features which in humans we would attribute to mental processes." This bland characterization would be acceptable to some AI workers, especially amongst those focusing on the production of technological tools for commercial purposes. But many others would favour a more controversial definition, seeing AI as the science of intelligence in general-or, more accurately, as the intellectual core of cognitive science. As such, its goal is to provide a systematic theory that can explain (and perhaps enable us to replicate) both the general categories of intentionality and the diverse psychological capacities grounded in them. (Boden, 1990b, pp. 1-2)
       Because the ability to store data somewhat corresponds to what we call memory in human beings, and because the ability to follow logical procedures somewhat corresponds to what we call reasoning in human beings, many members of the cult have concluded that what computers do somewhat corresponds to what we call thinking. It is no great difficulty to persuade the general public of that conclusion since computers process data very fast in small spaces well below the level of visibility; they do not look like other machines when they are at work. They seem to be running along as smoothly and silently as the brain does when it remembers and reasons and thinks. On the other hand, those who design and build computers know exactly how the machines are working down in the hidden depths of their semiconductors. Computers can be taken apart, scrutinized, and put back together. Their activities can be tracked, analyzed, measured, and thus clearly understood-which is far from possible with the brain. This gives rise to the tempting assumption on the part of the builders and designers that computers can tell us something about brains, indeed, that the computer can serve as a model of the mind, which then comes to be seen as some manner of information processing machine, and possibly not as good at the job as the machine. (Roszak, 1994, pp. xiv-xv)
       The inner workings of the human mind are far more intricate than the most complicated systems of modern technology. Researchers in the field of artificial intelligence have been attempting to develop programs that will enable computers to display intelligent behavior. Although this field has been an active one for more than thirty-five years and has had many notable successes, AI researchers still do not know how to create a program that matches human intelligence. No existing program can recall facts, solve problems, reason, learn, and process language with human facility. This lack of success has occurred not because computers are inferior to human brains but rather because we do not yet know in sufficient detail how intelligence is organized in the brain. (Anderson, 1995, p. 2)

    Historical dictionary of quotations in cognitive science > Artificial Intelligence

  • 10 Epistemology

       1) Beyond Psychophysiology and Sociology and History of Science There Is Nothing for Epistemology to Do
       If we have psychophysiology to cover causal mechanisms, and the sociology and history of science to note the occasions on which observation sentences are invoked or dodged in constructing and dismantling theories, then epistemology has nothing to do. (Rorty, 1979, p. 225)
       But I think that at this point it may be more useful to say rather that epistemology still goes on, though in a new setting and a clarified status. Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science. It studies a natural phenomenon, viz, a physical human subject. This human subject is accorded a certain experimentally controlled input-certain patterns of irradiation in assorted frequencies, for instance-and in the fullness of time the subject delivers as output a description of the three-dimensional external world and its history. The relation between the meager input and the torrential output is a relation that we are prompted to study for somewhat the same reasons that always prompted epistemology; namely, in order to see how evidence relates to theory, and in what ways one's theory of nature transcends any available evidence. (Quine, quoted in Royce & Rozeboom, 1972, p. 18)
       3) The Assumption That Cognitive Psychology Has Epistemological Import Can Be Challenged
        Only the assumption, that one day the various taxonomies put together by, for example, Chomsky, Piaget, Lévi-Strauss, Marx, and Freud will all flow together and spell out one great Universal Language of Nature... would suggest that cognitive psychology had epistemological import. But that suggestion would still be as misguided as the suggestion that, since we may predict everything by knowing enough about matter in motion, a completed neurophysiology will help us demonstrate Galileo's superiority to his contemporaries. The gap between explaining ourselves and justifying ourselves is just as great whether a programming language or a hardware language is used in the explanations. (Rorty, 1979, p. 249)

    Historical dictionary of quotations in cognitive science > Epistemology

  • 11 Stevens, Stanley Smith

    b. 4 November 1906 Ogden, Utah, USA
    d. 18 January 1973 Cambridge, Massachusetts, USA
     American psychophysicist, proponent of "Stevens' Law" of sensory magnitude, and developer of the technology of hearing and acoustics.
     Of Mormon origins, Stevens received his PhD in physiology from Harvard in 1933. After a further fellowship in physiology and a research fellowship in physics, he became an instructor in experimental psychology. At the beginning of the Second World War he founded the Psycho-Acoustic Laboratory at Harvard, which grew into the Laboratory of Psychophysics, and in 1962 he became the first Professor of Psychophysics.
    Originally his research concentrated on sound and communication, but it later enlarged to embrace the whole range of sensory phenomena. It was his earlier studies that established the law relating sensory magnitude to stimulus magnitude. His studies of the loudness scale and its relationship to the decibel scale were significant in the development of the electronic hearing aid.
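     The law referred to here is now usually called Stevens' power law. In its standard textbook form (stated here for reference rather than quoted from Stevens), the perceived magnitude \psi grows as a power function of the physical stimulus magnitude \varphi:
         \psi = k \, \varphi^{n}
     where k is a scale constant and the exponent n depends on the sensory modality.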
    Principal Honours and Distinctions
    National Academy of Sciences 1946. Society of Experimental Psychologists Warren Medal 1943. American Psychological Association Science Award 1960.
    Bibliography
    1938, Hearing: Its Psychology and Physiology.
    Further Reading
    1951, Handbook of Experimental Psychology.
    MG

    Biographical history of technology > Stevens, Stanley Smith

  • 12 Leavitt, Harold J.

    (b. 1922) Gen Mgt
     U.S. psychologist and academic. Researcher with an interest in organization behavior and psychology, originator of Leavitt’s Diamond, and author of Managerial Psychology (1958).

    The ultimate business dictionary > Leavitt, Harold J.

  • 13 Lewin, Kurt

    (1890–1947) Gen Mgt
     German-born social psychologist. Known for studies of leadership styles and group decision making, developer of force field analysis with a linked change management model, and pioneer of action research and the T-Group (see sensitivity training) approach.
    Lewin was a professor of philosophy and psychology at Berlin University until 1932 when he fled from the Nazis to the United States. He was professor of child psychology at the Child Welfare Research Station in Iowa until 1944. After leaving Iowa, Lewin worked at MIT, with Douglas McGregor among others, founding a research center for group dynamics.

    The ultimate business dictionary > Lewin, Kurt

  • 14 Behaviorism

       A person is changed by the contingencies of reinforcement under which he behaves; he does not store the contingencies. In particular, he does not store copies of the stimuli which have played a part in the contingencies. There are no "iconic representations" in his mind; there are no "data structures stored in his memory"; he has no "cognitive map" of the world in which he has lived. He has simply been changed in such a way that stimuli now control particular kinds of perceptual behavior. (Skinner, 1974, p. 84)
       Psychology as the behaviorist views it is a purely objective natural science. Its theoretical goal is the prediction and control of behavior. Introspection forms no essential part of its method nor is the scientific value of its data dependent upon the readiness with which they lend themselves to interpretation in terms of consciousness. The behaviorist, in his efforts to get a unitary scheme of animal response, recognizes no dividing line between man and brute. The behavior of man, with all its refinement and complexity, forms only a part of the behaviorist's total scheme of investigation. (Watson, quoted in Fancher, 1979, p. 319)

    Historical dictionary of quotations in cognitive science > Behaviorism

  • 15 Cognitivism

       Cognitivism in psychology and philosophy is roughly the position that intelligent behavior can (only) be explained by appeal to internal "cognitive processes." (Haugeland, 1981a, p. 243)
       Cognitive science is an interdisciplinary effort drawing on psychology and linguistics, and philosophy. Emboldened by an apparent convergence of interests, some scientists in these fields have chosen not to reject mental functions out of hand as the behaviorists did. Instead, they have relied on the concept of mental representations and on a set of assumptions collectively called the functionalist positions. From this viewpoint, people behave according to knowledge made up of symbolic mental representations. Cognition consists of the manipulation of these symbols. Psychological phenomena are described in terms of functional processes.
       The efficacy of such processes resides in the possibility of interpreting items as symbols in an abstract and well-defined way, according to a set of unequivocal rules. Such a set of rules constitutes what is known as a syntax.
       The exercise of these syntactical rules is a form of computation.... Computation is assumed to be largely independent of the structure and the mode of development of the nervous system, just as a piece of computer software can run on different machines with different architectures and is thus "independent" of them....
       This point of view-called cognitivism by some-has had a great vogue and has prompted a burst of psychological work of great interest and value. Accompanying it have been a set of remarkable ideas.... I cannot overemphasize the degree to which these ideas or their variants pervade modern science.... But I must also add that the cognitivist enterprise rests on a set of unexamined assumptions. One of its most curious deficiencies is that it makes only marginal reference to the biological foundations that underlie the mechanisms it purports to explain. The result is a scientific deviation as great as that of the behaviorism it has attempted to supplant. (Edelman, 1992, pp. 13-14)

    Historical dictionary of quotations in cognitive science > Cognitivism

  • 16 Computers

       The brain has been compared to a digital computer because the neuron, like a switch or valve, either does or does not complete a circuit. But at that point the similarity ends. The switch in the digital computer is constant in its effect, and its effect is large in proportion to the total output of the machine. The effect produced by the neuron varies with its recovery from [the] refractory phase and with its metabolic state. The number of neurons involved in any action runs into millions so that the influence of any one is negligible.... Any cell in the system can be dispensed with.... The brain is an analogical machine, not digital. Analysis of the integrative activities will probably have to be in statistical terms. (Lashley, quoted in Beach, Hebb, Morgan & Nissen, 1960, p. 539)
        It is essential to realize that a computer is not a mere "number cruncher," or supercalculating arithmetic machine, although this is how computers are commonly regarded by people having no familiarity with artificial intelligence. Computers do not crunch numbers; they manipulate symbols.... Digital computers, originally developed with mathematical problems in mind, are in fact general purpose symbol manipulating machines....
       The terms "computer" and "computation" are themselves unfortunate, in view of their misleading arithmetical connotations. The definition of artificial intelligence previously cited-"the study of intelligence as computation"-does not imply that intelligence is really counting. Intelligence may be defined as the ability creatively to manipulate symbols, or process information, given the requirements of the task in hand. (Boden, 1981, pp. 15, 16-17)
       The task is to get computers to explain things to themselves, to ask questions about their experiences so as to cause those explanations to be forthcoming, and to be creative in coming up with explanations that have not been previously available. (Schank, 1986, p. 19)
       In What Computers Can't Do, written in 1969 (2nd edition, 1972), the main objection to AI was the impossibility of using rules to select only those facts about the real world that were relevant in a given situation. The "Introduction" to the paperback edition of the book, published by Harper & Row in 1979, pointed out further that no one had the slightest idea how to represent the common sense understanding possessed even by a four-year-old. (Dreyfus & Dreyfus, 1986, p. 102)
       A popular myth says that the invention of the computer diminishes our sense of ourselves, because it shows that rational thought is not special to human beings, but can be carried on by a mere machine. It is a short stop from there to the conclusion that intelligence is mechanical, which many people find to be an affront to all that is most precious and singular about their humanness.
       In fact, the computer, early in its career, was not an instrument of the philistines, but a humanizing influence. It helped to revive an idea that had fallen into disrepute: the idea that the mind is real, that it has an inner structure and a complex organization, and can be understood in scientific terms. For some three decades, until the 1940s, American psychology had lain in the grip of the ice age of behaviorism, which was antimental through and through. During these years, extreme behaviorists banished the study of thought from their agenda. Mind and consciousness, thinking, imagining, planning, solving problems, were dismissed as worthless for anything except speculation. Only the external aspects of behavior, the surface manifestations, were grist for the scientist's mill, because only they could be observed and measured....
       It is one of the surprising gifts of the computer in the history of ideas that it played a part in giving back to psychology what it had lost, which was nothing less than the mind itself. In particular, there was a revival of interest in how the mind represents the world internally to itself, by means of knowledge structures such as ideas, symbols, images, and inner narratives, all of which had been consigned to the realm of mysticism. (Campbell, 1989, p. 10)
       [Our artifacts] only have meaning because we give it to them; their intentionality, like that of smoke signals and writing, is essentially borrowed, hence derivative. To put it bluntly: computers themselves don't mean anything by their tokens (any more than books do)-they only mean what we say they do. Genuine understanding, on the other hand, is intentional "in its own right" and not derivatively from something else. (Haugeland, 1981a, pp. 32-33)
        The debate over the possibility of computer thought will never be won or lost; it will simply cease to be of interest, like the previous debate over man as a clockwork mechanism. (Bolter, 1984, p. 190)
        It takes us a long time to emotionally digest a new idea. The computer is too big a step, and too recently made, for us to quickly recover our balance and gauge its potential. It's an enormous accelerator, perhaps the greatest one since the plow, twelve thousand years ago. As an intelligence amplifier, it speeds up everything-including itself-and it continually improves because its heart is information or, more plainly, ideas. We can no more calculate its consequences than Babbage could have foreseen antibiotics, the Pill, or space stations.
       Further, the effects of those ideas are rapidly compounding, because a computer design is itself just a set of ideas. As we get better at manipulating ideas by building ever better computers, we get better at building even better computers-it's an ever-escalating upward spiral. The early nineteenth century, when the computer's story began, is already so far back that it may as well be the Stone Age. (Rawlins, 1997, p. 19)
        According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion than before. But according to strong AI the computer is not merely a tool in the study of the mind; rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states. And according to strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations. (Searle, 1981b, p. 353)
       What makes people smarter than machines? They certainly are not quicker or more precise. Yet people are far better at perceiving objects in natural scenes and noting their relations, at understanding language and retrieving contextually appropriate information from memory, at making plans and carrying out contextually appropriate actions, and at a wide range of other natural cognitive tasks. People are also far better at learning to do these things more accurately and fluently through processing experience.
       What is the basis for these differences? One answer, perhaps the classic one we might expect from artificial intelligence, is "software." If we only had the right computer program, the argument goes, we might be able to capture the fluidity and adaptability of human information processing. Certainly this answer is partially correct. There have been great breakthroughs in our understanding of cognition as a result of the development of expressive high-level computer languages and powerful algorithms. However, we do not think that software is the whole story.
        In our view, people are smarter than today's computers because the brain employs a basic computational architecture that is more suited to deal with a central aspect of the natural information processing tasks that people are so good at.... These tasks generally require the simultaneous consideration of many pieces of information or constraints. Each constraint may be imperfectly specified and ambiguous, yet each can play a potentially decisive role in determining the outcome of processing. (McClelland, Rumelhart & Hinton, 1986, pp. 3-4)
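        A minimal sketch of that "many soft constraints at once" idea, written for this entry rather than taken from the PDP volumes, treats units as hypotheses, weights as mutually supporting or conflicting constraints, and lets repeated parallel updates settle the network; all names and numbers below are invented.

import math

units = ["A", "B", "C"]
weights = {("A", "B"): +1.0,   # A and B support each other
           ("A", "C"): -1.0,   # A and C conflict
           ("B", "C"): -1.0}   # B and C conflict

def w(i, j):
    # constraints are symmetric; missing pairs carry no constraint
    return weights.get((i, j), weights.get((j, i), 0.0))

external = {"A": 0.5, "B": 0.0, "C": 0.2}       # weak, ambiguous evidence
act = {u: 0.0 for u in units}

for _ in range(50):                              # synchronous relaxation steps
    net = {u: external[u] + sum(w(u, v) * act[v] for v in units if v != u)
           for u in units}
    act = {u: math.tanh(net[u]) for u in units}

print({u: round(a, 2) for u, a in act.items()})
# A and B settle "on" while C is pushed "off": every constraint, including the
# imperfect external evidence, played some role in the outcome.
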

    Historical dictionary of quotations in cognitive science > Computers

  • 17 History

       For, as I take it, Universal History, the history of what man has accomplished in this world, is at bottom the History of the great Men who have worked here. They were the leaders of men, these great ones; the modellers, patterns, and in a wide sense creators, of whatsoever the general mass of men contrived to do or attain; all things that we see standing accomplished in the world are properly the outer material result, the practical realisation and embodiment, of Thoughts that dwelt in the great Men sent into the world: the soul of the world's history, it may justly be considered, were the history of these. (Carlyle, 1966, p. 1)
       It is generally thought to be of importance to a man that he should know himself: where knowing himself means knowing not his merely personal peculiarities, the things that distinguish him from other men, but his nature as a man.... Knowing yourself means knowing what you can do; and since nobody knows what he can do until he tries, the only clue to what man can do is what man has done. The value of history, then, is that it teaches us what man has done and thus what man is. (Collingwood, 1972, p. 10)
       To regard [psychology] as rising above the sphere of history, and establishing the permanent and unchanging laws of human nature, is therefore possible only to a person who mistakes the transient conditions of a certain historical age for the permanent conditions of human life. (Collingwood, 1972, p. 224)

    Historical dictionary of quotations in cognitive science > History

  • 18 Information Processing

        The term "information processing" originated in the late fifties in the computer field as a general descriptive term that seemed somewhat less contingent and parochial than "computer science," which also came into use during the same period. Thus, it was the name of choice for two of the encompassing professional organizations formed at the time: the International Federation of Information Processing Societies and the American Federation of Information Processing Societies. Although the transfer of the phrase from activities of computers to parallel activities of human beings undoubtedly occurred independently in a number of heads, the term was originally identified pretty closely with computer simulation of cognitive processes... ; that is, with the kind of effort from which arose the theory in this book. (Newell & Simon, 1972, p. 888)
       It was because the activities of the computer itself seemed in some ways akin to cognitive processes. Computers accept information, manipulate symbols, store items in "memory" and retrieve them again, classify inputs, recognize patterns and so on.... Indeed the assumptions that underlie most contemporary work on information processing are surprisingly like those of nineteenth century introspective psychology, though without introspection itself. (Neisser, 1976, pp. 5, 7)
        The processor was assumed to be rational, and attention was directed to the logical nature of problem solving strategies. The "mature western mind" was presumed to be one that, in abstracting knowledge from the idiosyncrasies of particular everyday experience, employed Aristotelian laws of logic. When applied to categories, this meant that to know a category was to have an abstracted clear-cut, necessary, and sufficient criteria for category membership. If other thought processes, such as imagery, ostensive definition, reasoning by analogy to particular instances, or the use of metaphors were considered at all, they were usually relegated to lesser beings such as women, children, primitive people, or even to nonhumans. (Rosch & Lloyd, 1978, p. 2)

    Historical dictionary of quotations in cognitive science > Information Processing

  • 19 Introspection

       1) Experimental Introspection Is the One Reliable Method of Knowing Ourselves
       When we are trying to understand the mental processes of a child or a dog or an insect as shown by conduct and action, the outward signs of mental processes,... we must always fall back upon experimental introspection... [;] we cannot imagine processes in another mind that we do not find in our own. Experimental introspection is thus our one reliable method of knowing ourselves; it is the sole gateway to psychology. (Titchener, 1914, p. 32)
       There is a somewhat misleading point of view that one's own experience provides a sufficient understanding of mental life for scientific purposes. Indeed, early in the history of experimental psychology, the main method for studying cognition was introspection. By observing one's own mind, the argument went, one could say how one carried out cognitive activities....
       Yet introspection failed to be a good technique for the elucidation of mental processes in general. There are two simple reasons for this. First, so many things which we can do seem to be quite unrelated to conscious experience. Someone asks you your name. You do not know how you retrieve it, yet obviously there is some process by which the retrieval occurs. In the same way, when someone speaks to you, you understand what they say, but you do not know how you came to understand. Yet somehow processes take place in which words are picked out from the jumble of sound waves which reach your ears, in-built knowledge of syntax and semantics gives it meaning, and the significance of the message comes to be appreciated. Clearly, introspection is not of much use here, but it is undeniable that understanding language is as much a part of mental life as is thinking.
        As if these arguments were not enough, it is also the case that introspective data are notoriously difficult to evaluate. Because it is private to the experiencer, an experience may be difficult to convey in words to somebody else. Many early introspective protocols were very confusing to read and, even worse, the kinds of introspection reported tended to conform to the theoretical categories used in different laboratories. Clearly, what was needed was both a change in experimental method and a different (non-subjective) theoretical framework to describe mental life. (Sanford, 1987, pp. 2-3)

    Historical dictionary of quotations in cognitive science > Introspection

  • 20 Logic

       My initial step... was to attempt to reduce the concept of ordering in a sequence to that of logical consequence, so as to proceed from there to the concept of number. To prevent anything intuitive from penetrating here unnoticed, I had to bend every effort to keep the chain of inference free of gaps. In attempting to comply with this requirement in the strictest possible way, I found the inadequacy of language to be an obstacle. (Frege, 1972, p. 104)
       I believe I can make the relation of my 'conceptual notation' to ordinary language clearest if I compare it to the relation of the microscope to the eye. The latter, because of the range of its applicability and because of the ease with which it can adapt itself to the most varied circumstances, has a great superiority over the microscope. Of course, viewed as an optical instrument it reveals many imperfections, which usually remain unnoticed only because of its intimate connection with mental life. But as soon as scientific purposes place strong requirements upon sharpness of resolution, the eye proves to be inadequate.... Similarly, this 'conceptual notation' is devised for particular scientific purposes; and therefore one may not condemn it because it is useless for other purposes. (Frege, 1972, pp. 104-105)
       To sum up briefly, it is the business of the logician to conduct an unceasing struggle against psychology and those parts of language and grammar which fail to give untrammeled expression to what is logical. He does not have to answer the question: How does thinking normally take place in human beings? What course does it naturally follow in the human mind? What is natural to one person may well be unnatural to another. (Frege, 1979, pp. 6-7)
       We are very dependent on external aids in our thinking, and there is no doubt that the language of everyday life-so far, at least, as a certain area of discourse is concerned-had first to be replaced by a more sophisticated instrument, before certain distinctions could be noticed. But so far the academic world has, for the most part, disdained to master this instrument. (Frege, 1979, pp. 6-7)
       There is no reproach the logician need fear less than the reproach that his way of formulating things is unnatural.... If we were to heed those who object that logic is unnatural, we would run the risk of becoming embroiled in interminable disputes about what is natural, disputes which are quite incapable of being resolved within the province of logic. (Frege, 1979, p. 128)
       [L]inguists will be forced, internally as it were, to come to grips with the results of modern logic. Indeed, this is apparently already happening to some extent. By "logic" is not meant here recursive function-theory, California model-theory, constructive proof-theory, or even axiomatic set-theory. Such areas may or may not be useful for linguistics. Rather under "logic" are included our good old friends, the homely locutions "and," "or," "if-then," "if and only if," "not," "for all x," "for some x," and "is identical with," plus the calculus of individuals, event-logic, syntax, denotational semantics, and... various parts of pragmatics.... It is to these that the linguist can most profitably turn for help. These are his tools. And they are "clean tools," to borrow a phrase of the late J. L. Austin in another context, in fact, the only really clean ones we have, so that we might as well use them as much as we can. But they constitute only what may be called "baby logic." Baby logic is to the linguist what "baby mathematics" (in the phrase of Murray Gell-Mann) is to the theoretical physicist-very elementary but indispensable domains of theory in both cases. (Martin, 1969, pp. 261-262)
       There appears to be no branch of deductive inference that requires us to assume the existence of a mental logic in order to do justice to the psychological phenomena. To be logical, an individual requires, not formal rules of inference, but a tacit knowledge of the fundamental semantic principle governing any inference; a deduction is valid provided that there is no way of interpreting the premises correctly that is inconsistent with the conclusion. Logic provides a systematic method for searching for such counter-examples. The empirical evidence suggests that ordinary individuals possess no such methods. (Johnson-Laird, quoted in Mehler, Walker & Garrett, 1982, p. 130)
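An illustrative sketch, not part of the dictionary: the semantic principle Johnson-Laird invokes (a deduction is valid provided that no way of interpreting the premises correctly is inconsistent with the conclusion) can be made concrete for propositional arguments as a brute-force search over truth-value assignments. The function name counter_example and the two example arguments below are the editor's own, introduced purely for illustration.

    # Editor's sketch (not from the dictionary): search for a counter-example
    # to a propositional deduction. A deduction is valid iff no assignment of
    # truth values makes every premise true and the conclusion false.
    from itertools import product

    def counter_example(premises, conclusion, variables):
        """Return an interpretation making all premises true and the
        conclusion false, or None if the deduction is valid. Premises and
        conclusion are functions from an interpretation (a dict) to bool."""
        for values in product([True, False], repeat=len(variables)):
            interp = dict(zip(variables, values))
            if all(p(interp) for p in premises) and not conclusion(interp):
                return interp  # premises hold, conclusion fails
        return None  # no counter-example: the deduction is valid

    # "p or q" and "not p", therefore "q" (disjunctive syllogism): valid.
    print(counter_example([lambda i: i["p"] or i["q"], lambda i: not i["p"]],
                          lambda i: i["q"], ["p", "q"]))   # None
    # "p or q", therefore "p": invalid; a counter-example is found.
    print(counter_example([lambda i: i["p"] or i["q"]],
                          lambda i: i["p"], ["p", "q"]))    # {'p': False, 'q': True}

On this reading, Johnson-Laird's point is not that people lack the semantic principle but that they possess no such systematic search procedure for counter-examples.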
       The fundamental paradox of logic [that "there is no class (as a totality) of those classes which, each taken as a totality, do not belong to themselves" (Russell to Frege, 16 June 1902, in van Heijenoort, 1967, p. 125)] is with us still, bequeathed by Russell-by way of philosophy, mathematics, and even computer science-to the whole of twentieth-century thought. Twentieth-century philosophy would begin not with a foundation for logic, as Russell had hoped in 1900, but with the discovery in 1901 that no such foundation can be laid. (Everdell, 1997, p. 184)
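A standard reconstruction of the paradox quoted in Russell's letter, added here only as an editor's illustration: naive comprehension licenses the class of all classes that do not belong to themselves, and asking whether that class belongs to itself yields the contradiction.

    % Editor's reconstruction of Russell's paradox (not dictionary text).
    \[
      R \;=\; \{\, x \mid x \notin x \,\}
      \qquad\Longrightarrow\qquad
      R \in R \;\Longleftrightarrow\; R \notin R .
    \]
    % Hence no class of all classes that do not belong to themselves can exist
    % as a totality, which is the "fundamental paradox" Everdell describes.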

    Historical dictionary of quotations in cognitive science > Logic

See also in other dictionaries:

  • Psychology — (from Greek ψῡχή, psȳkhē, breath, life, soul; and λογία, logia) is an academic and applied discipline involving the scientific study of mental processes and behavior. Psychologists study such phenomena as perception, cognition, emotion …   Wikipedia

  • PSYCHOLOGY — PSYCHOLOGY, the science of the mind or of mental phenomena and activities. Psychological Concepts in the Bible Psychology has a long past, but only a short history (H. Ebbinghaus, Abriss der Psychologie, 1908). Nowhere is this aphorism better… …   Encyclopedia of Judaism

  • Psychology — The science which treats of the soul and its operations …   Catholic encyclopedia

  • psychology — [sī käl′ə jē] n. pl. psychologies [ModL psychologia: see PSYCHO & LOGY] 1. a) the science dealing with the mind and with mental and emotional processes b) the science of human and animal behavior 2. the sum of the actions, traits, attitudes,… …   English World dictionary

  • Psychology — Psy*chol o*gy, n. pl. {Psychologies}. [Psycho + logy: cf. F. psychologie. See {Psychical}.] The science of the human soul; specifically, the systematic or scientific knowledge of the powers and functions of the human soul, so far as they are… …   The Collaborative International Dictionary of English

  • Psychology — affective computing, affective forecasting, amygdala hijack, attentional blink, bibliotherapy, brain fingerprinting, busy brain …   New words

  • psychology — 1650s, study of the soul, probably coined mid 16c. in Germany by Melanchthon as Mod.L. psychologia, from Gk. psykhe breath, spirit, soul (see PSYCHE (Cf. psyche)) + logia study of (see LOGY (Cf. logy)). Meaning study of the mind first recorded… …   Etymology dictionary

  • psychology — [n] study of the mind; emotional and mental constitution attitude, behaviorism, medicine, mental make up, mental processes, personality study, psych*, science of the mind, therapy, way of thinking*, where head is at*; concepts 349,360,410 …   New thesaurus

  • psychology — ► NOUN 1) the scientific study of the human mind and its functions. 2) the mental characteristics or attitude of a person. 3) the mental factors governing a situation or activity. DERIVATIVES psychologist noun …   English terms dictionary

  • psychology — /suy kol euh jee/, n., pl. psychologies. 1. the science of the mind or of mental states and processes. 2. the science of human and animal behavior. 3. the sum or characteristics of the mental states and processes of a person or class of persons,… …   Universalium

  • psychology — Variously defined as the science of behaviour or the science of mind, psychology emerged as a distinct discipline in the second half of the nineteenth century, with the work of researchers such as Wilhelm Wundt (1832-1920) who founded the first… …   Dictionary of sociology
