
the information is entered in the computer

  • 1 enter

    'entə
    1) (to go or come in: Enter by this door.) entrar
    2) (to come or go into (a place): He entered the room.) entrar (en)
    3) (to give the name of (another person or oneself) for a competition etc: He entered for the race; I entered my pupils for the examination.) inscribir(se)
    4) (to write (one's name etc) in a book etc: Did you enter your name in the visitors' book?) registrar
    5) (to start in: She entered his employment last week.) comenzar
    - enter on/upon
    enter vb
    1. entrar
    2. presentarse / inscribirse
    3. anotar
    4. introducir
    tr['entəʳ]
    1 (gen) entrar en
    2 (join) ingresar en; (school etc) matricularse en; (army etc) alistarse en
    3 (participate) participar en, tomar parte en; (register) inscribirse en
    how many people have entered the race? ¿cuántos se han inscrito en la carrera?
    4 (write down, record) anotar, apuntar
    have you entered it in the account? ¿lo has anotado en la cuenta?
    5 formal use (present for consideration, submit) formular, presentar
    1 (gen) entrar
    2 (theatre) entrar en escena
    IDIOMATIC EXPRESSION
    to enter into the spirit of something entrar en el ambiente de algo
    to enter somebody's head / enter somebody's mind pasarse por la cabeza de alguien, ocurrírsele a alguien
    enter ['ɛntər] vt
    1) : entrar en, entrar a
    2) begin: entrar en, comenzar, iniciar
    3) record: anotar, inscribir, dar entrada a
    4) join: entrar en, alistarse en, hacerse socio de
    enter vi
    1) : entrar
    2)
    to enter into : entrar en, firmar (un acuerdo), entablar (negociaciones, etc.)
    v.
    entrar v.
    ingresar v.
    introducir (datos) v.

    I
    1. ['entər, 'entə(r)]
    1)
    a) «room/house/country» entrar en, entrar a (esp AmL)

    to enter port «ship» — tomar puerto

    b) (penetrate) entrar en
    2) (begin) «period/phase» entrar en
    3)
    a) (join) «army» alistarse en, entrar en; «firm/organization» entrar en, incorporarse a

    to enter the priesthood — hacerse* sacerdote

    b) (begin to take part in) «war/negotiations» entrar en; «debate/dispute» sumarse a
    c) «student/candidate» presentar
    d) «race» inscribirse* (para tomar parte) en
    4)
    a) (record - in register) inscribir*; (- in ledger, book) anotar, dar* entrada a
    b) (Comput) dar* entrada a, introducir*
    5) ( Law)

    to enter a plea of guilty/not guilty — declararse culpable/inocente


    2.
    vi

    1)

    enter! — ¡adelante! or ¡pase!

    2)

    to enter (FOR something) «for competition/race» inscribirse* (en algo); «for examination» presentarse (a algo)

    Phrasal Verbs:

    II
    noun ( Comput) intro m
    ['entə(r)]
    1. VT
    1) (=go into, come into) [+ room, country, tunnel] entrar en; [+ bus, train] subir a

    the ship entered harbour — el barco entró en el puerto

    the thought never entered my head — jamás se me ocurrió, jamás se me pasó por la cabeza

    to enter hospital frm — ingresar en el hospital

    2) (=penetrate) [+ market] introducirse en; (sexually) penetrar
    3) (=join) [+ army, navy] alistarse en, enrolarse en; [+ college, school] entrar en; [+ company, organization] incorporarse a, entrar a formar parte de; [+ profession] ingresar en, entrar en; [+ discussion, conversation] unirse a, intervenir en; [+ war] entrar en

    he entered the church — se hizo sacerdote

    he decided to enter a monastery — decidió hacerse monje

    he entered politics at a young age — se metió en la política cuando era joven

    4) (=go in for) [+ live competition, exam] presentarse a; [+ race, postal competition] participar en, tomar parte en
    5) (=enrol) [+ pupil] (for school) matricular, inscribir; (for examination) presentar

    how many students are you entering this year? — ¿a cuántos alumnos presentas este año?

    to enter sth/sb for sth: he entered his son for Eton — matriculó or inscribió a su hijo en Eton

    6) (=write down) [+ name] escribir, apuntar; [+ claim, request] presentar, formular; (Econ) [+ amount, transaction] registrar, anotar; (Comm) [+ order] registrar, anotar
    7) (=begin) entrar en
    8) (Comput) [+ data] introducir
    9) (Jur)

    to enter a plea of guilty/not guilty — declararse culpable/no culpable

    2. VI
    1) (=come in, go in) entrar

    enter! frm — ¡adelante!, ¡pase!

    2) (Theat) entrar en escena

    enter, stage left — entra en escena por la izquierda del escenario

    3)

    to enter for [+ live competition] (=put name down for) inscribirse en; (=take part in) presentarse a; [+ race, postal competition] (=put name down for) inscribirse en; (=take part in) participar en

    are you going to enter for the exam? — ¿te vas a presentar al examen?


    English-Spanish dictionary > enter

  • 2 Artificial Intelligence

       In my opinion, none of [these programs] does even remote justice to the complexity of human mental processes. Unlike men, "artificially intelligent" programs tend to be single minded, undistractable, and unemotional. (Neisser, 1967, p. 9)
       Future progress in [artificial intelligence] will depend on the development of both practical and theoretical knowledge.... As regards theoretical knowledge, some have sought a unified theory of artificial intelligence. My view is that artificial intelligence is (or soon will be) an engineering discipline since its primary goal is to build things. (Nilsson, 1971, pp. vii-viii)
       Most workers in AI [artificial intelligence] research and in related fields confess to a pronounced feeling of disappointment in what has been achieved in the last 25 years. Workers entered the field around 1950, and even around 1960, with high hopes that are very far from being realized in 1972. In no part of the field have the discoveries made so far produced the major impact that was then promised.... In the meantime, claims and predictions regarding the potential results of AI research had been publicized which went even farther than the expectations of the majority of workers in the field, whose embarrassments have been added to by the lamentable failure of such inflated predictions....
       When able and respected scientists write in letters to the present author that AI, the major goal of computing science, represents "another step in the general process of evolution"; that possibilities in the 1980s include an all-purpose intelligence on a human-scale knowledge base; that awe-inspiring possibilities suggest themselves based on machine intelligence exceeding human intelligence by the year 2000 [one has the right to be skeptical]. (Lighthill, 1972, p. 17)
       4) Just as Astronomy Succeeded Astrology, the Discovery of Intellectual Processes in Machines Should Lead to a Science, Eventually
       Just as astronomy succeeded astrology, following Kepler's discovery of planetary regularities, the discoveries of these many principles in empirical explorations on intellectual processes in machines should lead to a science, eventually. (Minsky & Papert, 1973, p. 11)
       Many problems arise in experiments on machine intelligence because things obvious to any person are not represented in any program. One can pull with a string, but one cannot push with one.... Simple facts like these caused serious problems when Charniak attempted to extend Bobrow's "Student" program to more realistic applications, and they have not been faced up to until now. (Minsky & Papert, 1973, p. 77)
       What do we mean by [a symbolic] "description"? We do not mean to suggest that our descriptions must be made of strings of ordinary language words (although they might be). The simplest kind of description is a structure in which some features of a situation are represented by single ("primitive") symbols, and relations between those features are represented by other symbols-or by other features of the way the description is put together. (Minsky & Papert, 1973, p. 11)
       [AI is] the use of computer programs and programming techniques to cast light on the principles of intelligence in general and human thought in particular. (Boden, 1977, p. 5)
       The word you look for and hardly ever see in the early AI literature is the word knowledge. They didn't believe you have to know anything, you could always rework it all.... In fact 1967 is the turning point in my mind when there was enough feeling that the old ideas of general principles had to go.... I came up with an argument for what I called the primacy of expertise, and at the time I called the other guys the generalists. (Moses, quoted in McCorduck, 1979, pp. 228-229)
       9) Artificial Intelligence Is Psychology in a Particularly Pure and Abstract Form
       The basic idea of cognitive science is that intelligent beings are semantic engines-in other words, automatic formal systems with interpretations under which they consistently make sense. We can now see why this includes psychology and artificial intelligence on a more or less equal footing: people and intelligent computers (if and when there are any) turn out to be merely different manifestations of the same underlying phenomenon. Moreover, with universal hardware, any semantic engine can in principle be formally imitated by a computer if only the right program can be found. And that will guarantee semantic imitation as well, since (given the appropriate formal behavior) the semantics is "taking care of itself" anyway. Thus we also see why, from this perspective, artificial intelligence can be regarded as psychology in a particularly pure and abstract form. The same fundamental structures are under investigation, but in AI, all the relevant parameters are under direct experimental control (in the programming), without any messy physiology or ethics to get in the way. (Haugeland, 1981b, p. 31)
       There are many different kinds of reasoning one might imagine:
        Formal reasoning involves the syntactic manipulation of data structures to deduce new ones following prespecified rules of inference. Mathematical logic is the archetypical formal representation.
        Procedural reasoning uses simulation to answer questions and solve problems. When we use a program to answer What is the sum of 3 and 4? it uses, or "runs," a procedural model of arithmetic.
        Reasoning by analogy seems to be a very natural mode of thought for humans but, so far, difficult to accomplish in AI programs. The idea is that when you ask the question Can robins fly? the system might reason that "robins are like sparrows, and I know that sparrows can fly, so robins probably can fly."
        Generalization and abstraction are also natural reasoning processes for humans that are difficult to pin down well enough to implement in a program. If one knows that Robins have wings, that Sparrows have wings, and that Blue jays have wings, eventually one will believe that All birds have wings. This capability may be at the core of most human learning, but it has not yet become a useful technique in AI....
        Meta-level reasoning is demonstrated by the way one answers the question What is Paul Newman's telephone number? You might reason that "if I knew Paul Newman's number, I would know that I knew it, because it is a notable fact." This involves using "knowledge about what you know," in particular, about the extent of your knowledge and about the importance of certain facts. Recent research in psychology and AI indicates that meta-level reasoning may play a central role in human cognitive processing. (Barr & Feigenbaum, 1981, pp. 146-147)
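       The two mechanisms easiest to make concrete here are formal reasoning (prespecified rules applied syntactically to stored facts) and the robins/sparrows inference. The sketch below is an editorial illustration, not anything from Barr and Feigenbaum: the fact base, the single has-wings rule, and the can_fly fallback are invented to show, under those assumptions, roughly how forward chaining and a crude analogy-style default look in code.

```python
# Illustrative sketch (not from the quoted text) of two of the reasoning
# styles described above: formal forward chaining over explicit facts and
# rules, and a crude robins-are-like-sparrows default inference.

facts = {("bird", "sparrow"), ("bird", "robin"), ("can_fly", "sparrow")}

# Formal reasoning: prespecified rules applied purely syntactically.
# Each rule says: for any X, premise_relation(X) entails conclusion_relation(X).
rules = [
    ("bird", "has_wings"),   # "X is a bird" entails "X has wings"
]

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for rel, arg in list(derived):
                if rel == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

def can_fly(x, kb):
    """Analogy-style fallback: if x is a bird and some other bird is known
    to fly, guess that x probably flies too."""
    if ("can_fly", x) in kb:
        return "yes (known fact)"
    if ("bird", x) in kb and any(
        rel == "can_fly" and ("bird", arg) in kb for rel, arg in kb
    ):
        return "probably (by analogy with another bird)"
    return "unknown"

kb = forward_chain(facts, rules)
print(sorted(kb))                      # now also contains ('has_wings', 'robin')
print("Can robins fly?", can_fly("robin", kb))
```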
       Suffice it to say that programs already exist that can do things-or, at the very least, appear to be beginning to do things-which ill-informed critics have asserted a priori to be impossible. Examples include: perceiving in a holistic as opposed to an atomistic way; using language creatively; translating sensibly from one language to another by way of a language-neutral semantic representation; planning acts in a broad and sketchy fashion, the details being decided only in execution; distinguishing between different species of emotional reaction according to the psychological context of the subject. (Boden, 1981, p. 33)
       Can the synthesis of Man and Machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens-and I have... good reasons for thinking that it must-we have nothing to regret and certainly nothing to fear. (Clarke, 1984, p. 243)
       The thesis of GOFAI... is not that the processes underlying intelligence can be described symbolically... but that they are symbolic. (Haugeland, 1985, p. 113)
        14) Artificial Intelligence Provides a Useful Approach to Psychological and Psychiatric Theory Formation
       It is all very well formulating psychological and psychiatric theories verbally but, when using natural language (even technical jargon), it is difficult to recognise when a theory is complete; oversights are all too easily made, gaps too readily left. This is a point which is generally recognised to be true and it is for precisely this reason that the behavioural sciences attempt to follow the natural sciences in using "classical" mathematics as a more rigorous descriptive language. However, it is an unfortunate fact that, with a few notable exceptions, there has been a marked lack of success in this application. It is my belief that a different approach-a different mathematics-is needed, and that AI provides just this approach. (Hand, quoted in Hand, 1985, pp. 6-7)
       We might distinguish among four kinds of AI.
       Research of this kind involves building and programming computers to perform tasks which, to paraphrase Marvin Minsky, would require intelligence if they were done by us. Researchers in nonpsychological AI make no claims whatsoever about the psychological realism of their programs or the devices they build, that is, about whether or not computers perform tasks as humans do.
       Research here is guided by the view that the computer is a useful tool in the study of mind. In particular, we can write computer programs or build devices that simulate alleged psychological processes in humans and then test our predictions about how the alleged processes work. We can weave these programs and devices together with other programs and devices that simulate different alleged mental processes and thereby test the degree to which the AI system as a whole simulates human mentality. According to weak psychological AI, working with computer models is a way of refining and testing hypotheses about processes that are allegedly realized in human minds.
    ... According to this view, our minds are computers and therefore can be duplicated by other computers. Sherry Turkle writes that the "real ambition is of mythic proportions, making a general purpose intelligence, a mind." (Turkle, 1984, p. 240) The authors of a major text announce that "the ultimate goal of AI research is to build a person or, more humbly, an animal." (Charniak & McDermott, 1985, p. 7)
       Research in this field, like strong psychological AI, takes seriously the functionalist view that mentality can be realized in many different types of physical devices. Suprapsychological AI, however, accuses strong psychological AI of being chauvinistic-of being only interested in human intelligence! Suprapsychological AI claims to be interested in all the conceivable ways intelligence can be realized. (Flanagan, 1991, pp. 241-242)
        16) Determination of Relevance of Rules in Particular Contexts
       Even if the [rules] were stored in a context-free form the computer still couldn't use them. To do that the computer requires rules enabling it to draw on just those [rules] which are relevant in each particular context. Determination of relevance will have to be based on further facts and rules, but the question will again arise as to which facts and rules are relevant for making each particular determination. One could always invoke further facts and rules to answer this question, but of course these must be only the relevant ones. And so it goes. It seems that AI workers will never be able to get started here unless they can settle the problem of relevance beforehand by cataloguing types of context and listing just those facts which are relevant in each. (Dreyfus & Dreyfus, 1986, p. 80)
       Perhaps the single most important idea to artificial intelligence is that there is no fundamental difference between form and content, that meaning can be captured in a set of symbols such as a semantic net. (G. Johnson, 1986, p. 250)
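       Read concretely, the claim is that "meaning" lives entirely in symbols and the labelled relations between them. The fragment below is an editorial sketch of that idea rather than anything from Johnson's text: a toy semantic net held as a Python dictionary keyed by (node, relation) pairs, with lookup inheriting along invented isa links.

```python
# A toy semantic net (illustrative only): meaning carried by symbols and the
# labelled relations between them, stored as (node, relation) -> value pairs.
net = {
    ("canary", "isa"): "bird",
    ("bird", "isa"): "animal",
    ("bird", "has_part"): "wings",
    ("canary", "color"): "yellow",
}

def lookup(node, relation):
    """Follow a relation from a node, inheriting along 'isa' links."""
    while node is not None:
        if (node, relation) in net:
            return net[(node, relation)]
        node = net.get((node, "isa"))  # climb to the parent concept, if any
    return None

print(lookup("canary", "color"))     # yellow (stored directly on canary)
print(lookup("canary", "has_part"))  # wings  (inherited from bird)
```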
        18) The Assumption That the Mind Is a Formal System
       Artificial intelligence is based on the assumption that the mind can be described as some kind of formal system manipulating symbols that stand for things in the world. Thus it doesn't matter what the brain is made of, or what it uses for tokens in the great game of thinking. Using an equivalent set of tokens and rules, we can do thinking with a digital computer, just as we can play chess using cups, salt and pepper shakers, knives, forks, and spoons. Using the right software, one system (the mind) can be mapped into the other (the computer). (G. Johnson, 1986, p. 250)
        19) A Statement of the Primary and Secondary Purposes of Artificial Intelligence
       The primary goal of Artificial Intelligence is to make machines smarter.
       The secondary goals of Artificial Intelligence are to understand what intelligence is (the Nobel laureate purpose) and to make machines more useful (the entrepreneurial purpose). (Winston, 1987, p. 1)
       The theoretical ideas of older branches of engineering are captured in the language of mathematics. We contend that mathematical logic provides the basis for theory in AI. Although many computer scientists already count logic as fundamental to computer science in general, we put forward an even stronger form of the logic-is-important argument....
       AI deals mainly with the problem of representing and using declarative (as opposed to procedural) knowledge. Declarative knowledge is the kind that is expressed as sentences, and AI needs a language in which to state these sentences. Because the languages in which this knowledge usually is originally captured (natural languages such as English) are not suitable for computer representations, some other language with the appropriate properties must be used. It turns out, we think, that the appropriate properties include at least those that have been uppermost in the minds of logicians in their development of logical languages such as the predicate calculus. Thus, we think that any language for expressing knowledge in AI systems must be at least as expressive as the first-order predicate calculus. (Genesereth & Nilsson, 1987, p. viii)
        21) Perceptual Structures Can Be Represented as Lists of Elementary Propositions
       In artificial intelligence studies, perceptual structures are represented as assemblages of description lists, the elementary components of which are propositions asserting that certain relations hold among elements. (Chase & Simon, 1988, p. 490)
       Artificial intelligence (AI) is sometimes defined as the study of how to build and/or program computers to enable them to do the sorts of things that minds can do. Some of these things are commonly regarded as requiring intelligence: offering a medical diagnosis and/or prescription, giving legal or scientific advice, proving theorems in logic or mathematics. Others are not, because they can be done by all normal adults irrespective of educational background (and sometimes by non-human animals too), and typically involve no conscious control: seeing things in sunlight and shadows, finding a path through cluttered terrain, fitting pegs into holes, speaking one's own native tongue, and using one's common sense. Because it covers AI research dealing with both these classes of mental capacity, this definition is preferable to one describing AI as making computers do "things that would require intelligence if done by people." However, it presupposes that computers could do what minds can do, that they might really diagnose, advise, infer, and understand. One could avoid this problematic assumption (and also side-step questions about whether computers do things in the same way as we do) by defining AI instead as "the development of computers whose observable performance has features which in humans we would attribute to mental processes." This bland characterization would be acceptable to some AI workers, especially amongst those focusing on the production of technological tools for commercial purposes. But many others would favour a more controversial definition, seeing AI as the science of intelligence in general-or, more accurately, as the intellectual core of cognitive science. As such, its goal is to provide a systematic theory that can explain (and perhaps enable us to replicate) both the general categories of intentionality and the diverse psychological capacities grounded in them. (Boden, 1990b, pp. 1-2)
       Because the ability to store data somewhat corresponds to what we call memory in human beings, and because the ability to follow logical procedures somewhat corresponds to what we call reasoning in human beings, many members of the cult have concluded that what computers do somewhat corresponds to what we call thinking. It is no great difficulty to persuade the general public of that conclusion since computers process data very fast in small spaces well below the level of visibility; they do not look like other machines when they are at work. They seem to be running along as smoothly and silently as the brain does when it remembers and reasons and thinks. On the other hand, those who design and build computers know exactly how the machines are working down in the hidden depths of their semiconductors. Computers can be taken apart, scrutinized, and put back together. Their activities can be tracked, analyzed, measured, and thus clearly understood-which is far from possible with the brain. This gives rise to the tempting assumption on the part of the builders and designers that computers can tell us something about brains, indeed, that the computer can serve as a model of the mind, which then comes to be seen as some manner of information processing machine, and possibly not as good at the job as the machine. (Roszak, 1994, pp. xiv-xv)
       The inner workings of the human mind are far more intricate than the most complicated systems of modern technology. Researchers in the field of artificial intelligence have been attempting to develop programs that will enable computers to display intelligent behavior. Although this field has been an active one for more than thirty-five years and has had many notable successes, AI researchers still do not know how to create a program that matches human intelligence. No existing program can recall facts, solve problems, reason, learn, and process language with human facility. This lack of success has occurred not because computers are inferior to human brains but rather because we do not yet know in sufficient detail how intelligence is organized in the brain. (Anderson, 1995, p. 2)

    Historical dictionary of quotations in cognitive science > Artificial Intelligence

  • 3 Goldstine, Herman H.

    [br]
    b. 13 September 1913 USA
    [br]
    American mathematician largely responsible for the development of ENIAC, an early electronic computer.
    [br]
    Goldstine studied mathematics at the University of Chicago, Illinois, gaining his PhD in 1936. After teaching mathematics there, he moved to a similar position at the University of Michigan in 1939, becoming an assistant professor. After the USA entered the Second World War, in 1942 he joined the army as a lieutenant in the Ballistic Research Laboratory at the Aberdeen Proving Ground in Maryland. He was then assigned to the Moore School of Electrical Engineering at the University of Pennsylvania, where he was involved with Arthur Burks in building the valve-based Electronic Numerical Integrator and Computer (ENIAC) to compute ballistic tables. The machine was completed in 1946, but prior to this Goldstine had met John von Neumann of the Institute for Advanced Study (IAS) at Princeton, New Jersey, and active collaboration between them had already begun. After the war he joined von Neumann as Assistant Director of the Computer Project at the Institute for Advanced Study, Princeton, becoming its Director in 1954. There he developed the idea of computer flow diagrams and, with von Neumann, built the first computer to use a magnetic drum for data storage. In 1958 he joined IBM as Director of the Mathematical Sciences Department, becoming Director of Development at the IBM Data Processing Headquarters in 1965. Two years later he became a Research Consultant, and in 1969 he became an IBM Research Fellow.
    [br]
    Principal Honours and Distinctions
    Goldstine's many awards include three honorary degrees for his contributions to the development of computers.
    Bibliography
    1946, with A.Goldstine, "The Electronic Numerical Integrator and Computer (ENIAC)", Mathematical Tables and Other Aids to Computation 2:97 (describes the work on ENIAC).
    1946, with A.W.Burks and J.von Neumann, "Preliminary discussions of the logical design of an electronic computing instrument", Princeton Institute for Advanced Studies.
    1972, The Computer from Pascal to von Neumann, Princeton University Press.
    1977, "A brief history of the computer", Proceedings of the American Philosophical Society 121:339.
    Further Reading
    M.Campbell-Kelly & M.R.Williams (eds), 1985, The Moore School Lectures (1946), Charles Babbage Institute Report Series for the History of Computing, Vol 9. M.R.Williams, 1985, A History of Computing Technology, London: Prentice-Hall.
    KF

    Biographical history of technology > Goldstine, Herman H.

  • 4 Williams, Sir Frederic Calland

    [br]
    b. 26 June 1911 Stockport, Cheshire, England
    d. 11 August 1977 Prestbury, Cheshire, England
    [br]
    English electrical engineer who invented the Williams storage cathode ray tube, which was extensively used worldwide as a data memory in the first digital computers.
    [br]
    Following education at Stockport Grammar School, Williams entered Manchester University in 1929, gaining his BSc in 1932 and MSc in 1933. After a short time as a college apprentice with Metropolitan Vickers, he went to Magdalen College, Oxford, to study for a DPhil, which he was awarded in 1936. He returned to Manchester University that year as an assistant lecturer, gaining his DSc in 1939. Following the outbreak of the Second World War he worked for the Scientific Civil Service, initially at the Bawdsey Research Station and then at the Telecommunications Research Establishment at Malvern, Worcestershire. There he was involved in research on non-incandescent amplifiers and diode rectifiers and the development of the first practical radar system capable of identifying friendly aircraft. Later in the war, he devised an automatic radar system suitable for use by fighter aircraft.
    After the war he resumed his academic career at Manchester, becoming Professor of Electrical Engineering and Director of the University Electrotechnical Laboratory in 1946. In the same year he succeeded in developing a data-memory device based on the cathode ray tube, in which the information was stored and read by electron-beam scanning of a charge-retaining target. The Williams storage tube, as it became known, not only found obvious later use as a means of storing single-frame, still television images but proved to be a vital component of the pioneering Manchester University MkI digital computer. Because it enabled both data and program instructions to be stored in the computer, it was soon used worldwide in the development of the early stored-program computers.
    [br]
    Principal Honours and Distinctions
    Knighted 1976. OBE 1945. CBE 1961. FRS 1950. Hon. DSc Durham 1964, Sussex 1971, Wales 1971. First Royal Society of Arts Benjamin Franklin Medal 1957. City of Philadelphia John Scott Award 1960. Royal Society Hughes Medal 1963. Institution of Electrical Engineers Faraday Medal 1972. Institute of Electrical and Electronics Engineers Pioneer Award 1973.
    Bibliography
    Williams contributed papers to many scientific journals, including Proceedings of the Royal Society, Proceedings of the Cambridge Philosophical Society, Journal of the Institution of Electrical Engineers, Proceedings of the Institution of Mechanical Engineers, Wireless Engineer, Post Office Electrical Engineers' Journal. Note especially: 1948, with T.Kilburn, "Electronic digital computers", Nature 162:487; 1949, with T.Kilburn, "A storage system for use with binary digital computing machines", Proceedings of the Institution of Electrical Engineers 96:81; 1975, "Early computers at Manchester University", Radio & Electronic Engineer 45:327. Williams also collaborated in the writing of vols 19 and 20 of the MIT Radiation
    Laboratory Series.
    Further Reading
    B.Randell, 1973, The Origins of Digital Computers, Berlin: Springer-Verlag. M.R.Williams, 1985, A History of Computing Technology, London: Prentice-Hall. See also: Stibitz, George R.; Strachey, Christopher.
    KF

    Biographical history of technology > Williams, Sir Frederic Calland

  • 5 Kemeny, John G.

    [br]
    b. 31 May 1926 Budapest, Hungary
    [br]
    American mathematician and systems programmer, jointly responsible with Thomas Kurtz for the development of the high-level computer language BASIC.
    [br]
    Kemeny entered the USA as an immigrant in 1939. He subsequently became a mathematics lecturer at Dartmouth College, Hanover, New Hampshire, and later became a professor and then Chairman of the Mathematics Department; finally, in 1971, he became President of the College. In 1964, with Thomas Kurtz, he developed the high-level computer language known as BASIC (Beginner's All-purpose Symbolic Instruction Code). It was initially designed for use by students with a time-sharing minicomputer, but it soon became the standard language for microcomputers, frequently being embedded in the computer as "firmware" loaded into a read-only memory (ROM) integrated circuit.
    [br]
    Bibliography
    1963, Programming for a Digital Computer.
    1964, with T.E.Kurtz, BASIC Instruction Manual.
    1968, with T.E.Kurtz, "Dartmouth time-sharing", Science 223.
    Further Reading
    R.L.Wexelblat (ed.), 1981, History of Programming Languages, New York: Academic Press.
    KF

    Biographical history of technology > Kemeny, John G.

  • 6 conversation subject

    Text indicating the topic of a conversation. It is entered by the user or generated by the computer based on conversation information. It is displayed in the conversation title bar or in an alert.

    English-Arabic terms dictionary > conversation subject

  • 7 default location

    Information that describes the current location of a computer. The location is entered manually in Default Location, and can be used by programs to determine the location of a computer when a location sensor is unavailable.

    English-Arabic terms dictionary > default location
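    The behaviour described in this entry amounts to a simple fallback rule: prefer a live sensor reading, otherwise use the manually entered default. The sketch below is a generic, hypothetical Python illustration of that rule only; it does not call the actual Windows location API, and the Location type, read_sensor stub and coordinate values are invented for the example.

```python
# Hypothetical illustration of the fallback described above: prefer a live
# sensor reading, otherwise use the manually entered default location.
# (Generic sketch only; not the actual Windows location API.)
from dataclasses import dataclass
from typing import Optional

@dataclass
class Location:
    latitude: float
    longitude: float
    source: str  # "sensor" or "default"

# The value a user would have typed into a "Default Location" setting.
DEFAULT_LOCATION = Location(47.61, -122.33, source="default")

def read_sensor() -> Optional[Location]:
    """Stand-in for a location sensor; returns None when it is unavailable."""
    return None

def current_location() -> Location:
    """Use the sensor when it answers, otherwise the manually entered default."""
    return read_sensor() or DEFAULT_LOCATION

print(current_location())   # falls back to the default here
```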

  • 8 Berezin, Evelyn

    [br]
    b. 1925 New York, USA
    [br]
    American pioneer in computer technology.
    [br]
    Born into a poor family in the Bronx, New York City, Berezin first majored in business studies but transferred her interest to physics. She graduated in 1946 and then, with the aid of an Atomic Energy Commission fellowship, she obtained her PhD in cosmic ray physics at New York University. When the fellowship expired, opportunities in the developing field of electronic data processing seemed more promising than those in physics. Berezin entered the firm of Electronic Computer Corporation in 1951 and was asked to "build a computer", although few at that time had actually seen one; the result was the Elecom 200. In 1953, for Underwood Corporation, she designed the first office computer, although it was never marketed, as Underwood sold out to Olivetti.
    Berezin's next position was as head of logic design for Teleregister Corporation in the late 1950s. Here, she led a team specializing in the design of on-line systems. Her most notable achievement was the design of a nationwide online computer reservation system for United Airlines, the first system of this kind and the precursor of similar on-line systems. It was installed in the early 1960s and was the first large non-military on-line interactive system.
    In the 1960s Berezin moved to the Digitronics Corporation as manager of logic design; her work there resulted in the first high-speed commercial digital communications terminal. Also in the 1960s, her involvement in Data Secretary, a challenger to the IBM editing typewriter, makes it possible to regard her as one of the pioneers of word processing. In 1976 Berezin transferred from the electronic data and computing field to that of financial management.
    [br]
    Further Reading
    A.Stanley, 1993, Mothers and Daughters of Invention, Metuchen, NJ: Scarecrow Press, 651–3.
    LRD

    Biographical history of technology > Berezin, Evelyn

  • 9 Jobs, Steven Paul

    [br]
    b. 24 February 1955 San Francisco, California, USA
    [br]
    American engineer who, with Stephen Wozniak, built the first home computer.
    [br]
    Moving with his family to Mountain View, California, in 1960, Jobs entered Homestead High School, Cupertino, in 1968. At about the same time he joined the Explorers' Club for young engineers set up by Hewlett-Packard Company. As a result of this contact, three years later he met up with Stephen Wozniak, who was working at Hewlett-Packard, and helped him with the construction of the first home computer based on the 8-bit MOS Technology 6502 microprocessor. In 1973 he went to Reed College, Portland, Oregon, to study engineering, but he dropped out in the second semester and spent time in India. On his return he obtained a job with Atari to design video games, but he soon met up again with Wozniak, who had been unable to interest Hewlett-Packard in commercial development of his home computer. Together they therefore founded Apple Computer Company to make and market it, and found a willing buyer in the Byte Shop chain store. The venture proved successful, and with the help of a financial backer, Mike Markkula, a second version, the Apple II, was developed in 1976. With Jobs as Chairman, the company experienced phenomenal growth and by 1983 had 4,700 employees and an annual turnover of US$983 million. The company then began to run into difficulties and John Sculley, a former president of Pepsi-Cola, was brought in to manage the business while Jobs concentrated on developing new computers, including the Apple Macintosh. Eventually a power struggle developed, and with Sculley now Chairman and Chief Executive, Jobs resigned in 1985 to set up his own computer company, NeXT.
    [br]
    Principal Honours and Distinctions
    First National Technology Medal (with Wozniak) 1985.
    Further Reading
    J.S.Young, 1988, Steve Jobs: The Journey is the Reward: Scott Foresman & Co. (includes a biography and a detailed account of Apple Company).
    M.Moritz, 1984, The Little Kingdom. The Private Story of Apple Computers.
    KF

    Biographical history of technology > Jobs, Steven Paul

  • 10 Babbage, Charles

    [br]
    b. 26 December 1791 Walworth, Surrey, England
    d. 18 October 1871 London, England
    [br]
    English mathematician who invented the forerunner of the modern computer.
    [br]
    Charles Babbage was the son of a banker, Benjamin Babbage, and was a sickly child who had a rather haphazard education at private schools near Exeter and later at Enfield. Even as a child, he was inordinately fond of algebra, which he taught himself. He was conversant with several advanced mathematical texts, so by the time he entered Trinity College, Cambridge, in 1811, he was ahead of his tutors. In his third year he moved to Peterhouse, whence he graduated in 1814, taking his MA in 1817. He first contributed to the Philosophical Transactions of the Royal Society in 1815, and was elected a fellow of that body in 1816. He was one of the founders of the Astronomical Society in 1820 and served in high office in it.
    While he was still at Cambridge, in 1812, he had the first idea of calculating numerical tables by machinery. This was his first difference engine, which worked on the principle of repeatedly adding a common difference. He built a small model of an engine working on this principle between 1820 and 1822, and in July of the latter year he read an enthusiastically received note about it to the Astronomical Society. The following year he was awarded the Society's first gold medal. He submitted details of his invention to Sir Humphry Davy, President of the Royal Society; the Society reported favourably and the Government became interested, and following a meeting with the Chancellor of the Exchequer Babbage was awarded a grant of £1,500. Work proceeded and was carried on for four years under the direction of Joseph Clement.
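    The principle of "repeatedly adding a common difference" can be made concrete in a few lines. The sketch below is a modern, editorial rendering rather than Babbage's own notation: once the initial finite differences of a polynomial are known, every further table value follows by additions alone, which is the operation the difference engine mechanized. The sample polynomial (x^2 + x + 41) and the number of steps are arbitrary choices for the illustration.

```python
# Illustrative sketch of the difference-engine principle: once the initial
# finite differences of a polynomial are seeded, every further table entry
# is produced by repeated addition alone (the sample polynomial is arbitrary).

def difference_table(poly, steps):
    """Tabulate poly(0), poly(1), ..., poly(steps) using only additions."""
    degree = len(poly) - 1  # polynomial given as a coefficient list [a0, a1, ...]

    def value(x):
        return sum(c * x**k for k, c in enumerate(poly))

    # Seed column: f(0), then the first, second, ... differences at 0.
    column = [value(x) for x in range(degree + 1)]
    diffs = []
    while column:
        diffs.append(column[0])
        column = [b - a for a, b in zip(column, column[1:])]

    # From here on, additions only: each pass adds every difference into the
    # value above it, turning the column into the next entry of the table.
    results = []
    for _ in range(steps + 1):
        results.append(diffs[0])
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return results

# f(x) = 41 + x + x^2, given as coefficients [a0, a1, a2]
print(difference_table([41, 1, 1], 6))   # [41, 43, 47, 53, 61, 71, 83]
```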
    In 1827 Babbage went abroad for a year on medical advice. There he studied foreign workshops and factories, and in 1832 he published his observations in On the Economy of Machinery and Manufactures. While abroad, he received the news that he had been appointed Lucasian Professor of Mathematics at Cambridge University. He held the Chair until 1839, although he neither resided in College nor gave any lectures. For this he was paid between £80 and £90 a year! Differences arose between Babbage and Clement. Manufacture was moved from Clement's works in Lambeth, London, to new, fireproof buildings specially erected by the Government near Babbage's house in Dorset Square, London. Clement made a large claim for compensation and, when it was refused, withdrew his workers as well as all the special tools he had made up for the job. No work was possible for the next fifteen months, during which Babbage conceived the idea of his "analytical engine". He approached the Government with this, but it was not until eight years later, in 1842, that he received the reply that the expense was considered too great for further backing and that the Government was abandoning the project. This was in spite of the demonstration and perfectly satisfactory operation of a small section of the analytical engine at the International Exhibition of 1862. It is said that the demands made on manufacture in the production of his engines had an appreciable influence in improving the standard of machine tools, whilst similar benefits accrued from his development of a system of notation for the movements of machine elements. His opposition to street organ-grinders was a notable eccentricity; he estimated that a quarter of his mental effort was wasted by the effect of noise on his concentration.
    [br]
    Principal Honours and Distinctions
    FRS 1816. Astronomical Society Gold Medal 1823.
    Bibliography
    Babbage wrote eighty works, including: 1864, Passages from the Life of a Philosopher.
    July 1822, Letter to Sir Humphry Davy, PRS, on the Application of Machinery to the purpose of calculating and printing Mathematical Tables.
    Further Reading
    1961, Charles Babbage and His Calculating Engines: Selected Writings by Charles Babbage and Others, eds Philip and Emily Morrison, New York: Dover Publications.
    IMcN

    Biographical history of technology > Babbage, Charles

  • 11 data

    English-Russian dictionary of computer security > data

See also in other dictionaries:

  • computer science — computer scientist. the science that deals with the theory and methods of processing information in digital computers, the design of computer hardware and software, and the applications of computers. [1970 75] * * * Study of computers, their… …   Universalium

  • Computer Fraud and Abuse Act — The Computer Fraud and Abuse Act is a law passed by the United States Congress in 1986, intended to reduce cracking of computer systems and to address federal computer related offenses. The Act (codified as 18 U.S.C. § 1030) governs… …   Wikipedia

  • Information-based complexity — (IBC) studies optimal algorithms and computational complexity for the continuous problems which arise in physical science, economics, engineering, and mathematical finance. IBC has studied such continuous problems as path integration, partial… …   Wikipedia

  • Computer fraud — is the use of information technology to commit fraud. In the United States, computer fraud is specifically proscribed by the Computer Fraud and Abuse Act, which provides for jail time and fines. …   Wikipedia

  • Information Engineering — (IE) or Information Engineering Methodology (IEM) is an approach to designing and developing information systems. It has a somewhat chequered history that follows two very distinct threads. It is said to have originated in Australia between 1976… …   Wikipedia

  • The Sims 3 — Developer(s): The Sims Studio; Publisher(s): Electronic Arts …   Wikipedia

  • The Mole (U.S. season 2) — The Mole: The Next Betrayal. Country of origin: United States; No. of episodes: 13 …   Wikipedia

  • The Mole (Australia season 4) — The Mole in Paradise. Country of origin: Australia; No. of episodes: 10; Broadcast …   Wikipedia

  • The Fly II — Directed by Chris Walas; produced by Steven Ch …   Wikipedia

  • The Pirates of Penzance — The Pirates of Penzance, or The Slave of Duty, is a comic opera in two acts, with music by Arthur Sullivan and libretto by W. S. Gilbert. It is one of the Savoy Operas. The opera s official premiere was at the Fifth Avenue Theatre in New York… …   Wikipedia

  • The Oklahoma Daily — School: Univer …   Wikipedia
