Saturday, 15 March 2014

Computer Science

Computer Science (abbreviated CS or CompSci) is the scientific and practical approach to computation and its applications. It is the systematic study of the feasibility, structure, expression, and mechanization of the methodical processes (or algorithms) that underlie the acquisition, representation, processing, storage, communication of, and access to information, whether such information is encoded in bits and bytes in a computer memory or transcribed in genes and protein structures in a human cell.[1] A computer scientist specializes in the theory of computation and the design of computational systems.[2]
Its subfields can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory (which explores the fundamental properties of computational and intractable problems), are highly abstract, while fields such as computer graphics emphasize real-world visual applications. Still other fields focus on the challenges in implementing computation. For example, programming language theory considers various approaches to the description of computation, whilst the study of computer programming itself investigates various aspects of the use of programming languages and complex systems. Human-computer interaction considers the challenges in making computers and computations useful, usable, and universally accessible to humans.
Illustrations: a large capital lambda; a plot of a quicksort algorithm; the Utah teapot, representing computer graphics; and a Microsoft mouse, representing human-computer interaction.
Computer science deals with the theoretical foundations of information and computation, together with practical techniques for the implementation and application of these foundations.
Contents
1 History
1.1 Major achievements
2 Philosophy
2.1 Name of the field
3 Areas of computer science
3.1 Theoretical computer science
3.1.1 Theory of computation
3.1.2 Information and coding theory
3.1.3 Algorithms and data structures
3.1.4 Programming language theory
3.1.5 Formal methods
3.2 Applied computer science
3.2.1 Artificial intelligence
3.2.2 Computer architecture and engineering
3.2.3 Computer graphics and visualization
3.2.4 Computer security and cryptography
3.2.5 Computational science
3.2.6 Computer Networks
3.2.7 Concurrent, parallel and distributed systems
3.2.8 Databases
3.2.9 Health Informatics
3.2.10 Information science
3.2.11 Software engineering
4 The great insights of computer science
5 Academia
5.1 Conferences
5.2 Journals
6 Education
7 See also
8 Notes
9 References
10 Further reading
11 External links
History

Main article: History of computer science
Charles Babbage is credited with inventing the first mechanical computer.
Ada Lovelace is credited with writing the first algorithm intended for processing on a computer.
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division.
Blaise Pascal designed and constructed the first working mechanical calculator, Pascal's calculator, in 1642.[3] In 1673 Gottfried Leibniz demonstrated a digital mechanical calculator, called the 'stepped reckoner'.[4] He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry[5] when he released his simplified arithmometer, which was the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his difference engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine.[6] He started developing this machine in 1834 and "in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom",[7] making it infinitely programmable.[8] In 1843, during the translation of a French article on the analytical engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first computer program.[9] Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business,[10] to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's analytical engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".[11]
During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.[12] As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s.[13][14] The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.[15] Since practical computers became available, many applications of computing have become distinct areas of study in their own right.
Although many initially believed it impossible that computers themselves could constitute a scientific field of study, in the late fifties the idea gradually became accepted among the greater academic population.[16] It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704[17] and later the IBM 709[18] computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating...if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again".[16] During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.
Time has seen significant improvements in the usability and effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals, to a near-ubiquitous user base. Initially, computers were quite costly, and some degree of human aid was needed for efficient use - in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage.
Major achievements
The German military used the Enigma machine (shown here) during World War II for communication they thought to be secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.[19]
Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society - in fact, along with electronics, it is a founding science of the current epoch of human history called the Information Age and a driver of the Information Revolution, seen as the third major leap in human technological progress after the Industrial Revolution (1750-1850 CE) and the Agricultural Revolution (8000-5000 BCE).
These contributions include:
The start of the "digital revolution," which includes the current Information Age and the Internet.[20]
A formal definition of computation and computability, and proof that there are computationally unsolvable and intractable problems.[21]
The concept of a programming language, a tool for the precise expression of methodological information at various levels of abstraction.[22]
In cryptography, breaking the Enigma code was an important factor contributing to the Allied victory in World War II.[19]
Scientific computing enabled practical evaluation of processes and situations of great complexity, as well as experimentation entirely by software. It also enabled advanced study of the mind, and mapping of the human genome became possible with the Human Genome Project.[20] Distributed computing projects such as Folding@home explore protein folding.
Algorithmic trading has increased the efficiency and liquidity of financial markets by using artificial intelligence, machine learning, and other statistical and numerical techniques on a large scale.[23] High frequency algorithmic trading can also exacerbate volatility.[24]
Computer graphics and computer-generated imagery have become ubiquitous in modern entertainment, particularly in television, cinema, advertising, animation and video games. Even films that feature no explicit CGI are usually "filmed" now on digital cameras, or edited or postprocessed using a digital video editor.[citation needed]
Simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.[citation needed]
Artificial intelligence is becoming increasingly important as it gets more efficient and complex. There are many applications of AI, some of which can be seen at home, such as robotic vacuum cleaners. It is also present in video games and on the modern battlefield in drones, anti-missile systems, and squad support robots.
Philosophy

Main article: Philosophy of computer science
A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics.[25] Peter Denning's working group argued that they are theory, abstraction (modeling), and design.[26] Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).[27]
Name of the field
The term "computer science" appears in a 1959 article in Communications of the ACM,[28] in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921,[29] justifying the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.[30] His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such programs, starting with Purdue in 1962.[31] Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed.[32] Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy,[33] to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM – turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[34] Three months later in the same journal, comptologist was suggested, followed next year by hypologist.[35] The term computics has also been suggested.[36] In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italy, The Netherlands), informática (Spain, Portugal), informatika (Slavic languages) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics of the University of Edinburgh).[37]
A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes."[note 1] The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as philosophy, cognitive science, linguistics, mathematics, physics, statistics, and logic.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[13] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.
The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined.[38] David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[39]
The academic, political, and funding aspects of computer science tend to depend on whether a department was formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.
Areas of computer science

As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.[40][41] CSAB, formerly called Computing Sciences Accreditation Board – which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE-CS)[42] – identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, computer-human interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.[40]
Theoretical computer science
Main article: Theoretical computer science
The broader field of theoretical computer science encompasses both the classical theory of computation and a wide range of other topics that focus on the more abstract, logical, and mathematical aspects of computing.
Theory of computation
Main article: Theory of computation
According to Peter J. Denning, the fundamental question underlying computer science is, "What can be (efficiently) automated?"[13] The study of the theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.
The famous "P=NP?" problem, one of the Millennium Prize Problems,[43] is an open problem in the theory of computation.
Automata theory · Computability theory · Computational complexity theory · Cryptography · Quantum computing theory
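To make the P versus NP contrast concrete, the following minimal Python sketch (with an invented example instance of the subset-sum problem) shows that a proposed solution can be verified quickly, while the obvious way to find one examines exponentially many subsets.

    from itertools import combinations

    def verify(subset, target):
        # Checking a proposed certificate takes time roughly linear in its size.
        return sum(subset) == target

    def brute_force_search(numbers, target):
        # The obvious search examines all 2**n subsets in the worst case.
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return list(subset)
        return None

    numbers = [3, 34, 4, 12, 5, 2]                # invented example instance
    certificate = brute_force_search(numbers, 9)
    print(certificate, verify(certificate, 9))    # [4, 5] True

Whether every problem whose solutions are quick to verify is also quick to solve is exactly the open "P=NP?" question mentioned above.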
Information and coding theory
Main articles: Information theory and Coding theory
Information theory is related to the quantification of information. This was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.[44] Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
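As a small illustration of these ideas, the following Python sketch (with invented example strings) computes the Shannon entropy of a message in bits per symbol, the quantity that bounds how compactly the message's symbols can be encoded on average by any lossless compressor.

    import math
    from collections import Counter

    def shannon_entropy(message: str) -> float:
        # Entropy in bits per symbol, based on the observed symbol frequencies.
        counts = Counter(message)
        total = len(message)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    print(shannon_entropy("aaaaaaab"))   # low entropy: highly compressible
    print(shannon_entropy("abcdefgh"))   # high entropy: about 3 bits per symbol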
Algorithms and data structures
Analysis of algorithms · Algorithms · Data structures · Computational geometry
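The quicksort algorithm pictured above can be written compactly; the following minimal Python sketch is illustrative rather than an optimized implementation, and shows the divide-and-conquer structure behind its average-case O(n log n) and worst-case O(n^2) behaviour.

    def quicksort(items):
        # Recursively partition around a pivot and sort the two sides.
        if len(items) <= 1:
            return items
        pivot = items[len(items) // 2]
        left  = [x for x in items if x < pivot]
        mid   = [x for x in items if x == pivot]
        right = [x for x in items if x > pivot]
        return quicksort(left) + mid + quicksort(right)

    print(quicksort([3, 6, 1, 8, 2, 9, 4]))   # [1, 2, 3, 4, 6, 8, 9]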
Programming language theory
Main article: Programming language theory
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering and linguistics. It is an active research area, with numerous dedicated academic journals.
Type theory · Compiler design · Programming languages
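In the spirit of the type judgment Γ ⊢ x : Int referenced above, the following toy Python sketch type-checks a tiny expression language. The language, its encoding as tuples, and its rules are invented here purely for illustration.

    def type_of(expr, env):
        # env plays the role of the typing context Gamma.
        kind = expr[0]
        if kind == "int":                        # ("int", 3)
            return "Int"
        if kind == "bool":                       # ("bool", True)
            return "Bool"
        if kind == "var":                        # ("var", "x"), looked up in env
            return env[expr[1]]
        if kind == "add":                        # ("add", e1, e2)
            if type_of(expr[1], env) == "Int" and type_of(expr[2], env) == "Int":
                return "Int"
            raise TypeError("operands of + must be Int")
        if kind == "if":                         # ("if", cond, then, else)
            if type_of(expr[1], env) != "Bool":
                raise TypeError("condition must be Bool")
            t1, t2 = type_of(expr[2], env), type_of(expr[3], env)
            if t1 != t2:
                raise TypeError("branches must have the same type")
            return t1
        raise ValueError("unknown expression")

    gamma = {"x": "Int"}
    print(type_of(("add", ("var", "x"), ("int", 1)), gamma))   # Int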
Formal methods
Main article: Formal methods
Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
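As a small illustration of the flavour of such specifications, the following Python sketch annotates integer division by repeated subtraction with a precondition, a loop invariant, and a postcondition written as runtime assertions. The example is invented, and genuine formal methods discharge these obligations by proof rather than by testing.

    def divide(a: int, b: int):
        assert a >= 0 and b > 0                    # precondition
        q, r = 0, a
        while r >= b:
            assert a == q * b + r and r >= 0       # loop invariant
            q, r = q + 1, r - b
        assert a == q * b + r and 0 <= r < b       # postcondition
        return q, r

    print(divide(17, 5))   # (3, 2)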
Applied computer science
Applied computer science aims at identifying computer science concepts that can be used directly to solve real-world problems.
Artificial intelligence
Main article: Artificial intelligence
This branch of computer science aims to or is required to synthesise goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning and communication which are found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence (AI) research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development which require computational understanding and modeling such as finance and economics, data mining and the physical sciences. The starting-point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered although the "Turing Test" is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.
Machine learning · Computer vision · Image processing · Pattern recognition · Cognitive science · Data mining · Evolutionary computation · Information retrieval · Knowledge representation · Natural language processing · Robotics · Medical image computing
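As one concrete example of the machine-learning techniques listed above, the following Python sketch implements a k-nearest-neighbour classifier on a small invented data set: a new point is labelled by a majority vote of its closest training examples.

    from collections import Counter

    def knn_classify(point, training, k=3):
        # training is a list of ((x, y), label) pairs.
        def dist2(p, q):
            return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        nearest = sorted(training, key=lambda item: dist2(point, item[0]))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]

    data = [((1, 1), "red"), ((1, 2), "red"),
            ((5, 5), "blue"), ((6, 5), "blue"), ((5, 6), "blue")]
    print(knn_classify((2, 2), data))   # red
    print(knn_classify((5, 4), data))   # blue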
Computer architecture and engineering
Main articles: Computer architecture and Computer engineering
Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory.[45] The field often involves disciplines of computer engineering and electrical engineering, selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.
Digital logic · Microarchitecture · Multiprocessing · Operating systems · Computer networks · Databases · Information security · Ubiquitous computing · Systems architecture · Compiler design · Programming languages
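At the digital-logic level listed above, a processor's arithmetic circuitry is composed of simple Boolean functions. The following Python sketch (an illustration, not a hardware description) models a one-bit full adder and chains four of them into a 4-bit ripple-carry adder.

    def full_adder(a: int, b: int, carry_in: int):
        # One bit of addition built only from Boolean operations.
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out

    def add_4bit(x, y):
        # x and y are 4-bit numbers given as lists of bits, least significant first.
        result, carry = [], 0
        for a, b in zip(x, y):
            s, carry = full_adder(a, b, carry)
            result.append(s)
        return result, carry

    print(add_4bit([1, 0, 1, 0], [1, 1, 0, 0]))   # 5 + 3 = 8 -> ([0, 0, 0, 1], 0)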
Computer graphics and visualization
Main article: Computer graphics (computer science)
Computer graphics is the study of digital visual content and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.
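A minimal example of such image-data manipulation, using a tiny invented raster, is the conversion of RGB pixels to grayscale with the usual luminance weights:

    def to_grayscale(image):
        # image is a list of rows; each pixel is an (r, g, b) tuple, 0-255.
        return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
                for row in image]

    picture = [[(255, 0, 0), (0, 255, 0)],
               [(0, 0, 255), (255, 255, 255)]]
    print(to_grayscale(picture))   # [[76, 150], [29, 255]]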
Computer security and cryptography
Main articles: Computer security and Cryptography
Computer security is a branch of computer technology whose objective includes the protection of information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Cryptography is the practice and study of hiding information (encryption) and of recovering it (decryption). Modern cryptography is closely related to computer science, since the security of many encryption and decryption algorithms rests on their computational complexity.
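As a toy illustration only, the following Python sketch implements a one-time pad using XOR. With a truly random, never-reused key this scheme is information-theoretically secure, but practical systems rely on vetted cryptographic libraries and protocols rather than hand-rolled code like this.

    import os

    def xor_bytes(data: bytes, key: bytes) -> bytes:
        # XOR each message byte with the corresponding key byte.
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"attack at dawn"
    key = os.urandom(len(message))        # the key must be as long as the message
    ciphertext = xor_bytes(message, key)
    print(xor_bytes(ciphertext, key))     # b'attack at dawn'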
Computational science
Computational science (or scientific computing) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. In practical use, it is typically the application of computer simulation and other forms of computation to problems in various scientific disciplines.
Numerical analysis · Computational physics · Computational chemistry · Bioinformatics
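A small example from the numerical-analysis side of scientific computing is Euler's method for approximating the solution of a differential equation. The sketch below, with an invented test equation dy/dt = -y, compares the approximation with the exact solution e^(-t); the step size controls the trade-off between cost and error.

    import math

    def euler(f, y0, t0, t1, steps):
        # Advance y in small steps using the local slope f(t, y).
        h = (t1 - t0) / steps
        t, y = t0, y0
        for _ in range(steps):
            y += h * f(t, y)
            t += h
        return y

    approx = euler(lambda t, y: -y, y0=1.0, t0=0.0, t1=2.0, steps=1000)
    print(approx, math.exp(-2.0))   # roughly 0.1352 vs 0.1353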
Computer Networks
Main article: Computer network
This branch of computer science deals with the communication of data between computers and with the design and management of the networks that connect computers worldwide.
Concurrent, parallel and distributed systems
Main articles: Concurrency (computer science) and Distributed computing
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the Parallel Random Access Machine model. A distributed system extends the idea of concurrency onto multiple computers connected through a network. Computers within the same distributed system have their own private memory, and information is often exchanged amongst themselves to achieve a common goal.
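The following Python sketch illustrates the classic hazard that motivates concurrency control: several threads updating a shared counter. The lock serializes the read-modify-write critical section; without it, interleaved updates can be lost.

    import threading

    counter = 0
    lock = threading.Lock()

    def worker(iterations):
        global counter
        for _ in range(iterations):
            with lock:            # without the lock, the final total may come up short
                counter += 1

    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                # 400000 when the lock is held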
Databases
Main articles: Database and Database management systems
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages.
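As a minimal illustration, the following Python sketch uses the standard library's SQLite driver to create a relational table, insert rows, and query it with SQL. The table and data are invented for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employee (name TEXT, department TEXT, salary REAL)")
    conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                     [("Ada", "Research", 120.0),
                      ("Grace", "Research", 130.0),
                      ("Alan", "Theory", 125.0)])
    query = "SELECT department, AVG(salary) FROM employee GROUP BY department"
    for row in conn.execute(query):
        print(row)        # ('Research', 125.0) and ('Theory', 125.0)
    conn.close()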
Health Informatics
Main article: Health Informatics
Health Informatics in computer science deals with computational techniques for solving problems in health care.
Information science
Main article: Information science
Information retrieval · Knowledge representation · Natural language processing · Human–computer interaction
Software engineering
Main article: Software engineering
Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software; it is concerned not only with the creation of new software but also with its internal maintenance and arrangement. Both computer applications software engineers and computer systems software engineers are projected to be among the fastest growing occupations from 2008 to 2018.
See also: computer programming
The great insights of computer science

The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science:[46]
Leibniz's, Boole's, Alan Turing's, Shannon's, & Morse's insight: There are only 2 objects that a computer has to deal with in order to represent "anything"
All the information about any computable problem can be represented using only 0 & 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on"/"off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.).
See also: digital physics
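A small Python illustration of this insight shows the same two symbols encoding both a number and text (the values are arbitrary examples):

    number = 2014
    text = "CS"
    print(format(number, "b"))                        # 11111011110
    print([format(ord(ch), "08b") for ch in text])    # ['01000011', '01010011']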
Alan Turing's insight: There are only 5 actions that a computer has to perform in order to do "anything"
Every algorithm can be expressed in a language for a computer consisting of only 5 basic instructions:
* move left one location
* move right one location
* print 0 at current-location
* print 1 at current-location
* erase current-location[citation needed]
See also: Turing machine
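The following Python sketch simulates such a machine using only the primitive actions listed above plus a halt state; the transition table is an invented example that walks right over its input and inverts every bit.

    def run_turing_machine(tape, transitions, state="start"):
        # The tape is stored sparsely; unwritten cells read as a blank (" ").
        tape = dict(enumerate(tape))
        head = 0
        while state != "halt":
            symbol = tape.get(head, " ")
            write, move, state = transitions[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    invert = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", " "): (" ", "R", "halt"),
    }
    print(run_turing_machine("1011", invert))   # "0100" plus the trailing blank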
Boehm and Jacopini's insight: There are only 3 ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything"
Only 3 rules are needed to combine any set of basic instructions into more complex ones:
sequence:
first do this; then do that
selection:
IF such-&-such is the case,
THEN do this
ELSE do that
repetition:
WHILE such & such is the case DO this
Note that the 3 rules of Boehm's and Jacopini's insight can be further simplified with the use of goto (which means it is more elementary than structured programming).
See also: Elementary function arithmetic#Friedman's grand conjecture
Academia

Conferences
Further information: List of computer science conferences
Conferences are strategic events in academic computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Proceedings of these conferences are an important part of the computer science literature.
Journals
Further information: Category:Computer science journals
Education

Some universities teach computer science as a theoretical study of computation and algorithmic reasoning. These programs often feature the theory of computation, analysis of algorithms, formal methods, concurrency theory, databases, computer graphics, and systems analysis, among others. They typically also teach computer programming, but treat it as a vessel for the support of other fields of computer science rather than a central focus of high-level study. The ACM/IEEE-CS Joint Curriculum Task Force "Computing Curriculum 2005" (and its 2008 update)[47] gives guidelines for university curricula.
Other colleges and universities, as well as secondary schools and vocational programs that teach computer science, emphasize the practice of advanced programming rather than the theory of algorithms and computation in their computer science curricula. Such curricula tend to focus on those skills that are important to workers entering the software industry. The process aspects of computer programming are often referred to as software engineering.
While computer science professions increasingly drive the U.S. economy, computer science education is absent in most American K-12 curricula. A report entitled "Running on Empty: The Failure to Teach K-12 Computer Science in the Digital Age" was released in October 2010 by Association for Computing Machinery (ACM) and Computer Science Teachers Association (CSTA), and revealed that only 14 states have adopted significant education standards for high school computer science. The report also found that only nine states count high school computer science courses as a core academic subject in their graduation requirements. In tandem with "Running on Empty", a new non-partisan advocacy coalition - Computing in the Core (CinC) - was founded to influence federal and state policy, such as the Computer Science Education Act, which calls for grants to states to develop plans for improving computer science education and supporting computer science teachers.
Within the United States a gender gap in computer science education has been observed as well. Research conducted by the WGBH Educational Foundation and the Association for Computing Machinery (ACM) revealed that more than twice as many high school boys considered computer science to be a “very good” or “good” college major than high school girls.[48] In addition, the high school Advanced Placement (AP) exam for computer science has displayed a disparity in gender. Compared to other AP subjects it has the lowest number of female participants, with a composition of about 15 percent women.[49] This gender gap in computer science is further witnessed at the college level, where 31 percent of undergraduate computer science degrees are earned by women and only 8 percent of computer science faculty consists of women.[50] According to an article published by the Epistemic Games Group in August 2012, the number of women graduates in the computer science field has declined to 13 percent.[51]
See also

Main article: Outline of computer science
Computer science portal
Academic genealogy of computer scientists
Informatics (academic field)
List of academic computer science departments
List of computer science conferences
List of computer scientists
List of publications in computer science
List of pioneers in computer science
List of software engineering topics
List of unsolved problems in computer science
Women in computing
Computer science at Wikipedia Books
Notes

1. See the entry "Computer science" on Wikiquote for the history of this quotation.
References

1. http://www.cs.bu.edu/AboutCS/WhatIsCS.pdf.
2. "WordNet Search - 3.1". Wordnetweb.princeton.edu. Retrieved 2012-05-14.
3. "Blaise Pascal". School of Mathematics and Statistics, University of St Andrews, Scotland.
4. "A Brief History of Computing".
5. In 1851.
6. "Science Museum - Introduction to Babbage". Archived from the original on 2006-09-08. Retrieved 2006-09-24.
7. Anthony Hyman, Charles Babbage, pioneer of the computer, 1982.
8. "The introduction of punched cards into the new engine was important not only as a more convenient form of control than the drums, or because programs could now be of unlimited extent, and could be stored and repeated without the danger of introducing errors in setting the machine by hand; it was important also because it served to crystallize Babbage's feeling that he had invented something really new, something much more than a sophisticated calculating machine." Bruce Collier, 1970.
9. "A Selection and Adaptation From Ada's Notes found in "Ada, The Enchantress of Numbers," by Betty Alexandra Toole Ed.D. Strawberry Press, Mill Valley, CA". Retrieved 2006-05-04.
10. "In this sense Aiken needed IBM, whose technology included the use of punched cards, the accumulation of numerical data, and the transfer of numerical data from one register to another", Bernard Cohen, p. 44 (2000).
11. Brian Randell, p. 187, 1975.
12. The Association for Computing Machinery (ACM) was founded in 1947.
13. Denning, P.J. (2000). "Computer Science: The Discipline" (PDF). Encyclopedia of Computer Science. Archived from the original on 2006-05-25.
14. "Some EDSAC statistics". Cl.cam.ac.uk. Retrieved 2011-11-19.
15. Computer science pioneer Samuel D. Conte dies at 85, July 1, 2002.
16. Levy, Steven (1984). Hackers: Heroes of the Computer Revolution. Doubleday. ISBN 0-385-19195-2.
17. "IBM 704 Electronic Data Processing System - CHM Revolution". Computerhistory.org. Retrieved 2013-07-07.
18. http://archive.computerhistory.org/resources/text/IBM/IBM.709.1957.102646304.pdf
19. David Kahn, The Codebreakers, 1967, ISBN 0-684-83130-9.
20. http://www.cis.cornell.edu/Dean/Presentations/Slides/bgu.pdf
21. Constable, R.L. (March 2000). Computer Science: Achievements and Challenges circa 2000 (PDF).[dead link]
22. Abelson, H.; G.J. Sussman with J. Sussman (1996). Structure and Interpretation of Computer Programs (2nd ed.). MIT Press. ISBN 0-262-01153-0. "The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology — the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects."
23. Black box traders are on the march. The Telegraph, August 26, 2006.
24. "The Impact of High Frequency Trading on an Electronic Market". Papers.ssrn.com. doi:10.2139/ssrn.1686004. Retrieved 2012-05-14.
25. Wegner, P. (October 13–15, 1976). "Research paradigms in computer science". Proceedings of the 2nd International Conference on Software Engineering. San Francisco, California, United States: IEEE Computer Society Press, Los Alamitos, CA.
26. Denning, P. J.; Comer, D. E.; Gries, D.; Mulder, M. C.; Tucker, A.; Turner, A. J.; Young, P. R. (Jan 1989). "Computing as a discipline". Communications of the ACM 32: 9–23. doi:10.1145/63238.63239.
27. Eden, A. H. (2007). "Three Paradigms of Computer Science". Minds and Machines 17 (2): 135–167. doi:10.1007/s11023-007-9060-8.
28. Louis Fein (1959). "The Role of the University in Computers, Data Processing, and Related Fields". Communications of the ACM 2 (9): 7–14. doi:10.1145/368424.368427.
29. "Stanford University Oral History". Stanford University. Retrieved 30 May 2013.
30. id., p. 11.
31. Donald Knuth (1972). "George Forsythe and the Development of Computer Science". Comms. ACM.
32. Matti Tedre (2006). The Development of Computer Science: A Sociocultural Perspective, p. 260.
33. Peter Naur (1966). "The science of datalogy". Communications of the ACM 9 (7): 485. doi:10.1145/365719.366510.
34. Communications of the ACM 1(4): p. 6.
35. Communications of the ACM 2(1): p. 4.
36. IEEE Computer 28(12): p. 136.
37. P. Mounier-Kuhn, L’Informatique en France, de la seconde guerre mondiale au Plan Calcul. L’émergence d’une science, Paris, PUPS, 2010, ch. 3 & 4.
38. Tedre, M. (2011). "Computing as a Science: A Survey of Competing Viewpoints". Minds and Machines 21 (3): 361–387. doi:10.1007/s11023-011-9240-4.
39. Parnas, D. L. (1998). Annals of Software Engineering 6: 19–37. doi:10.1023/A:1018949113292, p. 19: "Rather than treat software engineering as a subfield of computer science, I treat it as an element of the set, Civil Engineering, Mechanical Engineering, Chemical Engineering, Electrical Engineering, [...]"
40. Computing Sciences Accreditation Board (28 May 1997). "Computer Science as a Profession". Archived from the original on 2008-06-17. Retrieved 2010-05-23.
41. Committee on the Fundamentals of Computer Science: Challenges and Opportunities, National Research Council (2004). Computer Science: Reflections on the Field, Reflections from the Field. National Academies Press. ISBN 978-0-309-09301-9.
42. "Csab, Inc". Csab.org. 2011-08-03. Retrieved 2011-11-19.
43. Clay Mathematics Institute, P = NP.
44. Collins, Graham P. "Claude E. Shannon: Founder of Information Theory". Scientific American, Inc.
45. Thisted, Ronald A. "Computer Architecture". The University of Chicago. Retrieved 7 April 1997.
46. http://www.cse.buffalo.edu/~rapaport/computation.html
47. "ACM Curricula Recommendations". Retrieved 2012-11-18.
48. http://www.acm.org/membership/NIC.pdf
49. Gilbert, Alorie. "Newsmaker: Computer science's gender gap". CNET News.
50. Dovzan, Nicole. "Examining the Gender Gap in Technology". University of Michigan.
51. "Encouraging the next generation of women in computing". Microsoft Research Connections Team. Retrieved 3 Sep 2013.
"Computer Software Engineer." U.S. Bureau of Labor Statistics. U.S. Bureau of Labor Statistics, n.d. Web. 05 Feb. 2013.
Further reading

Overview
Tucker, Allen B. (2004). Computer Science Handbook (2nd ed.). Chapman and Hall/CRC. ISBN 1-58488-360-X.
"Within more than 70 chapters, every one new or significantly revised, one can find any kind of information and references about computer science one can imagine. [...] all in all, there is absolute nothing about Computer Science that can not be found in the 2.5 kilogram-encyclopaedia with its 110 survey articles [...]." (Christoph Meinel, Zentralblatt MATH)
van Leeuwen, Jan (1994). Handbook of Theoretical Computer Science. The MIT Press. ISBN 0-262-72020-5.
"[...] this set is the most unique and possibly the most useful to the [theoretical computer science] community, in support both of teaching and research [...]. The books can be used by anyone wanting simply to gain an understanding of one of these areas, or by someone desiring to be in research in a topic, or by instructors wishing to find timely information on a subject they are teaching outside their major areas of expertise." (Rocky Ross, SIGACT News)
Ralston, Anthony; Reilly, Edwin D.; Hemmendinger, David (2000). Encyclopedia of Computer Science (4th ed.). Grove's Dictionaries. ISBN 1-56159-248-X.
"Since 1976, this has been the definitive reference work on computer, computing, and computer science. [...] Alphabetically arranged and classified into broad subject areas, the entries cover hardware, computer systems, information and data, software, the mathematics of computing, theory of computation, methodologies, applications, and computing milieu. The editors have done a commendable job of blending historical perspective and practical reference information. The encyclopedia remains essential for most public and academic library reference collections." (Joe Accardin, Northeastern Illinois Univ., Chicago)
Edwin D. Reilly (2003). Milestones in Computer Science and Information Technology. Greenwood Publishing Group. ISBN 978-1-57356-521-9.
Selected papers
Knuth, Donald E. (1996). Selected Papers on Computer Science. CSLI Publications, Cambridge University Press.
Collier, Bruce. The little engine that could've: The calculating machines of Charles Babbage. Garland Publishing Inc. ISBN 0-8240-0043-9.
Cohen, Bernard (2000). Howard Aiken, Portrait of a Computer Pioneer. The MIT Press. ISBN 978-0-262-53179-5.
Randell, Brian (1973). The Origins of Digital Computers, Selected Papers. Springer-Verlag. ISBN 3-540-06169-X.
"Covering a period from 1966 to 1993, its interest lies not only in the content of each of these papers — still timely today — but also in their being put together so that ideas expressed at different times complement each other nicely." (N. Bernard, Zentralblatt MATH)
Articles
Peter J. Denning. Is computer science science?, Communications of the ACM, April 2005.
Peter J. Denning, Great principles in computing curricula, Technical Symposium on Computer Science Education, 2004.
Research evaluation for computer science, Informatics Europe report[dead link]. Shorter journal version: Bertrand Meyer, Christine Choppy, Jan van Leeuwen and Jorgen Staunstrup, Research evaluation for computer science, in Communications of the ACM, vol. 52, no. 4, pp. 31–34, April 2009.
Curriculum and classification
Association for Computing Machinery. 1998 ACM Computing Classification System. 1998.
Joint Task Force of Association for Computing Machinery (ACM), Association for Information Systems (AIS) and IEEE Computer Society (IEEE-CS). Computing Curricula 2005: The Overview Report. September 30, 2005.
Norman Gibbs, Allen Tucker. "A model curriculum for a liberal arts degree in computer science". Communications of the ACM, Volume 29 Issue 3, March 1986.
External links

Computer science on the Open Directory Project
Scholarly Societies in Computer Science
Best Papers Awards in Computer Science since 1996
Photographs of computer scientists by Bertrand Meyer
EECS.berkeley.edu
Bibliography and academic search engines
CiteSeerx: search engine, digital library and repository for scientific and academic papers with a focus on computer and information science.
DBLP Computer Science Bibliography: computer science bibliography website hosted at Universität Trier, in Germany.
The Collection of Computer Science Bibliographies
Professional organizations
Association for Computing Machinery
IEEE Computer Society
Informatics Europe
Misc
Computer Science Stack Exchange: a community-run question-and-answer site for computer science
What is computer science
Is computer science science?

Friday, 14 March 2014

Computer

                                               
A popular expansion of the word, "common operating machine particularly used for trade, education and research", is a backronym: "computer" is not an acronym, and the actual origin of the word is described in the Etymology section below.
"Computer technology" and "Computer system" redirect here. For the company, see Computer Technology Limited. For other uses, see Computer (disambiguation) and Computer system (disambiguation).
A computer is a general purpose device that can be programmed to carry out a set of arithmetic or logical operations. Since a sequence of operations can be readily changed, the computer can solve more than one kind of problem.
Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices allow information to be retrieved from an external source, and the result of operations saved and retrieved.
In World War II, mechanical analog computers were used for specialized military applications. During this time the first electronic digital computers were developed. Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).[1]
Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into mobile devices, and mobile computers can be powered by small batteries. Personal computers in their various forms are icons of the Information Age and are what most people think of as “computers.” However, the embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are the most numerous.
Contents
1 Etymology
2 History
2.1 First general-purpose computing device
2.2 Analog computers
2.3 The modern computer
2.3.1 Electromechanical computers
2.3.2 Electronic programmable computer
2.3.3 Stored program computer
2.4 Transistor computers
2.5 The integrated circuit
3 Programs
3.1 Stored program architecture
3.2 Bugs
3.3 Machine code
3.4 Programming language
3.4.1 Low-level languages
3.4.2 Higher-level languages
3.5 Program design
4 Components
4.1 Control unit
4.2 Arithmetic logic unit (ALU)
4.3 Memory
4.4 Input/output (I/O)
4.5 Multitasking
4.6 Multiprocessing
4.7 Networking and the Internet
4.8 Computer architecture paradigms
5 Misconceptions
5.1 Required technology
6 Further topics
6.1 Artificial intelligence
6.2 Hardware
6.2.1 History of computing hardware
6.2.2 Other hardware topics
6.3 Software
6.4 Languages
6.5 Professions and organizations
7 Degradation
8 See also
9 Notes
10 References
11 External links
Etymology

The first use of the word “computer” was recorded in 1613 in a book called “The yong mans gleanings” by English writer Richard Braithwait: “I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number.” It referred to a person who carried out calculations, or computations, and the word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations.[3]
History

Main article: History of computing hardware
Rudimentary calculating devices first appeared in antiquity and mechanical calculating aids were invented in the 17th century. The first recorded use of the word "computer" is also from the 17th century, applied to human computers, people who performed calculations, often as employment. The first computer devices were conceived of in the 19th century, and only emerged in their modern form in the 1940s.
First general-purpose computing device


A portion of Babbage's Difference engine.
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer",[4] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.[5][6]
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand, which was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed not only to difficulties of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.
Analog computers


Sir William Thomson's third tide-predicting machine design, 1879-81
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[7]
The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.[8]
The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious.
The modern computer


Alan Turing was the first to conceptualize the modern computer, a device that became known as the Universal Turing machine.
The principle of the modern computer was first described by computer scientist Alan Turing, who set out the idea in his seminal 1936 paper,[9] On Computable Numbers. Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt.
He also introduced the notion of a 'Universal Machine' (now known as a Universal Turing machine), with the idea that such a machine could perform the tasks of any other machine, or in other words, it is provably capable of computing anything that is computable by executing a program stored on tape, allowing the machine to be programmable. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[10] Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
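To make the idea concrete, the following minimal sketch simulates a Turing machine in Python. It is purely illustrative and not taken from the article's sources; the transition table, the tape alphabet and the bit-flipping example machine are all hypothetical choices.

    # A minimal Turing machine simulator (illustrative sketch only).
    # The transition table maps (state, symbol) -> (new state, symbol to write, head move).
    def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
        cells = dict(enumerate(tape))        # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            symbol = cells.get(head, blank)
            if (state, symbol) not in transitions:
                break                        # no matching rule: the machine halts
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Hypothetical example machine: flip every bit on the tape, then halt on blank.
    flip = {("start", "0"): ("start", "1", "R"),
            ("start", "1"): ("start", "0", "R")}
    print(run_turing_machine(flip, "01101"))   # prints 10010

Because the transition table is just data handed to the simulator, the same loop can run any machine described in this form, which is the essence of the universal machine idea.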
Electromechanical computers


Replica of Zuse's Z3, the first fully automatic, digital (electromechanical) computer.
Early digital computers were electromechanical: electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.[11]
In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer.[12][13] The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz.[14] Program code and data were stored on punched film. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.[15] The Z3 was probably a Turing-complete machine.
Electronic programmable computer
Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in Dollis Hill in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.[7] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[16] the first "automatic electronic digital computer".[17] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.[18]


Colossus was the first electronic digital programmable computing device, and was used to break German ciphers during World War II.
During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[18] He spent eleven months from early February 1943 designing and building the first Colossus.[19] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[20] and attacked its first message on 5 February.[18]
Colossus was the world's first electronic digital programmable computer.[7] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.[21][22]


ENIAC was the first Turing-complete device, and performed ballistics trajectory calculations for the United States Army.
The US-built ENIAC[23] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus it was much faster and more flexible. It was unambiguously a Turing-complete device and could compute any problem that would fit into its memory. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches.
It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and take square roots. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and containing over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[24]
Stored program computer
Three tall racks containing electronic circuit boards

A section of the Manchester Small-Scale Experimental Machine, the first stored-program computer.
Early computing machines had fixed programs. Changing their function required re-wiring and re-structuring the machine.[18] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report ‘Proposed Electronic Calculator’ was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.[7]


Ferranti Mark 1, c. 1951.
The Manchester Small-Scale Experimental Machine, nicknamed Baby, was the world's first stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[25] It was designed as a testbed for the Williams tube, the first random-access digital storage device.[26] Although the computer was considered "small and primitive" by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer.[27] As soon as the SSEM had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1.
The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.[28] Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[29] In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951[30] and ran the world's first regular routine office computer job.
Transistor computers


A bipolar junction transistor
The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[31] Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955,[32] built by the electronics division of the Atomic Energy Research Establishment at Harwell.[33][34]
The integrated circuit
The next great advance in computing power came with the advent of the integrated circuit. The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[35]
The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[36] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[37] In his patent application of 6 February 1959, Kilby described his new device as “a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated.”[38][39] Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.[40] His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium.
This new development heralded an explosion in the commercial and personal use of computers and led to the invention of themicroprocessor. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[41] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[42]
Programs

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language.
In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.
Stored program architecture
Main articles: Computer program and Computer programming


Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program computer, at the Museum of Science and Industry in Manchester, England
This section applies to most common RAM machine-based computers.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called “jump” instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that “remembers” the location it jumped from and another instruction to return to the instruction following that jump instruction.
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:
      mov #0, sum      ; set sum to 0
      mov #1, num      ; set num to 1
loop: add num, sum     ; add num to sum
      add #1, num      ; add 1 to num
      cmp num, #1000   ; compare num to 1000
      ble loop         ; if num <= 1000, go back to 'loop'
      halt             ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.[43]
Bugs
Main article: Software bug


The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II computer
Errors in computer programs are called “bugs.” They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to “hang,” becoming unresponsive to input such as mouse clicks or keystrokes, to fail completely, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[44]
Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term “bugs” in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[45]
Machine code
In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[46] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
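As an informal illustration of what an assembler does (the mnemonics and opcode numbers below are invented for this sketch and do not belong to any real processor), translation from assembly language to machine code is, at its simplest, a table lookup from mnemonic to numeric opcode:

    # Illustrative only: a made-up instruction set with invented opcode numbers.
    OPCODES = {"ADD": 0x01, "SUB": 0x02, "MULT": 0x03, "JUMP": 0x04, "HALT": 0xFF}

    def assemble(lines):
        """Translate 'MNEMONIC operand ...' lines into a flat list of numbers."""
        program = []
        for line in lines:
            parts = line.split()
            program.append(OPCODES[parts[0]])             # the opcode itself
            program.extend(int(p) for p in parts[1:])     # any numeric operands
        return program

    print(assemble(["ADD 7", "SUB 3", "HALT"]))   # [1, 7, 2, 3, 255]

The resulting list of numbers is exactly the kind of data that, in a stored-program machine, can sit in memory alongside the values it operates on.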


A 1970s punched card containing one line from a FORTRAN program. The card reads: “Z(1) = Y + W(1)” and is labeled “PROJ039” for identification purposes.
Programming language
Main article: Programming language
Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.
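To illustrate the distinction in a very rough way (the two-word command language below is invented for this sketch), an interpreter reads the source text and carries out each command directly at run time, rather than first translating the whole program into machine code:

    # A toy interpreter for a made-up command language (illustration only).
    def interpret(source):
        value = 0
        for line in source.splitlines():
            op, arg = line.split()
            if op == "add":
                value += int(arg)       # each command is executed as it is read
            elif op == "mul":
                value *= int(arg)
        return value

    print(interpret("add 2\nmul 10\nadd 5"))   # 25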
Low-level languages
Main article: Low-level programming language
Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held video game console) cannot understand the machine language of an Intel Pentium or an AMD Athlon 64 computer that might be in a PC.[47]
Higher-level languages
Main article: High-level programming language
Though considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually “compiled” into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[48] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
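For comparison, the task from the earlier assembly-style listing, adding the numbers from 1 to 1,000, looks like this in a high-level language (Python is used here only as an arbitrary example):

    # Add the numbers 1 to 1,000, the same task as the assembly listing above.
    total = 0
    for num in range(1, 1001):
        total += num
    print(total)          # 500500

A compiler for such a language turns this short description into the many low-level instructions the processor actually executes.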
Program design
Designing small programs is relatively simple and involves analysing the problem, collecting inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and providing solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.
Components

Main articles: Central processing unit and Microprocessor
Video demonstrating the standard components of a "slimline" computer
A general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a “1”, and when off it represents a “0” (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
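As a small illustrative sketch (not from the article's sources), logic gates can be modelled as functions of bits, combined so that the outputs of some control the inputs of others; a half adder built from an XOR gate and an AND gate is a classic example:

    # Modelling logic gates as functions of bits (0 or 1); illustrative sketch only.
    def AND(a, b): return a & b
    def XOR(a, b): return a ^ b

    def half_adder(a, b):
        # The XOR gate produces the sum bit, the AND gate the carry bit.
        return XOR(a, b), AND(a, b)

    print(half_adder(1, 1))   # (0, 1): one plus one is binary 10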
The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.
Control unit
Main articles: CPU design and Control unit


Diagram showing how a particular MIPS architecture instruction would be decoded by the control system
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into a series of control signals which activate other parts of the computer.[49] Control systems in advanced computers may change the order of some instructions so as to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[50]
The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU); a minimal sketch of the same cycle in code follows the list:
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
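The following sketch compresses that cycle into a few lines of Python. It is illustrative only: the three-field instruction format, the register file of four cells and the LOAD/ADD/HALT operations are invented for this example, not taken from any real instruction set.

    # A highly simplified fetch-decode-execute loop (illustrative sketch only).
    memory = [
        ("LOAD", 0, 5),        # put the number 5 into register 0
        ("LOAD", 1, 7),        # put the number 7 into register 1
        ("ADD", 2, (0, 1)),    # register 2 = register 0 + register 1
        ("HALT", None, None),
    ]
    registers = [0, 0, 0, 0]
    pc = 0                                 # the program counter

    while True:
        op, dest, arg = memory[pc]         # fetch the instruction the counter points at
        pc += 1                            # increment the program counter
        if op == "LOAD":                   # decode and execute it
            registers[dest] = arg
        elif op == "ADD":
            registers[dest] = registers[arg[0]] + registers[arg[1]]
        elif op == "HALT":
            break
    print(registers[2])                    # 12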
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as “jumps” and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.
Arithmetic logic unit (ALU)
Main article: Arithmetic logic unit
The ALU is capable of performing two classes of operations: arithmetic and logic.[51]
The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other (“is 64 greater than 65?”).
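A small sketch of that idea (illustrative only, and far simpler than how real hardware or libraries do it): an operation the ALU does not support directly, such as multiplication, can be rebuilt from an operation it does support, such as repeated addition.

    # Multiplication expressed purely as repeated addition (illustrative sketch).
    def multiply(a, b):
        # assumes b is a non-negative whole number
        result = 0
        for _ in range(b):
            result += a       # add a to the running total, b times
        return result

    print(multiply(6, 7))     # 42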
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful for creating complicated conditional statements and processing boolean logic.
Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.[52] Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.
Memory
Main article: Computer data storage


Magnetic core memory was the computer memory of choice throughout the 1960s, until it was replaced by semiconductor memory.
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered “address” and can store a single number. The computer can be instructed to “put the number 123 into the cell numbered 1357” or to “add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595.” The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
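As a brief illustration of those conventions (using Python's built-in int.to_bytes; the little-endian byte order is an arbitrary choice for the example), a number larger than 255 spreads across several consecutive bytes, and a negative number stored in two's complement looks like a large unsigned value:

    # How larger and negative numbers map onto bytes.
    print(list((1000).to_bytes(2, "little")))             # [232, 3]: 232 + 3*256 = 1000
    print(list((-1).to_bytes(1, "little", signed=True)))  # [255]: two's complement of -1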
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random-access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[53]
In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
Input/output (I/O)
Main article: Input/output


Hard disk drives are common storage devices used with computers.
I/O is the means by which a computer exchanges information with the outside world.[54] Devices that provide input or output to the computer are called peripherals.[55] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.
Multitasking
Main article: Computer multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn.[56]
One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running “at the same time,” then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed “time-sharing” since each program is allocated a “slice” of time in turn.[57]
Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a “time slice” until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.
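The round-robin switching can be sketched in a few lines (a toy model only: here the "programs" are Python generators that voluntarily give up the processor, whereas a real operating system uses hardware interrupts to force the switch):

    # A toy model of time-sharing: each "program" yields at the end of its time slice,
    # and the scheduler moves on to the next program in the queue.
    def counter(name, limit):
        for i in range(limit):
            print(name, i)
            yield                       # end of this program's time slice

    programs = [counter("A", 3), counter("B", 3)]
    while programs:
        prog = programs.pop(0)          # take the next program in the queue
        try:
            next(prog)                  # run one time slice of it
            programs.append(prog)       # then put it back at the end of the queue
        except StopIteration:
            pass                        # this program has finished

The output interleaves A and B, giving the appearance that both run at once even though only one executes at any instant.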
Multiprocessing
Main article: Multiprocessing


Cray designed many supercomputers that used multiprocessing heavily.
Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.
Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[58] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called “embarrassingly parallel” tasks.
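In software terms, an "embarrassingly parallel" task is one that splits into independent pieces needing no coordination between them; a minimal sketch using Python's standard multiprocessing module (the worker count of 4 is an arbitrary choice) looks like this:

    # Spread independent work across several processor cores.
    from multiprocessing import Pool

    def square(n):
        return n * n

    if __name__ == "__main__":
        with Pool(4) as pool:                    # 4 worker processes, chosen arbitrarily
            print(pool.map(square, range(10)))   # [0, 1, 4, 9, ..., 81]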
Networking and the Internet
Main articles: Computer networking and Internet


Visualization of a portion of the routes on the Internet
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[59]
In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[60] The technologies that made the Arpanet possible spread and evolved.
In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. “Wireless” networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
Computer architecture paradigms
There are many types of computer architectures:
Quantum computer vs Chemical computer
Scalar processor vs Vector processor
Non-Uniform Memory Access (NUMA) computers
Register machine vs Stack machine
Harvard architecture vs von Neumann architecture
Cellular architecture
Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing.[61]
Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms.
The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.
Misconceptions

Main articles: Human computer and Harvard Computers


Women as computers in NACA High Speed Flight Station "Computer Room"
A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word “computer” is synonymous with a personal electronic computer, the modern[62] definition of a computer is literally “A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.”[63] Any device which processes information qualifies as a computer, especially if the processing is purposeful.
Required technology
Main article: Unconventional computing
Historically, computers evolved from mechanical computers and eventually from vacuum tubes to transistors. However, conceptually computational systems as flexible as a personal computer can be built out of almost anything. For example, a computer can be made out of billiard balls (a billiard ball computer), an often quoted example.[citation needed] More realistically, modern computers are made out of transistors made of photolithographed semiconductors.
There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.
Further topics

Glossary of computers
Artificial intelligence
A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning.
Hardware
Main articles: Computer hardware and Personal computer hardware
The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.
History of computing hardware
Main article: History of computing hardware
Rudimentary calculating devices first appeared in antiquity and mechanical calculating aids were invented in the 17th century. The first recorded use of the word "computer" is also from the 17th century, applied to human computers, people who performed calculations, often as employment. The first computer devices were conceived of in the 19th century, and only emerged in their modern form in the 1940s.
First general-purpose computing device


A portion of Babbage's Difference engine.
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer",[4] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.[5][6]
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand - this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to difficulties not only of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (themill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.
Analog computers


Sir William Thomson's third tide-predicting machine design, 1879-81
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[7]
The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.[8]
The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious.
The modern computer


Alan Turing was the first to conceptualize the modern computer, a device that became known as the Universal Turing machine.
The principle of the modern computer was first described by computer scientist Alan Turing, who set out the idea in his seminal 1936 paper,[9] On Computable Numbers. Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines isundecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt.
He also introduced the notion of a 'Universal Machine' (now known as a Universal Turing machine), with the idea that such a machine could perform the tasks of any other machine, or in other words, it is provably capable of computing anything that is computable by executing a program stored on tape, allowing the machine to be programmable. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[10] Turing machines are to this day a central object of study intheory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
Electromechanical computers


Replica of Zuse's Z3, the first fully automatic, digital (electromechanical) computer.
Early digital computers were electromechanical - electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.[11]
In 1941, Zuse followed his earlier machine up with the Z3, the world's first workingelectromechanical programmable, fully automatic digital computer.[12][13] The Z3 was built with 2000 relays, implementing a 22 bit word length that operated at a clock frequency of about 5–10 Hz.[14] Program code and data were stored on punched film. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.[15] The Z3 was probably a complete Turing machine.
Electronic programmable computer
Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in Dollis Hill in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.[7] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested theAtanasoff–Berry Computer (ABC) in 1942,[16] the first "automatic electronic digital computer".[17] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.[18]


Colossus was the first electronic digitalprogrammable computing device, and was used to break German ciphers during World War II.
During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine,Enigma, was first attacked with the help of the electro-mechanical bombes. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[18] He spent eleven months from early February 1943 designing and building the first Colossus.[19] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[20] and attacked its first message on 5 February.[18]
Colossus was the world's first electronic digital programmable computer.[7] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). Colossus Mark I contained 1500 thermionic valves (tubes), but Mark II with 2400 valves, was both 5 times faster and simpler to operate than Mark 1, greatly speeding the decoding process.[21][22]


ENIAC was the first Turing-complete device,and performed ballistics trajectory calculations for the United States Army.
The US-built ENIAC[23] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus it was much faster and more flexible. It was unambiguously a Turing-complete device and could compute any problem that would fit into its memory. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches.
It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchlyand J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[24]
Stored program computer
Three tall racks containing electronic circuit boards

A section of the Manchester Small-Scale Experimental Machine, the first stored-program computer.
Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine.[18] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details thecomputation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report ‘Proposed Electronic Calculator’ was the first specification for such a device.John von Neumann at the University of Pennsylvania, also circulated his First Draft of a Report on the EDVAC in 1945.[7]


Ferranti Mark 1, c. 1951.
The Manchester Small-Scale Experimental Machine, nicknamed Baby, was the world's firststored-program computer. It was built at theVictoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[25] It was designed as a testbed for the Williams tube the firstrandom-access digital storage device.[26] Although the computer was considered "small and primitive" by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer.[27] As soon as the SSEM had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1.
The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.[28] Built by Ferranti, it was delivered to theUniversity of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[29] In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951 [30] and ran the world's first regular routine office computer job.
Transistor computers


A bipolar junction transistor
The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[31] Their firsttransistorised computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magneticdrum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955,[32] built by the electronics division of the Atomic Energy Research Establishment at Harwell.[33][34]
The integrated circuit
The next great advance in computing power came with the advent of the integrated circuit. The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components inWashington, D.C. on 7 May 1952.[35]
The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[36] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[37] In his patent application of 6 February 1959, Kilby described his new device as “a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated.”[38][39] Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.[40] His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium.
This new development heralded an explosion in the commercial and personal use of computers and led to the invention of the microprocessor. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[41] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[42]
Programs

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language.
In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.
Stored program architecture
Main articles: Computer program and Computer programming


Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program computer, at the Museum of Science and Industry in Manchester, England
This section applies to most common RAM machine-based computers.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called “jump” instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that “remembers” the location it jumped from and another instruction to return to the instruction following that jump instruction.
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:
      mov #0, sum     ; set sum to 0
      mov #1, num     ; set num to 1
loop: add num, sum    ; add num to sum
      add #1, num     ; add 1 to num
      cmp num, #1000  ; compare num to 1000
      ble loop        ; if num <= 1000, go back to 'loop'
      halt            ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.[43]
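The same task expressed in a high-level language is shorter still. The following rough sketch (Python, used here purely for illustration; the variable names are invented) mirrors the loop structure of the example above:
total = 0                # set the running sum to 0
num = 1                  # set num to 1
while num <= 1000:       # same test as the 'cmp'/'ble' pair above
    total += num         # add num to the running sum
    num += 1             # add 1 to num
print(total)             # prints 500500
The while loop plays the role of the conditional branch, jumping back until num exceeds 1,000.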
Bugs
Main article: Software bug


The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II computer
Errors in computer programs are called “bugs.” They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to “hang,” becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[44]
Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term “bugs” in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[45]
Machine code
In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
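To make that idea concrete, here is a deliberately toy sketch (in Python; the three-number instruction format and opcode values are invented for illustration, not taken from any real machine) in which a program is stored as plain numbers in the same memory as its data:
# A toy stored-program machine: instructions and data share one memory.
# Invented instruction format: opcode, operand1, operand2.
#   opcode 1 = add memory[operand1] into memory[operand2]
#   opcode 2 = print memory[operand1]
#   opcode 0 = halt
memory = [
    1, 20, 21,    # add cell 20 into cell 21
    2, 21, 0,     # print cell 21
    0, 0, 0,      # halt
] + [0] * 11 + [5, 37]   # cells 20 and 21 hold the data values 5 and 37

pc = 0                                # program counter
while True:
    opcode, a, b = memory[pc:pc + 3]  # fetch one instruction (three numbers)
    if opcode == 0:                   # halt
        break
    elif opcode == 1:                 # add
        memory[b] += memory[a]
    elif opcode == 2:                 # print
        print(memory[a])              # prints 42
    pc += 3                           # move on to the next instruction
Because the program here is nothing but numbers in memory, it could itself be read, copied, or modified like any other data, which is the essence of the stored-program idea.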
While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[46] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
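A correspondingly tiny assembler for the invented instruction set sketched above might look like the following; real assemblers additionally handle labels, symbol tables, and addressing modes:
# A toy assembler: translate mnemonics into the numeric opcodes of the
# illustrative machine above (HALT=0, ADD=1, PRINT=2 are invented values).
OPCODES = {"HALT": 0, "ADD": 1, "PRINT": 2}

def assemble(source):
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, *operands = line.split()
        machine_code.append(OPCODES[mnemonic])                      # the opcode number
        operands = [int(x) for x in operands]
        machine_code.extend(operands + [0] * (2 - len(operands)))   # pad to two operands
    return machine_code

program = assemble("""
ADD 20 21
PRINT 21
HALT
""")
print(program)   # [1, 20, 21, 2, 21, 0, 0, 0, 0]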


A 1970s punched card containing one line from a FORTRAN program. The card reads: “Z(1) = Y + W(1)” and is labeled “PROJ039” for identification purposes.
Programming language
Main article: Programming language
Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.
Low-level languages
Main article: Low-level programming language
Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[47]
Higher-level languages
Main article: High-level programming language
Though considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually “compiled” into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[48] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
Program design
Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.
Components

Main articles: Central processing unit and Microprocessor

Video demonstrating the standard components of a "slimline" computer
A general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a “1”, and when off it represents a “0” (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
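As a small illustration of gates controlling other gates, the following sketch (ordinary Python functions standing in for physical circuits) wires two basic gates into a half adder, one of the simplest useful gate arrangements, which adds two single bits:
# Basic logic gates modelled as functions on bits (0 or 1).
def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    # XOR produces the sum bit, AND produces the carry bit.
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} -> sum={s} carry={carry}")
# 1 + 1 gives sum=0 carry=1, i.e. binary 10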
The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.
Control unit
Main articles: CPU design and Control unit


Diagram showing how a particular MIPS architecture instruction would be decoded by the control system
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into a series of control signals which activate other parts of the computer.[49] Control systems in advanced computers may change the order of some instructions so as to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[50]
The control system's function is as follows. Note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU; a small sketch of the cycle follows the list:
Read the code for the next instruction from the cell indicated by the program counter.
Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
Increment the program counter so it points to the next instruction.
Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
Provide the necessary data to an ALU or register.
If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
Write the result from the ALU back to a memory location or to a register or perhaps an output device.
Jump back to step (1).
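A minimal sketch of the cycle just listed, using an invented instruction set of (name, operand) pairs purely for illustration:
# A toy fetch-decode-execute loop. The instruction names, the single
# accumulator register, and the data values are all invented examples.
memory = {0: 7, 1: 35}                                    # data cells
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]

pc = 0             # program counter: which instruction to read next
acc = 0            # a single accumulator register

while True:
    name, operand = program[pc]    # fetch the next instruction
    pc += 1                        # advance the program counter
    if name == "LOAD":             # decode and execute
        acc = memory[operand]
    elif name == "ADD":
        acc += memory[operand]
    elif name == "STORE":
        memory[operand] = acc
    elif name == "HALT":
        break                      # otherwise, loop back and repeat

print(memory[2])   # 42
Real CPUs do the same thing with numeric opcodes, many more instruction types, and a great deal of overlap between the steps.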
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as “jumps” and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.
Arithmetic logic unit (ALU)
Main article: Arithmetic logic unit
The ALU is capable of performing two classes of operations: arithmetic and logic.[51]
The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other (“is 64 greater than 65?”).
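For example, a machine whose ALU offers only addition can still multiply by repeated addition, as in this small sketch (the function name and values are arbitrary examples):
# Multiplication implemented using only addition, the way a program might
# do it on an ALU with no multiply instruction. Assumes b is a non-negative integer.
def multiply(a, b):
    result = 0
    for _ in range(b):   # add 'a' to the result 'b' times
        result += a
    return result

print(multiply(6, 7))    # 42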
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful for creating complicated conditional statements and processing boolean logic.
Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.[52] Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.
Memory
Main article: Computer data storage


Magnetic core memory was the computer memory of choice throughout the 1960s, until it was replaced by semiconductor memory.
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered “address” and can store a single number. The computer can be instructed to “put the number 123 into the cell numbered 1357” or to “add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595.” The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
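That view of memory maps directly onto code. In the sketch below, memory is simply a list of numbered cells, and the two operations quoted above become one-line statements (the value 900 placed in cell 2468 is an arbitrary example):
# Memory modelled as a list of numbered cells, each holding one number.
memory = [0] * 4096                          # 4096 cells, addresses 0..4095

memory[1357] = 123                           # "put the number 123 into cell 1357"
memory[2468] = 900                           # arbitrary example value
memory[1595] = memory[1357] + memory[2468]   # "add cell 1357 to cell 2468, result in cell 1595"

print(memory[1595])                          # 1023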
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
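These conventions are easy to check in a few lines. The sketch below shows the range of one byte, a larger number split across four consecutive bytes, and a negative number in two's complement form:
# A single byte can hold 2**8 = 256 distinct values.
print(2 ** 8)                                    # 256

# Larger numbers span several consecutive bytes (here, four bytes).
n = 1_000_000
print(n.to_bytes(4, "little"))                   # b'@B\x0f\x00'
print(int.from_bytes(n.to_bytes(4, "little"), "little"))   # 1000000

# Negative numbers are usually stored in two's complement.
print((-1).to_bytes(1, "little", signed=True))   # b'\xff' (all eight bits set)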
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random-access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[53]
In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
Input/output (I/O)
Main article: Input/output


Hard disk drives are common storage devices used with computers.
I/O is the means by which a computer exchanges information with the outside world.[54] Devices that provide input or output to the computer are called peripherals.[55] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.
Multitasking
Main article: Computer multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn.[56]
One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running “at the same time,” then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed “time-sharing” since each program is allocated a “slice” of time in turn.[57]
Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a “time slice” until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.
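The effect of time slicing can be imitated even in ordinary code. The sketch below is a toy round-robin scheduler: each "program" is a Python generator that voluntarily gives up the processor, and the scheduler runs each one for a slice in turn (real operating systems rely on hardware interrupts rather than voluntary yields; the task names and counts are invented):
# A toy round-robin scheduler: each "program" yields whenever its slice is up,
# letting the scheduler run another one.
def count_task(name, limit):
    for i in range(1, limit + 1):
        print(f"{name}: {i}")
        yield                      # give up the processor voluntarily

tasks = [count_task("A", 3), count_task("B", 3)]
while tasks:
    task = tasks.pop(0)            # take the next task in line
    try:
        next(task)                 # run it for one "time slice"
        tasks.append(task)         # put it back at the end of the queue
    except StopIteration:
        pass                       # this task has finished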
Multiprocessing
Main article: Multiprocessing


Cray designed many supercomputers that used multiprocessing heavily.
Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.
Supercomputers in particular often have distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[58] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called “embarrassingly parallel” tasks.
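In everyday programming the same idea surfaces as libraries that spread work over the available processors. The sketch below uses Python's standard multiprocessing pool to split a trivial computation across worker processes (the function and inputs are arbitrary examples):
# Distribute a simple computation across several CPUs using a process pool.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool() as pool:                       # one worker per available CPU by default
        results = pool.map(square, range(10))  # each worker squares a share of the numbers
    print(results)   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]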
Networking and the Internet
Main articles: Computer networking and Internet


Visualization of a portion of the routes on the Internet
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[59]
In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[60] The technologies that made the Arpanet possible spread and evolved.
In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. “Wireless” networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
Computer architecture paradigms
There are many types of computer architectures:
Quantum computer vs Chemical computer
Scalar processor vs Vector processor
Non-Uniform Memory Access (NUMA) computers
Register machine vs Stack machine
Harvard architecture vs von Neumann architecture
Cellular architecture
Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing.[61]
Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms.
The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.
Misconceptions

Main articles: Human computer and Harvard Computers


Women as computers in NACA High Speed Flight Station "Computer Room"
A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word “computer” is synonymous with a personal electronic computer, the modern[62] definition of a computer is literally “A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.”[63] Any device which processes information qualifies as a computer, especially if the processing is purposeful.
Required technology
Main article: Unconventional computing
Historically, computers evolved from mechanical computers and eventually from vacuum tubes to transistors. However, conceptually, computational systems as flexible as a personal computer can be built out of almost anything. For example, a computer can be made out of billiard balls (the billiard ball computer), an often quoted example.[citation needed] More realistically, modern computers are made out of transistors made of photolithographed semiconductors.
There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, limited only by their memory capacity and operating speed. However, different designs of computers can give very different performance for particular problems; for example, quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.
Further topics

Glossary of computers
Artificial intelligence
A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning.
Hardware
Main articles: Computer hardware and Personal computer hardware
The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.
History of computing hardware
Main article: History of computing hardware
First generation (mechanical/electromechanical)
      Calculators: Pascal's calculator, Arithmometer, Difference engine, Quevedo's analytical machines
      Programmable devices: Jacquard loom, Analytical engine, IBM ASCC/Harvard Mark I, Harvard Mark II, IBM SSEC, Z1, Z2, Z3
Second generation (vacuum tubes)
      Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
      Programmable devices: Colossus, ENIAC, Manchester Small-Scale Experimental Machine, EDSAC, Manchester Mark 1, Ferranti Pegasus, Ferranti Mercury, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22
Third generation (discrete transistors and SSI, MSI, LSI integrated circuits)
      Mainframes: IBM 7090, IBM 7080, IBM System/360, BUNCH
      Minicomputers: PDP-8, PDP-11, IBM System/32, IBM System/36
Fourth generation (VLSI integrated circuits)
      Minicomputers: VAX, IBM System i
      4-bit microcomputers: Intel 4004, Intel 4040
      8-bit microcomputers: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
      16-bit microcomputers: Intel 8088, Zilog Z8000, WDC 65816/65802
      32-bit microcomputers: Intel 80386, Pentium, Motorola 68000, ARM
      64-bit microcomputers[64]: Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64, ARMv8-A
      Embedded computers: Intel 8048, Intel 8051
      Personal computers: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet PC, Wearable computer
Theoretical/experimental: Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics-based computer
Other hardware topics
Peripheral devices (input/output)
      Input: Mouse, keyboard, joystick, image scanner, webcam, graphics tablet, microphone
      Output: Monitor, printer, loudspeaker
      Both: Floppy disk drive, hard disk drive, optical disc drive, teleprinter
Computer buses
      Short range: RS-232, SCSI, PCI, USB
      Long range (computer networking): Ethernet, ATM, FDDI
Software
Main article: Computer software
Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC compatible), it is sometimes called “firmware.”
Operating system
      Unix and BSD: UNIX System V, IBM AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems
      GNU/Linux: List of Linux distributions, Comparison of Linux distributions
      Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows Me, Windows XP, Windows Vista, Windows 7, Windows 8
      DOS: 86-DOS (QDOS), IBM PC DOS, MS-DOS, DR-DOS, FreeDOS
      Mac OS: Mac OS classic, Mac OS X
      Embedded and real-time: List of embedded operating systems
      Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs
Library
      Multimedia: DirectX, OpenGL, OpenAL
      Programming library: C standard library, Standard Template Library
Data
      Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
      File format: HTML, XML, JPEG, MPEG, PNG
User interface
      Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM, Aqua
      Text-based user interface: Command-line interface, Text user interface
Application
      Office suite: Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & Time management, Spreadsheet, Accounting software
      Internet access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
      Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
      Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing
      Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
      Software engineering: Compiler, Assembler, Interpreter, Debugger, Text editor, Integrated development environment, Software performance analysis, Revision control, Software configuration management
      Educational: Edutainment, Educational game, Serious game, Flight simulator
      Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
      Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager
Languages
There are thousands of different programming languages—some intended to be general purpose, others useful only for highly specialized applications.
Programming languages
Lists of programming languages: Timeline of programming languages, List of programming languages by category, Generational list of programming languages, List of programming languages, Non-English-based programming languages
Commonly used assembly languages: ARM, MIPS, x86
Commonly used high-level programming languages: Ada, BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal, Object Pascal
Commonly used scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl
Professions and organizations
As the use of computers has spread throughout society, an increasing number of careers involve them.
Computer-related professions
Hardware-related: Electrical engineering, Electronic engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoengineering
Software-related: Computer science, Computer engineering, Desktop publishing, Human–computer interaction, Information technology, Information systems, Computational science, Software engineering, Video game industry, Web design
The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
Organizations
Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C
Professional societies: ACM, AIS, IET, IFIP, BCS
Free/open source software groups: Free Software Foundation, Mozilla Foundation, Apache Software Foundation
Degradation

Rasberry crazy ants have been known to consume the insides of electrical wiring in computers, preferring DC over AC current. This behavior is not well understood by scientists.[65]
See also

Information technology portal
Computability theory
Computer insecurity
Computer security
List of computer term etymologies
List of fictional computers
Pulse computation
TOP500 (list of most powerful computers)
Notes

In 1946, ENIAC required an estimated 174 kW. By comparison, a modern laptop computer may use around 30 W; nearly six thousand times less. "Approximate Desktop & Notebook Power Usage". University of Pennsylvania. Retrieved 20 June 2009.
Early computers such as Colossus and ENIAC were able to process between 5 and 100 operations per second. A modern “commodity” microprocessor (as of 2007) can process billions of operations per second, and many of these operations are more complicated and useful than early computer operations. "Intel Core2 Duo Mobile Processor: Features". Intel Corporation. Retrieved 20 June 2009.
computer, n. Oxford English Dictionary (2nd ed.). Oxford University Press. 1989. Retrieved 10 April 2009.
Halacy, Daniel Stephen (1970). Charles Babbage, Father of the Computer. Crowell-Collier Press. ISBN 0-02-741370-5.
"Babbage". Online stuff. Science Museum. 2007-01-19. Retrieved 2012-08-01.
"Let's build Babbage's ultimate mechanical computer". Opinion. New Scientist. 23 December 2010. Retrieved 2012-08-01.
"The Modern History of Computing". Stanford Encyclopedia of Philosophy.
Ray Girvan, "The revealed grace of the mechanism: computing after Babbage", Scientific Computing World, May/June 2003.
Proceedings of the London Mathematical Society.
"von Neumann ... firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing—insofar as not anticipated by Babbage, Lovelace and others." Letter by Stanley Frankel to Brian Randell, 1972, quoted in Jack Copeland (2004) The Essential Turing, p. 22.
Zuse, Horst. "Part 4: Konrad Zuse's Z1 and Z3 Computers". The Life and Work of Konrad Zuse. EPE Online. Archived from the original on 2008-06-01. Retrieved 2008-06-17.
Zuse, Konrad (2010) [1984], The Computer – My Life, translated by McKenna, Patricia and Ross, J. Andrew from Der Computer, mein Lebenswerk (1984), Berlin/Heidelberg: Springer-Verlag, ISBN 978-3-642-08151-4.
"A Computer Pioneer Rediscovered, 50 Years On". The New York Times. April 20, 1994.
Zuse, Konrad (1993). Der Computer. Mein Lebenswerk (in German) (3rd ed.). Berlin: Springer-Verlag. p. 55. ISBN 978-3-540-56292-4.
Crash! The Story of IT: Zuse at the Wayback Machine (archived March 18, 2008).
January 15, 1941 notice in the Des Moines Register.
Arthur W. Burks. The First Electronic Computer.
Copeland, Jack (2006), Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford: Oxford University Press, pp. 101–115, ISBN 0-19-284055-X.
Jump up^ "Bletchley's code-cracking Colossus", BBC News, 2 February 2010, retrieved 19 October 2012
Jump up^ The Colossus Rebuild http://www.tnmoc.org/colossus-rebuild-story
Jump up^ Randell, Brian; Fensom, Harry; Milne, Frank A. (15 March 1995), "Obituary: Allen Coombs", The Independent, retrieved 18 October 2012
Jump up^ Fensom, Jim (8 November 2010), Harry Fensom obituary, retrieved 17 October 2012
Jump up^ John Presper Eckert Jr. and John W. Mauchly, Electronic Numerical Integrator and Computer, United States Patent Office, US Patent 3,120,606, filed 26 June 1947, issued 4 February 1964, and invalidated 19 October 1973 after court ruling onHoneywell v. Sperry Rand.
Jump up^ Generations of Computers
Jump up^ Enticknap, Nicholas (Summer 1998), "Computing's Golden Jubilee", Resurrection (The Computer Conservation Society) (20), ISSN 0958-7403, retrieved 19 April 2008
Jump up^ "Early computers at Manchester University", Resurrection(The Computer Conservation Society) 1 (4), Summer 1992,ISSN 0958-7403, retrieved 7 July 2010
Jump up^ Early Electronic Computers (1946–51), University of Manchester, retrieved 16 November 2008
Jump up^ Napper, R. B. E., Introduction to the Mark 1, The University of Manchester, retrieved 4 November 2008
Jump up^ Computer Conservation Society, Our Computer Heritage Pilot Study: Deliveries of Ferranti Mark I and Mark I Star computers., retrieved 9 January 2010
Jump up^ Lavington, Simon. "A brief history of British computers: the first 25 years (1948–1973).". British Computer Society. Retrieved 10 January 2010.
Jump up^ Lavington, Simon (1998), A History of Manchester Computers(2 ed.), Swindon: The British Computer Society, pp. 34–35
Jump up^ Cooke-Yarborough, E. H. (June 1998), "Some early transistor applications in the UK", Engineering and Science Education Journal (IEE) 7 (3): 100–106, doi:10.1049/esej:19980301,ISSN 0963-7346, retrieved 7 June 2009 (subscription required)
Jump up^ Cooke-Yarborough, E.H. (1957). Introduction to Transistor Circuits. Edinburgh: Oliver and Boyd. p. 139.
Jump up^ Cooke-Yarborough, E.H. (June 1998). "Some early transistor applications in the UK". Engineering and Science Education Journal (London, UK: IEE) 7 (3): 100–106.doi:10.1049/esej:19980301. ISSN 0963-7346. Retrieved 2009-06-07.
Jump up^ "The Hapless Tale of Geoffrey Dummer", (n.d.), (HTML),Electronic Product News, accessed 8 July 2008.
Jump up^ Kilby, Jack (2000), Nobel lecture, Stockholm: Nobel Foundation, retrieved 2008-05-15
Jump up^ The Chip that Jack Built, (c. 2008), (HTML), Texas Instruments, Retrieved 29 May 2008.
Jack S. Kilby, Miniaturized Electronic Circuits, United States Patent Office, US Patent 3,138,743, filed 6 February 1959, issued 23 June 1964.
Winston, Brian (1998). Media Technology and Society: A History: From the Telegraph to the Internet. Routledge. p. 221. ISBN 978-0-415-14230-4.
Robert Noyce's Unitary circuit, US patent 2981877, "Semiconductor device-and-lead structure", issued 1961-04-25, assigned to Fairchild Semiconductor Corporation.
Intel's First Microprocessor—the Intel 4004, Intel Corp., November 1971, retrieved 2008-05-17.
The Intel 4004 (1971) die was 12 mm2, composed of 2,300 transistors; by comparison, the Pentium Pro was 306 mm2, composed of 5.5 million transistors, according to Patterson, David; Hennessy, John (1998), Computer Organization and Design, San Francisco: Morgan Kaufmann, pp. 27–39, ISBN 1-55860-428-6.
This program was written similarly to those for the PDP-11 minicomputer and shows some typical things a computer can do. All the text after the semicolons is comments for the benefit of human readers; they have no significance to the computer and are ignored. (Digital Equipment Corporation 1972)
It is not universally true that bugs are solely due to programmer oversight. Computer hardware may fail or may itself have a fundamental problem that produces unexpected results in certain situations. For instance, the Pentium FDIV bug caused some Intel microprocessors in the early 1990s to produce inaccurate results for certain floating point division operations. This was caused by a flaw in the microprocessor design and resulted in a partial recall of the affected devices.
Taylor, Alexander L., III (16 April 1984). "The Wizard Inside the Machine". TIME. Retrieved 17 February 2007. (subscription required)
Even some later computers were commonly programmed directly in machine code. Some minicomputers like the DEC PDP-8 could be programmed directly from a panel of switches. However, this method was usually used only as part of the booting process. Most modern computers boot entirely automatically by reading a boot program from some non-volatile memory.
However, there is sometimes some form of machine language compatibility between different computers. An x86-64 compatible microprocessor like the AMD Athlon 64 is able to run most of the same programs that an Intel Core 2 microprocessor can, as well as programs designed for earlier microprocessors like the Intel Pentiums and Intel 80486. This contrasts with very early commercial computers, which were often one-of-a-kind and totally incompatible with other computers.
High level languages are also often interpreted rather than compiled. Interpreted languages are translated into machine code on the fly, while running, by another program called an interpreter.
The control unit's role in interpreting instructions has varied somewhat in the past. Although the control unit is solely responsible for instruction interpretation in most modern computers, this is not always the case. Many computers include some instructions that may only be partially interpreted by the control system and partially interpreted by another device. This is especially the case with specialized computing hardware that may be partially self-contained. For example, EDVAC, one of the earliest stored-program computers, used a central control unit that only interpreted four instructions. All of the arithmetic-related instructions were passed on to its arithmetic unit and further decoded there.
Instructions often occupy more than one memory address, therefore the program counter usually increases by the number of memory locations required to store one instruction.
David J. Eck (2000). The Most Complex Machine: A Survey of Computers and Computing. A K Peters, Ltd. p. 54. ISBN 978-1-56881-128-4.
Erricos John Kontoghiorghes (2006). Handbook of Parallel Computing and Statistics. CRC Press. p. 45. ISBN 978-0-8247-4067-2.
Flash memory also may only be rewritten a limited number of times before wearing out, making it less useful for heavy random access usage. (Verma & Mielke 1988)
Donald Eadie (1968). Introduction to the Basic Computer. Prentice-Hall. p. 12.
Arpad Barna; Dan I. Porat (1976). Introduction to Microcomputers and the Microprocessors. Wiley. p. 85. ISBN 978-0-471-05051-3.
Jerry Peek; Grace Todino; John Strang (2002). Learning the UNIX Operating System: A Concise Guide for the New User. O'Reilly. p. 130. ISBN 978-0-596-00261-9.
Gillian M. Davis (2002). Noise Reduction in Speech Applications. CRC Press. p. 111. ISBN 978-0-8493-0949-6.
However, it is also very common to construct supercomputers out of many pieces of cheap commodity hardware, usually individual computers connected by networks. These so-called computer clusters can often provide supercomputer performance at a much lower cost than customized designs. While custom architectures are still used for most of the most powerful supercomputers, there has been a proliferation of cluster computers in recent years. (TOP500 2006)
Agatha C. Hughes (2000). Systems, Experts, and Computers. MIT Press. p. 161. ISBN 978-0-262-08285-3. "The experience of SAGE helped make possible the first truly large-scale commercial real-time network: the SABRE computerized airline reservations system..."
"A Brief History of the Internet". Internet Society. Retrieved 20 September 2008.
"Computer architecture: fundamentals and principles of computer design" by Joseph D. Dumas, 2006, p. 340.
According to the Shorter Oxford English Dictionary (6th ed., 2007), the word computer dates back to the mid-17th century, when it referred to "A person who makes calculations; specifically a person employed for this in an observatory etc."
"Definition of computer". Thefreedictionary.com. Retrieved 29 January 2012.
Most major 64-bit instruction set architectures are extensions of earlier designs. All of the architectures listed in this table, except for Alpha, existed in 32-bit forms before their 64-bit incarnations were introduced.
Andrew R. Hickey (May 15, 2008). "'Crazy' Ant Invasion Frying Computer Equipment".
References

Fuegi, J. and Francis, J. "Lovelace & Babbage and the creation of the 1843 'notes'". IEEE Annals of the History of Computing 25, No. 4 (October–December 2003).
Kempf, Karl (1961). Historical Monograph: Electronic Computers Within the Ordnance Corps. Aberdeen Proving Ground (United States Army).
Phillips, Tony (2000). "The Antikythera Mechanism I". American Mathematical Society. Retrieved 5 April 2006.
Shannon, Claude Elwood (1940). A symbolic analysis of relay and switching circuits. Massachusetts Institute of Technology.
Digital Equipment Corporation (1972). PDP-11/40 Processor Handbook (PDF). Maynard, MA: Digital Equipment Corporation.
Verma, G.; Mielke, N. (1988). Reliability performance of ETOX based flash memories. IEEE International Reliability Physics Symposium.
Doron D. Swade (February 1993). Redeeming Charles Babbage's Mechanical Computer. Scientific American. p. 89.
Meuer, Hans; Strohmaier, Erich; Simon, Horst; Dongarra, Jack (13 November 2006). "Architectures Share Over Time". TOP500. Retrieved 27 November 2006.
Lavington, Simon (1998). A History of Manchester Computers (2 ed.). Swindon: The British Computer Society. ISBN 978-0-902505-01-8.
Stokes, Jon (2007). Inside the Machine: An Illustrated Introduction to Microprocessors and Computer Architecture. San Francisco: No Starch Press. ISBN 978-1-59327-104-6.
Zuse, Konrad (1993). The Computer – My Life. Berlin: Springer-Verlag. ISBN 0-387-56453-5.
Felt, Dorr E. (1916). Mechanical arithmetic, or The history of the counting machine. Chicago: Washington Institute.
Ifrah, Georges (2001). The Universal History of Computing: From the Abacus to the Quantum Computer. New York: John Wiley & Sons. ISBN 0-471-39671-0.
Berkeley, Edmund (1949). Giant Brains, or Machines That Think. John Wiley & Sons.
Cohen, Bernard (2000). Howard Aiken, Portrait of a computer pioneer. Cambridge, Massachusetts: The MIT Press. ISBN 978-0-2625317-9-5.
Ligonnière, Robert (1987). Préhistoire et Histoire des ordinateurs. Paris: Robert Laffont. ISBN 9-782221-052617.
Couffignal, Louis (1933). Les machines à calculer ; leurs principes, leur évolution. Paris: Gauthier-Villars.
Essinger, James (2004). Jacquard's Web, How a hand loom led to the birth of the information age. Oxford University Press. ISBN 0-19-280577-0.
Hyman, Anthony (1985). Charles Babbage: Pioneer of the Computer. Princeton University Press. ISBN 978-0-6910237-7-9.
Bowden, B. V. (1953). Faster than thought. New York, Toronto, London: Pitman publishing corporation.
Moseley, Maboth (1964). Irascible Genius, Charles Babbage, inventor. London: Hutchinson.
Collier, Bruce (1970). The little engine that could've: The calculating machines of Charles Babbage. Garland Publishing Inc. ISBN 0-8240-0043-9.
Randell, Brian (1982). "From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush". Retrieved 29 October 2013.
External links

A Brief History of Computing [dead link] – slideshow by Life magazine