The History Of Computer Science

The idea of using machines for calculation did not begin with Hollerith. In addition to using fingers, toes, and little marks on clay tablets or parchment to help make arithmetic easier, people in early times used a tool called an abacus. This device consisted of a system of beads sliding along wires or strings attached to a frame. An abacus did not perform the calculations, but helped people keep track of the totals with the movable beads. Babylonians, Greeks, and Romans, among other ancient peoples, developed and used abaci (plural of abacus) several thousand years ago.

French scientist and inventor Blaise Pascal (1623-62) invented one of the earliest adding machines in 1642. Pascal's motivation to build this device would be familiar to citizens today: tax calculations. His machine consisted of a series of interconnected wheels with numbers etched on the rims. The gears connecting the wheels were designed so that a wheel advanced when the adjacent wheel made a complete revolution, similar to the operation of the odometer (mileage indicator) of a car. Although successful, Pascal's machine was too expensive for everyday use.

British mathematician Charles Babbage (1791-1871) designed devices in the 19th century that were the forerunners of the modern computer, and they would have been the earliest computers had Babbage been able to obtain funds to build them. His first effort, called a "difference engine," was to be a calculating machine. (The term difference in the name came from the numerical technique, known as the method of differences, that the machine would employ.) The machine was well designed but complicated, and manufacturing difficulties exhausted the money Babbage had acquired before construction was complete.

Babbage's next design was for an even more ambitious machine that would have served a variety of purposes instead of fulfilling a single function (such as calculating). In Passages from the Life of a Philosopher, published in 1864, Babbage wrote, "The whole of arithmetic now appeared within the grasp of the mechanism." This machine, called an "analytical engine," was to be programmable: information stored on punched cards would direct the machine's operation, so that it could be set up to solve different problems. Although Babbage also failed to finish constructing the analytical engine, the idea of an efficient, general-purpose machine presaged the computers of today. As Babbage wrote in 1864, "As soon as an Analytical Engine exists, it will necessarily guide the future course of the science."

In Babbage's time, and on into the early 20th century, the term computer referred to a person who performed mathematical calculations. For example, Harvard Observatory employed "computers," often women, who compiled catalogs of stars based on the observatory's astronomical data. (Some of these "computers," such as Annie Jump Cannon, went on to become astronomers.) Hollerith's machine, described above, was a highly useful calculating machine as well as an important advance in computational technology, but it was not a versatile, programmable device.

Harvard University was the site of one of the earliest machines that could be called a computer in the modern sense of the word. Guided by engineer Howard Aiken (1900-73), IBM manufactured the components of the Automatic Sequence Controlled Calculator, also known as the Mark I, which engineers assembled at Harvard University in 1944. Using both electrical and mechanical parts, the Mark I was 51 feet (15.5 m) long, eight feet (2.4 m) high, and weighed a whopping 10,000 pounds (about 4,500 kg). A year later, the University of Pennsylvania finished a completely electronic computer, known as the Electronic Numerical Integrator and Computer (ENIAC). Designed by engineers John Mauchly (1907-80) and J. Presper Eckert (1919-95), ENIAC needed more than 1,000 square feet (93 m²) of floor space, and plenty of fans to keep the components from overheating. British researchers had built similar machines, known as Colossus computers, a little earlier, but the government kept their operation secret because they were used to read the enemy's encrypted messages during World War II.

These large computers, known as mainframes, received their programming instructions from punched cards or tape. Computations such as ballistic tables (the calculation of artillery trajectories based on wind, distance, and other factors, as needed by the U.S. military during World War II) could be accomplished in a fraction of the time required for manual tabulation. ENIAC, for instance, was capable of 5,000 operations a second. Yet failures were common, as computer expert Grace Hopper (1906-92) discovered in 1947 when she found a bug, an actual moth, that had flown into the Mark II computer and ruined one of its parts. The term bug for a failure or fault did not originate with this incident, since the word had been commonly used years earlier to describe machine defects. But Hopper's discovery did give it a fuller meaning, as she noted in her log: "First actual case of bug being found."

Computer components gradually shrank in size. An electronics revolution occurred in 1947 when physicists John Bardeen (1908-91), Walter Brattain (1902-87), and William Shockley (1910-89) invented the transistor, a small electrical device that could be used in computer circuits to replace a larger and more energy-consuming component known as a vacuum tube. In 1958, engineer Jack Kilby (1923-2005) developed the integrated circuit (IC), an electrical component that contains (integrates) a number of circuit elements in a small "chip." These devices use semiconductors, often made with silicon, in which electric currents can be precisely controlled. The ability to fit many components on a single circuit decreased the size of computers and increased their processing speed. In 1981, IBM introduced the PC, a small but fast personal computer; it was "personal" in that it was meant to be used by a single person, as opposed to mainframe computers, which are typically shared by many users.

Mainframes still exist today, although a lot of computing is done with smaller machines. But no matter the size, the basic operation of a computer is similar. A computer stores data in memory; the data consists of the numbers to be processed as well as the instructions for carrying out that processing. The central processing unit (CPU), also known as the processor, performs the instructions one at a time, sequentially, until the operation is complete. Humans interface with the computer by inputting information with a keyboard or some other device, and receive the result by way of an output device such as a monitor or printer.

Most computers today are digital. Instead of operating with quantities that can take any value, which is referred to as analog operation, a digital computer operates on binary data, in which each bit has only two possible values, a 1 or a 0. Computers have long used binary data; John V. Atanasoff (1903-95) and Clifford Berry (1918-63) at Iowa State University designed a digital computer in 1940 that used binary data, as did German engineer Konrad Zuse (1910-95) at about the same time.

The use of binary data simplifies the design of electronic circuits that hold, move, and process the data. Although binary representation can be cumbersome, it is easy to store and operate on a number as a string of ones and zeroes, represented electrically as the presence or absence of a voltage. Having more than two values would require distinguishing among several voltage levels spaced more closely together, which would invite many errors: brief impulses known as transients, which occur in every circuit, could distort a voltage enough to cause the computer to mishandle or misread the data. (For the same reason, music in digital format, as on CDs, usually offers better sound quality than analog formats such as ordinary cassette tapes.)
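To see concretely how a familiar number becomes a string of ones and zeroes, here is a short sketch in C (one of the higher-level languages mentioned below). It is purely illustrative, not part of any historical machine: it repeatedly peels off the lowest bit of the number and then prints the bits from most significant to least significant.

    #include <stdio.h>

    int main(void) {
        unsigned int n = 13;                /* decimal 13, which is 1101 in binary */
        char bits[33];                      /* room for up to 32 bits */
        int count = 0;

        do {
            bits[count++] = '0' + (n & 1);  /* record the lowest bit as '0' or '1' */
            n >>= 1;                        /* shift the remaining bits down */
        } while (n > 0);

        while (count--)                     /* bits were recorded lowest first, */
            putchar(bits[count]);           /* so print them in reverse order */
        putchar('\n');                      /* prints 1101 */

        return 0;
    }

In an actual circuit, each of those four symbols would correspond to the presence or absence of a voltage in a separate storage element.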

Binary is the computer's "machine language." Because binary is cumbersome for humans, interfaces such as monitors and keyboards use familiar letters and numbers, which means that some translation must occur. For instance, when a computer operator presses the "K" key, the keyboard sends a representation such as 01001011 to the computer. This is an eight-bit format, representing symbols with eight bits. (A group of eight bits is also known as a byte.) Instructions to program computers must also be in machine language, but programmers often use higher-level languages such as BASIC, C, PASCAL (named after Blaise Pascal), FORTRAN, or others. The instructions of these languages are more human-friendly, and they get translated into machine language by programs known as compilers or interpreters.
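As a small, hypothetical illustration of this translation, the following C sketch prints the eight bits of the character 'K'. It assumes the machine uses the common ASCII encoding, in which 'K' has code 75, or 01001011 in binary.

    #include <stdio.h>

    int main(void) {
        unsigned char c = 'K';          /* code 75 on ASCII systems */

        for (int i = 7; i >= 0; i--)    /* examine each bit, most significant first */
            putchar(((c >> i) & 1) ? '1' : '0');
        putchar('\n');                  /* prints 01001011 */

        return 0;
    }

A compiler performs the translation from a higher-level language into machine language once, before the program runs, whereas an interpreter translates and carries out the instructions as the program runs.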

Circuits in the computer known as logic circuits perform the operations. Logic circuits have components called gates, two of which are symbolized in the accompanying figure. The output of a gate may be either a 0 or a 1, depending on the state of its inputs. For example, the output of an AND gate (the bottom symbol in the figure) is a 1 only if both of its inputs are 1; otherwise the output is 0. Combinations of logic gates produce circuits that can add two bits, as sketched below, as well as circuits that perform much more complicated operations.
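How gates combine to add two bits can be sketched in a few lines of C. The "half adder" below is a standard textbook circuit rather than a description of any particular machine: the sum bit is the exclusive OR (XOR) of the inputs, and the carry bit is the AND of the inputs.

    #include <stdio.h>

    int main(void) {
        /* Try every combination of two input bits. */
        for (int a = 0; a <= 1; a++) {
            for (int b = 0; b <= 1; b++) {
                int sum   = a ^ b;      /* XOR: 1 when exactly one input is 1 */
                int carry = a & b;      /* AND: 1 only when both inputs are 1 */
                printf("%d + %d -> carry %d, sum %d\n", a, b, carry, sum);
            }
        }
        return 0;
    }

Combining a few more gates in the same spirit yields a full adder, which also accepts a carry from a previous stage, and a chain of full adders can add numbers of any width.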

Computer engineers build these logic circuits with components such as transistors, which in most computers are etched on a thin silicon IC. Barring component failure, the operations a computer performs are always correct, although the result may not be what the human operator wanted unless the sequence of operations, as specified by the software (the instructions, or program), is also correct. If the program contains no errors, a computer will process the data until the solution is found, or until the human operator gets tired of waiting and terminates the program. A program that takes a long time to run may not provide the solution in a reasonable period, so the speed and efficiency of programs, and the nature of the problem to be solved, are important. But even an efficient program will tax the user's patience if the computer's circuits, the hardware, are slow. This is one of the reasons why engineers built gigantic machines such as the Mark I and ENIAC, and it is the main reason why researchers continue to expand the frontiers of computer technology.
