Underestimation, as it turns out, has been a constant in the brief but dazzling history of this amazing machine. Surprisingly, the tale begins in the 19th century, when Charles Babbage, an English mathematician born in 1791, launched a lifelong quest to build information-processing machines–first a calculator called the Difference Engine and then a more elaborate programmable device dubbed the Analytical Engine. He lacked–among other things–electricity, transistors, keyboards and Bill Gates. Yet in the 1830s he came astonishingly close to producing something very much like the computers that would be celebrated decades after he died. Unfortunately, his skill at innovation was not matched by an ability to generate venture capital, and his plans were tossed into the unforgiving core dump of history.

The idea of a programmable machine that performed humanity’s mental labors reappeared in the 1930s. The breakthrough came at the hands of another eccentric English mathematician, Alan Turing, who showed that it was possible to build a device that could perform virtually any mathematical task one could describe. His proof involved an ingenious imaginary device that came to be known as the Universal Turing Machine–essentially, a machine that could duplicate the work of any other machine, even if that “machine” were a human calculator. Turing grasped what the rest of us are still trying to wrap our minds around: such a contraption, a computer, can do anything. It is an invention that breeds further invention.
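To see how spare Turing’s idea really is, consider a minimal sketch in Python of the kind of machine he described: a table of rules, a tape of symbols and a read/write head. The rule table and the unary-increment example below are illustrative inventions, not anything from Turing’s own paper.

```python
# A minimal Turing machine simulator: a rule table, a tape, and a head.
# The sample program (appending a "1" to a unary number) is illustrative.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # "_" stands for a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example: increment a unary number (a string of 1s) by one.
rules = {
    ("start", "1"): ("1", "R", "start"),  # scan right past the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write a 1 on the first blank, stop
}
print(run_turing_machine(rules, "111"))   # -> "1111"
```

A machine this simple, given the right rule table, can imitate any other rule-following machine, which is precisely Turing’s point.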

But it took a war to bring about the physical devices that would be known as the first real computers. (A small but noisy controversy among computer historians involves whether a device constructed in 1939 by John Atanasoff and his student at Iowa State University, Clifford Berry, deserves the true mantle of First Electronic Computer.) In England, Turing himself worked on machines that helped crack the secret codes used by the Germans. In Germany itself, a wizard named Konrad Zuse was working on that country’s computing effort but never fully realized his ideas. And in America, a Hungarian genius named John von Neumann–perhaps the premier mathematician of this century–was pondering mechanical devices to help perform the calculations required for the Manhattan Project. A chance meeting on a train platform in 1944 led him to a team of scientists at the University of Pennsylvania who were creating ENIAC (Electronic Numerical Integrator and Computer), which many people consider the true Adam of computers. Designed by J. Presper Eckert and John Mauchly to help crunch numbers for artillery firing tables, the device used 18,000 vacuum tubes and cost $400,000.

Von Neumann was fascinated, and he worked with the ENIAC people to take computing to the next level: EDVAC, a design that was essentially a blueprint for the machines that followed–memory, stored programs and a central processor for number crunching. This scheme was sufficiently versatile to launch computers into the commercial realm. But even then, underestimation was as thick as in Babbage’s day. Thomas Watson Sr., the head of the company that was perhaps most prescient of all in embracing the idea–IBM–thought it unimaginable that there would ever be a worldwide need for the machine. “I think there is a world market,” said Watson, “for maybe five computers.”
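The scheme EDVAC sketched–instructions stored in the same memory as data, with a processor fetching and executing them one at a time–is simple enough to mimic in a few lines. Here is a toy Python version; the four-instruction set is invented for illustration and bears no relation to EDVAC’s actual order code.

```python
# A toy stored-program machine in the EDVAC spirit: program and data share
# one memory, and the processor loops fetch -> decode -> execute.
# The four instructions (LOAD, ADD, PRINT, HALT) are invented for this sketch.

def run(memory):
    acc, pc = 0, 0                     # accumulator and program counter
    while True:
        op, arg = memory[pc]           # fetch the instruction at pc
        pc += 1
        if op == "LOAD":               # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":              # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "PRINT":
            print(acc)
        elif op == "HALT":
            return

program = [
    ("LOAD", 4), ("ADD", 5), ("PRINT", None), ("HALT", None),
    2, 3,                              # data stored right after the code
]
run(program)                           # prints 5
```

Because the program is just more memory, a program can in principle read, write or even generate other programs–the versatility that carried computers into the commercial realm.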

As we know, IBM sold a lot more than five computers. During the ’50s and ’60s, big institutions and businesses used these expensive devices to perform complicated tasks, churning out answers to programs fed into the machine on punched manila cards. But while a quasi-priesthood of caretakers controlled access to the rooms that held these beasts, a small underground protohacker culture also emerged. These adventuresome supernerds used the computer to process words, to draw pictures and even to play chess. (Naysayers predicted that a computer would never master this purely human intellectual pursuit. Garry Kasparov probably wishes they were right.)

What finally bound those two cultures together was the development of the personal computer. This was made possible by the invention of the microprocessor–a computer on a chip–by Intel Corp.’s Ted Hoff in 1971. Essentially, what once filled a room and cost as much as a mansion had been shrunk down to the size of a postage stamp and the cost of a dinner. By 1975, the PC was just waiting to be born, and the obstetrician was Ed Roberts, a Florida-born engineer who dreamed of delivering to the ordinary man a machine that was the mental equivalent of what the pharaohs had in Egypt: thousands of workers to do one’s bidding. His Altair microcomputer was announced in January of that year, and though it had limited practical value (the only way to put a program in was to painstakingly flick little switches), it caused a sensation among a small cult of tweak-heads and engineers. Like who? A Harvard student named Gates, for one, who instantly began writing Altair software. Another acolyte was Stephen Wozniak, who quickly designed his own machine, the Apple II.

Even then, people still kept underestimating. Consider what Ken Olsen, head of the then powerful Digital Equipment Corp., had to say when asked about the idea of the computer’s becoming a common device: “There is no reason for any individual to have a computer in his home.”

What proved him wrong was the grass-roots development of software for these small devices: word processing, games and, perhaps the most crucial of all, a program called VisiCalc that not only automated the previously tedious arithmetic of financial worksheets, but made modeling of business plans as easy as sneezing. Electronic spreadsheets were the tool that persuaded big business (which had previously turned its nose up at personal computers) to adopt the machines wholesale. And a new industry was suddenly thriving.
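What made VisiCalc feel revolutionary was automatic recalculation: change one number and every formula that depends on it updates. A toy Python version of that idea, with invented cell names and formulas, conveys the flavor.

```python
# A toy of the spreadsheet idea behind VisiCalc: cells hold either numbers
# or formulas, and formulas recompute whenever their value is asked for.
# The cell names (A1, A2, A3) and the formula are invented for illustration.

class Sheet:
    def __init__(self):
        self.cells = {}                     # name -> number or function

    def set(self, name, value):
        self.cells[name] = value

    def get(self, name):
        value = self.cells[name]
        return value(self) if callable(value) else value

sheet = Sheet()
sheet.set("A1", 100)                        # revenue
sheet.set("A2", 60)                         # costs
sheet.set("A3", lambda s: s.get("A1") - s.get("A2"))   # profit formula

print(sheet.get("A3"))                      # 40
sheet.set("A2", 80)                         # change one input...
print(sheet.get("A3"))                      # ...and the model updates: 20
```

Being able to ask “what if?” and get the whole model updated instantly is exactly what won over the businesses that had turned their noses up at personal computers.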

The next big step was the move to computer communications in the ’90s, when a program called Mosaic, written by students at the University of Illinois who later helped found Netscape, shot what was already an accelerating global Internet into serious overdrive. The prospect of millions of computers connected worldwide was suddenly a reality. People are still processing the effects of that explosion. And a lot of people, still in denial, kid themselves that the end of the Net’s transformations is anywhere in sight.

Where are the frontiers of computing? It’s scary to contemplate, because the field is so young and the technology so flexible. But consider what some computer scientists are already working on. Nanocomputers–microscopic devices that may change the way we think of materials. Digital ink that will, in effect, transform paper into something as protean as computer screens. And “artificial life”–software that behaves so much like biological organisms that some researchers argue it deserves to be classified as alive.
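The article names no specific “artificial life” program, but the classic demonstration of the idea is Conway’s Game of Life, a cellular automaton in which a few local rules produce self-propagating, lifelike patterns. A compact Python sketch:

```python
# Conway's Game of Life, the standard "artificial life" demonstration:
# simple local rules on a grid yield patterns that move and reproduce.

from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count how many live neighbors every nearby cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next turn with exactly 3 neighbors, or 2 if already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):                # a glider moves one cell diagonally
    glider = step(glider)         # every four generations
print(sorted(glider))             # same shape, shifted by (1, 1)
```

Nothing in those rules mentions movement or reproduction, yet the glider crawls across the grid on its own–which is why such software tempts people to call it alive.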

Skeptics dismiss the feasibility of many of these ambitious projects. In other words, people still persist in underestimating the power of a machine whose potential seems to have no bounds. If history is our guide, even our imaginations cannot grasp what the computer will ultimately become.