Differences Between Generations of Computers

By Dan Ketchum

Since engineers such as David Packard and Bill Hewlett started building primitive computers in garages back in the late 1930s, computers have evolved rapidly. While the term “computer” casts a wide net and many subcategories of technology can be divided into market-based generations -- like video game console iterations or yearly smartphone updates -- five basic generations define the landscape of computing as a whole. Hardware and software components both characterize these generations, which start in 1942 and span decades beyond.

1942-1954: Born in a Vacuum

The first generation of computing began in the early 1940s, with the invention of machines like the Atanasoff-Berry Computer and the Electronic Numerical Integrator and Computer, or ABC and ENIAC. Until about 1954, computers relied on thousands of large, bulb-like vacuum tubes for their processing and memory circuitry, which made them roughly 1,000 times faster than the electromechanical computers that preceded them. For data input and output, first-generation computers used punch cards and paper or magnetic tape. These massive machines, often as large as a room, were extremely costly, and the heat generated by their vacuum tubes made them notoriously unreliable. As such, first-generation computers were chiefly the domain of large companies and organizations.

1954-1964: The Transistor Transition

The transistor -- invented at Bell Telephone Laboratories in 1947 -- offered a cheaper, less power-hungry, more reliable and far more compact alternative to the vacuum tube. Magnetic cores served as the key form of memory for second-generation devices, while magnetic tape and newly introduced magnetic disks functioned as external storage. In contrast to the raw machine code used to program first-generation machines, second-generation programmers worked in assembly language for the first time, and trailblazers such as John Mauchly and J. Presper Eckert brought stored-program computers to the scene.

1964-1972: Progress by Integration

In the late 1950s, Jack Kilby of Texas Instruments ended the reign of hot, bulky, power-sucking computers with the invention of the integrated circuit, commonly dubbed the microchip. The tiny ICs that define the third generation packed transistors, resistors and capacitors, along with their associated circuitry, onto a single chip, drastically scaling computers down to desktop size. Powerful as they were for their time, these early chips held little by later standards: it would take roughly 65,000 of them -- about a kilobit of memory apiece -- to match the 8 megabytes of memory in a modern PC. To complement these faster, smaller and more reliable machines, high-level programming languages such as BASIC and Pascal were created.

1972-1990: VLSI Brings It Home

Very-large-scale integration, or VLSI, gave rise to the fourth generation of computers: the microprocessor era. VLSI circuits could host tens of thousands of transistors and other circuit elements, placing an entire central processing unit on a single chip. This led to computers that were more compact and reliable than their third-generation counterparts, but perhaps more importantly, VLSI made computers far more affordable, allowing them to finally make an impact in the consumer market. With the arrival of powerful desktop computers, staple high-level languages such as C, C++ and dBASE came into prominence, as did the graphical user interface and the mass-produced mouse.

1990 and Beyond: Microprocessors Go Ultra

While VLSI circuits featured tens of thousands of transistors, ultra-large-scale integration, or ULSI -- the key component of the fifth generation of computers -- made it possible for microprocessor chips to house tens of millions of components. Compact construction, affordability and the availability of wireless networking drove the proliferation of desktop computers and made way for laptops, smartphones, tablets and video game consoles, along with software that continues to make strides in artificial intelligence. High-level languages and platforms including C, C++, Java and .NET have all proven useful in fifth-generation systems.