Since the earliest days of computers, researchers have tried to create systems that mimic human intelligence. While a silicon Einstein may still be a distant possibility, artificial intelligence, or AI, has brought us phones that recognize human speech, cars that drive themselves, and expert systems that compete on television game shows. Over the years, AI research has moved through several evolutions, and as each technology has matured, it has become part of our everyday experience.
Early researchers struggled with limited processing power and computer storage, but still laid the foundation of AI with programming languages like LISP and concepts like decision trees and machine learning. Programs written in LISP could easily analyze games like chess, mapping all possible moves several turns ahead, then choosing the best alternative. These programs could also modify their decision logic and learn from previous mistakes, getting "smarter" over time. With more powerful computers and cheaper mass storage, this branch of AI spawned the computer gaming industry, as well as a variety of personalized search engines and online shopping sites that not only remember our preferences, but anticipate our needs.
While the first wave of AI researchers relied on computing cycles to simulate human reasoning, the next approach relied on facts and data to mimic human experience. Expert systems gathered facts and rules into a knowledge base, then used computer-based inference engines to deduce new facts or answer questions. Knowledge engineers interviewed experts in medicine, automotive repair, industrial design, or other professions, then reduced these findings into machine-readable facts and rules. These knowledge bases were then used by others to help diagnose problems or answer questions. As the technology matured, researchers found ways to automate knowledge base development, feeding in reams of technical literature, or letting the software crawl the Web to find relevant information on its own.
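The core of such an inference engine can be sketched as forward chaining: rules of the form "if these facts hold, conclude this new fact" are applied repeatedly until nothing new can be deduced. The car-repair facts and rules below are invented for illustration, not drawn from a real knowledge base.

```python
# A minimal sketch of an expert-system inference engine using
# forward chaining: keep applying rules until no new facts emerge.

def infer(facts, rules):
    """Expand a set of facts using (conditions, conclusion) rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and conditions <= facts:
                facts.add(conclusion)      # deduce a new fact
                changed = True
    return facts

# Hypothetical automotive-repair rules a knowledge engineer might
# have written down after interviewing a mechanic.
rules = [
    ({"engine cranks", "no spark"}, "ignition fault"),
    ({"ignition fault", "old plugs"}, "replace spark plugs"),
]
known = infer({"engine cranks", "no spark", "old plugs"}, rules)
print("replace spark plugs" in known)  # → True
```

Note that the second rule only fires after the first has deduced "ignition fault", which is why the engine loops until the fact set stops growing.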
Another group of researchers tried to reproduce the workings of the human brain by creating artificial networks of neurons and synapses. With training, these neural networks could recognize patterns in what looked like random data. Images or sounds are fed into the input side of the network, with the correct answers supplied on the output side. Over time, the network reorganizes its internal structure so that when a similar input is fed in, it returns the correct answer. Neural networks work well when responding to human speech or when translating scanned images into text. Software that relies on this technology can read books to blind people or translate speech from one language to another.
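That training loop can be shown at its smallest scale: a single artificial neuron that nudges its connection weights every time it sees an input paired with the correct answer. This is a deliberately tiny sketch (one neuron learning the OR pattern with a simple delta rule), standing in for the far larger networks the paragraph describes.

```python
# A minimal sketch of neural-network training: one neuron adjusts
# its weights toward the correct answers until its outputs match.
import math

def sigmoid(x):
    """Squash a weighted sum into a value between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]                 # the OR pattern to learn

w1 = w2 = b = 0.0                      # start with no knowledge
for _ in range(5000):                  # repeated training passes
    for (x1, x2), t in zip(inputs, targets):
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = t - out                  # how wrong the neuron is
        w1 += 0.5 * err * x1           # nudge each weight toward
        w2 += 0.5 * err * x2           # the correct answer
        b  += 0.5 * err

for (x1, x2) in inputs:
    # after training, the rounded output matches the OR pattern
    print(x1, x2, round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

Recognizing speech or scanned text uses the same idea, just with many layers of such neurons and millions of weights instead of three.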
Large-scale data analysis, often called "big data," harnesses the power of many computers to discover facts and relations in data that the human mind cannot comprehend. Trillions of credit card charges or billions of social network relations can be scanned and correlated using a variety of statistical methods to discover useful information. Credit card companies can find buying patterns that indicate that a card has been stolen, or that a cardholder is in financial difficulty. Retail merchants may find buying patterns that indicate that a customer is pregnant, even before she knows this herself. Big data allows computers to understand the world in ways that we humans never could on our own.
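The credit-card example can be reduced to one of the simplest statistical methods involved: flagging a charge that sits far outside a cardholder's usual spending. The dollar amounts and the two-standard-deviation threshold below are made-up illustrations; real fraud detection combines many such signals across vastly larger datasets.

```python
# A minimal sketch of statistical pattern-spotting: flag charges
# more than two standard deviations from the cardholder's mean.
import statistics

charges = [12.5, 8.0, 15.25, 9.75, 11.0, 14.5, 10.25, 950.0]
mean = statistics.mean(charges)
stdev = statistics.pstdev(charges)     # population standard deviation

flagged = [c for c in charges if abs(c - mean) > 2 * stdev]
print(flagged)  # → [950.0]
```

Run across trillions of charges on many machines, the same kind of test surfaces stolen cards and shifting spending habits no human auditor could spot.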