In pop-technology writing, it is common to encounter a distinction between what are called Artificial Narrow Intelligence and Artificial General Intelligence—often abbreviated as ANI and AGI respectively. The terms are meant to refer to categories of machines, both existing and hypothetical. The distinction is problematic, and the terms are often used with little precision or consistency.
“General AI” is often described as “human-level AI”. Though only hypothetical, its crowning feature is a general problem-solving ability that enables it to learn new tasks across many domains. This feature is emphasized so frequently that those who talk about General AI seem to assume it is the most relevant feature of human intelligence. Sometimes “General AI” is used synonymously with “Strong AI” (not to be confused with that term’s original meaning). The concept of General AI is also ambiguous with respect to the simulation/emulation distinction. Is human-level AI human-like in its power to simulate human speech and behavior? Or is it human-like because it can emulate human intelligence and consciousness? Understanding the difference will be important for future legal, ethical, and social concerns.
When people refer to existing technology as “AI”, they often classify it as “Narrow AI”. It is called narrow because it performs tasks that normally require human intelligence, but only within a very specific and narrowly defined domain. The common example is a chess program that can “play” chess but cannot learn how to do anything else. “Narrow AI” is often used synonymously with “Weak AI” (again, not to be confused with that term’s original meaning). The concept of Narrow AI is problematic for two reasons.
First, it is a moving target. There are many computer systems that perform tasks that normally require human intelligence, and do so in a narrow domain, yet we don’t consider them to be AI. A simple pocket calculator is one example. If you compare the speed and accuracy of a pocket calculator to human capabilities, the calculator is superintelligent. But we don’t think of a pocket calculator as artificially intelligent, even though it fits the above description of Narrow AI. Some may object that current AI technologies are more complex than a pocket calculator. What we currently call AI can process more complex inputs, perform more complex tasks, and respond in a wider range of ways to more complex situations. So is Narrow AI just a computer system that is more complex than previous computer systems? If so (and how could it be otherwise?), then what happens when the technology becomes even more complex 20 years from now? In 2038, we likely won’t call 2018 technology “AI”. Our future view of current technology will be similar to our current view of the technology of the past. Added complexity by itself is not sufficient to warrant a comparison between computer systems and human intelligence.
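To make the calculator point concrete, here is a minimal sketch (the function and numbers are my own illustration, not from any real system). By the definition above, even this one-line function performs a task that normally requires human intelligence, in a narrow domain, faster and more accurately than any person—yet nobody would call it AI:

```python
import time

def multiply(a, b):
    # A "narrow" system: it performs exactly one task that normally
    # requires human intelligence, and it cannot do anything else.
    return a * b

start = time.perf_counter()
result = multiply(73_912, 48_557)   # exact product of two 5-digit numbers
elapsed = time.perf_counter() - start

print(result)           # computed exactly, in well under a millisecond
print(elapsed < 0.001)  # far faster than human mental arithmetic
```

By the "tasks that normally require human intelligence, in a narrow domain" criterion, this qualifies as Narrow AI just as well as a pocket calculator does—which is precisely the problem with the criterion.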
Second, there are many computer systems that people tend to call “artificial intelligence” which perform tasks that people can’t perform at all. For example, a tsunami warning system can detect oceanic changes that are imperceptible to humans. AlphaGo—a Go-playing computer—can detect patterns unnoticeable to humans and can make moves in a very un-human-like way. These and other examples of “Narrow AI” call into question the usefulness of comparing computers and humans.
The term “Artificial Intelligence” was coined by John McCarthy in 1955. He described the problem of Artificial Intelligence as “that of making a machine behave in ways that would be called intelligent if a human were so behaving.” My previous two blog posts have been about some of the philosophical distinctions that arise from the comparison between humans and computers. More than 60 years after the term was coined, it is unclear how useful the human-to-machine comparison has been, at least for the practical development of computer hardware and software. For most practical purposes, the comparison to humans is irrelevant: we have some task or job to be done, and we use computers to help us do it.
The current field of artificial intelligence isn’t really about emulating or even simulating human behavior, unless you are talking about improving animatronic puppets or making video games more realistic. The current field of AI is really about advances in automation. According to Jerry Kaplan, an AI expert at Stanford University, “Little more than speculation and wishful thinking ties the actual work in AI to the mysterious workings of the human mind.”