That will change in the next five years, says IBM. By then, computers will be far more aware of the world around them, and able to understand it. The company's annual "5 in 5" list, in which IBM predicts the five computing trends that will arrive within five years, reads like a roll call of the five human senses: it predicts computers with sight, hearing, taste, smell and touch.
The five senses are really all part of one grand concept: cognitive computing, in which machines experience the world more like a human would. For example, a cognizant computer wouldn't see a painting merely as a set of data points describing color, pigment and brush strokes; rather, it would see the object holistically, as a painting, and understand what that means.
"That's a foundationally different way of thinking of computing," says Bernie Meyerson, IBM's vice president of innovation. "You have to change how you think about absorbing data. You can't just take a picture and file the picture. You have to treat the picture as an entity at a very high level, as opposed to just a bunch o' bits."
"[Cognitive computing] makes for some very interesting shifts in capability," he adds. "That's a rather profound sort of driver."
One of the key differences between a cognizant computer and a traditional one is the idea of training. A cognitive system won't just continue to give the same wrong or unhelpful answer; if it arrives at the wrong conclusion, it can change its approach and try again.
"In a cognitive machine, you set it up and run it, but it observes," Meyerson says. "And that's very different because it statistically calculates an end result. However, if that answer is incorrect and you tell it, it'll actually re-weight those probabilities that led it to get the wrong answer and eventually get to the right answer."
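The re-weighting Meyerson describes can be illustrated with a toy sketch. This is not IBM's system; it is a minimal, hypothetical classifier that scores labels by weighted evidence and, when told its answer was wrong, multiplicatively shifts weight away from the features that led to the wrong answer and toward the right one:

```python
# Toy illustration (not IBM's implementation) of feedback-driven
# re-weighting: a guess that turns out wrong triggers an update to
# the weights that produced it. All names here are hypothetical.

def predict(weights, features):
    """Score each label by summing its weights for the observed
    features, and return the highest-scoring label."""
    scores = {label: sum(w.get(f, 0.0) for f in features)
              for label, w in weights.items()}
    return max(scores, key=scores.get)

def correct(weights, features, wrong_label, right_label, rate=0.5):
    """Feedback step: down-weight the evidence behind the wrong
    answer and up-weight the same evidence for the right one."""
    for f in features:
        weights[wrong_label][f] = weights[wrong_label].get(f, 1.0) * rate
        weights[right_label][f] = weights[right_label].get(f, 1.0) / rate

# Start with uniform weights: the system knows nothing yet.
labels = ["cat", "dog"]
features = ["whiskers", "barks"]
weights = {lab: {f: 1.0 for f in features} for lab in labels}

# "Training": show labeled examples, correcting only wrong guesses.
data = [(["whiskers"], "cat"), (["barks"], "dog")]
for _ in range(5):
    for feats, truth in data:
        guess = predict(weights, feats)
        if guess != truth:
            correct(weights, feats, guess, truth)

assert predict(weights, ["whiskers"]) == "cat"
assert predict(weights, ["barks"]) == "dog"
```

The point of the sketch is the loop structure: nothing about cats or dogs is programmed in; the system converges on the right answers only because wrong answers trigger re-weighting, which is the programming-versus-training distinction the article draws.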
Cognition Does Not Equal Intelligence
Attributing human senses to machines can't help but conjure images of androids or self-aware computers capable of independent thought and action. Meyerson says there's a massive chasm separating cognitive computing and true artificial intelligence.
"This is really an assistive technology," he explains. "It can't go off on its own. It's not designed to do that. What it's designed to do, in fact, is respond to a human in an assistive manner. But by providing a human style of input, it's freed us from the task of programming and moved us to the task of training. It simply has not more intelligence, but more bandwidth, and there's a huge difference between the two."