February 2011. The media began reporting on the protest movements in the Middle East that would come to be known as the Arab Spring. But just after Feb. 16, some headlines were reserved for the cognitive achievements of a certain character named Watson, who had just won the $1 million first prize on “Jeopardy!” Winning that kind of money is usually big news only from the winner’s point of view; almost nobody else would have cared were it not for the fact that Watson was a cluster of 2,880 processor cores with 16 terabytes of RAM built by I.B.M.
That is certainly impressive. But as hardware and software quietly and consistently evolve, even such events are ceasing to attract attention. Not long ago, a computer outperformed a highly skilled human at chess. A machine has computed an approximation of pi more accurate than any brain-bearing creature could obtain in a lifetime. Examples of machines outperforming humans at particular cognitive tasks abound, but can these machines accurately be called intelligent? Well, they are definitely able to acquire and apply knowledge and skills, which is the definition I pulled from my Mac’s dictionary. But let’s bring a more sophisticated view to the table.
In 1950, Alan Turing proposed a test in which a machine must exhibit behavior indistinguishable from that of an actual human. The Turing Test, as it is now known, became a standard for artificial intelligence in computer science circles. But, as it turns out, that’s not a very reassuring definition. You can easily imagine a setting in which, given the proper interface, most people would not be able to tell whether they are playing chess or “Jeopardy!” against a human or a computer. Computer systems can display high levels of intelligence in more artistic tasks as well. For example, researchers at the Sony Computer Science Laboratories in Paris recently developed an algorithm for jazz music composition that passed a musical version of the Turing Test.
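For readers who like to see the idea made concrete, here is a minimal, purely illustrative sketch in Python of the blind setup the test relies on. The human_player, machine_player and judge below are hypothetical stand-ins I made up for this column, not anything Turing or the Sony researchers actually built.

import random

def human_player(prompt):
    # Stand-in for a human's typed reply; in a real trial a person would answer live.
    return "Honestly, I'd have to think about " + prompt.lower().rstrip("?") + "."

def machine_player(prompt):
    # Stand-in for a chatbot's reply.
    return "That depends on how you define " + prompt.split()[-1].rstrip("?") + "."

def run_trial(judge, prompt, rng):
    # The judge sees two anonymous replies, A and B, and must say which came from the machine.
    players = [("human", human_player), ("machine", machine_player)]
    rng.shuffle(players)  # hide which player is behind which label
    replies = {label: fn(prompt) for label, (_, fn) in zip("AB", players)}
    guess = judge(replies)  # the judge returns "A" or "B"
    actual = "AB"[[name for name, _ in players].index("machine")]
    return guess != actual  # True means the machine went undetected in this trial

def coin_flip_judge(replies):
    # A judge who cannot tell the replies apart does no better than guessing.
    return random.choice("AB")

rng = random.Random(0)
undetected = sum(run_trial(coin_flip_judge, "What is intelligence?", rng) for _ in range(1000))
print(f"machine went undetected in {undetected} of 1,000 trials")

If, over many such trials, judges can do no better than this coin flipper, the machine is behaving indistinguishably from a person, at least by this crude yardstick.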
This particular experiment made me feel more comfortable with my opinion about jazz music, but before you judge me, consider this: eventually, machines will probably be able to pass the Turing Test in any cognitive or creative task you can imagine.
Building an android that looks and acts like a human, like the ones in Steven Spielberg’s film “A.I. Artificial Intelligence,” seems much more difficult. But at least in principle there is nothing, in the form of a mathematical theorem, preventing such machines from being created. Whether they will ever exist, and how we will react to them, is time’s burden to answer and sci-fi authors’ task to imagine. But, so far, there is little to be said against their inevitability. Perhaps the Watson of the future, after winning a championship, will go out with some friends to celebrate over drinks and even brag about its win on its favorite online social network.
Algorithms and machines that outperform humans at increasingly sophisticated tasks keep appearing. We will hear about them, be impressed that we were able to build them and get angry that some of them steal our jobs. Then we will move on with our lives, eventually getting grumpy that they are not working properly, the same way we get grumpy at people over trivialities.
So, yes, it is fine to call these machines and algorithms intelligent, for they pass the simple criteria of intelligence that we set. In fact, it is important to remember what they are able to achieve and how they improve our lives. They keep raising the threshold an activity must clear to count as intelligent, which in turn pushes the limits of what we can do as human beings.
Finally, to say that something is intelligent is less risky than saying that it knows, it feels, it thinks or it is conscious. Those are deeper philosophical mazes, often and improperly dragged into the artificial intelligence debate, and they are the terms that really cause controversy. I’ll conveniently note that this text is approaching its size limit and skip those topics, sparing you the existential thoughts that often cloud discussions of artificial intelligence.
Marcelo Cicconet is a contributing columnist. Email him at [email protected].
Ron • Feb 16, 2013 at 10:41 pm
Nice article, but inevitably, some day you cannot escape those clouding thoughts, such as whether an A.I. agent is fully conscious, before exposing human society to the norms of creative synthetic systems. We must search for positivity; that is, a machine, if intelligent, should behave and act more or less rationally, like human beings, and better understand all of the contexts that we human beings are often confronted with.