“Mr. Watson, come here — I want to see you.” A. Bell, 1876
“Come, Watson, come! The game is afoot.” A. Doyle, 1904
“I’ll be rooting for you, Watson.” S. Wolfram, 2011
As many Slaw readers may know, IBM’s system known as Watson will go up against champions on Jeopardy! in less than two weeks—on February 14, 15, and 16, to be exact. (If your Valentine’s Day calendar is still empty, you now have something to put in it. I won’t tell anyone.) This is not the first computer-versus-human contest, of course: in 1996 and 1997 Garry Kasparov played chess against IBM’s Deep Blue, winning the first match and losing the second.
Employing “massively parallel computing,” Watson is able to produce natural language answers to natural language questions delivered in English—at least, questions of the Jeopardy kind that seek information. And it can do it in less than three seconds, making it the odds-on favourite to whup champions Ken Jennings and Brad Rutter.
You can learn about Watson on this IBM web page, where you’ll find a video that gives you a quick overview. It is a truly impressive accomplishment and one that those of us interested in data retrieval should applaud. Yet there’s a strong element of brute force about it that seems somehow disappointing. Perhaps it’s only that I’m able to see the “workings”—the racks and racks of servers—that makes me think there’s something crude about it; and when (all hail Moore’s Law!) the processors are all able to fit into a snuff box, I’ll feel the magic.
There are other ways of getting information out. We can (and should) structure the data we put in, making it possible for even stupid machines to pull it out. Or we can hope that Stephen Wolfram can eventually turn his technique to the wordier side of things.
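To make the point concrete, here is a toy sketch (the data and function names are invented purely for illustration): when facts are stored as structured records rather than buried in free text, answering a question becomes a simple field lookup, with no language understanding required at all.

```python
# Invented example data: facts stored as structured records,
# not as sentences a machine would have to parse.
records = [
    {"invention": "telephone", "inventor": "Alexander Graham Bell", "year": 1876},
    {"invention": "phonograph", "inventor": "Thomas Edison", "year": 1877},
]

def who_invented(invention):
    """Return the inventor of a named invention, or None if unknown."""
    for record in records:
        if record["invention"] == invention:
            return record["inventor"]
    return None

print(who_invented("telephone"))  # Alexander Graham Bell
```

Even this trivially “stupid” lookup gets the right answer instantly, because the structure was put in up front—whereas Watson’s racks of servers exist precisely to compensate for data that was never structured in the first place.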
Take a look at Wolfram’s take on Watson. He’s a fan, of course. It’s a great publicity stunt, and Wolfram has a showman’s appreciation for that. And his project is not in direct competition (yet) with IBM’s, so he’s free to be admiring.
As he says,
Wolfram|Alpha is a completely different kind of thing—something much more radical, based on a quite different paradigm. The key point is that Wolfram|Alpha is not dealing with documents, or anything derived from them. Instead, it is dealing directly with raw, precise, computable knowledge. And what’s inside it is not statistical representations of text, but actual representations of knowledge.
His explanation of what Wolfram|Alpha does is clear, as far as it goes; and it’s certainly interesting. But I have to say that I can’t quite grasp the core of it, likely because I’m very much based in the text world. Interestingly, he speculates that it may be possible at some point in the future to combine the efforts of a Watson-style text-processing machine with Wolfram|Alpha’s approach to improve the latter’s ability to work with text-based knowledge. So long as it fits in a snuff box, I’m up for it.