On ‘Jeopardy!’ Watson Win Is All but Trivial
Wednesday, July 6, 2011
Extracted Page: http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?_r=1&ref=technology
Photo: Carol Kaelson/Jeopardy Productions Inc., via Associated Press
YORKTOWN HEIGHTS, N.Y. — In the end, the humans on “Jeopardy!” surrendered meekly.
Facing certain defeat at the hands of a room-size I.B.M. computer on Wednesday evening, Ken Jennings, famous for winning 74 games in a row on the TV quiz show, acknowledged the obvious. “I, for one, welcome our new computer overlords,” he wrote on his video screen, borrowing a line from a “Simpsons” episode.
From now on, if the answer is “the computer champion on ‘Jeopardy!’,” the question will be, “What is Watson?”
For I.B.M., the showdown was not merely a well-publicized stunt and a $1 million prize, but proof that the company has taken a big step toward a world in which intelligent machines will understand and respond to humans, and perhaps inevitably, replace some of them.
Watson, specifically, is a “question answering machine” of a type that artificial intelligence researchers have struggled with for decades — a computer akin to the one on “Star Trek” that can understand questions posed in natural language and answer them.
Watson showed itself to be imperfect, but researchers at I.B.M. and other companies are already developing uses for Watson’s technologies that could have a significant impact on the way doctors practice and consumers buy products.
“Cast your mind back 20 years and who would have thought this was possible?” said Edward Feigenbaum, a Stanford University computer scientist and a pioneer in the field.
In its “Jeopardy!” project, I.B.M. researchers were tackling a game that requires not only encyclopedic recall, but also the ability to untangle convoluted and often opaque statements, a modicum of luck, and quick, strategic button pressing.
The contest, which was taped in January here at the company’s T. J. Watson Research Laboratory before an audience of I.B.M. executives and company clients, played out in three televised episodes concluding Wednesday. At the end of the first day, Watson was in a tie with Brad Rutter, another ace human player, at $5,000 each, with Mr. Jennings trailing with $2,000.
But on the second day, Watson went on a tear. By night’s end, Watson had a commanding lead with a total of $35,734, compared with Mr. Rutter’s $10,400 and Mr. Jennings’s $4,800.
Victory was not cemented until late in the third match, when Watson was working through the Nonfiction category. “Same category for $1,200,” it said in a manufactured tenor, and lucked into a Daily Double. Mr. Jennings grimaced.
Even so, I.B.M. researchers acknowledged that had Mr. Jennings won another key Daily Double later in the match, the game might have come down to Final Jeopardy.
The final tally was $77,147 to Mr. Jennings’s $24,000 and Mr. Rutter’s $21,600.
More than anything, the contest was a vindication for the academic field of computer science, which began with great promise in the 1960s with the vision of creating a thinking machine and which became the laughingstock of Silicon Valley in the 1980s, when a series of heavily financed start-up companies went bankrupt.
Despite its intellectual prowess, Watson was by no means omniscient. On Tuesday evening during Final Jeopardy, the category was U.S. Cities and the clue was: “Its largest airport is named for a World War II hero; its second largest for a World War II battle.”
Watson drew guffaws from many in the television audience when it responded “What is Toronto?????”
The string of question marks indicated that the system had very low confidence in its response, I.B.M. researchers said, but because it was Final Jeopardy, it was forced to give one. The machine did not suffer much damage: it had wagered just $947 on the clue. (The correct response was, “What is Chicago?”)
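The behavior the researchers describe, staying silent on ordinary clues when confidence is low, but being forced to answer in Final Jeopardy and signaling doubt with question marks, can be sketched as a toy policy. This is purely illustrative: the threshold, the question-mark scaling, and the function itself are invented here and are not I.B.M.'s actual Watson implementation.

```python
from typing import Optional

# Hypothetical confidence-gated answering policy, loosely modeled on the
# behavior described in the article. All numbers here are made up.
def respond(answer: str, confidence: float, final_jeopardy: bool,
            threshold: float = 0.5) -> Optional[str]:
    """Return a response string, or None if the system declines to buzz in."""
    if confidence < threshold and not final_jeopardy:
        return None  # too unsure to risk buzzing in on a normal clue
    # In Final Jeopardy a response is mandatory, however unsure the system is.
    # Lower confidence yields more question marks (capped at five).
    marks = "?" * min(5, 1 + int((1 - confidence) * 5))
    return f"What is {answer}{marks}"
```

Under this toy policy, a very-low-confidence Final Jeopardy guess comes out as `What is Toronto?????`, while the same confidence on a regular clue produces no buzz at all.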
“We failed to deeply understand what was going on there,” said David Ferrucci, an I.B.M. researcher who led the development of Watson. “The reality is that there’s lots of data where the title is U.S. cities and the answers are countries, European cities, people, mayors. Even though it says U.S. cities, we had very little confidence that that’s the distinguishing feature.”
The researchers also acknowledged that the machine had benefited from the “buzzer factor.”