In today's Delanceyplace encore excerpt - increasing numbers of
large-scale projects have been launched to create highly advanced,
computer-based artificial intelligence systems. The most highly publicized of
these has been "Watson," the system built by IBM that defeated the
highest-rated Jeopardy champions. Another such system is NELL, which scours
the World Wide Web, reading and learning twenty-four hours a day, seven days a
week; within its first six months of operation it had developed some four
hundred thousand beliefs:
"How could computers get smarter about the world? Tom Mitchell, a
computer science professor at Carnegie Mellon, had an idea. He would develop
a system that, just like millions of other students, would learn by reading.
As it read, it would map all the knowledge it could make sense of. It would
learn that Buenos Aires appeared to be a city, and a capital too, and for
that matter also a province, that it fit inside Argentina, which was a
country, a South American country. The computer would perform the same
analysis for billions of other entities. It would read twenty-four hours a
day, seven days a week. It would be a perpetual reading machine, and by
extracting information, it would slowly cobble together a network of
knowledge: every president, continent, baseball team, volcano, endangered
species, crime. Its curriculum was the World Wide Web.
"Mitchell's goal was not to build a smart computer but to construct a
body of knowledge - a corpus - that smart computers everywhere could turn to as a
reference. This computer, he hoped, would be doing on a global scale what the
human experts in chemistry had done, at considerable cost, for the Halo
system [Paul Allen's artificial intelligence system]. Like Watson,
Mitchell's Read-the-Web computer, later called NELL, would feature a broad
range of analytical tools, each one making sense of the readings from its own
perspective. Some would compare word groups, others would parse the grammar.
'Learning method A might decide, with 80 percent probability, that Pittsburgh
is a city,' Mitchell said. 'Method C believes that Luke Ravenstahl is the
mayor of Pittsburgh.' As the system processed these two beliefs, it would
find them consistent and mutually reinforcing. If the entity called
Pittsburgh had a mayor, there was a good chance it was a city. Confidence in
that belief would rise. The computer would learn.
"Mitchell's team turned on NELL in January 2010. It worked on a
subsection of the Web, a cross section of two hundred million Web pages that
had been culled and curated by Mitchell's colleague Jamie Callan. (Operating
with a fixed training set made it easier in the early days to diagnose
troubles and carry out experiments.) Within six months, the machine had
developed some four hundred thousand beliefs - a minute fraction of what it
would need for a global knowledge base. But Mitchell saw NELL and other
fact-hunting systems growing quickly. 'Within ten years,' he predicted,
'we'll have computer programs that can read and extract 80 percent of the
content of the Web, which itself will be much bigger and richer.' This, he
said, would produce 'a huge knowledge base that AI (artificial intelligence)
can work from.' "
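The belief-combining mechanism Mitchell describes, in which separate learning methods each propose facts with a probability and consistent facts raise one another's confidence, can be sketched in a few lines. The Python below is a hypothetical toy, not NELL's actual algorithm: the relation names, the noisy-OR combination rule, and the 0.3 reinforcement weight are all illustrative assumptions.

# Toy illustration (not NELL's real implementation) of probabilistic
# beliefs from multiple methods reinforcing one another.
from collections import defaultdict

# Assumed coupling rule: if the supporting belief holds, the first
# belief becomes more plausible (a city is likely if it has a mayor).
REINFORCES = {
    ("Pittsburgh", "is_a", "city"):
        [("Luke Ravenstahl", "mayor_of", "Pittsburgh")],
}

def combine(probabilities):
    # Noisy-OR: the belief fails only if every piece of evidence fails.
    result = 1.0
    for p in probabilities:
        result *= (1.0 - p)
    return 1.0 - result

# Beliefs proposed by different learning methods, keyed by
# (subject, relation, object) triples, as in the excerpt's examples.
proposals = defaultdict(list)
proposals[("Pittsburgh", "is_a", "city")].append(0.80)                 # "method A"
proposals[("Luke Ravenstahl", "mayor_of", "Pittsburgh")].append(0.75)  # "method C"

# First pass: fuse each belief's per-method estimates.
beliefs = {fact: combine(ps) for fact, ps in proposals.items()}

# Second pass: a consistent, related belief nudges confidence upward.
for fact, supporters in REINFORCES.items():
    for s in supporters:
        if fact in beliefs and beliefs.get(s, 0.0) > 0.5:
            beliefs[fact] = combine([beliefs[fact], 0.3 * beliefs[s]])

for fact, conf in sorted(beliefs.items(), key=lambda kv: -kv[1]):
    print(f"{fact}: {conf:.2f}")

Run as written, the mayor belief nudges the city belief upward from its original 0.80, a small-scale version of what the excerpt means by confidence rising as the system finds beliefs consistent and mutually reinforcing.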
Author: Stephen Baker
Title: Final Jeopardy
Publisher: Houghton Mifflin Harcourt
Date: Copyright 2011 by Stephen Baker
Pages: 160-161
Final Jeopardy: Man vs. Machine and the Quest to Know Everything, by Stephen Baker (Houghton Mifflin Harcourt, hardcover, release date 2011-02-17)