Computer Science: Machines who learn
Computers generated a great deal of excitement in the 1950s when they began to beat humans at checkers and to prove math theorems. In the 1960s the hope grew that scientists might soon be able to replicate the human brain in hardware and software and that “artificial intelligence” would soon match human performance on any task. In 1967 Marvin Minsky of the Massachusetts Institute of Technology, who died earlier this year, proclaimed that the challenge of AI would be solved within a generation.
That optimism, of course, turned out to be premature. Software designed to help physicians make better diagnoses and networks modeled after the human brain for recognizing the contents of photographs failed to live up to their initial hype. The algorithms of those early years lacked sophistication and needed more data than were available at the time. Computer processing was also too slow to power the massive calculations needed to approximate anything resembling the intricacies of human thought.
By the mid-2000s the dream of building machines with human-level intelligence had almost disappeared from the scientific community. At the time, even the term “AI” seemed to fall outside the realm of serious science. Scientists and writers describe the dashed hopes of the period from the 1970s until the mid-2000s as a series of “AI winters.” ...