There are a lot of books about artificial intelligence. The interlibrary catalog WorldCat lists over 36,000; Amazon claims to have over 20,000 for sale. Many carry histrionic titles, such as Life 3.0: Being Human in the Age of Artificial Intelligence, You Look Like a Thing and I Love You: How Artificial Intelligence Works, Why It’s Making the World a Weirder Place, and especially The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Melanie Mitchell’s new book, Artificial Intelligence: A Guide for Thinking Humans, is more modestly titled, but it is, in my opinion, having surveyed much of this literature, the most intelligent book on the subject. Mitchell is Professor of Computer Science at Portland State University as well as External Professor and Co-Chair of the Science Board at the Santa Fe Institute. And, unlike most active practitioners in the field, her evaluation of the current state of AI and its prospects is measured, cautious, and often skeptical.
The book begins with an introduction (“Prologue: Terrified”), a personal story of how she became involved with AI, inspired by Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid. Through a mixture of luck, audacity, and persistence, Mitchell first became Hofstadter’s research student and then a doctoral student under him. Decades later, in 2014, at a Google conference she attended with him, she learned that Hofstadter was upset that one AI program had defeated the world chess champion and another had generated a music “composition” indistinguishable from (and even judged better than) a genuine composition by Chopin. Hofstadter’s concerns inspired her to write about the pursuit of human-level AI (and beyond).
The book is divided into five parts: Background; Looking and Learning; Learning to Play; Artificial Intelligence Meets Natural Language; and The Barrier of Meaning. Although Mitchell states up front that the book isn’t intended to be a general survey or history of AI, she still manages to tell enough of its history—especially of its hubristic inauguration in 1956 by John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon—to put today’s enthusiastic optimism in perspective.
As Mitchell explains, one of the first branches in the pursuit of AI was artificial neural nets (ANN)—the foundation of today’s deep learning algorithms. She provides an example of such an ANN, the “perceptron,” designed to “learn” how to recognize hand-lettered digits. The illustrative grid is 18×18, and each square can take one of four shades: white, light gray, dark gray, and black. Curiously, she doesn’t mention the fundamental problem of even such a relatively modest ANN: the number of different possible inputs in this case is 2 to the 326th power (2^326), or:
136,703,170,298,938,245,273,281,389,194,851,335,334,573,089,430,825,777,276,610,662,900,622,062,449,960,995,201,469,573,563,940,864 […]
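Despite that combinatorial explosion, the perceptron itself is a very small mechanism: a weighted sum of its inputs compared against a threshold, with weights nudged after each mistake. The sketch below is illustrative only—it is not Mitchell’s actual digit example, and the tiny 2×2 “images,” labels, and learning rate are invented for the demonstration:

```python
# Minimal perceptron sketch (hypothetical toy example, not Mitchell's own).
# Inputs are flattened grids of grayscale values in [0, 1]; the perceptron
# learns a linear threshold separating two classes.

def predict(weights, bias, x):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron learning rule: nudge weights toward each mistake."""
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)  # -1, 0, or +1
            if error:
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
    return weights, bias

# Toy data: 2x2 "images" with four shades (0, 1/3, 2/3, 1), flattened
# row by row; label 1 when the top row is darker than the bottom row.
samples = [
    [1.0, 1.0, 0.0, 0.0],
    [2/3, 1.0, 1/3, 0.0],
    [0.0, 0.0, 1.0, 1.0],
    [1/3, 0.0, 2/3, 1.0],
]
labels = [1, 1, 0, 0]

weights, bias = train(samples, labels)
print([predict(weights, bias, x) for x in samples])  # → [1, 1, 0, 0]
```

The point of the exercise is the contrast Mitchell’s example raises: the learning rule is trivially simple, yet the space of possible inputs it must generalize over is astronomically large—far larger than any training set could cover.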