Thursday, November 09, 2017

Why We Should Be Concerned About Artificial Superintelligence

This article is a bit long in its entirety, but if you find the subject as fascinating as I do, it is worth the time to read it.

BY MATTHEW GRAVES
The human brain isn’t magic; nor are the problem-solving abilities our brains possess. They are, however, still poorly understood. If there’s nothing magical about our brains or essential about the carbon atoms that make them up, then we can imagine eventually building machines that possess all the same cognitive abilities we do. Despite the recent advances in the field of artificial intelligence, it is still unclear how we might achieve this feat, how many pieces of the puzzle are still missing, and what the consequences might be when we do. There are, I will argue, good reasons to be concerned about AI.
The Capabilities Challenge
While we lack a robust and general theory of intelligence of the kind that would tell us how to build intelligence from scratch, we aren’t completely in the dark. We can still make some predictions, especially if we focus on the consequences of capabilities instead of their construction. If we define intelligence as the general ability to figure out solutions to a variety of problems or identify good policies for achieving a variety of goals, then we can reason about the impacts that more intelligent systems could have, without relying too much on the implementation details of those systems.
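The article leaves this definition informal, but a toy sketch can make it concrete (the function names and toy goals and policies below are my own illustration, not from the source): on this view, a system is more intelligent the better the policies it selects perform across many different goals, not just one.

```python
import random

# A minimal sketch of "intelligence as general goal-achievement":
# an agent is scored by how well its chosen policies perform
# across a *variety* of goals, rather than a single fixed task.
# All names and toy values here are illustrative assumptions.

def evaluate(policy, goal, trials=100):
    """Average score a policy earns on a goal over random episodes."""
    return sum(goal(policy(random.random())) for _ in range(trials)) / trials

def choose_policy(policies, goal):
    """Pick the candidate policy with the best expected score on this goal."""
    return max(policies, key=lambda p: evaluate(p, goal))

def generality_score(policies, goals):
    """Capability in the article's sense: performance averaged over many goals."""
    return sum(evaluate(choose_policy(policies, g), g) for g in goals) / len(goals)

# Toy goals: get close to a target value; stay near zero.
goals = [lambda x: -abs(x - 0.7), lambda x: -abs(x)]
# Toy policies: simple transformations of a random observation.
policies = [lambda obs: 0.7, lambda obs: 0.0, lambda obs: obs]

print(generality_score(policies, goals))
```

The point of the sketch is only that this notion of intelligence is about outcomes, which is why the article can reason about capable systems without committing to how they are built inside.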
[Illustration: retro AI destroying a city]
Our intelligence is ultimately a mechanistic process that happens in the brain, but there is no reason to assume that human intelligence is the only possible form of intelligence. And while the brain is complex, this is partly an artifact of the blind, incremental progress that shaped it—natural selection. This suggests that developing machine intelligence may turn out to be a simpler task than reverse-engineering the entire brain. The brain sets an upper bound on the difficulty of building machine intelligence; work to date in the field of artificial intelligence sets a lower bound; and within that range, it’s highly uncertain exactly how difficult the problem is. We could be 15 years away from the conceptual breakthroughs required, or 50 years away, or more.
The fact that artificial intelligence may be very different from human intelligence also suggests that we should be very careful about anthropomorphizing AI. Depending on the design choices AI scientists make, future AI systems may not share our goals or motivations; they may have very different concepts and intuitions; or terms like “goal” and “intuition” may not even be particularly applicable to the way AI systems think and act. AI systems may also have blind spots regarding questions that strike us as obvious. And they might end up far more intelligent than any human.
The last possibility deserves special attention, since superintelligent AI has far more practical significance than other kinds of AI. […]


