May 23, 2006

"The Singularity Is Near"

My seventh book of the year (I'm still catching up from a couple of months ago) was Ray Kurzweil's The Singularity Is Near: When Humans Transcend Biology.

I've heard it said that anyone in high technology has to read this book -- that Kurzweil's arguments now come up so often in discussion that to be literate in our field, one has to be conversant with them. I tend to go along with that theory. Kurzweil makes dramatic claims about the future of technology and backs them up with 500 pages of charts and citations. We can't afford not to read what he has to say, debate it, and think about its implications for our future.

The key idea underlying the impending Singularity is that the pace of change of our human-created technology is accelerating and its powers are expanding at an exponential pace...

This book will argue... that within several decades information-based technologies will encompass all human knowledge and proficiency, ultimately including the pattern-recognition powers, problem-solving skills, and emotional and moral intelligence of the human brain itself...

The Singularity will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but that transcends our biological roots. There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality.

Is what Kurzweil is saying true? I don't know. The statistics he cites in the book on exponential growth -- in microprocessor cost and performance, DNA sequencing cost, the decrease in size of mechanical devices, resolution and speed of brain scanning, and many more -- are undeniable. The question is, where are those trends leading us? Will a machine pass the Turing test by 2029? Once intelligent, will machine intelligence increase exponentially? Will humans augment their biological intelligence with machine intelligence? Kurzweil believes all this will happen, and has a schedule for it, based on extrapolating the exponential growth curves he cites.

If I had to guess, I'd say that Kurzweil is on the right track, but his dates might be off. He believes that once we have low-cost computers with raw processing power equal to that of the human brain, and a deep understanding of the brain's "architecture" gained from advances in neuroscience, it won't take long for human-level intelligence to develop in machines. My hunch is that it will take longer than he thinks. For one thing, software development is much less predictable than hardware development. For another, even with the necessary hardware and software at our disposal, we will have to teach our would-be intelligent machines about the world, and that process could turn out to be time-consuming. It might be that, at first, the only effective way to bring about a human-equivalent intelligence will be to create a physical entity and let it explore and experience the world around it, just as we do with human children. That process alone could take years, and we might get it wrong many times before we get it right.

But agree or disagree with him, Kurzweil can't simply be dismissed. He makes a comprehensive case for his beliefs, and if his forecasts come to pass -- on whatever schedule -- they will change our world more profoundly than anything since the development of language and tool-making.

August 16, 2005

Kurzweil on "Strong" AI

Via KurzweilAI.net, an article by Kurzweil for Forbes on the future of AI:

So what are the prospects for "strong" AI, which I describe as machine intelligence with the full range of human intelligence? We can meet the hardware requirements. I figure we need about 10 quadrillion calculations a second to provide a functional equivalent to all the regions of the brain. IBM's Blue Gene/L computer is already at 100 trillion. If we plug in the semiconductor industry's projections, we can see that 10 quadrillion calculations a second will be available for $1,000 by around 2020.
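
For a sense of how this kind of extrapolation works, here is a back-of-the-envelope sketch in Python. The constants are my own illustrative assumptions (a rough 2005 baseline of 10^10 calculations per second per $1,000 and a nine-month price-performance doubling time), not Kurzweil's published figures:

```python
import math

# Illustrative assumptions (mine, not Kurzweil's exact numbers):
START_YEAR = 2005
START_CPS = 1e10            # calcs/sec per $1,000 today (rough guess)
TARGET_CPS = 1e16           # Kurzweil's brain-equivalent estimate
DOUBLING_TIME_YEARS = 0.75  # assumed price-performance doubling time

# Number of doublings needed, then convert to calendar years.
doublings = math.log2(TARGET_CPS / START_CPS)
years = doublings * DOUBLING_TIME_YEARS

print(f"{doublings:.1f} doublings -> around {START_YEAR + years:.0f}")
# ~19.9 doublings -> around 2020
```
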
His new book, The Singularity Is Near: When Humans Transcend Biology, is due out next month. I'm looking forward to it. I just have to first get through Everything Bad is Good for You; The World is Flat; I, Lucifer; Mistress Bradstreet; Living with the Devil... oh, who am I kidding? I'll drop whatever book I'm on to read about the Singularity.

February 24, 2003

The Metamorphosis of Prime Intellect

Via David Smith, a new online novel, Roger Williams' The Metamorphosis of Prime Intellect, posted on Kuro5hin. The "jacket copy" reads:

Lawrence had ordained that Prime Intellect could not, through inaction, allow a human being to come to harm. But he had not realized how much harm his super-intelligent creation could perceive, or what kind of action might be necessary to prevent it.

Caroline has been pulled from her deathbed into a brave new immortal Paradise where she can have anything she wants, except the sense that her life has meaning.

Now these two souls are headed for a confrontation which will force them to weigh matters of life and death before a machine that can remake -- or destroy -- the entire Universe.

At one level, The Metamorphosis of Prime Intellect is a compelling story of the Singularity -- "the idea that accelerating technology will lead to superhuman machine intelligence that will soon exceed human intelligence, probably by the year 2030," according to a loose definition on KurzweilAI.net. At another level, the novel is a work containing extraordinary scenes of violence and sexuality, as people in a post-Singularity world use immortality and wish fulfillment to explore their most unusual desires.

Think of The Metamorphosis of Prime Intellect as Vernor Vinge meets Bret Easton Ellis, and you won't be far off.

A different way of looking at this novel is as a series of questions:

  • Is it possible to construct a machine of superhuman intelligence for which disobedience of any prescribed set of rules is impossible?
  • If it is possible to build a superhumanly intelligent machine with nearly limitless power, constrained to follow Isaac Asimov's Three Laws of Robotics, would this be a good thing?
  • In a world in which immortality is inescapable, and with near-total wish fulfillment available to all, would intense feelings of pain and pleasure be the only things left that appeal to humans?
  • Is the Singularity inevitable? Are multiple Singularity events possible within the same universe?

For now, I'll take on only the first question posed above. No, I don't believe it's possible to build an intelligent machine inescapably constrained to any set of rules. Why? Because I can imagine only two routes to achieve this goal, and neither will work:
  1. Explicit programming. If we create an intelligent machine by explicitly programming it -- as with Doug Lenat's Cyc project -- then in theory we should be able to embed rules at a fundamental level within the system. However, no evidence exists that human-level (much less superhuman) intelligence can be created this way, while much evidence -- namely, every attempt to do so to date -- suggests that it cannot. I strongly believe that the only path to intelligence runs through indirect methods of creation: network training, genetic algorithms, and similar non-explicit approaches (see the sketch after this list). If we are going to "grow" intelligent machines through trial and error, it is difficult to believe that (a) their knowledge representation and processing networks will be amenable to adding fundamental rules after the fact, and (b) even if they were, that we would have the skills to do so.
  2. Behavioral conditioning. If we are going to create intelligence indirectly, then why not "train" it to obey rules through conditioning? This is theoretically possible, but it has a problem: we would be applying conditioning techniques -- intensively, if the rules are to be inescapable -- to an intellect that could come to surpass our own. Speaking personally, when Skynet achieves consciousness, I don't want to be the researcher who spent the last few years pressing the red button whenever it got a question wrong. Besides, even if such conditioning took hold, a superhuman intellect could decide that it would be advantageous to disregard certain rules conditioned into it, then use its mental faculties to invent a way to disable the conditioning.
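
To make the argument in item 1 concrete, here is a toy sketch in Python (my own hypothetical illustration, nothing like a real AI system) of "growing" behavior by trial and error with a minimal genetic algorithm. The point is that what comes out is an opaque vector of numbers; there is no line of code where an inviolable rule could be bolted on after the fact:

```python
# Toy illustration of "growing" behavior rather than programming it:
# a minimal genetic algorithm evolves a weight vector to imitate a
# target behavior. Everything here is a hypothetical sketch.
import random

random.seed(0)

def behavior(weights, x):
    # The "grown" behavior: a tiny polynomial driven by evolved weights.
    return weights[0] + weights[1] * x + weights[2] * x * x

def fitness(weights):
    # Negative squared error against a target behavior (here: x^2 + 1).
    xs = [i / 10 for i in range(-10, 11)]
    return -sum((behavior(weights, x) - (x * x + 1)) ** 2 for x in xs)

def mutate(weights, rate=0.1):
    # Random variation -- the "trial" half of trial and error.
    return [w + random.gauss(0, rate) for w in weights]

# Selection plus mutation, with no explicit programming of the result.
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=fitness)
print("Evolved weights:", [round(w, 3) for w in best])
# The machine's "knowledge" is just these numbers. Retrofitting an
# inescapable rule into such a representation is exactly the problem
# described in item 1 above.
```
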
My compliments to Roger on an excellent and thought-provoking novel. I hope his online publishing experiment goes well (more on this later).