The official site of bestselling author Michael Shermer


Apocalypse A.I.

Artificial intelligence as existential threat


In 2014 SpaceX CEO Elon Musk tweeted: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.” That same year University of Cambridge cosmologist Stephen Hawking told the BBC: “The development of full artificial intelligence could spell the end of the human race.” Microsoft co-founder Bill Gates also cautioned: “I am in the camp that is concerned about super intelligence.”

How the AI apocalypse might unfold was outlined by computer scientist Eliezer Yudkowsky in a paper in the 2008 book Global Catastrophic Risks: “How likely is it that AI will cross the entire vast gap from amoeba to village idiot, and then stop at the level of human genius?” His answer: “It would be physically possible to build a brain that computed a million times as fast as a human brain…. If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.” Yudkowsky thinks that if we don’t get on top of this now it will be too late: “The AI runs on a different timescale than you do; by the time your neurons finish thinking the words ‘I should do something’ you have already lost.”
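Yudkowsky's figures follow from straightforward unit conversion; a quick back-of-the-envelope check (the speedup factor and calendar constants are taken from the quote, not from any code of his):

```python
# Sanity-check Yudkowsky's arithmetic: a mind running a million times
# faster than a human brain experiences one subjective year in the
# wall-clock time computed below.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds
SPEEDUP = 1_000_000                    # "a million times as fast"

# Wall-clock seconds per subjective year
subjective_year = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year passes in {subjective_year:.1f} seconds")

# Wall-clock hours per subjective millennium
millennium_hours = 1000 * subjective_year / 3600
print(f"A subjective millennium passes in {millennium_hours:.2f} hours")
```

The result, about 31.6 seconds per subjective year and roughly 8.8 hours per millennium, matches the "31 physical seconds" and "eight-and-a-half hours" in the quote to rounding.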

The paradigmatic example is University of Oxford philosopher Nick Bostrom’s thought experiment of the so-called paperclip maximizer presented in his Superintelligence book: An AI is designed to make paperclips, and after running through its initial supply of raw materials, it utilizes any available atoms that happen to be within its reach, including humans. As he described in a 2003 paper, from there it “starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.” Before long, the entire universe is made up of paperclips and paperclip makers. (continue reading…)


In the Year 9595

Why the singularity is not near, but hope springs eternal

Watson is the IBM computer built by David Ferrucci and his team of 25 research scientists tasked with designing an artificial intelligence (AI) system that can rival human champions at the game of Jeopardy. After beating the greatest Jeopardy champions, Ken Jennings and Brad Rutter, in February 2011, the computer is now being employed in more practical tasks such as answering diagnostic medical questions.

I have a question: Does Watson know that it won Jeopardy? Did it think, “Oh, yeah! I beat the great Ken Jen!”? In other words, did Watson feel flushed with pride after its victory? This has been my standard response when someone asks me about the great human-versus-machine Jeopardy shoot-out; people always respond in the negative, understanding that such self-awareness is not yet the province of computers. So I put the line of inquiry to none other than Ferrucci at a recent conference. His answer surprised me: “Yes, Watson knows it won Jeopardy.” I was skeptical: How can that be, since such self-awareness is not yet possible in computers? “Because I told it that it won,” he replied with a wry smile.

Of course. You could even program Watson to vocalize a Howard Dean–like victory scream, but that is still a far cry from its feeling triumphant. That level of self-awareness in computers, and the time when it might be achieved, was a common theme at the Singularity Summit held in New York City on the weekend of October 15–16, 2011. There hundreds of singularitarians gathered to be apprised of our progress toward the date of 2045, set by visionary computer scientist Ray Kurzweil as being when computer intelligence will exceed that of all humanity by one billion times, humans will realize immortality, and technological change will be so rapid and profound that we will witness an intellectual event horizon beyond which, like its astronomical black hole namesake, life is not the same. (continue reading…)


Transhumanism, the Singularity and Skepticism

Michael Shermer is interviewed about his views on the future of artificial intelligence, the technological singularity, transhumanism, and skepticism — topics he does not usually address. Shermer also spoke at the Singularity Summit in the U.S. that same year (2011). This footage was taken at the 2011 Think Inc. conference in Melbourne.
