Post by Watchman on May 22, 2008 14:49:01 GMT -5
Could robots take over the world? In many ways, they already have.
By: MIKE MILIARD
5/21/2008 2:30:12 PM
Massively intelligent artificial brains with no further use for humans. Armies of robotic clones, ever replenishing their ranks. Nimbly mechanized BigDog quadrupeds that can’t be toppled as onward they march. Self-replicating swarms of ecosystem-destroying “gray goo.” Military killing machines with no moral compass built in.
We’re on the cusp of a perilous era. Our pitiful carbon bodies are evolving much slower than the silicon and steel gizmos we’re inventing. And the guys in the lab coats and pocket protectors are starting to worry we’ve opened Pandora’s hard drive.
Technology rules us. All day, every day, we interact with machines. What if they decided, some sunny afternoon, that they no longer wanted to interact with us?
“Open the pod bay doors, HAL.”
Smart robots could indeed stage the big takeover. Many experts think it’s inevitable. And the “technocalypse” won’t necessarily come courtesy of bipedal humanoids wasting us with lasers. It could be more insidious: surpassingly cerebral supercomputers simply deciding they don’t like us, or planet-devouring microtechnology run amok.
Our best hope is to become more like them. To make the great leap forward from human to cyber-enhanced post-human. Only then might the billion-casualty war between Cosmists and Terrans be avoided. (Er, we’ll explain that one later.)
The AP reported a couple months ago that Japan is well on its way “to a future . . . where humans and intelligent robots routinely live side by side and interact socially.” There are more than 370,000 robots employed at factories across that country — nearly 40 percent of the worldwide total. Robots in Japan are “serving as receptionists, vacuuming office corridors, and spoon-feeding the elderly,” the story reported. “With more than a fifth of [Japan’s] population 65 or older, the country is banking on robots to replenish the workforce and care for the elderly.”
Just think of all the time we’ve wasted fretting about climate change and looming recession, nuclear war and bioterrorism. Perhaps we should worry instead about destruction or subjugation at the steely hands of these man-made monsters?
More and more, the innocent subservience of Kraftwerk’s “Die Roboter” — from 1978’s classic The Man-Machine, in which helpful automatons chant “Ja tvoi sluga, Ja tvoi rabotnik” (“I’m your slave, I’m your worker”) — seems a wistful relic of the past. These days it’s better summed up by Flight of the Conchords, cavorting in silver cardboard boxes as they cheer the downfall of their meat-puppet masters: “The humans are dead. The humans are dead. We used poisonous gasses. And we poisoned their asses.”
20,000 years of progress
Don’t think it could happen? To understand how quickly and irrevocably we’ve arrived in this grave new world, check out that Kraftwerk video and the clunky “technology” that used to be considered cutting-edge.
Only three decades later, we’ve got iPhones and wireless Web and hi-def TV. And what do you suppose things will look like in another 30 years? Precisely.
Such is the exponential technical growth possible under Moore’s Law — the postulation, put forth by Intel co-founder Gordon Moore in 1965, that the number of transistors that can be fit inexpensively onto an integrated circuit is doubling about every two years. It’s a law that’s held true for more than 40 years, and shows no signs of being broken.
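The arithmetic behind that claim is easy to check. Here's a quick sketch (my own illustration, not from the article) of what doubling every two years does to a transistor count — the 1971 starting figure is the oft-cited count for Intel's 4004 and is used here purely for scale:

```python
def transistors(start_count, years, doubling_period=2):
    """Project a transistor count forward under Moore's Law-style doubling."""
    return start_count * 2 ** (years / doubling_period)

# From roughly 2,300 transistors (Intel's 4004, 1971), 37 years of doubling
# every two years lands in the hundreds of millions -- about where
# mainstream CPUs actually were in 2008.
projected = transistors(2_300, 37)
print(f"{projected:,.0f}")
```

That's the whole trick of exponential growth: the multiplier, not the starting point, does all the work.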
Extrapolating from Moore’s Law is Ray Kurzweil, the renowned inventor and futurist — he does most of his mind-bending cogitation at Kurzweil Technologies in North Andover — who sees us fast approaching a technological critical mass.
Describing his own “Law of Accelerating Returns,” Kurzweil writes on his Web site that “we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate).” Within a few decades, he maintains, “machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.”
Is your mind sufficiently blown?
Transcending biology
Kurzweil’s vision for the future, if a little hard to wrap one’s head around, at least sounds reassuringly sanguine. (Publishers Weekly calls him “technology’s most credibly hyperbolic optimist.”)
But as Bill Joy, a co-founder and chief scientist of Sun Microsystems, asked in his famous 2000 Wired article “Why the Future Doesn’t Need Us,” should we be banking not so much on Moore’s Law but on Murphy’s? With technology and innovation unfolding so blindingly fast, it would seem an awful lot could go wrong along the way. In that article, Joy argued that such technologies as “genetics, nanotechnology, and robotics” (GNR) could imperil mankind, leading to “whole new classes of accidents and abuses.”
One of his worries was that the rise of “superior robots” might edge out their creators. Noting that “biological species almost never survive encounters with superior competitors,” Joy (joylessly) painted a grim future where “robotic industries would compete vigorously among themselves for matter, energy, and space.” Humanity would be pushed to the margins, and eventually “squeezed out of existence.”
Kurzweil doesn’t see it like that. Rather than facing extinction, he argues in his best-selling 2005 book, The Singularity Is Near: When Humans Transcend Biology (a combo sci-fi/documentary film version is due out later this year), that the very essence of humankind will soon be augmented and improved by GNR technologies, “transcending biology” as we gain extraordinary intelligence and durability.
(Kurzweil, who’s 60, believes we’re very near to this profoundly different post-biological era: he pegs its onset for the year 2045. As such, he’s doing everything he can to stay alive to see it happen, including downing dozens of vitamin supplements each day.)
Kurzweil’s predictions — and he’s got a good track record so far — had better come to pass, and fast. Indeed, some argue, becoming “immortal software-based humans” may be the only way to keep us from being conquered by our own mechanical creations. None other than world-renowned theoretical physicist Stephen Hawking has said that, with computing power doubling regularly — much, much faster than our own evolution — it’s imperative that humans alter themselves via genetics and cyber technology, lest we be outpaced permanently. Otherwise, he told a German magazine, “the danger is real that [computer] intelligence will develop and take over the world.”
The gigadeath scenario
Consensus on enormous issues such as these — fundamental questions about what it is to be human and what humanity’s place is on the globe — promises to be difficult to come by. You thought the battle over stem-cell research was bad? Just wait until the Artilect War.
Yes, it’s possible, says Australian AI researcher Hugo de Garis, that robots may never have a chance to seize the controls of spaceship Earth. We may all end up killing each other first, as we argue over how best to adapt to this daunting new dimension.
De Garis is an intelligent man. His studies in “evolvable hardware” are focused on emulating the neural transmissions of the human mind in robots. As director of the China-Brain Project at Xiamen University, he’s literally overseeing the creation of a robot brain.
And it won’t be long before these artificial brains could vastly surpass human intelligence. “Once neuroscience tells us how neuro microcircuits function in the brain, we can put those ideas into the artificial brains people like me are starting to build,” says de Garis, who predicts that, by the 2020s, nanotech will have developed sufficiently to give neuroscientists a whole slew of tiny new tools. By 2030 or so, “a huge industry rises selling genuinely smart, useful household robots.” As that happens, he predicts, by 2040 “the IQ gap between human and robot decreases.”
That’s when the trouble starts. In his book The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005), de Garis envisions a planet split between two factions. “Cosmists” are in favor of the creation of such surpassingly smart robotic artificial intellects (“artilect,” of course, being a portmanteau of those words). “Terrans” are opposed to it.
It won’t end well. “The Terrans will be horrified” at these astonishing advances, he says, and “the Cosmists will be gripped by a religious awe of building artilect gods.” In the advanced-weapon war between Cosmists and Terrans that de Garis sketches out, “billions, not millions, will die. Utterly depressing.”
Such a “gigadeath” scenario is clearly not something to look forward to. “How to prevent it? I wish I knew,” says de Garis. “I’m so pessimistic about it. I’m glad to be alive right now. I will die peacefully in my bed, but my grandkids will perish” either at the hands of Terrans, “who will argue it’s better to kill a few million Cosmists before the artilects or cyborgs are too smart,” or at the hands of “hugely superior” artilects who “might get rid of us for whatever reason.”
Yes, he concedes, “it sounds like science fiction today.” (That’s putting it mildly.) But de Garis insists that, with technology increasing at warp speed, it’s not as far-fetched as it seems. He cites Leó Szilárd, who conceived of the nuclear chain reaction in 1933. “They thought he was nuts. One bomb? Destroy a whole city? Get outta here,” he says. “A mere 12 years later: Hiroshima.”
Indeed, in the excellent AI documentary Building Gods, de Garis likens “brain builders” such as himself to this past century’s nuclear physicists — profoundly ambivalent about the simultaneously awe-inspiring and terrifying implications of their technological know-how: “Do I want to be a part of this? Shouldn’t my consciousness be saying, ‘I’m a stepping stone to this massive horror?’ Shouldn’t I stop now?”
A matter of intelligence
In the foreword to The Artilect War, Kevin Warwick, a professor of cybernetics at the UK’s University of Reading, writes that he and de Garis share the common view that, “later this century, humanity will have to confront the prospect of being replaced by a new dominant species, namely ultra-intelligent robots controlled by ultra-intelligent artificial brains.”
But if de Garis is a Cosmist with a heavy conscience, Warwick is more toward the Terran end of the spectrum. Just not all the way. Rather, he writes, “I would consider myself more as a Cyborgian,” someone who technologically upgrades one’s body to become “part machine, part human.”
He doesn’t have his wires crossed. Warwick received lots of attention a decade ago, thanks to the inroads made by his “Project Cyborg,” in which an RFID (radio-frequency identification) chip was implanted into his arm, allowing him to have domain over certain nearby computer-controlled devices. A few years later, a more advanced neural interface tapped into his nervous system: he was able to control a colleague’s arm motion via his own, and even communicated with his wife in a rudimentary form of electronic “telepathy.”
In books such as I, Cyborg and March of the Machines: The Breakthrough in Artificial Intelligence, Warwick tries to suss out a future where becoming something other than human will become an almost-necessary adaptation.
“I believe ‘intelligence’ to be the key,” he writes in an e-mail from Prague. “Humans are mainly in the driving seat on Earth because, intellectually, we can outperform other creatures. But we are developing more and more intelligent machines — machines that are intelligent in the areas that matter,” such as defense systems, finance, and food production. “With networking, this intellectual power is becoming more dangerous — and we’ve become more dependent on it — and we can’t switch it off.”
The answer, then, is to get cracking: to beat the robots to the proverbial punch. “Either intelligent, networked machines will ultimately ‘take over’ or humans will have to upgrade our intellectual abilities, literally link our brains into the network, become Cyborgs,” says Warwick. “The future on Earth will be one dominated either by intelligent machines or by Cyborgs. Humans as we know them today will be a sub-species.”
VIDEO: A demonstration of a voiceless telephone call
Something different
In fact, in many ways, we’ve started our cyborgian transformation in earnest already. And few people seem to mind. Just look at the YouTube clip from a couple months back showing a demonstration at Texas Instruments in which a wireless neckband facilitates the first voiceless cell-phone call — intercepting nerve signals and allowing the wearer to “talk” without opening his or her mouth.
And that’s not to mention the ever-increasing array of “cyborg” technology that exists and is accepted all around us, such as pacemakers and insulin pumps, retinal implants and deep-brain stimulators, says James Clement, executive director of the World Transhumanist Association (WTA). “We don’t look at people who use prosthetic legs or who have cochlear ear implants or these neuropacemakers and say, ‘They’re not human. They’re cyborgs. They’re something different.’ ”
The WTA believes that technology, “used ethically, can allow individuals to extend beyond their normally evolved genetic capabilities,” says Clement. “We can dramatically enhance cognitive, biological, and mood capabilities in humans through nanotechnology, biotechnology, [and] artificial intelligence.”
He doesn’t pretend to know precisely what the future holds. “Any prediction that goes beyond 20 to 30 years is basically meaningless,” he says. “We are moving at such a fast pace of technological change that I don’t believe anyone can predict how we’re going to incorporate these changes in our lives.”
The chief concern he has “is whether we’re going to allow individuals to have access to these technologies to change and improve themselves if they’d like to. I don’t see this as being an either/or scenario — that if we allow these technologies, they’re necessarily going to run rampant, or will inevitably lead to some intractable divisiveness between post-humans and humans.”
As for machines evolving past us? “That’s not gonna happen,” says Clement. “This idea that technology only advances in one field and not other fields is totally false. As technology advances in microprocessing, it’s also advancing in biotech and the cognitive-science areas — and we’re simply going to incorporate smaller, faster, better chips for our own use,” thereby keeping pace with our silicon friends.
We’re getting there faster than some realize. “How far away is this from The Matrix?”, Warwick marvels in his e-mail exchange. “In my own [cyborg] work, with my nervous system linked into the Internet, I experienced new (ultrasonic) senses, was able to control technology directly from my brain signals on another continent . . . and (most of all) was able to communicate in a whole new way. Who needs speech (a trivial coded-message system) when we can communicate directly brain to brain? This is not science fiction, it is science of today.”
More human than human
That all sounds so exciting. Let’s just hope the teeming, self-replicating nanobots don’t eat us all first.
I watched a video the other week that was at once fascinating and disturbing: the simple white machine writhed and twisted, sort of like a synthetic pupa, slowly but inexorably growing and splitting off into two. It was reproducing itself. All by itself.
“This is very much in its infancy,” says Hod Lipson, a professor at Cornell who studies biologically inspired robotics and worked on that prototype. “But the basic idea is not so much self-replication as, essentially, self-repair. Self-replication is the ultimate form of self-repair: basically, the machine builds another copy of itself. It’s made out of lots of small pieces or modules, and each of these is interchangeable. The module robot is able to take a bunch of modules and assemble them into an identical robot. And then that robot again takes a set of modules and makes another robot. And so forth.”
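The module-by-module logic Lipson describes can be captured in a toy model. This sketch is my own illustration of the idea, not the Cornell group's code: each robot is a fixed number of interchangeable modules, and every cycle, each robot assembles one copy of itself from a shared pool of spares.

```python
def replicate(robots, pool, modules_per_robot=4, cycles=3):
    """Toy model of modular self-replication: each robot builds one copy
    per cycle, as long as the shared module pool can supply it."""
    for _ in range(cycles):
        copies = min(robots, pool // modules_per_robot)
        robots += copies
        pool -= copies * modules_per_robot
    return robots, pool

# One robot and 28 spare modules: the population doubles each cycle
# (1 -> 2 -> 4 -> 8) until the pool is exhausted.
print(replicate(1, 28))
```

The same arithmetic is why Lipson frames replication as the limit case of self-repair: swapping one failed module and assembling a whole new robot are the same operation, applied a different number of times.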
Even as some people strive to make themselves more like robots, the inspiration for this project is the opposite, to make robots more like multicellular bio-organisms. “In biology, we’re made of lots of units, these units are swapped in and out: cells die, new cells are formed, and so forth,” says Lipson. “We’re not made of the same cells we were made of 10 years ago. But that’s how our bodies sustain.”
Sure, things are pretty basic now. And humans are hardly out of the loop — everything from the power supply to the placement of the modules must be arranged just so for the robot to avail itself of its man-given capabilities.
But Lipson acknowledges the uncanniness of a machine that can do something that was once the sole province of living organisms: repair and reproduce. “It touches on some of the basic questions about life,” he says, “and the fuzzy boundary between the natural and the synthetic.”
Nonetheless, if Warwick is already invoking The Matrix when speaking about cyborg technology, Lipson isn’t quite ready to talk The Terminator when it comes to self-replication. All the same, he says, “I think the idea could extend in the future to machines that are made of many more smaller modules — sort of high-resolution replication.” The better, perhaps, for the machines to more easily mimic human appearance as they reproduce within our ranks — and plot our destruction.
Nanomania!
While the technology itself is still new, people have been theorizing and fantasizing about out-of-control self-replication — especially when it comes to nanotechnology — for a while now. In 1998, Wil McCarthy, an aerospace engineer and sci-fi writer, wrote a book, Bloom, in which planet Earth was overrun by “technogenic life” — ravenous microscopic machines that devour the ecosystem in short order as they reproduce infinitely.
It’s not as crazy as it sounds. Bloom was based on science: the “gray goo” theory — so named for what the Earth would eventually be enveloped in, as these tiny, tiny robots reproduced ever faster — first put forth by K. Eric Drexler, the “father of molecular nanotechnology,” in 1986.
Drexler himself has since backed off from some of these more fantastical scenarios. “The ‘gray goo’ scare stories that have been circulating for the past 20 years or so are based on a distortion of some ideas in my first book on the subject, Engines of Creation,” he says. “These ideas became obsolete a few years later. People sometimes picture advanced nanotechnology as being about breeding nanobugs, [but] there’s no reason to try to make anything like that. Nanomanufacturing will [instead] be based on factories, and current designs look like desktop boxes with fans, power cords, and little rubber feet. These boxes would contain a lot of equipment remarkably similar to what you’d see in a modern automated factory, but with the smallest gears, conveyors, and so on being near the molecular size scale.”
“When people write about nanotechnology, they have a tendency to assume, incorrectly, that it can do anything,” says McCarthy. “But just because the machines are microscopic in size doesn’t mean that the laws of physics are any different for them. They still have to consume energy, they still have to obey certain conservation laws. Are there self-replicating nanomachines? No. Are there likely to be in the next 10 years? No. Is it a good idea for people to do? I don’t know.”
So breathe easy. For now. Could it ever happen? “I think, on a 15-year horizon, things like that will start to become possible, although difficult,” says McCarthy. Even then, though, “I don’t think it could happen by accident. I don’t think something like that could get out of control, unless a lot of people made a lot of very bad choices along the way. I don’t worry about it in that sense.”
And even if it was possible to unleash this world-devouring “ecophagy” by accident, says McCarthy, “there are a lot of ways to prevent the scenario from occurring. One of the ways that Drexler proposed was just to require the nanomachines to eat some particular food that’s not found in nature: some artificial amino acid or something; if they find themselves in an environment where that chemical is absent, they shut down.”
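Drexler's safeguard is, at bottom, a guard condition on replication. Here's a minimal sketch of that fail-safe as control logic — the nutrient name and function are hypothetical labels of mine, an illustration of the design rather than anything resembling real nanomachine firmware:

```python
# Hypothetical label for a lab-supplied nutrient not found in nature.
SYNTHETIC_NUTRIENT = "artificial-amino-X"

def can_replicate(environment):
    """Gate replication on the synthetic nutrient: present only in the lab,
    absent anywhere the machines might escape to."""
    return SYNTHETIC_NUTRIENT in environment

print(can_replicate({"water", "artificial-amino-X"}))  # True: inside the vat
print(can_replicate({"water", "cellulose"}))           # False: out in nature
```

The design choice mirrors auxotrophic strains in biology labs: engineer a dependency the wild can't satisfy, and escape becomes starvation.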
Should the technology ever progress to that point, however, one thing does give McCarthy pause. “It falls under the same heading as biological and nuclear weapons. There are enough people out there with very poor judgment and a lot of anger. As technology gets easier to access — as the tools for making things become cheaper and more plentiful — we may find that crazy people have a larger capability to write their craziness onto the world. And that worries me.”
VIDEO: The BigDog project
Beware of BigDog
Sometimes even our own weapons can alarm the hell out of us. Boston Dynamics, the Waltham-based robotics and engineering firm, released a video clip two months ago showing the latest work on its BigDog project, a walking quadruped with stereo vision that’s been in the works since 2005, funded by the military’s Defense Advanced Research Projects Agency (DARPA).
It is, in a word, fuckingterrifying.
The Defense Department devised the thing as a pack mule of sorts, meant to accompany light-traveling soldiers on wheel-unfriendly terrain. But log on to YouTube and watch as its spindly but powerful limbs carry its bulk fleet-footedly through forest underbrush. And try not to shudder.
It’s not just the deeply creepy buzzing of its engine, like the threnody of a thousand angry bees. It’s the human fluidity with which its limbs move — as if two black-clad members of Mummenschanz were hidden under its carapace. More remarkable is its seeming indestructibility. A researcher kicks it with all his might: it wobbles drunkenly, nearly falls, but then rights itself and keeps moving. On slick ice, its legs splay out like a newborn colt, but it always manages to find its footing.
It may be meant for cargo carrying; still, one can’t help but picture this crazy, lifelike thing with guns strapped to its side. In short, the BigDog makes the exploring and ordnance-disposing PackBot — another Massachusetts-made military machine, invented at Burlington’s iRobot — seem R2-D2-friendly by comparison.
When people fantasize of robots ruling the planet, as they long have, this is usually how it starts: more-or-less humanoid mechanized creatures, usually weaponized, destroying their hubristic creators en masse.
Could it actually happen? The fact that several experts are up in arms about an irresponsible “robot arms race” they feel could get out of control suggests it’s at least possible.
This past February, in advance of a London conference on “The Ethics of Autonomous Military Systems,” Noel Sharkey, an AI and robotics professor at the University of Sheffield, told New Scientist magazine that he was “really scared” by how many nations — not least the US — are developing military robots that could eventually kill without any human guidance whatsoever.
There are currently more than 4000 semi-autonomous robots in Iraq alone, and Sharkey said giving machines like them the power to decide when to “pull the trigger” isn’t too bright. The Pentagon, New Scientist reported, is nonetheless “nearly two years into a research program aimed at having robots identify potential threats without human help.”
Just this past week, news broke that one Air Force colonel was advocating the implementation of a “botnet” — a series of linked computers that, in emulation of tactics used by hackers and spammers, could be used to wage cyber warfare. It’s just the latest example of what some argue is the mechanization of the military in ways not everyone may entirely understand or be able to control. In a world where, as Warwick points out, networked neural systems can “learn from experience,” is this really such a smart plan? (Wired doesn’t think so, calling it, for various reasons, “the most lunatic idea to come out of the military since the gay bomb.”)
“The US military has set the pace,” says Warwick. “By 2020, there will be few human soldiers on the front line. Networked communications are already so much improved to support this.” But “the reasoning of such a system is very different to that of a human: it is a machine-networked system, with machine values and machine ethics. It has been created to destroy targets. The only missing piece is the decision-making element — what makes the system select a target, and what target does it select?”
Weird science
If it makes you feel any better, we’re probably safe in the near-term, says Paulina Varshavskaya, a post-doc in the Distributed Robotics lab at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
For the record: it’s still really hard to even get a robot to walk. “Or see, or speak, or understand a sentence. Or anything related to perception or manipulation of the physical world.”
That said, Varshavskaya is making strides in her field of machine learning and distributed systems — science that enables “several robots or several computers or several artificial agents to work together and improve their behavior.”
A-ha! So she is building a mechanical army. And it’s massing right here in our backyard!
“Precisely not like that.”
But, she says, the more basic question of whether machines will one day rule us has pretty much already been answered.
“What does that mean, to take over the world? The Internet has taken over the world — the First World, at least — and nobody seems to mind.” Instead of focusing on a “crazy malevolent scenario” where “some evil robots subjugate humans,” we should recognize what’s already come to pass.
“There are so many things now that we don’t even think of doing without the aid of machines,” says Varshavskaya. “We have machines that are stronger than us. The Internet already knows more than any single person. That doesn’t seem to alarm anyone.”
This “sci-fi scenario of humanoid-looking robots that have this weird thing called AI that are somehow trying to get their say by destroying humans, or making them their servants? I find that silly,” she says. “There’s no way in the future that I can see that technology will make up its own mind about how to use us.”
Yes, there’s a lot of weird science out there right now. And as the decades zoom past, it could get a lot weirder. But as with any technology — be it the BlackBerry you can’t put down or the MMORPG you can’t stop playing — “anytime there’s something new,” says Varshavskaya, “it liberates us and subjugates us.”
Mike Miliard, for one, welcomes our new robot overlords. They can contact him at mmiliard@phx.com .
Copyright © 2007 The Phoenix Media/Communications Group