With nonprogrammable nanobots we can avoid hacking. What isn’t possible, though, is to prevent malicious people from corrupting the “seed” devices from which the nanobots will be produced; nor can we stop rogue technologists from producing their own nanomachines that can cause damage. But the genie is already out of the bottle. Just as it was impossible to forget the possibility of atomic weapons once the concept had been devised, we can’t go back to a time when the idea of nanobots hadn’t occurred to anyone. We are already at the stage of “the good guys had better build them, or it will only be the bad guys doing it.”

And so we come back to the gray-goo scenario. In self-reproduction we have a plausible mechanism for achieving the volume of nanobots required (provided we overlook the lack of a basic means to construct them in the first place), but we still need the “blueprints” of what the assemblers are going to build, we need to convey instructions to these assembler machines, and we need to provide them with the raw materials and power they require.

Where the object to be constructed is simple and repetitive, the instructions to the assembler could be input by hand. For example, a diamond consists of a set of carbon atoms in a simple, well-understood repeating pattern. All that is required to establish a true diamond, with its familiar characteristics of hardness, transparency, and luster, is a precise specification of that repeating pattern. It would not prove difficult to set this up, just as we might provide instructions to a lathe to construct a chair leg.
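To make the point concrete, here is a minimal sketch of just how little information such a "blueprint" needs. Diamond has the diamond cubic structure: a face-centered cubic lattice with a two-atom motif, fully pinned down by a single measured number, the lattice constant. The short Python sketch below generates every atom position in a block of diamond from that specification; the function name and output format are invented for illustration, since no real assembler instruction format exists.

```python
# A diamond "blueprint" reduced to numbers: one lattice constant plus an
# eight-atom repeating motif is enough to place every atom in the crystal.
# Illustrative only -- not any real assembler interface.

A = 0.3567  # lattice constant of diamond, in nanometres

# Diamond cubic = face-centred cubic lattice sites + a two-atom basis.
FCC_SITES = [(0.0, 0.0, 0.0), (0.0, 0.5, 0.5),
             (0.5, 0.0, 0.5), (0.5, 0.5, 0.0)]
BASIS = [(0.0, 0.0, 0.0), (0.25, 0.25, 0.25)]

def diamond_atoms(n_cells):
    """Yield (x, y, z) carbon positions, in nm, for an n x n x n block of cells."""
    for i in range(n_cells):
        for j in range(n_cells):
            for k in range(n_cells):
                for fx, fy, fz in FCC_SITES:
                    for bx, by, bz in BASIS:
                        yield ((i + fx + bx) * A,
                               (j + fy + by) * A,
                               (k + fz + bz) * A)

print(sum(1 for _ in diamond_atoms(10)))  # 8,000 atoms from a handful of numbers
```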

Things become harder, though, if our assemblers are going to build a complex structure like a computer, or an organic substance such as a piece of roast beef. Here, Eric Drexler has suggested, we would need disassemblers, the mirror counterpart of assemblers. The role of these devices would be to swarm over an object and strip away its atoms, layer by layer, capturing the information needed to reproduce such an item. In the process the original object would, inevitably, be vaporized.

The obvious route for feeding the instructions from disassemblers to the assemblers that would make the new version of the vaporized object would be to learn from nature. In the natural world, chemical and electrical signals are used to pass control messages from biological machine to biological machine. Similarly, a sea of nanobot assemblers would have to receive instructions from their surroundings, whether by chemical, electrical, acoustic, or light-based means. One potentially valuable source of guidance here is in the control mechanisms used by superorganisms.

For centuries we have been fascinated by the way that apparently unintelligent insects like bees, ants, and termites manage collectively to undertake remarkable feats of engineering and mass activity. What has gradually come to be realized is that, for example, the bees in a hive are not really a set of individuals somehow managing to work together. Instead, the whole colony is a single organism.

When seen from that viewpoint, their capabilities and strange-seeming actions make much more sense. It is no longer difficult to understand why a particular bee might sacrifice itself for the colony. In the same way, in the human body, cells often “commit suicide” for the good of the body as a whole, a process known as apoptosis. Similarly, it’s less puzzling how the bees manage the kind of organization, hive building, and harvesting activities they do. If you consider each bee as more like a single cell in a multi-celled organism, it is much easier to understand how the colony functions.

Although assemblers would have to be simpler than bees or ants, as they would be built on a much smaller scale, they could still make use of the mechanisms that a colony employs to communicate and to control the actions of individual insects. Bees, for example, pass on information in “waggle dances,” and use chemical messages to orchestrate the behavior of the many individuals that make up the superorganism. In designing a swarm of assemblers, we would have similar requirements for communication, and might well employ similar superorganism techniques.
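The spirit of that chemical signalling can be captured in a few lines of code. In the toy simulation below, every number and rule is invented for the purpose, and Python stands in for whatever control system real assemblers might use: agents on a one-dimensional line sense a diffusing, evaporating "pheromone" emitted at a target site and simply climb the gradient. No individual is intelligent and there is no central controller, yet the swarm converges on the target.

```python
# A toy stigmergy simulation: agents coordinate purely through a shared
# chemical field, the way superorganisms often do. All parameters invented.
import random

SIZE, DECAY = 60, 0.95
field = [0.0] * SIZE                              # pheromone level at each cell
agents = [random.randrange(SIZE) for _ in range(20)]
TARGET = 45

for step in range(200):
    field[TARGET] += 1.0                          # the source keeps emitting
    # crude diffusion plus evaporation of the signal
    field = [DECAY * (field[max(i - 1, 0)] + field[i] + field[min(i + 1, SIZE - 1)]) / 3
             for i in range(SIZE)]
    # each agent takes one step toward the higher concentration it senses
    agents = [min(i + 1, SIZE - 1)
              if field[min(i + 1, SIZE - 1)] > field[max(i - 1, 0)]
              else max(i - 1, 0)
              for i in agents]

# mean distance from the target: small, because the swarm has converged
print(sum(abs(a - TARGET) for a in agents) / len(agents))
```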

When it came to raw materials and power, we would have to enable the assemblers to take atoms to use in construction from somewhere nearby—they could hardly run down to the nearest store and fetch their materials—and would need to have them consume energy from sunlight, using photochemical or photoelectric means, or from chemical energy by burning fuels, just as most biological creatures do.

Given those possibilities, a number of gray-goo scenarios emerge. Imagine rogue assemblers, where the instructions have become corrupted, just as the instructions in our DNA routinely become corrupted to form mutations (and each of us has some of these flaws in his or her DNA). Instead of being told to use, say, a pile of sand—and only that pile—as raw materials, the assemblers could choose whatever is nearby, whether it’s a car, a building, or a human being.

Similarly, if we opt for chemical fuel rather than light-based power, it’s not hard to imagine a swarm of runaway assemblers that are prepared to take any carbon-based material—crops, animals, human beings—and use them as the fuel that they digest. And bearing in mind that assemblers have to be able to produce other assemblers—in effect, to breed—it’s not difficult to imagine a sea of the constructs, growing ever larger as it consumes everything before it.
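The arithmetic behind that fear is worth spelling out, because self-reproduction is exponential. The back-of-envelope sketch below is only that: the picogram bot mass and fifteen-minute replication cycle are invented for illustration, and the biomass figure is a rough published order of magnitude. Under those deliberately naive assumptions, a single assembler's descendants would outweigh Earth's entire biomass in about a day.

```python
# Back-of-envelope exponential replication. Every input here is an
# assumption for illustration, not a figure from any real design.
import math

BOT_MASS_KG = 1e-15        # assumed assembler mass: one picogram
DOUBLING_MIN = 15.0        # assumed time per replication cycle
EARTH_BIOMASS_KG = 5.5e14  # rough order of magnitude for all living matter

# After n doublings the swarm's mass is BOT_MASS_KG * 2**n,
# so n = log2(target mass / starting mass).
doublings = math.log2(EARTH_BIOMASS_KG / BOT_MASS_KG)
hours = doublings * DOUBLING_MIN / 60

print(f"{doublings:.0f} doublings, about {hours:.0f} hours")
# ~99 doublings -- roughly a day -- given unlimited raw material
```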

Then there is the terrifying possibility of an out-of-control swarm of disassemblers. The good news here is that disassemblers couldn’t breed. To enable them to both construct and disassemble seems unnecessarily complex—disassemblers would be the product of a sea of assemblers. So a swarm of disassemblers would not be ever growing and self-reproducing as a group of assemblers could be. But disassemblers could, without doubt, cause chaos and horrendous damage, acting like a cartoon shoal of piranhas, stripping all the flesh from a skeleton. Only disassemblers wouldn’t be limited to flesh—they could take absolutely anything and everything apart, completely destroying a city and every single thing in it.

However, these dangers are so clear, even at this very theoretical stage, that it’s hard to imagine circumstances in which rogue assemblers would ever be allowed to get out of hand. It would be easy enough, for example, to avoid assemblers eating food (or humans) by limiting them to solar power; the leap to chemical energy sourcing would be too great for any realistic mutation (and unlike with biological mechanisms, we can build in many more checks and balances against mutation occurring). Similarly, disassemblers would be easy enough to limit, by giving them either limited lifespans or an aversion to various substances, reducing the threat they pose.

Furthermore, it’s also worth stressing again that we may never be able to build effective nanomachines. There have been experiments producing promising components—for example, nanogears assembled out of molecules, and nanoshears, special molecules like a pair of scissors that can be used to modify other molecules—but just think of how much further we have to go. Not only do we need to build a complex mechanism on this scale, but we will have to give it a power source, a computer, and the mechanism to reproduce. Of these, the only one we can manage at the moment is building the computer, and that is on normal macro scales, rather than something so small we can’t see it. (Yes, we have power sources, but battery technology isn’t transferable to the scale of a nanomachine. There isn’t room to carry much fuel on board, so the nanobot would need to harvest energy.)

Although there have been predictions of self-building (in effect, reproducing) robots for a long time, the reality lags far behind the hype. Back in March 1981, a NASA scientist announced on CBS Radio news that we would have self-replicating robots within twenty years. We are still waiting. And this is just to have something that can reproduce itself on the normal macro scale, without all the extra challenges of acting on the scale of viruses and large molecules.

Even if the technology were here today, there would probably be scaling issues. Just as you can’t blow a spider up to human size because of scaling, you can’t shrink something that works at the human scale down to the nano level and expect it to function the same way. Proportions change. The reason we will never have a spider the size of a horse is that weight goes up with volume, so doubling each dimension produces eight times the weight. But the strength of the legs goes up with cross-section, so doubling each dimension only produces four times as much strength. Weight gets bigger more quickly than the strength of a creature’s limbs. The monster’s legs would collapse.
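The same square-cube reasoning can be put into numbers. The short sketch below tracks how weight, strength, and the resulting stress on the legs change as every linear dimension is scaled up; the ratios are dimensionless, and no real spider data is involved.

```python
# The square-cube law from the text, as arithmetic: weight scales with
# volume (length^3), strength with cross-sectional area (length^2),
# so the stress on the legs grows linearly with scale.

def leg_stress(scale):
    """Relative leg stress after scaling every linear dimension by `scale`."""
    weight = scale ** 3       # volume, hence weight
    strength = scale ** 2     # load-bearing cross-section
    return weight / strength  # = scale

for scale in (1, 2, 100):     # spider -> doubled spider -> horse-sized spider
    print(f"x{scale:>3} size: {scale**3:>9,}x weight, "
          f"{scale**2:>6,}x strength, {leg_stress(scale):>5.0f}x leg stress")
```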

Similarly, different physical influences come into play when working on the scale of nanotechnology. The electromagnetic effects of the positive and negative charges on parts of atoms begin to have a noticeable effect at this level. A quantum process called the Casimir effect means that conductors brought very close together on the nanoscale become powerfully attracted to each other. At this very small scale, things stick together where their macro-sized equivalents wouldn’t. This could easily mess up nanomachines, unless, like their biological equivalents, they resort to greater use of fluids. Almost everyone making excited predictions about the impressive capabilities of nanomachines wildly underestimates the complexity of operating at this scale.
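How powerful is "powerfully attracted"? For the idealized case of two perfectly conducting parallel plates there is a standard result, pressure = π²ħc/(240d⁴); real materials and geometries behave less cleanly, so treat the sketch below as an order-of-magnitude illustration only.

```python
# Order-of-magnitude Casimir pressure between ideal parallel plates.
# The 1/d^4 dependence is why the effect is negligible at everyday gaps
# but dominant at nanoscale ones.
import math

HBAR = 1.0546e-34  # reduced Planck constant, J*s
C = 2.998e8        # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure in pascals between ideal plates separated by d metres."""
    return math.pi ** 2 * HBAR * C / (240 * d ** 4)

for gap_nm in (1000, 100, 10):
    print(f"gap {gap_nm:>5} nm: {casimir_pressure(gap_nm * 1e-9):10.2g} Pa")
# At a 10 nm gap the pull is ~1.3e5 Pa -- about an atmosphere of "stickiness".
```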

But let’s assume we do develop the technology to successfully make nanobots. How likely is the gray-goo scenario? This envisages a nanomachine that has, thanks to a random error, got out of control, producing a machine that is “better” from its own point of view, though not from humanity’s, as it simply duplicates itself and consumes, becoming a ravening horde that destroys everything from crops to human flesh. It’s a ghoulish thought. Yet the parallel with biology isn’t exact. The big difference between a machine and a plant or an animal is that the machine is designed. And the right design can add in many layers of safeguard.

First there is the resistance to error. Biological “devices” are much more prone to error than electronic ones. Yes, there could still be a copying error in producing new nanobots, but it would happen much less often. Second comes error checking. Our biological mechanisms do have some error checking, but they aren’t capable of stopping mutation. It is entirely possible to build error checks into electronic devices that prevent a copy from being activated if there was an error in the copying. Depending on the level of risk, we can implement as many error checks as we like. There is a crucial difference between design—where we can anticipate a requirement and build something in—and the blind process of evolution.

Third, we can restrict the number of times a device can duplicate itself, as a fallback against rampant gray goo. Finally, we can equip devices with as many other fail-safes as we like. For instance, it would be possible for nanomachines to have built-in deactivators controlled by a radio signal. All these layers of precautionary design would make any nanobots much less of a threat than those who find them frightening would suggest.
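As a minimal sketch of how those layers might combine, the code below models a replicator that refuses to activate a corrupted copy (a checksum standing in for the error checking described above), stops after a fixed number of generations, and honors a kill flag that an external signal, such as the radio deactivator, could set. Every name, structure, and number here is invented for illustration.

```python
# Layered safeguards for a hypothetical replicator: error checking,
# a generation cap, and an externally triggerable kill switch.
import hashlib

MAX_GENERATIONS = 20  # assumed hard cap on self-duplication

class Nanobot:
    def __init__(self, blueprint, checksum, generation=0):
        self.blueprint = blueprint
        self.checksum = checksum
        self.generation = generation
        self.killed = False  # would be set by an external deactivation signal

    def replicate(self):
        if self.killed or self.generation >= MAX_GENERATIONS:
            return None                   # fail-safe: refuse to copy at all
        copy = bytes(self.blueprint)      # stands in for physical copying
        if hashlib.sha256(copy).hexdigest() != self.checksum:
            return None                   # error check: a corrupted copy never activates
        return Nanobot(copy, self.checksum, self.generation + 1)

plan = b"assemble diamond lattice"
bot = Nanobot(plan, hashlib.sha256(plan).hexdigest())
child = bot.replicate()
print(child is not None, child.generation if child else None)  # True 1
```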

However, it doesn’t really matter how clever our precautions are if those who are building the nanotechnology are setting out not to produce something safe and constructive, but rather something intentionally destructive. It’s one thing to feel reasonably comfortable that we will not be threatened by the accidental destructive force of nanotechnology, but what happens if someone actively designs nanotechnology for destruction?

According to Friends of the Earth, “Control of nanotech applications for military purposes will confer huge advantages on those countries who possess them. The implications of this for world security will be considerable. The military uses of nano scale technology could create awesome weapons of mass destruction.” It’s important we understand just how much truth there is in this, and why scientists are so fascinated by nanotechnology.

In looking at military uses of nanotechnology, we need to build up to Friends of the Earth’s supposed weapons of mass destruction, because there can be no doubt that nanotechnology itself will make its way into military use. Probably the earliest military application will be in enhanced information systems—the ability to drop a dust of sensors, practically invisible to the onlooker but capable of relaying information back to military intelligence.

Almost as close on the horizon is the use of nanotechnology in special suits to support soldiers in the field, potentially using a whole range of technologies. There could be ultrastrong, lightweight materials constructed from nanotubes, exoskeletons powered by artificial muscles using nanotechnology, and medical facilities to intervene when a soldier is injured, using nanotechnology both to gain information and to interact with the body.

But to envisage the weapons of mass destruction that Friends of the Earth predicts, we need to return to variants of the gray-goo scenario, driven not by accident but by intent. Each of the most dramatic of the nanobot types of attack—disassemblers, assemblers using flesh as raw materials, and assemblers using flesh as fuel—has the potential to be used as a dramatic and terrifying weapon.

There is something particularly sickening about the thought of an unstoppable fluid—and swarming nanobots are sufficiently small that they will act like a liquid that moves of its own volition—that flows toward you and over you, consuming your skin and then your flesh, layer by layer, leaving nothing more than dust. It’s flaying alive taken to the ultimate, exquisite level of horror.

This is a weapon of mass destruction that is comparable to chemical or biological agents in its ability to flow over a battlefield and destroy the enemy, but with no conventional protection possible—whatever protective gear the opposing forces wore, it could be consumed effortlessly by the nanobot army. It is a truly terrifying concept. But I believe that is an unlikely scenario.

The first hurdle is simply achieving the technology. I would be surprised if this becomes possible in the next fifty years. As was demonstrated by the NASA scientist who envisaged self-replicating robots by 2001, it is very easy to overestimate how quickly technology can be achieved in such a complex and challenging area.
