Second, there is the matter of control. It is unlikely that a nanobot attack could be deployed in the same way as a chemical or biological weapon, as it could consume the friendly army as easily as the enemy. It would need to be deployed more like a nuclear weapon, and only if the nanobots were programmed to have a limited lifespan—otherwise it’s not clear why the destruction would ever stop.

Finally, provided the enemy has similar technology, some form of countermeasure becomes possible. If nanobots are programmed not to attack friendly personnel by using chemical scents or special radio signals, these protective measures could be duplicated to misdirect them. It’s also possible to envisage a suit with outer layers of protective nanobots that feed on the attacking nanobots, neutralizing the attack.

The result of such defensive technology is that nanobot weapons would tend to fail against the military, massacring only innocent citizens instead. This is a form of weapon—worse than indiscriminate—that is likely to be strongly legislated against if it ever becomes a reality, just as biological and chemical weapons are today. This doesn’t mean that they would never be used. Rogue nations and terrorists could deploy them, just as they have other illegal weapons. But the international community would act to suppress their use.

It is fascinating how much interest nanotechnology has raised in the scientific community, out of all proportion to the benefits that the concept of working on this scale has delivered at the time of writing. The key technologies of the twenty-first century are often identified as GNR—genetics, nanotechnology, and robotics. Of these, only genetics has so far had any real significance, unless you are prepared to count integrated circuits as nanotechnology.

Nanotechnology has such an appeal to scientists in part because it is often the path taken by nature. We may build on the macro scale, but many natural “machines” depend on workings at a subcellular level. For example, the power sources of the cells in our body, mitochondria, are just a few hundred nanometers across, yet they are vital to our existence. And then there is the fantasy (at least as we currently should see it) of Eric Drexler’s assemblers in Engines of Creation.

If we could get assemblers to work, we would have, in principle, the opportunity to make almost anything at hardly any cost over and above the price of the raw materials. We would have a technology that could extend human life almost indefinitely, and that could enable us to haul satellites into space for a fraction of the current cost, or to analyze materials to a whole new level of subtlety. This is a stunning possibility that will keep nanotechnology in the minds of those who direct the future of science funding, even while the actual benefits currently being delivered by nanotechnology are much more mundane. But we have to bear in mind just how far out in the future this is. It is entirely possible that there won’t be significant nanotechnology of the kind described in Engines of Creation this century.

Nanoscale robots are very much an idea for the future, but we don’t have to look any distance ahead to see how misuse of another technology—information technology—could pose a real threat to our present-day world.

Chapter Seven
Information Meltdown

It has been said that computing machines can only carry out the processes that they are instructed to do. This is certainly true in the sense that if they do something other than what they were instructed then they have just made a mistake.

Alan Mathison Turing (1912–54),
A. M. Turing’s ACE Report of 1946 and Other Papers,
ed. B. E. Carpenter and R. W. Doran (1986)

In 2005, a rat and a clumsy workman managed to bring down all of New Zealand’s telecommunications, causing nationwide chaos. Science has given us a superb capability to exchange information, far beyond anything that nature can provide, but our increasing dependence on electronic networks leaves us vulnerable to large-scale destruction of data. In 2008 we saw how relatively small upsets in the banking system can cause devastation in the world economy—but that upheaval would be trivial compared to the impact of the total loss or wide-scale corruption of our global data networks. The financial world would be plunged into chaos—and so would business, transport, government, and the military.

Such a science-based disaster might seem relatively harmless—it’s only a loss of information—but our complex, unnatural society can exist for only a short time without its information flows. Without the systems that enable food to be provided to stores, without the computer-controlled grids that manage our energy and communications, we would be helpless. The result would be an end to familiar, comfortable life, and could be millions of deaths.

Let’s go back to New Zealand in June 2005. Two small, totally independent acts came together to disrupt a nation. On North Island, a hungry rat chewed its way through a fiber-optic cable, one of the main arteries of electronic communication in the country. At about the same time, a workman digging a hole for a power line in a different part of the country sliced through a second cable. Although the communication system had redundancy, giving it the ability to route around a network failure, two major breakdowns were enough to close the stock exchange and to knock out mobile phones, the Internet, banking, airlines, retail systems, and much more. For five hours, the country was paralyzed by these twin failures.

There is always the potential for accidental damage, such as happened in this New Zealand failure—but there is a much greater risk from intentional harm. The chance of such a dual hit on communications lines happening by accident is very small, but terrorists would consider it the obvious approach to achieve maximum effect. Governments are now taking the terrorists of the electronic frontier much more seriously than they once did. We have to accept that computer scientists, from the legitimate technologist to the undercover hacker, have the potential to present a real danger to society.

The reason that intentional damage presents a higher level of risk is that an accident will typically strike in one spot. It was only due to bad design and an unlikely coincidence that the rat and the workman managed to bring down the New Zealand network. But malicious attackers can make use of the interconnectedness of computers to spread an attack so that it covers a wide area of the country or even the world, rather than being concentrated in any one spot. The very worldwide nature of the Internet that provides so many benefits for business and academia also makes it possible for an enemy to attack unpredictably from any and all directions.

Although the threat seems new, it is a strategy that goes back a surprisingly long way—well before the time when most of us had computers—and we need to travel back in time to the earliest days of the Internet’s predecessor to see how this all started.

We’re used to the Internet as a complex, evolving hybrid of personal, commercial, and educational contributors, but it all started off with the military, and specifically ARPA. ARPA (the Defense Department’s Advanced Research Projects Agency, later called DARPA) was set up in 1958 as a direct response to the panic resulting from the Soviet Union’s launch of Sputnik, the first artificial satellite, the previous year. ARPA was established to ensure that the United States would lead the world in future high-technology projects.

From the early days, ARPA had funded computers at a number of universities and wanted a way for a user with a terminal in, say, Washington to log on to a computer in California without having to travel to the site to use it. It was from this desire to log on to remote computers that the ARPANET was first established. But it soon became clear that the network, and the “packet-switching” approach that was adopted as its communications protocol, could also be used for computers to connect to one another, whether to pass data among programs or to allow the new concept of electronic mail to pass human messages from place to place.

Eventually a part of the ARPANET would be separated off to form MILNET, where purely military unclassified computers were sectioned off on their own, and the remainder of the ARPANET would be renamed the Internet, forming the tiny seed of what we now know and love. The Internet wouldn’t really take off until 1995, when it was opened up to commercial contributors—initially it was the sole domain of universities.

In 1988 there were around sixty thousand academic computers connected via the ARPANET. The vast bulk of these were medium to large computers running the education world’s standard operating system, UNIX, though some would have been using proprietary operating systems from companies like Digital Equipment Corporation (DEC), the then highly popular minicomputer manufacturer. In late 1988, operators running some of the computers noted that their machines were slowing down. It was as if many people were using them, even though, in fact, loads were light. Before long, some of the computers were so slow that they had become unusable. And like a disease, the problem seemed to be spreading from computer to computer.

To begin with, the operators tried taking individual computers off the network, cleaning them up, and restarting them—but soon after reconnecting, the clean machines started to slow down again. In the end, the whole ARPANET had to be shut down to flush out the system. Imagine the equivalent happening with the current Internet: shutting down the whole thing. The impact on commerce, education, and administration worldwide would be colossal. Thankfully, the ARPANET of the time was relatively small and limited to academia. But its withdrawal still had a serious cost attached.

This disastrous collapse of a network was the unexpected and unwanted side effect of youthful curiosity, and the typical “let’s give it a try and see what happens” approach of the true computer enthusiast. The ARPANET’s combination of a network and many computers running the same operating system seemed an interesting opportunity to a graduate student at Cornell University by the name of Robert Morris.

Morris’s father, also called Robert Morris, worked for the National Security Agency (NSA) on computer security. But student Morris was more interested in the nature of this rapidly growing network—an interest that would bring him firmly into his father’s field, with painful consequences. Because the ARPANET was growing organically it was hard to judge just how big it was. Morris had the idea of writing a program that would pass itself from computer to computer, enabling a count of hosts to be made. In essence, he wanted to undertake a census of the ARPANET—a perfectly respectable aim. But the way he went about it would prove disastrous.

Although what Morris created was referred to at the time as a computer virus, it was technically a worm, which is a program that spreads across a network from computer to computer. Morris had noted a number of issues with the way UNIX computers worked. The ubiquitous sendmail program, which is used by UNIX to transfer electronic mail from computer to computer, allowed relatively open access to the computers it was run on. At the same time, in the free and easy world of university computing of the period, many of those who ran the computers and had high-level access had left their passwords blank. It proved easy for Morris to install a new program on someone else’s computer and run it. His worm was supposed to spread from computer to computer, feeding back a count to Morris.

There’s no doubt that Morris knew he was doing something wrong. When he had written his self-replicating worm, instead of setting it free on the ARPANET from Cornell he logged on to a computer at MIT and set the worm going from there. But all the evidence is that Morris never intended to cause a problem. It was because of a significant error in his coding that he wound up with a criminal record.

When the worm gained access to a computer, its first action would be to check whether the worm program was already running there. If it was, there was no job to do and the new copy of the worm shut itself down. But Morris realized that canny computer operators who spotted his worm in action would quickly set up fake versions of the program, so that when his worm asked if it was already running, it would get a yes and would not bother to install itself. Its rampant spread would be stopped, and it wouldn’t be able to conduct a full survey of the network.

To help overcome this obstacle, Morris added a random trigger to the code. In around one in seven cases, if the worm got a yes when it asked if it was already running, it would install itself on the computer anyway, and would set a new copy of itself in action. Morris thought that this one-in-seven restriction would keep the spread of his worm under control. He was wrong.
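
Reduced to its essentials, the decision Morris built in looks something like the sketch below. This is an illustrative fragment in Python, not the worm's actual code (the real worm was written in C, and its details differ); it simply shows the shape of the one-in-seven rule described above.

```python
import random

REINFECT_CHANCE = 1 / 7  # Morris's "one in seven" override, as described above


def should_install(already_running: bool) -> bool:
    """Decide whether to plant a fresh copy of the worm on this host.

    If no copy answers, always install. If a copy (or an operator's
    fake decoy) claims to be running already, install anyway roughly
    one time in seven, so that decoys cannot halt the spread.
    """
    if not already_running:
        return True
    return random.random() < REINFECT_CHANCE
```

Because every copy that does get installed starts probing other machines in turn, even this small override is enough to let copies accumulate.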

The Internet, like the ARPANET before it, is a particular kind of network, one that often occurs in nature. Because it has a fair number of “hubs” that connect to very many other computers, it usually takes only a few steps to get from one location on the network to another. What’s more, it was designed from the beginning with redundancy. There was always more than one route from A to B, and if the easiest route became inaccessible, the system software would reroute the message and still get it through. Because it was originally a military system, the builders of the ARPANET believed that at some point in the future, someone would attempt to take out one or more parts of the network. The network software and hardware were designed to get around this.

The combination of the strong interconnectedness of the network and the extra routes to withstand attacks meant that Morris’s one-in-seven rule allowed a positive feedback loop to develop. We came across these in chapter 4, “Climate Catastrophe”—it’s the reason for the squeal of sound when you put a microphone too near a speaker. Imagine a mechanical hand turning up the control that powers the hand itself. As the hand moved the control, the power would increase, strengthening the hand’s push, which would move the control further, strengthening the push even more, and so on.

This same kind of effect was happening with the computers on the ARPANET as Robert Morris’s worm took hold. The more the worm was installed on a computer, the more it tried to pass itself on to other computers—and the more copies of the worm were installed on the computer. Before long, hundreds and then thousands of computers were running more and more copies of the worm. And each copy that ran slowed the computer down until the machine ground to a halt.
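
The runaway growth is easy to reproduce with a toy simulation. The numbers below (network size, contact probability) are invented purely for illustration and bear no relation to the real 1988 network; the only feature carried over is the one-in-seven reinstall rule.

```python
import random

random.seed(1)

HOSTS = 50               # size of the toy network (invented for illustration)
CONTACT_CHANCE = 0.05    # chance a copy reaches a given host each round (invented)
REINFECT_CHANCE = 1 / 7  # the one-in-seven override

copies = [0] * HOSTS
copies[0] = 1  # the first infected machine

for round_no in range(1, 11):
    total = sum(copies)
    for target in range(HOSTS):
        for _ in range(total):  # every running copy probes independently
            if random.random() < CONTACT_CHANCE:
                # install if the host is clean, or one time in seven anyway
                if copies[target] == 0 or random.random() < REINFECT_CHANCE:
                    copies[target] += 1
    infected = sum(1 for c in copies if c > 0)
    print(f"round {round_no}: {sum(copies)} copies on {infected} of {HOSTS} hosts")
```

Long after every host is infected, the copy count keeps climbing, because each existing copy keeps triggering the one-in-seven rule: the positive feedback loop that dragged the real machines to a halt.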

Just how dramatic this effect was can be judged by a time-based report on the status of the first computer to be infected, a DEC VAX minicomputer at the University of Utah. It was infected at 8:49 p.m. on November 2 and twenty minutes later began to attack other computers. By 9:21 p.m., the load on the VAX was five times higher than it would normally be—there were already many copies of the worm running. Less than an hour later, there were so many copies running that no other program could be started on the computer.

The operator manually killed all the programs. It then took only twenty minutes for the computer to be reinfected and grind to a halt again. The only way to avoid repeated reinfection was to disconnect it from the network. Ironically, Clifford Stoll, one of the operators responsible for a computer taken over by Morris’s worm, rang up the NSA and spoke to Robert Morris Sr. about the problem. Apparently, the older Morris had been aware for years of the flaw in sendmail that the worm used. But at the time of the call, no one knew that it was Morris’s son who had started the worm on its hungry path through the network.

The ARPANET survived the unintentional attack, and Morris became the first person to be tried under the Computer Fraud and Abuse Act, receiving a hefty fine, community service, and probation. Morris’s worm was, by the standards of modern computer viruses and worms, very simple. It wasn’t even intended to cause a problem. But there is a more modern strand of attacks on our now-crucial information and communication infrastructure that is deliberate, intended to cause as much disruption and damage to our society as it can. It goes under the name of cyberterrorism.

Since 9/11 we have all been painfully aware of just how much damage terrorists can do to human life, property, and the liberty of a society to act freely. Although its name contains the word “terrorism,” cyberterrorism is at first sight a very different prospect. It may not result in immediate deaths—although at its worst it could result in carnage—but it has a worldwide reach that is impossible for real-world, physical terrorism. A cyberterrorist attack could target essential sites all around the world at the same time.

The “cyber” part of “cyberterrorism” comes from the word “cybernetics.” Taken from the Greek word for steering something (it’s the Greek equivalent of the Latin gubernare, from which we get the words “govern” and “governor”), this term was coined in the 1940s to cover the field of communication and control theory. At the time it had no explicit linkage with electronics, although the first crude vacuum tube computers had already been built; but it would soon become synonymous with electronic communication and data processing. So cyberterrorism implies acts of terrorism that make use of electronic networks and data, or have our information systems as a target.

We tend to think of the threat of cyberattack as being primarily at the software level, like the ARPANET worm, because the nature of computer networks makes it easy to spread an attack this way. But it is quite possible that a cyberterrorist attack could come at the hardware level. Information technology is always a mix of hardware and software, and each is a potential target.

One approach to hitting the hardware is crude and physical. Although the Internet does have redundancy and can find alternative pathways, there are some weak spots. As was demonstrated in New Zealand, destroy a few of the major “pipes” that carry the Internet traffic and it would be at the least vastly degraded. Equally, there are relatively few computers responsible for the addressing systems that allow us to use a human-readable name, like www.google.com, for the location of a computer, rather than the Internet’s true addressing mechanism, which uses a string of four numbers, each between 0 and 255. A concerted attack on these addressing computers could cause electronic devastation.
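
The translation those addressing computers perform is easy to see from any machine. Here is a minimal sketch using Python's standard socket module; www.google.com is the example name from the text, and the numeric address returned will vary depending on where and when you ask.

```python
import socket

# Ask the addressing system (DNS) to turn a human-readable name
# into the Internet's numeric form: four numbers from 0 to 255.
hostname = "www.google.com"
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} -> {ip_address}")  # the result varies by location and time
```

Take away the relatively small number of servers that answer lookups like this one, and names stop working even though the underlying cables are untouched.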
