Summer 2004
The New Pandora’s Box: Technology and Data Terror
Terrorism can best be summed up in the words of La Fontaine: “tous ne mouraient pas, mais tous étaient frappés” (all did not die, but all were struck). This is what terrorism seeks to achieve: to hurt the (relatively) few in a way that impacts and terrorises the many. Our world is so dependent on the integrity, the flow and the processing of data that data itself offers a whole new field of opportunities to terrorist groups, whether their aims are financial, political or religious.
We need only imagine the catastrophic effects of events such as terrorist groups:
- gaining control, through remote tampering, of the data networks of nuclear power plants, air traffic or train control systems, water or electricity distribution systems, or satellite-based information systems (telecoms, GPS, television)
- tampering with massive financial systems such as credit card systems, pension and/or medical payment systems, central financial clearing systems
- unleashing computer viruses that not only penetrate millions of individual computers but take control of them and/or corrupt/erase data
- impersonating individuals and/or organisations in order to corrupt/subvert information through an otherwise healthy system
The purpose of this short list is simply to demonstrate the damage potential of terrorist attacks targeted at specific data systems. Imagine the state of a population where pensions are no longer paid out because the computers have been ‘knocked out’.
The list of major data targets mentioned above includes many massive, centralised systems. Viewed from an anti-terrorist perspective, these all too often suffer from several fundamental flaws. First, they were primarily designed as computerised versions of earlier non-computer systems. Second, their basic architecture dates back, almost without exception, to pre-network days; network communications were bolted on at a later stage rather than built in from the start as structural security requirements. Third, international terrorism, and specifically data terrorism, was not then the threat that it is now, or should now be recognised to be.
The result is that the defence of such systems is based on the simple principle of “keeping the bad guys out”. That, sadly, was the principle behind the Maginot Line. In the everlasting race between cops and robbers, the leading edge of technology-based robbery is today in the hands of a very large population of predominantly young people, of just about every nationality, religion and political creed in existence. Hence it is only a matter of time until a very talented group of nerds and hackers espouses the goals and aims of conventional terrorist organisations. Keeping tabs on the nuclear elite of the former Soviet Union involves “only” thousands of people, all of whom have documented histories. By contrast, cutting-edge would-be hackers number in the millions, many of them still in college or university. The creator of the Sasser computer virus, for example, was an 18-year-old German studying computer science part-time. Yet Sasser penetrated millions of computers and did an estimated US$4 billion of damage. Had its creator designed it to erase data rather than merely stop and restart infected computers, a simple task for a designer of his ability, the damage costs would have been five times higher.
Another form of attack against single, massive data targets is the so-called DDoS, or Distributed Denial of Service, in which hundreds of computers target the same, single point of entry into a data network, on the Internet or elsewhere. The sheer number of requested connections saturates the entry point and prevents any service. This can “shut out” any network-based system and is, relatively speaking, quite simple to achieve. And as all information systems today are in one form or another interconnected, they all have entry points.
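To make the saturation mechanism concrete, here is a minimal sketch in Python of a single entry point with fixed capacity; the capacities and rates are invented for illustration and are not measurements of any real system.

```python
# Minimal sketch of the saturation effect described above: a single entry
# point with fixed capacity. The numbers are illustrative assumptions only.

CAPACITY = 1_000        # requests the entry point can serve per second
LEGITIMATE_RATE = 200   # genuine requests arriving per second
ATTACK_RATE = 50_000    # flood requests from compromised machines per second

def served_fraction(legitimate: int, attack: int, capacity: int) -> float:
    """Fraction of legitimate requests still served when the entry point
    can only handle `capacity` of the combined arrivals."""
    total = legitimate + attack
    if total <= capacity:
        return 1.0
    # Every arriving request has the same chance of being among the
    # `capacity` requests that actually get through.
    return capacity / total

print(f"Normal load: {served_fraction(LEGITIMATE_RATE, 0, CAPACITY):.0%} of genuine requests served")
print(f"Under flood: {served_fraction(LEGITIMATE_RATE, ATTACK_RATE, CAPACITY):.0%} of genuine requests served")
```

Under these assumed figures, a genuine request has roughly a two per cent chance of being served once the flood begins, which is why a single entry point is such an attractive target.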
Yet there are ways of avoiding such attacks, or of preventing them from doing catastrophic damage. Just as al Qaeda mutated from a single visible and vulnerable organisation into a network of connected but separate entities, and just as companies that worked in the huge office space of the Twin Towers are now spread over many buildings spanning three states, distributing data over multiple systems would make it much more difficult for an attacker to crash the system. A good example of how to protect a data system against a crash is the flight control system of Airbus planes. The plane’s flight is controlled by a computer that prevents it from executing manoeuvres deemed abnormal and/or dangerous. To ensure that a possible failure of this computer does not crash the system, another computer, designed by another manufacturer and running different software, executes the same computations at all times. If the two independent systems do not produce the same results, one of them must be faulty, and a third independent computer takes over. Airbus have decided that they cannot make a failure-proof system; so they have “merely” made it failure-tolerant.
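As a rough illustration of that dissimilar-redundancy principle, here is a minimal Python sketch built around a toy flight computation; the function names and the fallback logic are invented for illustration and do not describe Airbus’s actual design.

```python
# Two independently written implementations compute the same result; a third
# takes over if they disagree. Everything here is a toy stand-in.

def pitch_command_a(sensor: float) -> float:
    """Primary channel (imagine: one vendor, one codebase)."""
    return max(-1.0, min(1.0, sensor * 0.5))

def pitch_command_b(sensor: float) -> float:
    """Independently written channel implementing the same specification."""
    return min(1.0, max(-1.0, 0.5 * sensor))

def pitch_command_backup(sensor: float) -> float:
    """Third, simpler fallback used only when the first two disagree."""
    return max(-1.0, min(1.0, sensor * 0.5))

def failure_tolerant_command(sensor: float, tolerance: float = 1e-6) -> float:
    a, b = pitch_command_a(sensor), pitch_command_b(sensor)
    if abs(a - b) <= tolerance:
        return a                          # the two dissimilar channels agree
    return pitch_command_backup(sensor)   # disagreement: one channel is faulty
```

The point is not that any single channel is unbreakable, but that the overall system keeps working when one of them fails.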
So, while networks and their myriad connections, open to unknown hackers, can be seen as the ultimate enemy of data security, they are also where the solution comes from: making systems not impregnable, but unstoppable. The United States has chosen to make its own country the largest example of the impregnable “keep the bad guys out” policy, by excluding suspected foreigners and employing massive, all-encompassing databases, rather than implementing a “we shall survive” approach. The effective answer to the challenge should lie not in attempting to make a data-terrorist-proof system, but in making it data-terrorist-tolerant.
The second major obstacle to serious data security is concern over people’s privacy. Tracking data flows is one of the best ways of tracking the terrorists who initiate them, such as locating people via their cell phones, websites and e‑mails. But the missing link for effective anti-terrorist action is a reliable link between the machine and its data flow on the one hand, and the person using it on the other. Privacy and civil rights advocates are sure to object strongly to the concept of a secure, world-wide ID as a requirement for accessing cell phones and connected computers. What they do not realise is how much of what they would object to in principle already exists, such as a secure world “phone book” of mobile phone users; that is how you can be reached in Hong Kong via a Berlin-based phone number. Similarly, a world registry of Internet addresses already exists; that is how e‑mail reaches its destination. Both vital databases, by the way, are distributed over vast networks, making them eminently survivable in case of terrorist attack. Why is this so? They were created recently, are not replicas of earlier pre-computer models, and are not in the hands of the public sector. Cell phone technology, moreover, already “gives away” people’s location if the users have subscriptions rather than prepaid cards, just as fixed-IP Internet connections can be traced back to their users. So generalising ID requirements actually breaks no new ground in terms of privacy. Far worse, surely, is the fact that large numbers of private companies collate data they collect from people’s daily lives: what they buy, whom they call, where they surf on the Net, where they live and their preferred hobbies. Why let them do this and yet refuse to apply the same techniques to collective security?
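The distributed Internet registry mentioned above is the Domain Name System, which mail routing also relies on. The one-line Python sketch below, using the reserved example.com domain purely for illustration, shows an address lookup being answered by that globally distributed hierarchy rather than by any single central computer.

```python
import socket

# The answer to this lookup comes from the Domain Name System, a registry
# replicated and delegated across servers worldwide rather than held in one
# central machine. "example.com" is the standard reserved example domain.
print(socket.gethostbyname("example.com"))
```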
What is the technology to achieve this vital link between man and machine? The obvious answer has to be the smart card. Cell phones already use them in the form of SIM cards to recognise and authorise users. Modern computers offer smart cards as the intelligent option for protecting passwords and authorising use of the machine. The cards can also encrypt data if added security is needed. Smart cards have reduced credit card fraud by over 95% compared with previous systems. So why should users not employ a national ID smart card to connect to cell phones and computers? How can we justify individuals’ rights to enter, modify and tamper with data that belongs to others while preserving their anonymity?
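The way such a card recognises and authorises its user can be pictured as a challenge-response exchange: the card proves it holds a secret without ever transmitting that secret. The Python sketch below is a simplified illustration of that idea only, using an assumed HMAC scheme; it is not the actual GSM or banking-card protocol.

```python
import hashlib
import hmac
import secrets

# Illustrative only: key handling and protocol details are assumptions.
CARD_SECRET = secrets.token_bytes(16)   # burned into the card when it is issued
OPERATOR_COPY = CARD_SECRET             # the network keeps its own copy

def card_response(challenge: bytes) -> bytes:
    """What the card computes when the network challenges it."""
    return hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()

def network_accepts(challenge: bytes, response: bytes) -> bool:
    """The network recomputes the expected answer and compares."""
    expected = hmac.new(OPERATOR_COPY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(8)      # fresh random challenge each time
assert network_accepts(challenge, card_response(challenge))
```

Because the secret never leaves the card, eavesdropping on the exchange gives an attacker nothing to replay.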
The obvious objection to such a system, from a technical point of view, has to be that it would be simple to forge fake cards; forged cards already exist for many pay-TV decoders. Similarly, the encryption algorithms used to protect DVD copyrights or credit card transactions have been published on the Internet. This objection illustrates once more the Maginot Line mentality. It is not by making foolproof cards or unbreakable data encryption that we can prevent terrorism; it is by multiplying the data that constitutes an identity. If a person has an ID smart card which he uses as a cell phone card, a forger needs to create not only a ‘believable’ name and address but also an ‘acceptable’ phone number. If the card incorporates biometric data (a picture, fingerprints, iris scans), yet more fake data must be created. If that card is also a key to an IP address, one more ‘suitable’ address needs to be forged. And every layer of functionality added to such a card (a driver’s licence, a payment card, a Social Security card) interconnects it with yet more systems, each of which becomes a key component of overall security even as the card’s practicality and user benefits increase.
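A minimal sketch of this “multiplying the data” idea, with registries and records invented purely for illustration: a forged card must be consistent with several independent systems at once, so every extra linked attribute is one more check a forger must pass.

```python
# Invented example registries; in practice these would be separate systems
# run by separate organisations.
PHONE_REGISTRY = {"ID-1234": "+33 6 12 34 56 78"}
BIOMETRIC_REGISTRY = {"ID-1234": "fingerprint-hash-abc"}
IP_REGISTRY = {"ID-1234": "192.0.2.17"}

def identity_checks(card_id: str, claimed: dict) -> list[str]:
    """Return the name of every registry whose record contradicts the card."""
    failures = []
    for name, registry, key in [
        ("phone", PHONE_REGISTRY, "phone"),
        ("biometric", BIOMETRIC_REGISTRY, "fingerprint"),
        ("network", IP_REGISTRY, "ip"),
    ]:
        if registry.get(card_id) != claimed.get(key):
            failures.append(name)
    return failures

# A forger who fakes the phone number correctly still fails two other checks.
print(identity_checks("ID-1234", {"phone": "+33 6 12 34 56 78",
                                  "fingerprint": "forged-hash",
                                  "ip": "203.0.113.9"}))
```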
History teaches us that we all too often prepare for the last war rather than the next one. Our world today is profoundly underpinned by data; even DNA and the code modifications of GM foods are data. And while we recognise terrorism as the great threat of our time, very little is done to prevent data terrorism, be it a global attack on a symbolic target or a racketeering operation by Mafia-type groups. How many companies have introduced biometrics to restrict access to their computers, or even take their passwords seriously? How many of them have distributed and interlocked their data in a way that makes even a successful central attack non-destructive? A clear sign of the terrorist potential of data attacks is that most of those that succeed against banks and insurance companies go unreported, even to the police, because public knowledge of them would undermine trust in the institution. Terrorists today understand the benefits of distribution and interconnection far better than the forces chasing them. It is time to reverse this.
Philippe Berend is a Paris-based technology writer and consultant