I just published the first chapter of my book for free via Ars Technica. Full text over at Ars

Johnny Ryan’s A History of the Internet and the Digital Future has just been released and is already drawing rave reviews. Ars Technica is proud to present three chapters from the book, condensed and adapted for our readers. This first installment is adapted from Chapter 1, “A Concept Born in the Shadow of the Nuke,” and it looks at the role that the prospect of nuclear war played in the technical and policy decisions that gave rise to the Internet.

A Concept Born in the Shadow of the Nuke

The 1950s were a time of high tension. The US and Soviet Union prepared themselves for a nuclear war in which casualties would be counted not in millions but in the hundreds of millions. As the decade began, President Truman’s strategic advisors recommended that the US embark on a massive rearmament to face down the Communist threat. The logic was simple:

A more rapid build-up of political, economic, and military strength… is the only course… The frustration of the Kremlin design requires the free world to develop a successfully functioning political and economic system and a vigorous political offensive against the Soviet Union. These, in turn, require an adequate military shield under which they can develop.

The report, NSC-68, also proposed that the US consider pre-emptive nuclear strikes on Soviet targets should a Soviet attack appear imminent. The commander of US Strategic Air Command, Curtis LeMay, was apparently an eager supporter of a US first strike. Eisenhower’s election in 1952 did little to take the heat out of Cold War rhetoric. He threatened the USSR with “massive retaliation” against any attack, irrespective of whether conventional or nuclear forces had been deployed against the US. From 1961, Robert McNamara, Secretary of Defense under Presidents Kennedy and Johnson, adopted a strategy of “flexible response” that dropped the massive retaliation rhetoric and made a point of avoiding the targeting of Soviet cities. Even so, technological change kept tensions high. By the mid-1960s, the Air Force had upgraded its nuclear missiles to use solid propellants that reduced their launch time from eight hours to a matter of minutes. The new Minuteman and Polaris missiles were on hair-trigger alert. A nuclear conflagration could begin, literally, in the blink of an eye.

Yet while US missiles were becoming easier to let loose on the enemy, the command and control systems that coordinated them remained every bit as vulnerable as they had ever been. A secret document drafted for President Kennedy in 1963 highlighted the importance of command and control. The report detailed a series of possible nuclear exchange scenarios in which the President would be faced with “decision points” over the course of approximately 26 hours. One scenario described a “nation killing” first strike by the Soviet Union that would kill between 30 and 150 million people and destroy 30-70 per cent of US industrial capacity. Though this might sound like an outright defeat, the scenario described in the secret document envisaged that the President would still be required to issue commands to remaining US nuclear forces at three pivotal decision points over the next day.

The first of these decisions, assuming the President survived the first strike, would be made at zero hour (0 H). 0 H marked the time of the first detonation of a Soviet missile on a US target. Kennedy would have to determine the extent of his retaliatory second strike against the Soviets. If he chose to strike military and industrial targets within the Soviet Union, respecting the “no cities doctrine,” US missiles would begin to hit their targets some thirty minutes after his launch order and strategic bombers already on alert would arrive at H + 3 hours. Remaining aircraft would arrive at between H + 7 and H + 17 hours.

Next, the scenario indicated that the President would be sent an offer of ceasefire from Moscow at some time between 0 H and H + 30 minutes. He would have to determine whether to negotiate, maintain his strike, or escalate. In the hypothetical scenario, the President reacted by expanding US retaliation to include Soviet population centres in addition to the military and industrial targets already under attack by the US second strike. In response, between H + 1 and H + 18 hours, the surviving Soviet leadership opted to launch nuclear strikes on western European capitals and then seek a ceasefire. At this point, European nuclear forces launched nuclear strikes against Soviet targets. At H + 24 the President decided to accept the Soviet ceasefire, subject to a withdrawal of the Soviet land forces that had advanced into western Europe during the 24 hours since the initial Soviet strike. The President also told his Soviet counterpart that any submerged Soviet nuclear missile submarines would remain subject to attack. The scenario concluded at some point between H + 24 and H + 26, when the Soviets accepted, though the US remained poised to launch against Soviet submarines.

In order for the President to make even one of these decisions, a nuclear-proof method of communicating with his nuclear strike forces was a prerequisite. Unfortunately, this did not exist. A separate briefing for Kennedy described the level of damage that the US and USSR would be likely to sustain in the first wave of a nuclear exchange. At the end of each of the scenarios tested, both sides would still retain “substantial residual strategic forces” that could retaliate or recommence the assault. This applied irrespective of whether it had been the US or the Soviet Union that had initiated the nuclear exchange. Thus, despite suffering successive waves of Soviet strikes, the United States would have to retain the ability to credibly threaten and use its surviving nuclear arsenal. However, the briefing advised the President, “the ability to use these residual forces effectively depends on survivable command and control…” In short, the Cold War belligerent with the most resilient command and control would have the edge. This had been a concern since the dawn of the nuclear era. In 1950 Truman had been warned of the need to “defend and maintain the lines of communication and base areas” required to fight a nuclear war. Yet, for the next ten years no one had the faintest idea of how to guarantee command and control communications once the nukes started to fall.

A nuclear detonation in the ionosphere would cripple FM radio communications for hours, and a limited number of nuclear strikes on the ground could knock out AT&T’s highly centralized national telephone network. This put the concept of mutually assured destruction (MAD) into question. A key tenet of MAD was that the fear of retaliation would prevent either Cold War party from launching a first strike. This logic failed if a retaliatory strike was impossible because one’s communications infrastructure was disrupted by the enemy’s first strike.

RAND, a think tank in the United States, was mulling over the problem. A RAND researcher named Paul Baran had become increasingly concerned about the prospect of a nuclear conflagration as a result of his prior experience in radar information processing at Hughes. In his mind improving the communications network across the United States was the key to averting war. The hair-trigger alert introduced by the new solid fuel missiles of the early 1960s meant that decision makers had almost no time to reflect at critical moments of crisis. Baran feared that “a single accidental[ly] fired weapon could set off an unstoppable nuclear war.” In his view, command and control was so vulnerable to collateral damage that “each missile base commander would face the dilemma of either doing nothing in the event of a physical attack or taking action that could lead to an all out irrevocable war.” In short, the military needed a way to stay in contact with its nuclear strike force, even though it would be dispersed across the country as a tactical precaution against enemy attack. The answer that RAND delivered was revolutionary in several respects—not least because it established the guiding principles of the Internet.

Nuclear-proof communications

Baran came up with a solution that suggested radically changing the shape and nature of the national communications network. Conventional networks had command and control points at their center. Links extended from the center to the other points of contact in a hub-and-spoke design. In 1960 Baran began to argue that this was untenable in the age of ballistic missiles. The alternative he began to conceive of was a centrifugal distribution of control points: a distributed network that had no vulnerable central point and could rely on redundancy. He was conscious of theories in neurology that described how the brain could use remaining functions effectively even when brain cells had died. An older person unable to recall a word or phrase, for example, would come up with a suitable synonym. Using the neurological model, every node in the communications network would be capable of relaying information to any other node without having to refer to a central control point. This model would provide reliable command and control of nuclear forces even if enemy strikes had wiped out large chunks of the network.

In his memorandum of 1962, “On Distributed Communications Networks,” Baran described how his network worked. Messages travelling across the network would not be given a pre-defined route from sender to destination. Instead they would simply have “to” and “from” tags and would rely on each node that they landed at on their journey across the network to determine which node they should travel to next to reach their destination in the shortest time. The nodes, by a very simple system that Baran describes in less than a page, would each monitor how long messages had taken to reach them from other nodes on the network, and could relay incoming messages to the quickest node in the direction of the message’s destination. By routing the messages like “hot potatoes,” node-to-node, along the quickest routes as chosen by the nodes themselves, the network could route around areas damaged by nuclear attacks.
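The routing idea described above can be sketched in miniature. The Python below is a toy illustration only, not Baran's actual scheme: the node names, link delays, and the use of a shortest-path estimate as a stand-in for the delay tables his nodes learned from passing traffic are all assumptions made for the demo. A message carries only its "to" and "from" tags; each node independently forwards it to whichever neighbour looks quickest toward the destination, so destroying a node simply makes traffic route around the damage.

```python
import heapq

class Node:
    def __init__(self, name):
        self.name = name
        self.links = {}  # neighbour name -> observed link delay


def dijkstra(nodes, src, dst):
    """Cheapest total delay from src to dst (a stand-in for the
    learned delay tables in Baran's actual design)."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, n = heapq.heappop(pq)
        if n == dst:
            return d
        if d > dist.get(n, float("inf")):
            continue
        for nbr, delay in nodes[n].links.items():
            nd = d + delay
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return float("inf")  # unreachable


def route(nodes, src, dst, max_hops=20):
    """Forward a message hop by hop; each node greedily hands the
    'hot potato' to its cheapest next hop toward dst."""
    path, current = [src], src
    while current != dst and len(path) <= max_hops:
        best, best_cost = None, float("inf")
        for nbr, delay in nodes[current].links.items():
            cost = delay + dijkstra(nodes, nbr, dst)
            if cost < best_cost:
                best, best_cost = nbr, cost
        if best is None or best_cost == float("inf"):
            return None  # destination unreachable
        path.append(best)
        current = best
    return path if current == dst else None


def make_net(edges):
    nodes = {}
    for a, b, delay in edges:
        nodes.setdefault(a, Node(a)).links[b] = delay
        nodes.setdefault(b, Node(b)).links[a] = delay
    return nodes


# A small mesh with a fast path through C and a slower path through E.
net = make_net([("A", "B", 1), ("B", "C", 1), ("C", "D", 1),
                ("A", "E", 2), ("E", "D", 2)])
print(route(net, "A", "D"))  # takes the fast path via B and C

# Simulate damage: destroy node C, and the message routes around it.
for n in net.values():
    n.links.pop("C", None)
del net["C"]
print(route(net, "A", "D"))  # falls back to the path via E
```

With the intact network the message travels A → B → C → D; once C is knocked out, the same call yields A → E → D without any central authority redirecting traffic, which is the property Baran was after.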

Rewiring the nation’s communications system in this manner was a conundrum. The analog systems of the early 1960s were limited in the number of connections they could make. The process of relaying, or “switching,” a message from one line to another more than five times significantly degraded signal quality. Yet, Baran’s distributed network required many relay stations, each capable of communicating with any other by any route along any number of relay stations. His concept was far beyond the existing technology’s capabilities. However, the new and almost completely unexplored technology of digital communications could theoretically carry signals almost any distance. This proposal was radical. Baran was suggesting combining two previously isolated technologies: computers and communications. Odd as it might appear to readers in a digital age, these were disciplines so mutually distinct that Baran worried his project could fail for lack of staff capable of working in both areas.

Baran realized that digital messages could be made more efficient if they were chopped up into small “packets” of information. (Acting independently and unaware of Baran’s efforts, Donald Davies, the Superintendent of the Computer Science Division of the UK’s National Physical Laboratory, had developed his own packet-switched networking theory at about the same time as Baran.) What Baran and Davies realized was that packets of data could travel independently of each other from node to node across the distributed network until they reached their destination and were reconstituted as a full message. This meant that different types of transmission, such as voice and data, could be mixed, and that different parts of the same message could avoid bottlenecks in the network.
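The chop-and-reassemble step can be shown in a few lines. This is an illustrative sketch, not Baran's or Davies's actual packet format: the packet size, field names, and example message are invented for the demo. Each packet carries addressing plus a sequence number, so the destination can rebuild the message even when packets arrive out of order over different routes.

```python
import random

PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for this demo)


def packetize(src, dst, message):
    """Chop a message into fixed-size packets, each independently
    addressed and numbered for reassembly."""
    chunks = [message[i:i + PACKET_SIZE]
              for i in range(0, len(message), PACKET_SIZE)]
    return [{"from": src, "to": dst, "seq": i, "total": len(chunks),
             "data": chunk} for i, chunk in enumerate(chunks)]


def reassemble(packets):
    """Rebuild the original message from packets in any arrival order."""
    packets = sorted(packets, key=lambda p: p["seq"])
    assert len(packets) == packets[0]["total"], "missing packets"
    return "".join(p["data"] for p in packets)


packets = packetize("A", "D", "ATTACK AT DAWN; HOLD FIRE UNTIL ORDERED")
random.shuffle(packets)      # packets may arrive in any order
print(reassemble(packets))   # the original message is restored
```

Because each packet is self-describing, no two packets of the same message need to take the same route, which is what lets voice and data share lines and lets individual packets dodge congested or destroyed nodes.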

Remarkably, considering the technical leap forward it represented, the US did not keep Baran’s concept of distributed communications secret. The logic was that:

we were a hell of a lot better off if the Soviets had a better command and control system. Their command and control system was even worse than ours.

Thus, of the twelve memoranda explaining Baran’s system, only two, which dealt with cryptography and vulnerabilities, were classified. In 1965 RAND officially recommended to the Air Force that it should proceed with research and development on the project.

Baran’s concept had the same centrifugal character that defines the Internet today. At its most basic, what this book calls the “centrifugal” approach is to flatten established hierarchies and put power and responsibility at the nodal level so that each node is equal. Baran’s network focused on what he called “user-to-user rather than… center-to-center operation.” As a sign of how this would eventually empower Internet users en masse, he noted that the administrative censorship that had occurred in previous military communications systems would not be possible on the new system. What he had produced was a new mechanism for relaying vast quantities of data across a cheap network, while benefiting from nuclear-proof resilience. Whereas analog communications required a perfect circuit between both end points of a connection, distributed networking routed messages around points of failure until they reached their final destination. This meant that one could use cheaper, more failure-prone equipment at each relay station. Even so, the network would be very reliable. Since one could build large networks that delivered very reliable transmissions with unreliable equipment, the price of communications would tumble. It was nothing short of a miracle. AT&T, the communications monopoly of the day, simply did not believe him.

When the Air Force approached AT&T to test Baran’s concept, it “objected violently.” There was a conceptual gulf between the old analog paradigms of communication to which AT&T was accustomed and the centrifugal, digital approach that Baran proposed. Baran’s centrifugal model was the antithesis of the centralized, hierarchical technology and ethos on which AT&T had been founded. AT&T’s experts in analog communications were incredulous at the claims Baran made about digital communications. Accustomed to analog transmission, which relied on consistent line quality to relay a message as cleanly as possible from point to point, they could not accept that cutting messages into packets, as Baran proposed, would not hinder voice calls. Explaining his idea in a meeting at AT&T headquarters in New York, Baran was interrupted by a senior executive who asked:

Wait a minute, son. Are you trying to tell me that you open the switch before the signal is transmitted all the way across the country?

Yet the theoretical proofs that digital packet switching could work were beginning to gather. In 1961, a young PhD student at MIT named Leonard Kleinrock had begun to investigate how packets of data could flow across networks. In the UK, Donald Davies’s packet-switching experiment within his lab at the National Physical Laboratory in 1965 proved that the method worked to connect computer terminals and prompted him to pursue funding for a national data network in the UK. Though Davies was unable to secure sufficient funding to pursue a network project on the scale that would emerge in the US, his laboratory did nonetheless influence his American counterparts. Also in 1965, two researchers called Lawrence Roberts and Thomas Marill connected a computer at MIT’s Lincoln Laboratory in Boston with a computer at the System Development Corporation in California.

Despite these developments, AT&T had little interest in digital communications, and was unwilling to accept that Baran’s network, which had a projected cost of $60 million in 1964 dollars, could replace the analog system that cost $2 billion per year. One AT&T official apparently told Baran, “Damn it, we’re not going to set up a competitor to ourselves.” AT&T refused the Air Force’s request to test Baran’s concept. The only alternative was the Defense Communications Agency (DCA). Baran believed that the DCA “wasn’t up to the task” and regarded this as the kiss of death for the project. “I felt that they could be almost guaranteed to botch the job since they had no understanding for digital technology . . . Further, they lacked enthusiasm.” Thus, in 1966 the plan was quietly shelved, and a revolution was postponed until the right team made the mental leap from centralized analog systems to centrifugal digital ones.

Innovation incubator: RAND

The breadth of Baran’s ideas and the freedom that he had to explore them had much to do with the organization in which he worked. RAND was a wholly new kind of research establishment, one born of the military’s realization during the Second World War that foundational science research could win wars. Indeed, it is perhaps in the Second World War rather than in the Cold War that the seeds of the Internet were sown. Even before America’s entry into the War, President Roosevelt had come to the view that air power was the alternative to a large army and that technology, by corollary, was the alternative to manpower. In Roosevelt’s mind it had been German air power that had caused Britain’s acquiescence in the Munich Pact. The US, which had hitherto neglected to develop its air forces, resurrected a program to build almost two and a half thousand combat aircraft and set a target capacity to produce ten thousand aircraft per year. When it did enter the War the US established a “National Roster of Scientific and Specialized Personnel” to identify “practically every person in the country with specialized training or skill.” Senior scientists understood the War as “a battle of scientific wits in which outcome depends on who can get there first with best.” Chemists held themselves “aquiver to use their ability in the war effort.” Mathematicians, engineers and researchers could point to the real impact of their contribution to the war effort. Vannevar Bush, the government’s chief science advisor, told the President in 1941 that the US research community had “already profoundly influenced the course of events.”

The knowledge race captured the public’s imagination too. The US government appealed to the public to contribute ideas and inventions for the war effort. While Vannevar Bush regarded tyros, individuals who circumvented established hierarchies to inject disruptive and irrelevant ideas at levels far above their station, as “an unholy nuisance,” he and the military research establishment were open to the ideas of bright amateurs. The National Inventors’ Council, “a clearing house for America’s inventive genius,” reviewed inventions from the public that could assist the war effort. It received over 100,000 suggestions, and is distinguished, among other things, as being one of the many organizations and businesses that rejected the concept of the photocopier. The Department of Defense released a list of fields in which it was particularly interested in receiving suggestions, including such exotica as “electromagnet guns.” In one startling example, two ideas of Hugo Korn, a sixteen-year-old from Tuley High School in Chicago, were apparently given practical consideration. One was an airborne detector “to spot factories in enemy country by infrared radiation.” The other was “an aerial camera which would be used in bad weather conditions.” During the First World War the Naval Consulting Board had performed a similar function, though of the 110,000 proposals submitted to it, all but 110 were discarded as worthless and only one was implemented.

Researchers during the War basked in public recognition of their critical importance. This new status, the president of MIT mooted, might “result in permanently increased support of scientific research.” As the end of the War drew near, political, military and scientific leaders paused to consider the transition to peacetime. The significance of the moment was not lost on Roosevelt. He wrote to Vannevar Bush in late 1944 asking:

New frontiers of the mind are before us, and if they are pioneered with the same vision, boldness, and drive with which we have waged this war we can create a fuller and more fruitful employment and a fuller and more fruitful life . . . What can the Government do now and in the future to aid research activities by public and private organizations . . . so that the continuing future of scientific research in this country may be assured on a level comparable to what has been done during the war?

In response Bush drew together the senior scientists of the nation to draft Science: the Endless Frontier, a report that established the architecture of the post-war research environment. At the core of its recommendations was a general principle of openness and cross-fertilization:

Our ability to overcome possible future enemies depends upon scientific advances which will proceed more rapidly with diffusion of knowledge than under a policy of continued restriction of knowledge now in our possession.

Though he argued for the need for federal funding, Bush was against direct government control over research. While not directly involved in its establishment, Bush’s emphasis on cross-disciplinary study, openness and a hands-off approach to funded research would percolate and become realized in RAND. Science: the Endless Frontier also proposed the establishment of what would become the National Science Foundation, an organization that was to play an important role in the development of the Internet many decades later.

Also considering the post-war world was General “Hap” Arnold, the most senior officer in the US Army Air Force. He wrote that:

the security of the United States of America will continue to rest in part in developments instituted by our educational and professional scientists. I am anxious that the Air Force’s post war and next war research and development be placed on a sound and continuing basis.

General Arnold had a natural appreciation for military research. He had been a pioneer of military aviation at the Wright Brothers’ flight school in 1911, where he and a colleague became the first US military officers to receive flight instruction. Despite a personal ambivalence towards scientists and academics, whom he referred to as “long-hair boys,” he placed a priority on the importance of research and development. As he told a conference of officers, “remember that the seed comes first; if you are to reap a harvest of aeronautical development, you must plant the seed called experimental research.”

At the close of the Second World War Arnold supported the establishment of a new research outfit called “Project RAND,” an acronym as lacking in ambition as its bearer was blessed (RAND is short for “Research and Development”). The new organization would conduct long-term research for the Air Force. Edward Bowles, an advisor to the Secretary of War on scientific matters, persuaded Arnold that RAND should have a new type of administrative arrangement that would allow it the flexibility to pursue long-term goals. It was set up as an independent entity and based at the Douglas Aircraft Company, chosen in part because of a belief that scientists would be difficult to recruit if they were administered directly by the military and because Douglas was sufficiently distant from Washington to allow its staff to work in relative peace. RAND’s earliest studies included the concept for a nuclear-powered strategic bomber called the “percojet,” which suffered from the fatal design flaw that its pilots would perish from radiation before the craft had reached its target; a strategic bombing analysis that took account of over 400,000 different configurations of bombers and bombs; and a “preliminary design of an experimental world-circling space ship.” This was truly research at the cutting edge of human knowledge.

RAND was extraordinarily independent. General Curtis LeMay, the Deputy Chief of Air Staff for Research and Development, endorsed a carte blanche approach to Project RAND’s work program. When the Air Force announced its intention to freeze its funding of Project RAND in 1959 at 1959 levels, RAND broadened its remit and funding base by concluding research contracts with additional clients that required it to work on issues as diverse as meteorology, linguistics, urban transport, cognition and economics. By the time Paul Baran examined packet-switched networking at RAND the organization was working at levels both below and above the Air Force and with clients outside the military structure.

In 1958, a year before Baran joined RAND, a senior member of RAND’s staff wrote in Fortune magazine that military research was “suffering from too much direction and control.”

There are too many direction makers, and too many obstacles are placed in the way of getting new ideas into development. R and D is being crippled by . . . the delusion that we can advance rapidly and economically by planning the future in detail.

The RAND approach was different. As another employee recalled, “some imaginative researcher conceives a problem . . . that he feels is important [and] that is not receiving adequate attention elsewhere.” Before joining RAND Baran had been “struck by the freedom and effectiveness of the people” there. RAND staff had “a remarkable freedom to pursue subjects that the researcher believes would yield the highest pay off to the nation.” One RAND staff member recalled “anarchy of both policy and administration… [which] is not really anarchy but rather a degree of intellectual freedom which is… unique.” The staff were given freedom to pursue their interests and indulge their eccentricities. “We have learned that a good organization must encourage independence of thought, must learn to live with its lone wolves and mavericks, and must tolerate the man who is a headache to the efficient administrator.” Though scientists at RAND may have been more politically conservative than their counterparts in academia, many were oddballs who did not fit in: “One man rarely showed up before two o’clock, and we had another who never went home.”

Reflecting in 2003, Baran recalled a freedom for staff to pursue projects on their own initiative that has no contemporary comparison. This was the environment in which Baran developed the concept of packet switching, a concept so at odds with established thinking about communications that the incumbent could not abide it.

Systems analysis, the RAND methodology, promoted the perspective that problems should be considered in their broader economic and social context. Thus by the time Baran joined RAND in 1959 the organization incorporated not only scientists and engineers, but also economists and, after some initial teething problems, social scientists. This might explain why, though he wrote in the context of a sensitive military research and development project, Baran posed a remarkable question at the conclusion of one of his memoranda on distributed communications:

Is it now time to start thinking about a new and possibly non-existent public utility, a common user digital data plant designed specifically for the transmission of digital data among a large set of subscribers?

From the outset Baran’s ideas were broader than nuclear-proof command and control. His vision was of a public utility.


