Internet History Program Archives - CHM

The Bourne Collection: Online Search Is Older than You Think!
March 18, 2020


Charles Bourne recalls a pioneering experiment with Doug Engelbart at SRI in 1963. Watch the full oral history.

Charlie Bourne was an expert in computerized search for 40 years before Google. CHM has recently finished cataloging his unique collection of materials documenting the history of online search and information systems from the 1950s onward, supported by a generous grant from the National Archives.

Many of us assume that retrieving and browsing information online arose with the web in the 1990s, instantly catapulting us from thumbing through dusty card catalogs to the millisecond response time of modern search engines. Older computer insiders may have vaguely heard of one or two specialized earlier computerized services, like LexisNexis for journalists and lawyers, or the pricey Dialog.

LexisNexis

The real history is longer and richer. Full-text online search was prototyped in the early 1960s, partly through Charlie’s work, and commercialized by decade’s end. But pre-computer machine-aided search goes all the way back to punched card sorters. These were conceived in the 1830s and built in the 1890s, during a period of huge advances in card catalogs and other manual retrieval techniques. Real-time, interactive search was pioneered in the 1920s with Emmanuel Goldberg’s microfilm “search engine,” built into a desk.

By the late 1950s, manufacturers were selling a Rube Goldbergian mix of different storage and retrieval technologies to governments, corporations, and the military: Rapid Selectors capable of searching 330 pages per second on microfilm, magnetic media or microfilm integrated into punched cards, and various futuristic-looking viewers. Some were already computer controlled, and major conferences were starting up around how computers would soon revolutionize the entire field.

Semen Korsakov, modern illustration of the function of his 1830s punched card concepts for researching ideas (Ideascope) and more. Source: Wikipedia

This is the background against which Charlie Bourne, a student of computing great Harry Huskey, was turned on to information retrieval by another one of his professors at UC Berkeley, Douglas Engelbart. As we’ll see, he spent the rest of his long career at the intersection of the two fields.

One reason the early history of online information remains unfamiliar to computing folks is that much of it took place under the auspices of library science research and professional organizations like the Association for Information Science and Technology (ASIS). Even in recent decades, the computing and information retrieval professions have operated largely on parallel tracks, broken by occasional moments of cross-fertilization—like the NSF-funded Digital Library Project that led to Google.[1]

Charlie Bourne’s collection, which contains materials both from his own varied work and from the research for his books, offers a truly unique chronicle of the two fields’ shared history. Processing of the collection was supported by an Access to Historical Records grant from the National Archives’ National Historical Publications and Records Commission (NHPRC). The NHPRC supports projects that promote access to America’s historical records to encourage understanding of our democracy, history, and culture.


Operating principle of Goldberg’s microfilm retrieval machine, from patent drawing.

After graduating, Charlie took a job at Stanford Research Institute (now SRI International), where he evaluated and wrote specifications for a number of retrieval systems: a microfilm system to handle three million records for the Air Force, an automated system to coordinate collecting and translating Soviet bloc literature, a Navy database to inventory every kind of radio signal from enemy equipment for shipboard use, and so on.

His old professor Doug Engelbart soon moved to SRI himself, and in 1963 Charlie helped him with a pioneering experiment he described in his 2015 oral history for CHM, excerpted at the top of this blog.

Charlie wrote the specification for perhaps the earliest example of modern online search, where you search the full text of documents on a remote computer. Lynn Chaitin did the programming. The remote computer was one of the behemoths custom-built for the SAGE nuclear warning system. Engelbart had arranged to use it through his funder, computing giant J.C.R. Licklider at ARPA.

The test worked perfectly, even allowing Boolean qualifiers like “and” and “or.” Licklider himself was researching what would become his 1965 book Libraries of the Future, which predicted that by the year 2000 all literature would be online, and searchable, with the massive task of cataloguing eased by weak AI.

In 1963 computerized search itself was not new. All of the search features they tested from SRI, and many others, had been demonstrated before on batch-processed systems using punched cards. These included natural language queries, relevance scoring, stem and “wild card” searching, proximity and phonetic searches, alternate and weighted search terms, and automatic searching on synonyms. What was new was searching in real time, in a live back-and-forth session with a computer rather than loading up a deck of cards and waiting for a result.
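To make the mechanics concrete, here is a minimal sketch of full-text search with Boolean qualifiers over an inverted index, written in modern Python purely for illustration. Nothing here is drawn from the actual SRI or SAGE code; every name and document below is invented.

```python
# Minimal sketch of Boolean full-text search over an inverted index.
# Purely illustrative; not drawn from the actual 1963 SRI/SAGE implementation.
from collections import defaultdict

documents = {
    1: "information retrieval with punched cards",
    2: "real time search on a remote computer",
    3: "microfilm retrieval and card catalogs",
}

# Build the inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        index[term].add(doc_id)

def search_and(*terms):
    """Documents containing every term (Boolean AND)."""
    return set.intersection(*(index[t] for t in terms)) if terms else set()

def search_or(*terms):
    """Documents containing at least one term (Boolean OR)."""
    return set.union(*(index[t] for t in terms)) if terms else set()

print(search_and("retrieval", "cards"))    # {1}
print(search_or("microfilm", "computer"))  # {2, 3}
```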

Charlie himself had been busy, earning his master’s degree from Stanford in 1963 as a young father and completing his first book. Methods of Information Handling won the American Documentation Institute (ADI) Book-of-the-Year award. He left SRI in 1966 to serve as a vice-president at Information General Corporation while consulting widely in the information industry, as he did for most of his long career.

One early client was the CIA, for whom he evaluated a gigantic computerized system for automatically translating intercepted Russian documents into English (it wasn’t quite ready). Others would include the Stanford University Libraries, UNESCO, the National Academy of Sciences, the Library of Congress, the National Agricultural Library, the US Patent Office, and the United Nations. Some of the early systems Charlie evaluated were fully computerized, but those handling images usually included an analog component such as microfilm. Computer memory was too expensive to make high-quality graphics practical until the 1980s. Charlie was also active in professional organizations, serving as president of ASIS, where he helped demonstrate Doug Engelbart’s work to both computing and information science colleagues.


Analog information retrieval equipment, from Bourne’s 1963 Methods of Information Handling.

In 1971 he became a professor at the School of Librarianship and Information Studies at UC Berkeley (now the School of Information), while also directing the University’s innovative Institute of Library Research. He oversaw seminal work in taking UC libraries’ card catalogs online. His 1980s book Technology in Support of Library Science and Information Service drew on those experiences.

In 1977 he moved to pioneering online information provider Dialog Information Services, working his way up to Vice President of the General Information Division. Dialog was a key early example of the crossover between information science and the computing industry. Founder Roger Summit had been part of Lockheed Missiles and Space Corporation’s mid-1960s Information Sciences Laboratory. He had built his ideas about iterative search, a “dialog” between the user and the computer, into a separate online search division for Lockheed. (This was very different from the “take your best shot” approach of modern search engines, where you generally need to run a new search to refine irrelevant results.) Dialog licensed access to leading databases in a variety of fields, which you could search with its powerful tools. While the overall amount of information was far smaller than on the modern web, it was far, far more relevant and better organized.
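A rough sketch of that iterative style, in Python and with invented data and method names (not Dialog’s actual syntax or implementation), might look like this: each query produces a numbered result set, and later commands refine or combine earlier sets instead of starting from scratch.

```python
# Sketch of Dialog-style iterative searching: each query yields a numbered
# result set that later commands can refine or combine. Names and data are
# invented for illustration only.

class Session:
    def __init__(self, index):
        self.index = index   # term -> set of record ids
        self.sets = []       # history of result sets: S1, S2, ...

    def select(self, term):
        """Create a new numbered set of records matching a term."""
        result = self.index.get(term, set())
        self.sets.append(result)
        print(f"S{len(self.sets)}: {len(result)} records for '{term}'")

    def combine(self, a, b, op):
        """Combine two earlier sets by number, e.g. combine(1, 2, 'and')."""
        left, right = self.sets[a - 1], self.sets[b - 1]
        result = left & right if op == "and" else left | right
        self.sets.append(result)
        print(f"S{len(self.sets)}: {len(result)} records (S{a} {op.upper()} S{b})")

# A searcher narrows results step by step, in a "dialog" with the system:
s = Session({"solar": {1, 2, 3, 9}, "energy": {2, 3, 5}, "policy": {3, 5, 8}})
s.select("solar")        # S1: 4 records
s.select("energy")       # S2: 3 records
s.combine(1, 2, "and")   # S3: 2 records (S1 AND S2)
```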

But Dialog was expensive, often more than the equivalent of $50 per hour. Even as computer equipment plummeted in price between the mid ’60s and the early ’90s, subscriptions to a growing variety of databases remained a major cost. Dialog and competitors like LexisNexis were for corporate budgets. Only in the web era would this kind of deep, general search trickle down to the rest of us, both with keyword search engines like InfoSeek, AltaVista, and Google, and with more traditional hierarchical directories like the early Yahoo! or the later Wikipedia.

Charlie retired from Dialog in 1992 and continued his consulting work while preparing a third book. A History of Online Information Services, 1963-1976, which he coauthored with Trudi Bellardo Hahn, came out in 2003. It won the Association for Information Science and Technology (ASIS&T) Book-of-the-Year award. Charlie lives in Menlo Park.

About the Collection

The detailed Finding Aid to the Charles Bourne Collection is here. The contents of the collection range in date from 1947 to 2016, consisting of materials related to Bourne’s pioneering career in the database and information retrieval industry, including his work at Stanford Research Institute (now SRI International), UC Berkeley, and Dialog Information Services. The collection contains Bourne’s personal project files, which include papers, presentations, and other materials related to his professional work, including his book A History of Online Information Services, as well as the unpublished work Cost Analysis of Library Operations. The collection also holds Bourne’s subject files on a range of topics, including organizations developing search systems, people working in the field, and database suppliers. These subject files contain technical reports, instruction manuals, internal reports, clippings, articles, correspondence, meeting notes, and some images and recordings. Additionally, there is a large collection of serials, conference proceedings, and books relevant to Bourne’s computer and information science interests, including materials from a number of late 1950s and 1960s conferences on computerized search and browsing.

In addition to papers, the collection includes examples of several kinds of pre-computer information retrieval media, such as punched cards with embedded microfilm.


The Bourne Collection includes examples of pre-computer information retrieval media. This illustration of edge-notched cards is from Bourne’s 1963 Methods of Information Handling.

A Note on the Bourne Collection from Professor Michael Buckland

Michael Buckland of the UC Berkeley School of Information is a leading information scientist who introduced Charlie Bourne to me, and suggested to Charlie that he offer his collection to CHM. Dr. Buckland served as an advisor on pre-computer “world brains” for the web gallery of our permanent exhibition Revolution, and is internationally known for his groundbreaking research into Emmanuel Goldberg of Zeiss-Ikon. Goldberg’s actually built 1920s microfilm “search engine” presaged the remarkably similar Memex concept of Vannevar Bush by over a decade.

Charlie Bourne and His Papers

By Michael Buckland

When I first met Charles Bourne 50 years ago in 1969 he was already a leading figure in the world of documentation and information science. He was actively engaged as convention chairman for the forthcoming annual meeting of the American Society for Information Science (ASIS) in San Francisco that fall. Reflecting his personal outlook, the convention was planned with two special emphases: an effort to include participants from other professional groups with interests related to ASIS, and the incorporation of attention to new techniques for information dissemination and exchange. The latter included attention to online systems and ways to match attendees with sessions relating to their interests. He was also, already at that time, President-elect of ASIS, a status which is a unique mark of respect by one’s peers. Later I had the benefit of being one of his colleagues when he was a professor at the University of California, Berkeley’s School of Library and Information Studies, where he directed an innovative, multidisciplinary, multi-campus research organization, the Library Research Unit, which engaged in a wide range of useful studies of information storage and retrieval systems.

Charlie achieved his reputation not only by his ability but also by being well-organized. He made it his business to find out who else was interested in document management, data management, and library automation, especially the application of emerging technologies including punched cards and photography as well as the steadily expanding use of digital technologies. Working for SRI, he needed to know the state of the art of whatever problem he was addressing. In any case, it was his nature to want to be familiar with the landscape in which he was working. Affable, polite, and widely known as “Charlie,” his broad knowledge soon made him the “go to guy” and he was in demand as an instructor, as a speaker, and as a consultant. He was retained for applied research and consultation by a very wide range of institutions both at home and abroad.

From early on Charlie demonstrated abilities to cope and excel in quite diverse ways. He worked on a steamboat, as a cook, picked fruit, and supported his college studies as a judo instructor. Competent, thorough, practical, and systematic are adjectives that spring to mind. His academic degrees in both electrical engineering and industrial engineering gave him an excellent grounding for his professional career.

Personal papers are often a disorganized, incomplete, and, in effect, eclectic mess. The ideal is likely to result when the person involved has three characteristics: First, that person should retain a collection that, if not exhaustive, is at least comprehensive in its coverage. In other words, the papers retained should be relatively complete, which requires steering between gaps in coverage and the packrat mentality that results in collections that are exhaustive – and exhausting. Second, he or she should understand the topics covered in the papers and how the papers relate to the whole. The third requirement is that the papers be well organized. These qualities are not often found, but Charlie Bourne’s deposited papers are strong in all three aspects.

They are, therefore, as an archive of historical papers, a rich and most promising resource for the future. But it is not merely a promise. The proof is already at hand, because the historical value of Charles Bourne’s papers has already been very richly demonstrated: they formed the basis for the encyclopedic A History of Online Information Services, 1963-1976 that he co-authored with Trudi Bellardo Hahn (MIT Press, 2003). Thanks to the hospitality of the Computer History Museum the benefits derived from Charles Bourne’s career will continue permanently.

Berkeley, 2019

Notes

  1. Steven Levy, In the Plex (Simon & Schuster, 2011), p. 16. Kindle edition.


Where to Next?
November 27, 2019

It’s a Tuesday morning in 2037. You hurriedly brush your teeth and dress to meet the self-driving car arriving downstairs. As it pulls away from the curb, what world awaits? Will you pass children frolicking in lush playgrounds built over now unneeded parking lots? Or lines of homeless ex-truckers waiting for a rare remaining bus, as autonomous luxury RVs carry snoozing techies to work along a new shoreline born of rising seas?

It might not just be cars driving themselves, either. The same technology that lets a Volvo safely navigate a world of stray dogs and road construction can eventually get cheap and small enough to help a walking hors d’oeuvres table thread its way through a crowd. Or a public health microbot navigate the leg hairs of a target in search of louse eggs, after infiltrating his socks.

Self-driving cars may indeed prove the killer app that turns smart navigation into an industry. But the implications of such navigation are far broader, from an internet of moving things to a rethink of nearly every way we transport both objects and ourselves. More than we’re consciously aware, transport today is shaped by the attention spans, comfort, egos, and budgets of human drivers. Once we unravel those links, transport of all kinds will find new centers of gravity, business models, and policy goals. But there may be twists and turns along the way.

In mid-2014 I curated an exhibit on the evolution of autonomous vehicles called Where To? As a temporary exhibit, it was quick to create and meant to last a few months. But the enormous interest in the topic surprised us. Buzz Aldrin, who rode largely autonomous rockets to the Moon in 1969, came to the press opening with Sebastian Thrun, the father of exhibit sponsor Google’s self-driving car program. The sustained public curiosity made it our longest-lived temporary exhibit ever—we’ve finally taken it down after five-and-a-half years. Interested parties can still access the online version of the exhibit, plus added content.

Where To? A History of Autonomous Vehicles, on exhibit at CHM May 9, 2015–December 1, 2019.

Much of that ongoing interest stems from suspense, the simple fact that we still don’t know how the story ends. Self-driving cars are an 85-year-old dream still in the process of (probably) becoming real and are jumpstarting a set of technologies with implications far beyond the road.

Back in 2014 autonomy for cars seemed a rather quixotic whim pursued by the ultra-rich founders of Google and a few others. It felt like a solution in search of a problem. Why spend zillions automating a difficult task that most humans not only do surprisingly well, but often enjoy? The use case was far less obvious than for a dishwasher or washing machine. We’ll explore some of the motivations below. But the first questions were still technical—how could it work? is it safe?—and historical.

Where To? traced the growing success of autonomous vehicles from the 1860s torpedo to the WWII German V-2 guided missile, which developed into the rockets that carried Aldrin and Armstrong to the Moon. Later autonomous vehicles have explored the deepest oceans, hunted people from the air (drones), left our solar system (Voyager), and harvested our breakfast cereals in the form of self-driving combines.

The exhibit showed the extent to which autonomous vehicles already surround us, from open pit mines to warehouses to hospital corridors, except for the one place we notice most—public roads. That final frontier has remained a dream since the 1930s, when science fiction stories explored the topic and General Motors mocked up an autonomous future at the 1939 World’s Fair.

New York World’s Fair, Futurama: Highways & Horizons, 1939. At the time visions of freeways cutting right through city centers were still considered chic and futuristic. Credit: General Motors

Five and a half years later, even after several billion dollars of combined investment from over a dozen car and other companies, general-purpose autonomous cars remain over the horizon. If they never arrive, of course, safety and other concerns will be irrelevant. But robocars are looking likely enough that it’s time to start asking serious questions about how this next revolution might reshape society.

We’re standing at the cusp of a change in how people and things move about as great as those facing visionaries in the 1930s, when the truck was freeing farmers from the clutches of railroad monopolies and giant freeways remained mostly a thrilling, utopian dream.

Perhaps even as great as a Tuesday night in 1837, when Charles Wheatstone and William Cooke demo’d the first electrical telegraph as a control system for a new kind of transport startup—a railroad company.


Euston Railway Station, London, showing the original wrought iron roof of 1837. That year, Charles Wheatstone and William Cooke demonstrated the first electrical telegraph between here and the station at Camden Town, as a control system for then-new railroads. Credit: Wikimedia Commons

Our ability to peer over the horizon feels little better than it must have then. In 1837, the press was full of literally hysterical fears of what the speed of trains might do to human health. Some speculated that women’s uteruses might be dislocated from the motion or that passengers would be unable to breathe. The belief that a train’s speed and motion could drive men temporarily insane persisted for decades. Some of today’s fears around self-driving may look as odd in retrospect. Or as rational as another 1830s fear, boiler explosions. The telegraph wasn’t always better understood; a contemporary article described a customer asking to send sauerkraut to a soldier at war over the telegraph wires.

Back then there was no precedent for guessing how mechanized transport and light-speed information nets might transform a world where little had ever moved faster than a horse. 1930s visions had considerably more to go on both for the automobile and for the dream of making automobiles self-guiding like trains. Designers like Norman Bel Geddes drew freeways cutting proudly through city centers in a way reminiscent of Fritz Lang’s film Metropolis, sometimes—as at Futurama—with unpiloted vehicles rolling on self-guiding highways. Real freeways had been pioneered by Mussolini’s autostrada system. Sears had knitted together transport and information technology into a delivery network that put Amazon to shame.

The first science fiction story we could find that talks about self-driving, neuropsychiatrist David Keller’s 1935 “The Living Machine,” started off with a vision whose main points could be lifted from a Waymo marketing piece:

Old people began to cross the continent in their own cars. Young people found the driverless car admirable for petting. The blind for the first time were safe. Parents found they could more safely send their children to school in the new car than in the old cars with a chauffeur.

But it turned darker when the cars began hunting pedestrians and purposefully crashing to kill their own passengers. The reason was an especially pulp 1930s touch—traces of cocaine in their gasoline had driven them insane.

Illustration by Frank R. Paul for “The Living Machine” by David H. Keller, the first story we could identify that talks about autonomous cars. In Wonder Stories, May 1935. Continental Publications, edited by Hugo Gernsback.

Today, if we put our thinking caps on right, we can try to learn from a nearly 200-year mingling of telecom, automation, and mechanized transport. One lesson is that not all effects are predictable, like the greatest unintended consequence of them all—climate change.

A more recent revolution in moving things around, the web and internet, has shown how small early decisions can make even the most promising technology go partly awry. Like a mischievous genie, reality has a way of turning the loveliest, purest wishes into very mixed results. The best we can do is try to anticipate some of the genie’s more obvious moves in advance.

So as you cruise along on that Tuesday morning in 2037, what might self-driving really look like? The dozens of companies trying to make it commonplace assure us you’ll be rolling through a green paradise. They point out that because today’s private cars sit idle more than 95 percent of the time, replacing them with a few shared, energy-sipping robocars will free up former street parking spaces to become bike paths and parklets. The community gardens you pass will be ex parking garages. There will be far less congestion, of course, and little pollution as silent electric flitters whisk us from place to place, the air clear and blue above playgrounds thick with sunshine and butterflies.

In this world, a child can chase an errant ball with little fear of being mowed down. Hospitals will be smaller and sleepier places, freed of the tragic burden of over four million yearly emergency room visits from car accidents in the US alone. But on the rare occasion that a citizen needs emergency care, say for a serious gardening injury, the response will be seamless. If no airlift is available, the few vehicles on the streets will automatically make way for the ambulance as traffic lights turn from red to green, alerted by the wireless network that connects all cars. Out on the freeways, “road trains” of electric robotrucks spaced evenly nose to tail will flexibly carry our impulse purchases with nearly the energy efficiency of real trains.

But perhaps when you round the corner, things have not turned out so egalitarian. It’s worth noting that the dreams above are meant to be reassuring, much as early cars looked like and were marketed as friendly “horseless carriages” rather than harbingers of unstoppable change. Or the way the massive neoclassical arch at the entrance to Euston train station framed the era’s bleeding-edge tech of steam and iron in the reassuring grandeur of the past.

The green dreams for self-driving don’t make it obvious why a dozen companies including carmakers are investing serious sums in the technology. Even if you take those companies’ public answer at face value—that they want to save the million lives a year lost to road accidents—that goal might be more cheaply and directly pursued by funding, say, better access to vaccines. Or by beefing up road safety programs in developing countries with high accident rates.

That’s not to deny the passion of key people working on the topic. Some have lost close friends to accidents, and being in a position to potentially save millions of lives through your work is a rare blessing. But it strains credulity that some of the same carmakers that bitterly resisted seat belts and collapsible steering columns (which keep drivers from being impaled through the chest) have chosen self-driving as a way to finally atone.

Some of the green arguments, too, at first seem to require a certain cognitive dissonance. In order to reduce the world’s vast use of carbon-emitting personal vehicles, we should teach every one of those vehicles to . . . drive itself, which can also free it to drive empty on any number of errands or circle the block with an advertising banner. Even the greenest scenario—where people carpool and share use of private cars—doesn’t actually seem to require self-driving. BlaBlaCar and Getaround are two apps that let you do those things today.

If you follow the arguments of the self-driving faithful through to the end, however, everything makes sense, though parts of the vision require the kind of nerdish faith in logic that animated the early web community: the belief that because something is possible and should happen, it will. But even if you are considerably more cynical, there are a lot of very real, very good things that can potentially come out of the driverless revolution.

The simplest part of the answer to “why self-driving?,” of course, is the estimated $7 trillion annually to be made directly from revolutionizing the ground transport industry. The other part is all the knock-on applications for the technology, from delivery drones that can safely navigate around crowds to intelligent lawnmowers.

Suppose as you get on the main road you’re passed with a whoosh by a limo carrying startup CEO Andin, who is paying a thick premium to force cars out of his way almost as if he were an ambulance. He wishes VTOL flying cars were allowed in this part of town. AramCoins hemorrhage invisibly from his corporation’s account to fatten the wallet of FastTrak Platinum through the AutoNet. He’s late for a stockholder meeting. As he reclines in the buttery leather owner’s chair of the Tesla Olympus Mons, he’s editing his talking points on the big screen in front of him. He wonders for the dozenth time whether he should have followed the Feng Shui consultant’s recommendation to orient the screen toward the left. Not only does it block the Matisse, but sitting sideways makes even mild braking uncomfortable.

 

Misconceptions about new technologies are nothing new. The common Victorian belief that a train ride could cause instant, temporary insanity persisted even into the 20th century. Illustrated Police News, Saturday, 10 August 1904. Newspaper image © The British Library Board. All rights reserved. With thanks to The British Newspaper Archive (www.britishnewspaperarchive.co.uk).

Without glancing out the window Andin passes Jorge, a retired Lyft driver waiting in line for one of the remaining public buses. He’s hoping it’s not too full of tents to find a place. His social credit score isn’t high enough to let him ride in one of the community shuttles provided by Googazon since the merger. He’s still trying to clear it of an old ticket for rolling through a stop sign, when human driving was still allowed at night.

Just down the road Elona is trying to sneak onto a short stretch of driverless road in their parents’ old manual Volt to avoid a two-hour detour on the few remaining all-purpose roads. Registration fees for manual driving have been rising like crazy, but are still far cheaper than a new car. But no luck. The flashing lights of the police drone fill the windshield as they hurriedly dump their last few microdoses of gluten out the rust hole in the floorboards.

For some, this Tuesday morning didn’t start at home. Cornelia is dreaming of waves when she gently stirs and starts to wake up. The big Waymo RV must have hit a bump even the smart suspension can’t fully damp out. She prays it’s not the remains of another homeless encampment on the shoulder, where the screen shows her the Waymo was trying to route around a broken-down Apple iRoll. There’s always a risk somebody was inside a flattened tent.

She’d gone to bed early after putting the kids to bed at her mom’s in Redding. But now she’s only at Vacaville, on her way to the office in Mountain View. Smarter cars can’t change the fact that too many people are doing night commutes from the affordable upper Central Valley. What used to be two hours at 120 mph can now be eight or more. She’ll have to join the meeting via telepresence from the road. As she sits up and stretches she tries connecting to Robin with her brain-computer interface but just gets static. Maybe her therapist is right, they aren’t on the same wavelength.

We can imagine dozens more scenarios both likely and not, from the potent results of combining self-driving with hookup apps, to an old man walking blocks to get around a smart intersection where cars whiz seemingly through each other’s paths—like a living cloth woven of speeding vehicles—at over 60 miles an hour.

Concept video showing a smart intersection where cars don’t stop but pass between each other, coordinated by the self-driving network. “Shanghai 2030” video, General Motors

I personally think some of the issues to watch are around the fine-grained control of people’s movements, like the scenario where Jorge can’t ride a shuttle because his social credit score is too low, or perhaps welfare recipients not being allowed to spend income on “frivolous” trips.

In Western democracies we tend to take freedom of movement for granted. But many countries require internal visas for travel and especially immigration between regions. Self-driving combined with digital payment systems could bring such controls down to the micro level. Governments might use them to make public assembly more difficult or even enforce an electronic apartheid against certain ethnic groups. Of course, other restrictions might have broad and justified support, like using smart transport to automatically enforce a restraining order against an abusive spouse. But all would bring up new questions about civil liberties.

Another sensitive area is public transport. In theory, self-driving can vastly expand its reach in rural and under-served areas. Autonomous public vans and shuttles can affordably cover routes that would be prohibitive today. Vehicles could be sized to actual demand, rather than running huge standard buses which are empty much of the time.

But without the right framework it’s just as easy to imagine a scenario more like San Francisco today, where Uber, Lyft, and corporate shuttles steadily lure better-heeled passengers out of public transport, widening the social divide. Self-driving optimists assure us the roboshuttles will be cheaper than buses, serving everybody. Even if that’s eventually true, the transition period could be a rough ride.

In the early phases of self-driving, even the most hopeful concede that congestion might go up, rather than down. Self-driving may also speed the current “retail apocalypse,” as stores are emptied out by the convenience of online ordering.

There is another set of questions about what passengers want, i.e., what ordinary people would like to use self-driving for. When we posed options to some of our visitors in a small survey, “Apply makeup, shave, or catch some last-minute zzz’s during your commute to work” came first, followed by making a car the designated driver for a night on the town. Others wrote in that they would use it to chauffeur their aging parents after taking away the car keys, or to avoid the stress of night driving.

The desire to sleep or put on makeup shows a potential flaw in the shared robotaxi model assumed by many self-driving companies. Illustrations of the self-driving future tend to show something like first-class air travel, with well-dressed people playing games or working on screens in sleek interiors with generous legroom. But as people begin to do more and more non-driving things in their cars, won’t they increasingly want a more personal space? Do you plan to cut your toenails in front of a stranger, or sleep long stretches in your clothes? One egalitarian solution is to divide robotaxis into little compartments, like first class seats on some planes. But both Andin in his limo and Cornelia in her RV are projections of where self-driving might just as easily go—toward rock-star tour buses rather than lightweight pods. Especially when you no longer need to park or maneuver the great beasts.

So which of this wild mix of scenarios is most likely to come true? Perhaps all of them. Only not in the same place, or the same time. Just as the automobile was deployed in very different ways by different cities and regions, the world of robodriving will share global commonalities, but have local models. For instance, freeways in Germany and the Western USA are free and used by all. In Italy and France their steep tolls make them a high road for the rich. Varying regulations make bike and scooter sharing schemes wildly different city to city. The early years of self-driving, especially, may be a Cambrian explosion of different models, before best practices—or influential monopolies—emerge.

We can also imagine lovely, green, walkable scenarios beyond those commonly proposed by the self-driving firms. A number depend not on the spread of self-driving passenger vehicles, but on the many self-driving things that will not be cars at all.

The Rest of the Robots

Shakey the robot, “grandfather” of self-driving cars. Along with the Stanford Artificial Intelligence Laboratory Cart, Shakey at SRI pioneered techniques for navigating through an unfamiliar environment with artificial intelligence and machine vision. Photo © Mark Richards

As Where To? pointed out, self-driving vehicles have surrounded us for decades. We just haven’t noticed, since they’re not passenger cars. But within the industry the connections are clear. Just look at the work history of the woman who led engineering for the Waymo Firefly robocar CHM currently has on display in its lobby. Jaime Waydo’s prior job was 12 years leading the mechanical design of Mars rovers for NASA’s Jet Propulsion Laboratory. Robodriving is even in her family. She told me that her parents, like most large-scale farmers, have used self-driving harvesters, tractors, and other major equipment for years.

Harvesters, extraterrestrial rovers, drones, and robot forklifts have only to deal with relatively static or controlled environments. Mars has few pedestrians. That’s how these vehicles can function with only modest smarts or on premade tracks, and in the case of rovers, frequent human intervention.

Safely carrying our loved ones at high speed between cars, buildings and other litigious humans is a different class of problem. If we solve it, we will have made a number of practical breakthroughs that involve key areas of AI—machine vision with real time interpretation, human behavior modeling, machine learning with training, and more—with applications far, far beyond the automotive industry.

This broader promise was further impressed on me when we gave a tour of our permanent Revolution exhibition to Sebastian Thrun, father of Google’s self-driving car efforts, and his successor Chris Urmson. Both men had been finalists in the DARPA Grand Challenge that kicked off modern self-driving technology. And both nearly fell to their knees in homage when I showed them Shakey, the icon of our AI and Robotics gallery, and the Stanford Cart. These 1960s robots were two of the very first built to navigate an unfamiliar environment. I realized that however passionate Thrun and Urmson may be about making self-driving safe and real, for them it is just one key application of far broader goals in AI. Self-driving cars are a form of robot. Making them work well is a problem in artificial intelligence.

Minuteman Missile Guidance Computer, c. 1960. Among the first autonomous vehicles to use digital computers for control were nuclear-tipped missiles. Computer History Museum/Photograph © Mark Richards

This is all a long-winded way to explain that not all the things driving themselves that Tuesday morning in 2037 or beyond need have humans inside. Or look like cars, or be car-size. Not by a long shot.

Imagine that between Andin in his speeding limo and Jorge waiting for the bus, a wide local freight lane occupies the area formerly taken up by street parking or the soft shoulder. Long distance freight still travels by train or robotruck. But as he waits, Jorge is passed by a clanking mini-excavator on its way to a job site, an oversize house trailer with flashing lights, parts of a pop-up restaurant, several pizzabots, and a pile of smallish packages rolling by at jogging speed under the solar awning of a smart UPS dolly. The dolly’s delivery drones buzz away carrying individual packages with the agility of birds. Traffic is even heavier at night when freight rates are lowest.

Now, three kids on hacked auto hoverboards weave in and out of the moving goods at inhuman speed, nearly brushing the edges of rolling stock of every kind and whooping as music blares from their backpacks. An old man with a wispy goatee and stretchmarked tattoos curses them out from his old brakeless fixie bicycle, the only nonautomated vehicle in the lane.

Off the road entirely, a headless “mule” packbot is walking gingerly down a large pile of soft redwood chips, having scooped up a load. The mule’s chassis is a direct descendant of the Boston Dynamics walking robots that terrified a generation of YouTube viewers in the 2010s with their eerily agile antics. But now mated with the kind of serious navigation skills once reserved for self-driving cars. It can lope through a crowd. Safely cross a traffic-filled street. And still climb stairs.

Andin is too busy with his PowerPoint to notice the minidrone that splats on his windshield, tiny rotors and legs still whirring in a spastic two-step. Unlike its bigger brethren it was designed to be cheap, not infallible. This one will never pollinate an almond crop or stuff a hornet’s nest with anti-breeding pheromones. But thousands of others will.

Whoa, Nelly! Boston Dynamics, Legged Squad Support System Robot prototype for DARPA, 2012. Around the size of a horse. Credit: Wikimedia Commons

Augmentation and Autonomy

It’s a lucky accident that the way our nervous systems developed for guiding our bodies through trees and crowds happens to comfortably scale up to controlling huge motor vehicles moving at enormous speed. It even works in three dimensions—airplanes.

Over the past century the automotive industry has fitted its products to our native abilities as closely as a shoe to a human foot. Motor vehicles augment us in as physical a manner as a suit of armor, an excavator’s shovel, or a fighting cock’s spurs. But the luck may hold in reverse. If researchers can finally manage the hard task of giving a car the ability to drive itself safely, it means they will have given it a general set of navigation skills that partly imitate our own; skills that can move it through all sorts of environments. (That still won’t make cars especially smart in general terms. Cockroaches are far better at “driving” themselves than any car on the near horizon).

They could also apply such skills to various machines besides cars, from nanobots to movable buildings. How quickly depends on a version of Moore’s Law, that is, that self-driving tech will get cheaper and smaller as it finds more markets. Since we opened Where To? key sensors like LIDAR have dropped in both size and price by an order of magnitude, as have dedicated self-driving computers.

Giving more ordinary machines better vision and navigation abilities can seem prosaic, if still useful—apparently a not uncommon Roomba problem is the machine spreading pet poop over the entire carpet. The importance of good driving may be easier to understand in the negative. Without such skills AI and especially robotics will remain fundamentally limited, restricted to controlled environments and specialized applications, despite other advances. There’s a reason the ability to move around reliably is nearly universal across the animal kingdom.

If full self-driving gets cheap, some of the first candidates for liberation could be the simple autonomous vehicles currently used in closed environments—hospital carts, farm equipment, mining trucks, forklifts, and so on. Drones, too, get by today with primitive autonomy, because the air is generally emptier than the ground. But if they get more common and deliver things closer to people, drones will need better smarts.

Coming to a street near you? La Contessa, galleon vehicle at Burning Man, 2005. By Neil Girling, Creative Commons license

From there the possibilities open up. Huge Burning Man-style galleons cruising down suburban streets for a hired event as other self-driving vehicles move aside to give them room? Lawnmowers doing the rounds of a neighborhood on their own, or pop-up restaurants assembled from modular parts that drive themselves to the next destination?

Feel free to submit your own ideas in our Comments section.

Like self-driving, the Internet of Things can sometimes seem a solution in search of a problem. Visionaries and VCs alike have struggled for decades to explain why you need your toaster or fridge to be online. But the value proposition is clearer for anything that starts moving about. First, current self-driving technology needs access to maps and other online data to compensate for its own limited artificial intelligence. Second, a lot of the magic comes when autonomous vehicles are networked together, as we’ve seen in several examples. There may be solid reasons for an Internet of Moving Things.

That net could even extend to items which are not autonomous vehicles, but which they might encounter. For instance, why not have your cat’s collar or chip identify it as a cat to the network, rather than leave that up to the paint sprayer’s computer vision to figure out in real time?
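As a concrete (and entirely hypothetical) illustration of that idea, here is a minimal sketch of the kind of self-identification beacon such a networked object might broadcast. The message fields and function names are invented for this example; no such protocol is described in the exhibit.

```python
# Hypothetical sketch of an "Internet of Moving Things" beacon: an object
# announces what it is, so nearby autonomous machines don't have to infer
# it from computer vision alone. All field names are invented for illustration.
import json
import time

def make_beacon(object_type, object_id, lat, lon):
    """Build a small self-identification message an object might broadcast."""
    return json.dumps({
        "type": object_type,   # e.g. "cat", "bicycle", "stroller"
        "id": object_id,       # unique tag for this particular object
        "lat": lat,
        "lon": lon,
        "timestamp": time.time(),
    })

def identify(beacons, vision_guess):
    """Prefer a self-reported identity from a nearby beacon over a vision-only guess."""
    for raw in beacons:
        msg = json.loads(raw)
        if msg.get("type"):
            return msg["type"]
    return vision_guess

beacon = make_beacon("cat", "collar-1234", 37.4143, -122.0775)
print(identify([beacon], vision_guess="plastic bag"))  # -> cat
```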

Hopes

The world we are now seeing is a vision, an artistic conception, which may undergo many changes as it develops into the great realities of tomorrow.

— New York World’s Fair, Futurama: Highways & Horizons, 1939

We are the biggest constraint on vehicle design. Today’s vehicles move us quickly because we’re impatient. They’re powerful because that makes us feel powerful, as well as giving us a physical thrill. They are some of the most designed and decorated machines in the world, because they are a public expression of our status and taste. They have windows in certain places so we can see, are roughly rectangular to make them predictable to maneuver and park, and the front seats always face forward, which is also away from other passengers.

The vast majority of them are cars or SUVs, a kind of all-purpose vehicle designed to be the only one a family need own. Cars are a workable compromise on a long series of tradeoffs: big enough for five people but small enough to maneuver and park, somewhat affordable, fast enough for the freeway with enough fuel for at least 350 miles, comfortable and quiet yet not too heavy for decent mileage and handling, and so on. Being all-purpose means cars can’t be optimized for any of them.

Self-driving can potentially unravel any of these constraints. As we’ve seen, some of the results might be anything but green or livable. But the potential is just as great on the other side. There are a couple of obvious savings, some of which are already touted by self-driving car companies. Much energy in transport is wasted on being constantly ready to give the human driver a fleeting experience of power at any time. Powerful engines reassure the driver that a press on the pedal will have the car respond like a tiger. (Electric car makers like Tesla compete with fueled cars partly by making their models even more wildly fast).

Being a passenger is different. Do you care if your subway car can do 0 to 60 in five seconds? Generally, no. In fact it would be uncomfortable if it did. Many vehicles today also sacrifice efficiency to feel “drivable.” For instance, long hoods on trucks and vans make them drive in a more car-like way, but waste interior space.

The shape of cars to come? This image is of a current production one-person vehicle that still requires a human driver. Shoprider Flagship 4-Wheel Cabin Scooter. American Quality Health Products

Equally important can be anything that erodes the current norm of a one-size-fits-all vehicle for individual ownership and use. While theoretically smartphones can let us share cars just fine today, with car sharing apps like Getaround and carpooling apps like BlaBlaCar or now Waze, the logistics are a lot tougher. Human carpooling almost always forces either driver or passenger to go somewhat out of their way, or make multiple connections. Car sharing requires the sharer to get to where the car is, rather than vice versa.

Self-driving can smooth over both. Let’s imagine your Tuesday drive in 2037 is in a world where both sharing and pooling have taken off. The car at your house had dropped off the last passenger ten minutes before and come to you on the cheaper, slower freight route. The car itself might be owned by a company, a city if it’s part of the public transport fleet, or by an individual renting it out when not in use – your ride app can draw from all three.

Shared cars permit specialization. Because you’re going on a shortish hop you reserved a tiny one-person flivver, a convertible. But later you’ll choose a bigger car to go home with your son after school. For next weekend’s outing with relatives you’ll take a minibus. For an Alaska vacation you might choose an all-wheel drive camping van, or for a thrilling date a self-driving motorcycle or ultralight plane. Because you’ll only use many of these vehicles for shortish trips, they don’t need to all lug around a thousand pounds of batteries like an all-purpose electric car today. This is especially important since battery production is a highly polluting, carbon-emitting process.

In terms of efficiency and saving energy, the most important specialization of all may be to decouple transporting people from transporting everything else. Brad Templeton has written extensively on self-driving and served as an advisor for our Where To? exhibit. He pointed out some years ago that things have very different transport requirements than people. We mostly don’t care how fast things go, what route they take, or what time of day they travel. Crashes are far less of an issue. Things can be put on smaller, lighter vehicles that do less damage than a car if something goes wrong. Those vehicles can avoid going near people or passenger cars, or travel so slowly that even crude autonomy is enough to prevent serious accidents. And if the worst does happen, things can usually be replaced. He suggested that years before self-driving was mainstream for us and our children, cheap, slow “deliverbots” might be practical for carrying freight on back roads at night.

Atoms and Bits

Big enabling technologies don’t always advance in step. The printing press made representing information on pages dozens of times faster and cheaper than before. Yet for the next 400 years, transporting those pages—and the knowledge they contained—still happened by slow, expensive horse and ship.

Other leaps do overlap. Some even reinforce each other. By 1837, the year Wheatstone and Cooke tested the first successful telegraph as a control system for rail, transport was seriously catching up. The steam ship and train were about to revolutionize the movement of goods, including vast volumes of printed matter, and the original information vector—people.

Another emerging technology, automation, was moving from clockmaker’s workshops to farms and factory floors.

By the start of the 20th century semiautomated telecom functioned like a mature nervous system for the brute forces of global transport. The right message along the wires could command kilotons of grain across the world, or have sheets of paper printed a million times, or bring the news that would raise—or disband—an army.

But the next transport revolution has mostly happened on its own parallel track, without much direct integration with information technology or automation. As we’ve seen the automobile age was a spectacular triumph of mechanical augmentation—taking human muscles and vision and reflexes and using them to delicately control giant, roaring machines of steel at the speed of an express train and beyond.

During the automobile age, both automation and the technology of information have made enormous leaps. Most knowledge now travels instantly and at virtually no cost over the kind of electronic networks pioneered by Cooke and Wheatstone that long-ago night. Automation in the form of computing is at the heart of our networks, and mild AI helps us search for information while monitoring our conversations and showing us ads.

But aside from some serious innovations like container shipping, the vast bulk of transport happens pretty much the way it did in 1935 when Dr. Keller wrote “The Living Machine.” Computers have entered our vehicles, to be sure, but mostly as adjuncts to an act of driving that would have felt familiar to a young Henry Ford.

Likewise, 75 years into the computer age, the vast majority of computer applications remain trapped behind two-dimensional screens, moving nothing heavier than pixels.

All of that may soon change.

The First 50 Years of Living Online: ARPANET and Internet
October 25, 2019

Editor’s Note: This is part of an ongoing series dedicated to the web anniversaries of 2019, including the 50th anniversary of general purpose computer networks connected over the ARPANET, the 30th anniversary of the web’s conception, and anniversaries for things from mass Wi-Fi to familiar giants like Amazon and Facebook.

Bill Duvall, SRI computer room, late 1960s.

As you groggily prop yourself up in bed to thumb through the morning weather report on your phone, you are not alone. Six billion other people use similar networks. A good chunk of them are also online right now. They are checking their social media, watching videos, reading the news, working, seeking romance, browsing porn, placing bets, submitting tax forms, chatting, buying cattle, job hunting, writing a birthday greeting, or listing their old toaster for sale.

Today, we live a good portion of our waking hours online. Nearly every form of communication, every medium, every transaction, every kind of information has been brought into a single kind of system. Not since our distant ancestors learned to talk has there been as big a change in the minute by minute process of primate communication.

And there’s no end in sight. The net is creating new gatekeepers, new centers of gravity and power. Society is building the long-term institutions that will determine who can access information, add to it, and profit from its sale.

The net itself will draw even closer. Augmented Reality and Virtual Reality have been mature in vertical markets for decades. They’re still just searching for the killer app to reach the rest of us, and pull us even deeper into the online world. If the startups working on brain-computer interfaces have their way, the net may someday pierce the final veil—the wet, bony one surrounding the very seat of our thoughts. Whether through implanted electrodes or gentler non-invasive scanning, we could end up literally sharing those thoughts online. Or posting the soundtracks in our heads, or being monitored for (literal) thought crimes.

Today nearly everyone alive is a user of general purpose computer networks: four billion on the Internet, plus another two billion on mobile cell phone networks. 50 years ago this coming Tuesday, there were zero users. And then there were two.

On the evening of October 29, 1969, two young programmers sat at computer terminals 350 miles apart: Charley Kline at UCLA and Bill Duvall at the Stanford Research Institute (SRI) in Northern California. Kline was trying to log in to Duvall’s computer. “The first thing I typed was an L,” Kline says. Over the phone, Duvall told Kline he had gotten it. “I typed the O, and he got the O.” Then Kline typed the G. “And he had a bug and it crashed.” And that was it. The first message between hosts on the new network was “lo.” The bug was quickly fixed, and the connection fully up before they went home.

The first ARPANET session: Bill Duvall and Charles Kline.

Thus began one of the first big trials of a then-radical idea: networking different kinds of computers together. The project was called ARPANET, an experimental network built by the US ARPA (Advanced Research Projects Agency). It was designed to connect islands.

By the 1960s multiple users could have accounts on a single big computer using terminals, and share messages and files. Such timesharing systems were already being used to prototype many of the features of modern computing. These ranged from virtual reality and graphics at the University of Utah to the Web-like oNLine System (NLS) at SRI, as well as cloud-like computing power for rent with Tymshare and Compu-Serve and thriving educational communities at MIT, Dartmouth, and the University of Illinois. Many had been partly funded by ARPA. Timesharing got users accustomed to being “online” together over phone lines.

But each of those timesharing systems was a little island, an isolated community restricted to its own host computer. By reliably connecting those different kinds of host computers, several networking efforts in the US and Europe hoped to connect those islands to each other into archipelagos, and one day an entire online world.

.     .     .

General computer networking may have been born in 1969, but it had a long gestation. The first special-purpose networks between similar machines had been pioneered in the late 1950s by the SAGE air defense system. SAGE inspired similar systems by the Soviets (and even bigger dreams of a USSR-wide public network that could also run the planned economy). But such dedicated systems required purpose-built machines and World War III-size budgets.

By the early 1960s J.C.R. Licklider, the visionary first leader of ARPA’s computing division, was funding early timesharing and other computers at a number of research sites around the US. He proposed a future “Intergalactic Computer Network” as a way to increase efficiency. Instead of duplicating functions between sites, each could focus on what it did best and log in remotely over the network for whatever other software or functions it needed.

The main elements of packet switching—a key concept of much modern networking—were conceived by the mid-1960s and tried in small experiments shortly after. Even so, by late in the decade networking ideas were still utter heresy to mainstream computing, though a heresy that was slowly winning acceptance in the rarefied atmosphere of key labs.
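
For readers who want the packet idea in concrete form, here is a minimal Python sketch of packet switching: a message is chopped into small, individually numbered packets that can travel (and arrive) in any order, then be reassembled at the destination. It is purely illustrative; the helper names are invented here, and real protocols add addressing, routing, and retransmission on top.

import random

def packetize(message, size=8):
    # Chop the message into (offset, payload) pairs; the offset
    # doubles as a sequence number for reassembly.
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    # Sort by sequence number and stitch the payloads back together,
    # whatever order the packets arrived in.
    return "".join(payload for _, payload in sorted(packets))

message = "Packets may take different routes across the net."
packets = packetize(message)
random.shuffle(packets)  # simulate out-of-order arrival
assert reassemble(packets) == message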

That’s not to say it was mainstream, or easy. Even timesharing was a newish thing, and connecting timesharing computers to each other was on the reddest bleeding edge. “Nineteen-sixties computers were not interconnected and most were not even interactive; people gave them jobs to process and came back later for the results,” said Bill Duvall. “Bob Taylor and Larry Roberts at ARPA understood not only the potential of computer networking, but also the challenge of networking during an era when computers were generally not standardized, and did not use a common language or alphabet.” Taylor was the director of ARPA’s Information Processing Techniques Office from 1965 to 1969. He recruited networking pioneer Larry Roberts as technical architect and together they chose the people and places to build the ARPANET, assigning unique roles to three institutions.

“The development of the ARPANET, which had no commercial application at the time, underscores the power of coordinated basic research and the importance of that research to our society. In the 1960s, computers were not interconnected and most were not even interactive. A few research groups were looking at the potential of networked computing and how distributed systems might be used as information repositories and collaboration tools, but they were hampered by a huge obstacle: they lacked a network to weave their projects together.”

— Bill Duvall

Cambridge-based BBN (then Bolt Beranek and Newman) built the special Interface Message Processors (IMPs) that connected the main computers to the net. BBN served as the system’s administrator. The original IMP pictured is in the Museum’s collection.

Doug Engelbart’s group at SRI ran the Network Information Center (NIC), which besides acting as a central library kept track of all the computers on the ARPANET. That directory function would later evolve into the Domain Name System (.com, .org, etc.), a shift sketched in code below. Engelbart’s group had helped pioneer many core features of modern computing by then, as part of a Web-like effort called the oNLine System (NLS).

UCLA hosted the Network Measurement Center, researching and improving how data moved across the network.
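
The NIC’s directory began life as a single, centrally maintained table of host names and addresses that every site copied; the Domain Name System later replaced that flat file with a distributed, hierarchical lookup queried on demand. Here is a rough Python sketch of the contrast. It is illustrative only, and the old-style names and addresses below are invented, not taken from the real host table.

import socket

# NIC model: one flat, centrally maintained table of every host on the net.
# (These entries are invented for illustration.)
HOST_TABLE = {
    "SRI-ARC": "10.0.0.2",
    "UCLA-NMC": "10.0.0.1",
}

def nic_lookup(name):
    # Works only while the whole network fits in one hand-edited file.
    return HOST_TABLE[name]

def dns_lookup(name):
    # DNS model: query a worldwide, hierarchical database on demand.
    return socket.gethostbyname(name)

print(nic_lookup("SRI-ARC"))              # -> 10.0.0.2
print(dns_lookup("computerhistory.org"))  # resolved via the DNS hierarchy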

And that takes us to Bill and Charley’s first connection in October of 1969. Within months, researchers at the National Physical Laboratory in England would turn the key on their very similar Mark I network. Within a year wireless networking would kick off with ALOHAnet in Hawaii, another project funded by ARPA, and the great-grandparent of everything from Wi-Fi to mobile phone data. Also within a year or so “cloud” pioneer Tymshare would start up Tymnet, which would become the leading commercial network into the 1980s.

There were several other efforts, too. Networking would have happened with or without ARPA. But the sheer scale of the ARPANET and related projects like ALOHAnet, fueled by ARPA’s hefty budget, soon catapulted networking to a whole new level: From a topic for obscure papers to a continent-spanning reality with hundreds of actual users.

ARPANET’s success galvanized other networking efforts, and began an incredibly rich period of development at every layer, from the wires and radio waves that carry raw data, to networking protocols that safely move it where it needs to go, to web-like user applications. At the network level the ARPANET and its relatives spawned innovations like Ethernet and other local networks, further wireless networks like ARPA’s Packet Radio Network and Satellite Network, and systems for connecting different networks to each other—a process known as internetting. ARPA’s own way of doing this eventually beat out leading corporate and international competition to become “the” internet the web runs over today.

SRI’s Packet Radio research van hosted watershed internet experiments in 1976 and 1977 and is part of the Museum’s collection.

As with the personal computer, the main user features of the connected world we know today were prototyped in a mad rush by the mid-1970s: email and clickable links; shared documents and files; wireless data; virtual worlds; even Skype-like packet voice. They just took a few decades to reach the rest of us.

ARPANET 25th Anniversary Celebration

By Dave Walden, BBN Networking Pioneer

Photo of some of the pioneers present at the 1994 celebration, taken at the Copley Plaza Hotel in Boston. The people in the photo are (front row, left to right) Dave Walden, Barry Wessler, Truett Thach, Larry Roberts, Len Kleinrock, Bob Taylor, Roland Bryan, and Bob Kahn; (back row) Marty Thrope, Ben Barker, Vint Cerf, Severo Ornstein, Frank Heart, Jon Postel, Doug Engelbart, and Steve Crocker.

In 1994 Bolt Beranek and Newman Inc. (BBN), which had developed the hardware for the original ARPANET and run the central Network Operations Center, had a new CEO, George Conrades, a veteran IBM marketing executive. By this time BBN had also developed a quite significant Internet business, and Conrades felt that the way to grow BBN quickly was to expand on it. As a way to make a splash as a significant Internet company, he decided that BBN should host a 25th ARPANET celebration and invite as many ARPANET/internet pioneers as possible.

The celebration was held at Boston’s Copley Plaza hotel on September 10, 1994, and was titled “The History of the Future—ARPANET, Internet, and Beyond: A Celebration of 25 Years of Innovation in Network Communications.” A videotape was made of the approximately one-hour presentation period, below. Other activities related to this celebration were a presentation by Doug Engelbart at BBN and a couple of photo sessions and handouts. In the year after the celebration, Katie Hafner and Matthew Lyon’s book on the history of the ARPANET, Where Wizards Stay Up Late, was published.

Video of the 25th ARPANET anniversary celebration.

A Year of “Netiversaries”

Besides the 50th anniversary of the ARPANET, 2019 is a year of many web and networking anniversaries, or “netiversaries” to coin an awful word. It has been 30 years since the first proposal for the World Wide Web, and 25 years since the popular explosion and the rise of web commerce, including the launch of Netscape, Amazon, Yahoo!, and many others. It has also been 25 years since the so-called “Woodstock of the Web,” the vastly oversubscribed first web conference in 1994. The series is still going strong, with the most recent this past May in San Francisco.

Twenty years ago Japan rolled out i-Mode, the mobile web the rest of us wouldn’t discover until the iPhone era, while here we remember 1999 for Napster as well as the teetering height of the dot-com boom. Lastly, 2019 marks 15 years since the web’s popular rehabilitation following the dot-com crash, including Google’s IPO and the rise of “Web 2.0” sites like Yelp, Flickr, and a social network called Facebook.

Ten years ago saw the start of yet another try at a digital cryptocurrency in the mold of the pioneering 1989 DigiCash Inc. The upstart was called Bitcoin.

For exhibits on the evolution of the web and the online world, visit the Web, Mobile, and Networking galleries of our exhibition Revolution: The First 2000 Years of Computing, either in person or online.

Further Reading on ARPANET

From Our “Netiversaries” Series


The post The First 50 Years of Living Online: ARPANET and Internet appeared first on CHM.

]]>
“Woodstock of the Web” at 25 https://computerhistory.org/blog/woodstock-of-the-web-at-25/ https://computerhistory.org/blog/woodstock-of-the-web-at-25/#respond Fri, 24 May 2019 00:00:00 +0000 http://computerhistory.org/blog/woodstock-of-the-web-at-25/ 2019 is a year of many web and networking anniversaries, or “netiversaries” to continue using an awful word. This year marks the 50th anniversary of general purpose computer networks. That first connection was over the ARPANET, between Douglas Engelbart’s laboratory at SRI and another node at UCLA. Such networks were built as transport for online systems like Engelbart’s oNLine System, famously demoed in late 1968, which is a key ancestor of the web.

The post “Woodstock of the Web” at 25 appeared first on CHM.

]]>
Editor’s Note: This is part of an ongoing series dedicated to the web anniversaries of 2019, including the 50th anniversary of general purpose computer networks connected over the ARPANET, the 30th anniversary of the web’s conception, and shorter anniversaries for everything from mass Wi-Fi to familiar giants like Amazon and Facebook.

Pioneers reflect on the past and future of the conference series and the web.

“Netiversaries”

2019 is a year of many web and networking anniversaries, or “netiversaries” to continue using an awful word. This year marks the 50th anniversary of general purpose computer networks. That first connection was over the ARPANET, between Douglas Engelbart’s laboratory at SRI and another node at UCLA. Such networks were built as transport for online systems like Engelbart’s oNLine System, famously demoed in late 1968, which is a key ancestor of the Web. Another blog article in @CHM remembers Engelbart and his work.

On the web side it has been 30 years since the first web proposal and 25 years since its popular explosion and the rise of web commerce, including the launch of Netscape, Amazon, Yahoo!, and many others. It has also been 25 years since the so-called “Woodstock of the Web,” the vastly oversubscribed first web conference in 1994. The series is still going strong, with The Web Conference coming in May to San Francisco. The piece below recounts some of the flavor of that first coming together. At the end we’ve included links to the original conference site and video from the conference itself, courtesy of CERN.

For the story of the web’s conception and birth, see this article from the 25th anniversary.

For exhibits on the evolution of the web and the online world, visit the Web, Mobile, and Networking galleries of our exhibition Revolution: The First 2000 Years of Computing, either in person or online.

“Woodstock of the Web”

May 25–27, 1994: First International WWW Conference, CERN, Geneva.

Robert Cailliau, Tim Berners-Lee’s partner in the early web project, had wanted to host a web conference for a long time. The Wizard’s Workshop the summer before at tech publisher O’Reilly had been a kind of trial run. By the spring of 1994 every one of the few thousand people who had been involved with the infant web or its hypertext ancestors knew he, or in a few cases she, was part of something BIG. Once-giddy comparisons with great media of the past, from the telephone to print to TV, suddenly looked sober. The public fireworks between NCSA and Mosaic Communications (later Netscape) in the first “Browser War,” while painful to those directly involved, only served to underline the Web’s importance—and raise interest further.

But behind this sense of imminent grandeur, the actual web development community was still a small one and spread over two or three continents. It had never had a proper town meeting. This community would come joyfully together in the summer of 1994 on a wave of hope and excitement; first with conferences—Geneva and soon after Prague—and then with the coalescing of Tim Berners-Lee’s World Wide Web Organization. But there were bumps along the way.

As soon as plans were firm, Robert mailed Joseph Hardin, head of the Mosaic effort at NCSA, to invite him and the remaining Mosaic team to come and speak. The reply was something like “Ah! Funny you should say that . . .” Joseph was planning to have a conference on Mosaic at NCSA that very same week and invite many of the same speakers. He had already committed resources. Robert and Tim were flabbergasted. Tim wrote to the effect that “I think it would be extremely unfortunate if you held a conference that week.” Joseph finally backed off at the 11th hour and agreed to hold the conference in November, six months later. It was an uneasy compromise, but it would set the pattern for the next five web conferences—one every six months. The web was simply moving too fast to wait a full year in between.

Once people began arriving in Geneva there was the odd sensation of meeting long-time email colleagues in the flesh, as at the O’Reilly conference of the summer before. But multiplied. Soon everyone was milling about the lobby, electrified by the same sensation of meeting face-to-face actual people who had been just names on an email or on the www-talk mailing list. Most had met Tim, but very few had met each other except online. There was something raw, even slightly obscene in seeing the actual person behind such correspondence, like meat after a long vegetarian interlude. This very physically real person, who might have pockmarks on his skin, expound belligerently, or quietly dampen one’s palm with nervous sweat, was the source behind an online voice one had bitterly debated, or staunchly supported, or come to feel a genuine affection for over the last year or more.

Where O’Reilly had drawn some 20 developers, the Geneva conference had 380. Even more had been turned away because the space at CERN was fixed—an unexpected 800 had applied. The trickle, then stream of folks arrived in Geneva from all over the world. Many of the Americans had never been in Europe before. The web had also grown to the point where many crucial new pioneers had never met Tim, Robert, or the CERN team.

But despite NCSA’s agreement not to hold a competing conference, virtually none of the NCSA Mosaic folk showed, and, less surprisingly, none of the Mosaic Communications splinter group who were hard at work on their first products, and would soon be known as Netscape. While partly this may have been because the two groups of former colleagues were locked in battle with each other, it marked a further step in the division of the web’s movers and shakers into two overlapping worlds. There was the public, American one of Mosaic and the media, and the more international one of researchers, academic collaborations and technical arguments among purists. Many Geneva participants were in the latter camp, from institutions like the World Meteorological Organization, the International Center for Theoretical Physics, the University of Iceland and so on.

* * *

The program itself unfolded like an unending parade of pleasant technical surprises. It seemed everybody there had been secretly beavering away at some radical new expansion of the web and was just revealing it now: a Christmas of unexpected presents, as well as a rapidly cohering vision of a better future.

The main leader of the HTML effort, Dave Raggett, showed off the Arena browser he’d been developing on the family dining room table at home, because his employer HP Labs still didn’t see the web as worth investing in. Along with Håkon Lie and Henrik Nielsen of the CERN web team, Dave used Arena to give a glimpse into what they hoped would be the future of HTML and browsers: text that flowed around images, resizable tables, image backgrounds, math, and more.

Despite their absence, Marc Andreessen, Eric Bina, and Lou Montulli were ceremonially inducted into the first “World Wide Web Hall of Fame” along with Rob Hartill, Tim Berners-Lee, and Kevin Hughes of the Hawaii site.

Dan Connolly made a strong pitch for the unity of HTML with his talk on “Interoperability: Why Everyone Wins,” which warned of the dangers of corporations and browser makers setting their own standards. Dan’s effort gained him the questionable reward of being handed the torch of HTML Chief Architect from Dave Raggett.

Part of the closing panel, from left to right: Dr. Joseph Hardin, NCSA; Robert Cailliau, CERN; Tim Berners-Lee, CERN; Dan Connolly, HaL Software. 
CERN Photo, copyright CERN


The biggest surprise for nearly all attendees was Mark Pesce with Labyrinth. The vague concept of virtual reality over the web had occurred to many of them, but usually as a late-night idea for what might be feasible in the middle future. Suddenly, here was this intense, charismatic American with a working spec for just such a late-night idea. Technically it was nothing special, even crude to some of the purists. But it showed the sheer scope of creative brilliance the web was attracting. It was as if a meeting of missile hobbyists in the 1920s were visited by someone with a detailed blueprint for a moon rocket. Everyone knew it was possible, but assumed specific plans were a ways down the road. Dave Raggett suggested the name for Labyrinth, which stuck: VRML, for Virtual Reality Markup Language. The feeling of delicious possibility was immense. How many other late-night ideas were already being secretly followed up, or soon would be?

Nights, the attendees descended on the Geneva nightlife in loose, changing groups; youngish tech wizards from every culture getting progressively more drunk and idealistic together.

The urgent discussions continued through elaborate meals and pints of imported Guinness in the Old Town, as Russians and Peruvians tried to communicate in second languages on pub and disco crawls across the same cobblestones where unfortunates accused of witchcraft had marched to the pyre. Here, buffeted in the wake of another information revolution, printers had turned out works banned in Catholic countries, and Calvin himself had burned a heretic or two.

The young coders of 1994 had no wild plans to build a bricks-and-mortar World City of knowledge on the Geneva plain as had information visionary Paul Otlet and architect Le Corbusier in the pause between world wars. Rather, they were building an invisible city that also reached out from here to circle the globe, one both broader and shallower but with the cardinal advantage that it was real. Robert Cailliau had a small group for dinner at his home in the French countryside a few miles from Voltaire’s old redoubt of Ferney-Voltaire, the writer and philosopher’s refuge when banned from Paris by the King.

The lake cruise on the final night was guaranteed to be memorable for its sheer local color, despite some rain. The craft was one of the giant Mississippi-style paddle-wheelers that carry dining and dancing groups far out onto Lake Geneva’s calm waters, where jagged Alpine peaks stick up behind the low mountains which ring the lake. As night falls, the cities and towns on the shore surround the boat with distant, twinkling lights. The band for the cruise was Wolfgang and the Werewolves, a jazz band with a name which could only come from polyglot Switzerland. The lake cruise was a curiously romantic end to this most exciting of technical conferences. Original Web programmer Jean-François Groff, among others, would christen it the “Woodstock of the Web.”

Links to original video, at CERN in Geneva

Photos of speakers at the closing panel:

Web Hall of Fame

Note that Marc Andreessen, Eric Bina, and Lou Montulli received the award in absentia: http://www94.web.cern.ch/www94/Awards0529.html

Additional links

https://cds.cern.ch/record/278586

More from Our “Netiversaries” Series


The post “Woodstock of the Web” at 25 appeared first on CHM.

]]>
Happy 30th to the World Wide Web! https://computerhistory.org/blog/happy-30th-to-the-world-wide-web/ https://computerhistory.org/blog/happy-30th-to-the-world-wide-web/#respond Tue, 12 Mar 2019 00:00:00 +0000 http://computerhistory.org/blog/happy-30th-to-the-world-wide-web/ Thirty years ago this month, physicist turned programmer Tim Berners-Lee first proposed what became the World Wide Web. A few months later he resubmitted the proposal with his colleague Robert Cailliau. Today the web is living up to its ambitious name, serving over four billion people with more to come.

The post Happy 30th to the World Wide Web! appeared first on CHM.

]]>
Editor’s Note: This is part of an ongoing series dedicated to the web anniversaries of 2019, including the 50th anniversary of general purpose computer networks connected over the ARPANET, the 30th anniversary of the web’s conception, and shorter anniversaries for everything from mass Wi-Fi to familiar giants like Amazon and Facebook.

Diagram from “Information Management: A Proposal”
By Tim Berners-Lee, CERN, 1989. © CERN


Thirty years ago this month, physicist turned programmer Tim Berners-Lee first proposed what became the World Wide Web. A few months later he resubmitted the proposal with his colleague Robert Cailliau. Today the web is living up to its ambitious name, serving over four billion1 people with more to come. To mark the anniversary, we’re reissuing an article2 that tells the story of how the infant web beat out bigger, better funded rivals to bring the online world to the rest of us.

Since the 25th anniversary in 2014, problems from “fake news” to wholesale harvesting of personal data have exposed some of the ironies of the web’s evolution: a system designed to be decentralized and open has also given rise to enormous concentrations of power. But the story is far from over—check back at the 40th and 50th for updates.

See below for a number of celebrations of the web at 30 happening around the world:

March 12, 2019

May 13–17, 2019

  • The Web Conference, San Francisco: Historical panel marking the Web@30 and noting 25 years of the web conference series

Other important web milestones coming up include the first demo browser, server, and web site (December 1990), and the public release of the WWW code library so that hackers anywhere could build their own browsers and servers (August 1991).

A team at CERN has restored the first 1990 web browser, which was also an editor and represented the original vision of what the web would be:

Sir Tim Berners-Lee, ca. 1999. © Andrew Brusso/Corbis


Robert Cailliau, June 1995. © CERN Geneva


A Year of “Netiversaries”

2019 is a year of many web and networking anniversaries, or “netiversaries” to coin an awful word. On the web side, it has been 25 years since the popular explosion and the rise of web commerce, including the launch of Netscape, Amazon, Yahoo!, and many others. It has also been 25 years since the so-called “Woodstock of the Web,” the vastly oversubscribed first web conference in 1994. The series is still going strong, with The Web Conference coming in May to San Francisco.

Twenty years ago Japan rolled out i-Mode, the mobile web the rest of us wouldn’t discover until the iPhone era, while here we remember 1999 for Napster as well as the teetering height of the dot-com boom. Lastly, 2019 marks 15 years since the web’s popular rehabilitation following the dot-com crash, including Google’s IPO and the rise of “Web 2.0” sites like Yelp, Flickr, and a social network called Facebook.

Ten years ago saw the start of yet another try at a digital cryptocurrency in the mold of the pioneering 1989 DigiCash Inc. The upstart was called Bitcoin.

Our upcoming yearly Core magazine and future articles will explore other “netiversaries” of 2019, including the 50th anniversary of general purpose computer networks. That first connection was over the ARPANET, between Douglas Engelbart’s laboratory at SRI and another node at UCLA. Such networks were built as transport for online systems like Engelbart’s oNLine System, famously demoed in late 1968, which is a key ancestor of the web. Another blog article in @CHM remembers Engelbart and his work.

For exhibits on the evolution of the web and the online world, visit the Web, Mobile, and Networking galleries of our exhibition Revolution: The First 2000 Years of Computing, either in person or online.

The Web’s Conception—Further Reading

For the story of the web’s conception and birth, see this article from the 25th anniversary.

  1. https://www.statista.com/statistics/617136/digital-population-worldwide/, accessed March 2019
  2. This article first appeared in 2014 on the 25th anniversary of the first web proposal

More from Our “Netiversaries” Series


The post Happy 30th to the World Wide Web! appeared first on CHM.

]]>
Net@50: Did Engelbart’s “Mother of All Demos” Launch the Connected World? https://computerhistory.org/blog/net-50-did-engelbart-s-mother-of-all-demos-launch-the-connected-world/ https://computerhistory.org/blog/net-50-did-engelbart-s-mother-of-all-demos-launch-the-connected-world/#respond Sun, 09 Dec 2018 00:00:00 +0000 http://computerhistory.org/blog/net-50-did-engelbart-s-mother-of-all-demos-launch-the-connected-world/ His goal was building systems to augment human intelligence. His group prototyped much of modern computing (and invented the mouse) along the way.

The post Net@50: Did Engelbart’s “Mother of All Demos” Launch the Connected World? appeared first on CHM.

]]>
His goal was building systems to augment human intelligence. His group prototyped much of modern computing (and invented the mouse) along the way.

The better we get at getting better, the faster we will get better.

— Douglas Engelbart

50th Anniversary Events at CHM

Doug Engelbart at an NLS workstation


In 1945, a young naval radar operator was waiting to be shipped home in the slack days after victory in WWII. He read a magazine article at his Philippine jungle base that proposed a new kind of information system, based on a fabulous desk called a Memex. Its two side-by-side microfilm readers and a host of hidden machinery would let you browse and create links between spools on any subject. The idea was to use the power of machines to make the whole of human knowledge accessible to all, and to let people add to and refine that knowledge in a virtuous circle.

Memex desk, as portrayed in an illustrated Life magazine version of Bush’s 1945 article “As We May Think”


Some years later that sailor, Douglas Engelbart, now a thoughtful and restless engineer at a Mountain View, California aerospace company, had an epiphany. Perhaps the new digital computer – not microfilm – could form the heart of a system like the one he’d read about. He imagined moving through information space the way a radar screen let you navigate through physical space.

The article he’d read was “As We May Think”, by leading U.S. scientist Vannevar Bush, a polymath who had built analog computers as well as played a major role in the development of the atomic bomb. Bush’s article mirrored some of the ideas of early 20th century pioneers including Paul Otlet and writer H.G. Wells about using the power of machines to assemble all knowledge in a kind of “world brain.” To Engelbart, the flexibility of the computer opened up a whole new set of possibilities. He decided that building such a system would be his life’s work.

Navigating Knowledge

But as I wrote in my piece on CHM Fellow Bob Taylor, the man who funded Douglas Engelbart through many of his most productive years, the idea of using digital computers to share information wasn’t exactly an easy sell in the 1950s and early ‘60s. Why would you waste these fabulously expensive data crunchers on something as quotidian as communication, in a world that already had telephones, printing, telegraphs, photography, TV and radio? Just as wild was Engelbart’s idea that each person would sit in front of their own keyboard and fabulously expensive radar-style video screen, interacting in real time with the computer and through it, with each other.

Engelbart was not completely alone; a few others had begun to see the computer as the ultimate information machine. A brilliantly precocious college student named Ted Nelson came up with an independent concept of using associative links to navigate and organize all the world’s knowledge into a new kind of multimedia literature, and he coined the term hypertext.

Two other fellow travelers were in a position to offer Engelbart extraordinarily concrete help. At the Defense Department’s Advanced Research Projects Agency (ARPA), J.C.R. Licklider and his protégé Bob Taylor would later co-author a paper called “The Computer as a Communications Device.” With funding from Taylor first at NASA and then at ARPA, as well as from several others, Engelbart began to turn his vision into reality.

Hardware wizard Bill English with several ergonomic setups for the oNLine System (NLS); late 1960s


His goal was nothing less than to augment human intellect – to harness people’s ability to collaboratively solve the world’s important problems. He believed that properly trained and with the right computer tools, we could raise our “collective IQ.” By putting knowledge at the fingertips of those who needed it, and letting them share their refinements and insights with others, he hoped to start a feed-forward process he called “bootstrapping.” Each improvement would help accelerate further advances in method, and so on. The concept of bootstrapping also went far beyond computers. Much of his work, and that of his group, was aimed at improving the organizational processes that can help lead to innovation.

This vision was in stark contrast to his Artificial Intelligence contemporaries, who wanted to create an alternate intelligence on computers rather than help turbo-charge the human kind. This early fork in the road still leaves its mark on computing today.

Engelbart started a laboratory at SRI (then called Stanford Research Institute). He grandly named it the Augmented Human Intellect Research Center (AHIRC), later shortened to Augmentation Research Center (ARC). At its peak he would have 50 people working for him.

Doug Engelbart had a thoughtful, gentle manner, and a wonderfully open smile. When he met people he was charming and often funny. But he also gave the sense that he was considering things really, really deeply; that there was some serious purpose to everything he did. With prematurely grey hair and deep-set eyes framed by his large nose and prominent brows, he had the perfect presence for a visionary, or a guru.

As a manager he was often hands-off when it came to operational details, but concerned with communicating his vision so that others could help build it. He wasn’t terribly interested in technical details either. But he was brilliant at inspiring some of the best programmers and engineers of the time to come and work with him.

oNLine System

NLS screenshot


In a sense, Engelbart and his teams only built one big thing in his long career, the oNLine System (NLS), later repurposed as Augment. The mouse was merely Doug’s idea for a convenient input device which hardware wizard Bill English developed as one of several ergonomic accessories to that system; the chord keyset was another.

The keys on the chord keyset functioned somewhat like function keys in a modern program. Experts could also use them to input text with one hand using key combinations.


But if you tried to map the features of NLS to the computing world we know today, you would have to include pretty much all the core features of the Web as well as word processing, spell checkers, online collaboration in forms like wikis and Google Docs, videoconferencing tools, personal information software for things like grocery lists, a full-featured email system, archiving software for saving documents with permanent identifiers, and some features of databases. Other features wouldn’t map at all, since they still haven’t reached wide use. These include documents that are editable by multiple applications rather than belonging to a single one, and a whole host of specialized hypertext features.

How could one system do so much? When Engelbart and his few peers imagined the future of computer communication in the early 1960s, the power of the machine was already clear to them, as was the fact that this power would get exponentially cheaper and faster (later memorialized as Moore’s Law).

The rest was gloriously wide open; a blank frontier in which to build not just castles but whole cities made of sand and imagination. There were no standards to support, no established players to consider in business strategies, no relevant conventional wisdom from advisors and investors. The result? By the mid-1960s Engelbart and his team had actually prototyped many of the core features of the computing world that would unfold over the next 40 years, plus others that may come.

The first mouse was carved from a block of redwood. This exact replica was made by the same shop at SRI which made the original, and is on display in “Revolution.”


Mouse, bottom view


Similarly, Ted Nelson independently conceived a number of these features plus his own vision of new kinds of electronic literature and multimedia, and built out some of them with help from his former schoolmate Andy van Dam. J.C.R. Licklider and Bob Taylor laid out quite different, but also sweeping visions of the future of computing.

By contrast, an example of an ambitious and lavishly funded computing project today might be launching a new social network within the ecosystem of established precedents.

Partly as a result of their lofty aspirations, Engelbart and his researchers forged close connections with many key figures of the 1960s counterculture. There was Stewart Brand of the Whole Earth Catalog, Ken Kesey and his Merry Pranksters, and many others. Like the ARPANET community that would follow, the ARC lab represented an uneasy intersection of two very different flavors of open-ended exploration; that of military-funded research, and the sometimes idealistic, sometimes just for kicks questing of an emerging caste of hippie hackers. This intersection is beautifully explored in John Markoff’s book What the Dormouse Said, and Fred Turner’s From Counterculture to Cyberculture.

In 1968, Engelbart and his staff put on the so-called “mother of all demos” at a major conference in San Francisco, showing off all the features they had developed over the years. For ninety minutes, the stunned audience of over 1000 computer professionals witnessed many of the features of modern computing for the first time: Live videoconferencing, document sharing, word processing, windows, and a strange pointing device jokingly referred to as ‘the mouse’. Elements on the screen linked to other elements using associative links – or ‘hypertext’.

Video: Excerpt from Engelbart’s 1968 “Mother of All Demos”

Only Connect

In the late 1960s NLS was a timesharing program, meaning that it ran on a single computer shared by a community of perhaps a couple of hundred users who logged in from their own terminals. General purpose computer-to-computer networking promised to create far larger communities, but it was still in the process of being invented. Engelbart and his lab played a significant role in that process.

Bob Taylor of ARPA had asked Engelbart to have his ARC lab host one of three centers on the experimental ARPAnet; the Network Information Center, or NIC. This would act as a central library and card catalog for all of the information on the growing network, with the archives of the ARC group itself as a foundation. It would also host the central directory for all of the computers on the ARPAnet, a function which later evolved into the familiar Domain Name System (.com, .org, etc.).

Engelbart enthusiastically agreed; he saw the chance to expand the reach of NLS from hundreds of users on timesharing systems to thousands all over the country and beyond. His team even made plans to add multimedia, foreshadowing features of the Web a quarter century hence. He hoped the NIC could be the seed of a truly online world.

At the end of 1969, ARC programmer Bill Duvall became one of the first two users on the ARPAnet, the world’s first major general-purpose computer network. For nearly two decades afterward, the SRI NIC would play a pivotal role in the expansion of the ARPAnet and later the Internet. The ARC/NIC archives are a foundational collection at the Computer History Museum.

Engelbart (right) started the NIC in his ARC group at SRI. It was a central library as well as the repository of data the network needed to run.


Fragmentation

But the fortunes of the ARC lab itself began to falter. In 1969 Bob Taylor left ARPA, and ARPA itself also changed its funding policies as part of a general government belt-tightening. Grants began to dry up, and SRI management, always wary of Engelbart’s freewheeling group of renegades in colorfully patched jeans, started to make more demands. Engelbart, who was more of a visionary leader than a hands-on manager, felt things slipping away.

The NIC and the ARPANET did indeed bring NLS to a broader spectrum of users, but many found it hard to use, with a steep learning curve and arcane functions. It was also a resource-heavy program for the low-bandwidth, just-created networks of the era. Many started to access the NIC’s information with dumber but faster tools.

Another blow came when Bob Taylor became the leader of the Computer Systems Laboratory at Xerox’s newly created and lavishly funded Palo Alto Research Center, or PARC. The ARC lab’s former benefactor and his colleagues began to hire more and more ARC team members to build his own “Office of the Future,” eventually including some of Engelbart’s closest lieutenants like Bill English, Jeff Rulifson, and Bill Duvall. The bitter joke ran that ARC was a training program for PARC.

The ARC alums brought many of the baseline concepts pioneered in NLS to PARC, and thus into the stream of development that eventually led to much of modern computing. But after the internal failure of the POLOS project, which was meant to be a PARC version of NLS, much got left out as well – from hypertext links to the overall emphasis on collaboration and augmenting human intellect.

In 1977, SRI sold the ARC project to Tymshare, later a subsidiary of McDonnell Douglas. There, Engelbart and his remaining team turned NLS into Augment, and pioneered several new features. But the momentum was gone, and Tymshare had little interest in pursuing Engelbart’s main goals. He retired from Tymshare in 1986, and continued to pursue his vision in offices provided by a grateful mouse-maker, Logitech.

He continued to speak widely, and in 1988 he founded the Bootstrap Institute with his daughter Christina, one of four children, to perpetuate his work. He won the National Medal of Technology, the Lemelson-MIT Prize, and the Turing Award, and was a Fellow of the Computer History Museum. Widowed in 1997, he and his second wife Karen attended public events into the last year of his life.

Douglas Engelbart died on July 2, 2013, at his home in Atherton, California. He was 88.

Engelbart’s Unfinished Revolution

What was the impact of Engelbart’s work? The irony is that so far, it has been largely in inverse proportion to the parts he himself considered important. The mouse, a neat but fairly trivial accessory to NLS, became a household item for billions pretty much exactly as Bill English designed it for him. The once-radical idea of using a personal keyboard and screen for reading, writing, and tracking our own personal information did indeed spread through many channels including the ’68 demo, the ARC alumni recruited by Xerox PARC, and then the larger PC revolution.

But from the late ‘70s into the mid ‘90s the great bulk of those keyboards and screens began to get attached to standalone PCs, the polar opposite to Engelbart’s vision of connectivity. Computing power had dropped so radically in price that a literally personal, standalone computer became affordable, and for over 20 years the attention of the computing world shifted to what an individual could do on his or her own.

Unlike their indirect ancestor the Xerox Alto, relatively few PCs got connected to networks or even dial-up services. The whole notion of multi-user systems and online collaboration began to fall out of fashion. It became an insider thing for researchers, professionals, and geeks. The closest the average person got to a “shared” repository of knowledge was buying a commercial CD-ROM.

Apple’s HyperCard introduced hypertext to the world, albeit in single-user form


Even computer hypertext itself, which Engelbart and Nelson had both independently invented as a tool for collaboratively sharing and building on associative links, first reached the mass market as a single-user program (HyperCard). Its author Bill Atkinson wasn’t directly aware of the work of the 1960s hypertext pioneers. There were some fuller-featured hypertext systems around, like Intermedia by Andy van Dam’s former student Norm Meyrowitz, but they were specialty products.

World Wide Web

Original Web logo by Berners-Lee’s project partner Robert Cailliau, who has synesthesia. He claims the colors are based on those he sees for the letter “W”


By the end of the 1980s the connectivity pendulum slowly began to swing the other way. Down at the network level, the Internet and rival standards were growing exponentially in business, research, and higher education. Commercial online systems for ordinary folks – like CompuServe in the U.S. and Minitel in France – remained a niche but an expanding one.

A few people began to build experimental online systems specifically for the Internet, the next phase of the original ARPAnet, whose second node had been at Engelbart’s ARC lab and on which he’d pinned such high hopes for NLS. Several of those experimental systems had hypertext features, including one ambitiously called “WorldWideWeb.”

But while its creators had well-intentioned plans to add more, the Web came with only the most paltry minimum of hypertext and collaborative features from the point of view of Engelbart and other 1960s pioneers. It had simple associative links like you see on this Web page today, i.e. “click here to see something related,” and that was it.

There were no typed links, like the kind in NLS that told you what sort of thing you were about to link to. There were no links to links, or to annotations, or to many other kinds of targets. There weren’t multiple views of the same information, like the collapsible outline view in Microsoft Word; NLS had even let you customize views for different levels of users. The Web also lacked any way to properly classify items and manage categories of information, or even to update broken links: if a target changed, a user simply got the familiar “404” error message.
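
To make the contrast concrete, here is a hypothetical Python sketch of the two kinds of link as data structures: an early web anchor carried only a target address, while an NLS-style typed link could also name the relationship it expressed and carry an annotation. The field names are invented for illustration and are not drawn from NLS itself.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WebLink:
    # All an early web anchor carried: a destination address.
    href: str

@dataclass
class TypedLink:
    # Sketch of an NLS-style link; field names are invented here.
    target: str
    link_type: str                      # e.g. "supports", "refutes", "defines"
    annotation: Optional[str] = None    # a comment attached to the link itself

plain = WebLink(href="http://example.org/page")
rich = TypedLink(target="statement-4b", link_type="refutes",
                 annotation="Contradicted by later measurements.")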

Of course, many of the more sophisticated features of NLS and other full-blown hypertext systems were easiest to implement in a controlled environment, a system you built from the ground up like one could in the wide open frontier days of the 1960s. The Web, by contrast, was a guerilla application designed to run in a huge variety of mutually incompatible environments; to spread virally and be installed casually by any sympathetic system administrator. It was a child of the cluttered, feudal, and feuding computing world of the late 1980s.

The Web was also dead simple to use, another survival strategy. New users were up and surfing like champs in minutes. This was a contrast to the formidable learning curve of NLS and a number of other systems.

For a few visionary Web pioneers including its main inventor Sir Tim Berners-Lee, NLS and other full-featured visions like Nelson’s Xanadu became a kind of aspirational template. They represented a set of features to be implemented and explored just as soon as there was breathing space from the sheer crush of immediate tasks. Some, like typed links and fields for classifying pages by category, even made it into early specs. But in the Web’s explosive growth the needed breathing space never came, and the original pioneers gradually lost direct control over the Web’s direction.

That loss of control also killed a feature that both the Web team and Engelbart considered a foundation of any meaningful online system: Authoring. While Berners-Lee’s first browser was also an editor, like a word processor connected straight to the Internet, the ones that made the Web famous left out that difficult-to-program function. Only with wikis and blogs a decade later did users regain some limited ability to easily contribute to the online world.

Berners-Lee himself later moved toward his own vision of the Semantic Web, which is compatible with a larger hypertext vision but emphasizes different aspects – pre-digesting information by making it readable by machines, rather than further building out tools for direct manipulation by people.

In 1990, hypertext and future Web pioneer Dan Connolly was inspired by Engelbart’s writings at a major conference on computer collaboration. Engelbart’s works were also an occasional presence at subsequent hypertext and Web conferences. But the majority of developers jumping on the Web bandwagon knew little or nothing about the long history of their newly chosen field.

In 1997, my colleague Kevin Hughes and I were deeply proud to feature Doug Engelbart as the keynote speaker at the first Web History Day and exhibit, which we organized for the International World Wide Web conference that year. The goal was to introduce his work to a wider Web community; we also had a live demo of NLS in the exhibit area, and the day opened with a breakfast “Wake Up Call” from Ted Nelson. I moderated a closing panel on future visions with Engelbart, Tim Berners-Lee, Brewster Kahle, and Pei Wei. Perhaps it’s time to look back and see what they said.

Onward

Today, the world is getting more and more connected. As a young Doug Engelbart could only imagine sixty years ago, much of the world’s population does the bulk of its reading, writing, and research tasks online, whether on the internet or on mobile phone networks.

But when it comes to the kind of knowledge navigation and collaboration tools that were the tangible features of his vision, we’ve climbed only the first rung of the ladder. If there’s ever a time that his ideas can be fully tested, it lies ahead.

Of course obstacles may be increasing, too, as the online world continues to clutter itself with fissioning standards, and proprietary services and apps. Take the simple act of sharing a link to a page, i.e. marking a “trail” in the parlance of Vannevar Bush’s article on his magical Memex desk. There are now literally dozens of proprietary “social bookmarking services” vying with each other to handle the task, from Digg, to Reddit, to Squidoo. Instead of refining a truly shared body of knowledge, such a multiplicity of trails becomes just another form of ephemeral content.

There are many other examples. It’s as if each tiny feature from the grand unified visions of people like Engelbart, Otlet, Bush and Nelson now has competing constituencies and sometimes IPOs behind it.

A century ago, Paul Otlet began building perhaps the first system featuring automation and hyperlinks to try to organize and refine all the world’s information. A core goal was to fight the increasing fragmentation of knowledge, which had gotten more and more specialized with what contemporaries called “information flood.” H.G. Wells and Vannevar Bush (of the Memex desk) championed similar quests in the pre-computer era, and Engelbart and several others I’ve mentioned here in the youth of the computer age.

Mondothèque, Paul Otlet’s 1930s multimedia desk concept for accessing remote information. Note communication devices at lower right


The computer attracted Engelbart because of its infinite flexibility, as compared with the klugey microfilm and library cards of earlier systems. But an Achilles heel of the computer’s flexibility is how easy that makes it to create incompatible stuff. It would be ironic but unsurprising if the century-old quest to unify knowledge were to founder on specialization of a different kind: another profusion of incompatible standards on the “universal machine.” If so, we can hope that the computer’s very flexibility, and another generation of visionaries, can put it back on course.

Commenting

Errors: Please let me know of any errors in the piece, at mweber@computerhistory.org.

Memories: If you have memories related to Doug and the ARC/NIC you would like to share, please use the commenting feature in this blog and we will archive them.

Historical Materials: If you have or know of historical materials related to the topics of this piece that you think should be preserved with us or elsewhere, please contact me as above. You can offer materials directly to the Museum.

Resources

Related Materials at CHM

Events

SRI ARC/NIC Records

The ARC/NIC records are part of the Museum’s permanent collection and comprise over 300 boxes of documents and hundreds of backup tapes, the latter now transferred to modern media. Networking pioneer and Internet Hall of Fame member Elizabeth “Jake” Feinler, the former director of the NIC, brought these archives to the Museum and is a core advisor to the CHM Internet History Program. The bulk of the existing archives of Engelbart’s work are split between the ARC/NIC records and other materials at CHM and the holdings at Stanford Libraries, listed below.

  • Guide to the SRI ARC/NIC Records. This Finding Aid by Jake Feinler with contributions from Sara Lott details the holdings.
  • Scans of pages from the ARC/NIC archives. These pages have been scanned thanks to a generous gift from SRI. Click on the link at left, or search for “ARC NIC Journal” in “Catalog Search” under “Explore” on our site.
  • Miscellaneous Engelbart videos, c. 1968 – 2000. This extensive collection of Engelbart videos was converted to DVD format by Jeff Rulifson. The full set is available at the Computer History Museum, and may also be viewed online at the Internet Archive.

Oral Histories

  • Doug Engelbart interviewed by John Markoff
  • Engelbart’s Augmentation Research Center programmers oral history panel, accession number 102702010
  • Feinler, Elizabeth oral history, accession number 102702199
  • Taylor, Bob (Robert W.) oral History, accession number 102702015
  • Coming soon: Memorial video interview with Robert Taylor, the technology legend who funded much of Engelbart’s work, recorded following Engelbart’s death
  • ”White Rabbit” interviews, the interviews by John Markoff that led to his classic book What the Dormouse Said: Duvall, Engelbart, and English; Duvall; Engelbart and English; Engelbart; and Taylor

For other oral histories, see info below.

Miscellaneous materials, CHM Internet History Program

See the document “Networking Resources at CHM” for a guide to networking materials in general. A number of not-yet-posted oral histories related to the work of the ARC lab and the NIC are listed there. They are currently available to researchers, and we hope to have them posted soon. Note also the networking oral histories in the Pelkey Collection, listed there.

Exhibits

Within our permanent Revolution exhibition, Engelbart’s work can be found in the HCI, Networking, and Web galleries. You can also search on his name.

Hall of Fellows

Related Collections at Other Repositories

Obituary

John Markoff, New York Times, “Computer Visionary Who Invented the Mouse”


The post Net@50: Did Engelbart’s “Mother of All Demos” Launch the Connected World? appeared first on CHM.

]]>
Born in a Van: Happy 40th Birthday to the Internet! https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/#respond Wed, 22 Nov 2017 00:00:00 +0000 http://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ Over the shortening fall days of 1977, an unmarked silver step van filled with futuristic equipment, shaggy-haired engineers, and sometimes fully uniformed generals quietly cruised the streets of the San Francisco Peninsula. Only an oddly shaped antenna gave a hint of its purpose.

The post Born in a Van: Happy 40th Birthday to the Internet! appeared first on CHM.

]]>
40th Anniversary of the First Major TCP Internetwork Demonstration
November 22, 1977

 

SRI’s packet radio research van, from which internet protocols were demonstrated.

Over the shortening fall days of 1977, an unmarked silver step van filled with futuristic equipment, shaggy-haired engineers, and sometimes fully uniformed generals quietly cruised the streets of the San Francisco Peninsula. Only an oddly shaped antenna gave a hint of its purpose.

The van was getting ready to demonstrate the first full transmission with what would become the internet standard we use today for nearly everything, from dating to rides to medical information.

“Where the Internet Was Born,” produced by the Computer History Museum.

While some people trace the internet’s origins to the ARPANET network of the late 1960s, the word internet in fact means joining different kinds of individual networks together. The first experiments with such “networks of networks” came around 1973, with the European Informatics Network (EIN) and the trials that led to the PUP standard at Xerox PARC.

By that time the US Advanced Research Projects Agency, ARPA, had started developing a mobile radio network and a satellite network for military use. If it didn’t figure out a way to connect them to each other, and to its original ARPANET, it would be in the crazy position of running three incompatible networks!

Two ARPANET alumni holed up in a Palo Alto motel room for a frenzied weekend and wrote a draft internetworking standard to bridge the gaps. Bob Kahn and Vint Cerf named it TCP (Transmission Control Protocol). Kahn would go on to head ARPA’s computing division, and Cerf would join him.

Bob Kahn and Vint Cerf on the 1977 internet demonstration.

The draft went through a number of revisions, at first with the collaboration of the PUP researchers at Xerox PARC and especially the Europeans behind the earlier EIN internetworking experiments. These included researchers from Louis Pouzin’s French CYCLADES network as well as from the NPL Mark I network developed by English networking pioneer Donald Davies.

For a while it looked like there might be a single international standard. But then Cerf and Kahn decided to split off TCP on its own, though still drawing on many features from the international collaboration.

Meanwhile, ARPA’s two military networks had gone from concept to reality. Irwin Jacobs, co-founder of Linkabit of San Diego (whose founders later started Qualcomm), led the Satellite Network effort, making use of an Intelsat communications satellite to beam data to Europe and beyond.

SRI’s Don Nielson led the Packet Radio Network effort, centered around a research van they had specially outfitted as a mobile network node. Collins Radio built special equipment; original ARPANET contractor Bolt, Beranek, and Newman (BBN) developed server software among other tasks.

The Packet Radio network was the ancestor of all the wireless networks we use today, from mobile phones to Wi-Fi.

2007: Panel commemorating 30th anniversary of internet protocols.

By August 1976, the teams were ready to try connecting across two networks as a first step. As they sat in the beer garden of biker hangout Rossotti’s, the Packet Radio group successfully sent their weekly progress report through the van over their radio network and then through the ARPANET.

It was a promising start. But connecting two networks together had been done before, and could be managed with a bunch of one-off hacks; translating data seamlessly across three or more networks would demonstrate unequivocally that TCP was a general internetworking standard.

The preparations took almost another year, and involved groups in multiple US states and four countries.

Virginia Travers (née Strazisar) of BBN had written the first gateway software for TCP, ancestor of the code inside the Cisco and Juniper routers that power today’s net. Like a Johnny Appleseed of internetworking, she spent months traveling to US and European sites to install the software at the points that would be needed for the big test.
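In modern terms, a gateway’s core job is to join two or more networks and, packet by packet, choose where to forward traffic based on its destination network. The Python sketch below is purely illustrative — the network labels and interface names are invented, and it assumes nothing about the actual BBN code.

```python
# Illustrative sketch of a gateway's forwarding decision -- hypothetical
# names throughout; this is not the historical BBN implementation.
from dataclasses import dataclass

@dataclass
class Packet:
    dst_network: str   # e.g. "PRNET", "SATNET", "ARPANET" (illustrative labels)
    payload: bytes

class Gateway:
    """Joins several networks; picks an outgoing interface per packet."""

    def __init__(self, routes: dict[str, str]):
        # routes maps destination network -> outgoing interface (invented names)
        self.routes = routes

    def forward(self, packet: Packet) -> str:
        """Return the interface this packet should leave on."""
        try:
            return self.routes[packet.dst_network]
        except KeyError:
            raise LookupError(f"no route to network {packet.dst_network}") from None

gw = Gateway({"ARPANET": "if0", "PRNET": "if1", "SATNET": "if2"})
print(gw.forward(Packet("SATNET", b"hello")))  # -> if2
```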

Birth of the Internet: Barbara Denny, Paal Spilling, and Virginia Strazisar Travers.

On a rainy Wednesday, November 22, 1977, all was ready. Data flowed seamlessly from the van to SRI in Menlo Park and the University of Southern California in Los Angeles via England, Boston, and Sweden across three types of networks: packet radio, the satellite network, and the ARPANET. All as the van cruised Bay Area roads. The internet was born. Sort of…

 

Around the world: Path of test packets on November 22, 1977.

It would be six years before ARPA’s internet protocol—by then split into two layers and renamed TCP/IP, with the new Internet Protocol handling addressing and routing—was ready to officially install on all of the agency’s networks.

It would be 15 years before TCP/IP defeated powerful rivals to become the dominant internetworking standard worldwide, paving the way for a unified online system like the Web. Along the way TCP/IP beat out corporate standards like DECNET and IBM’s SNA and, most importantly, OSI, the official international standard that grew out of the early European internetworking efforts. Al Gore played several roles in that story.

But it all started in the van.

Ten years ago, Don Nielson of SRI and I organized a major commemoration of the 30th anniversary of the 1977 experiment with help from Vint Cerf and Bob Kahn—the program is here and the press release is here. We had a public evening panel, and two days of interviewing all the original participants who came from several parts of the USA and Europe for the occasion.

Today, the SRI Packet Radio van is a part of the Computer History Museum’s permanent collection. A cutaway 3-D-printed model of the van is in the Networking gallery of the permanent Revolution exhibition, made by the same shop at SRI that outfitted the original.

A Team Effort

The packet radio van might have looked lonely as it cruised Bay Area streets in November of 1977. But it was part of a carefully planned international effort involving more than 35 people and eight institutions. The figure above shows the path taken by the test packets.

There were other crucial people and institutions not shown on the map: ARPA, which ordered and funded it all; Collins Radio, which built the packet radios themselves; Linkabit, which developed the Satellite Network; and more, listed below.

The People

The following people are among those who helped make the 1977 demo possible:

  • TCP Concept: Bob Kahn (DARPA) and Vint Cerf (Stanford University)
  • TCP Client (SRI International): Jim Mathis and Dave Retz
  • TCP Server (BBN): Ray Tomlinson and Bill Plummer
  • Gateways (BBN): Virginia Strazisar
  • Packet Radio Terminal Interface Unit (SRI International): Jim Mathis, Dave Retz, and Jim McClurg
  • Packet Radios (Collins Radio): Jim Garrett, Dick Sunlin, Mike Cisco, Anant Jain, Steve Gronemeyer, and John Jubin
  • Packet Radio Network (SRI International): Don Nielson, Ron Kunzelman, Keith Klemba, Don Cone, Jim McClurg, John Leung, Stan Fralick, and Earl Craighill (speech)
  • Satellite Network (Linkabit/BBN/UCL/NDRE): Irwin Jacobs, Dick Binder, Bob Bressler, Estil Hoversten, Peter Kirstein, Paal Spilling
  • Packet Radio Station (BBN): Jerry Burchfiel, Radia Perlman, Julie Sussman, Mike Beeler, Greg Lauer, Jill Westcott, and Barbara Denny

The Technology

  • Packet Radio: Built by Collins Radio Corporation (now Rockwell Collins)
  • Terminal Interface Unit and TCP Client: Built by SRI International; contains a modified Telnet terminal handler and one of the first versions of TCP, started at Stanford University and completed at SRI
  • Gateways: Designed and implemented by BBN for connecting the ARPANET to both the Packet Radio and Satellite Networks
  • TCP Server: In a DEC TENEX host located at University of Southern California’s Information Sciences Institute
  • Satellite Network: Implemented by Linkabit Corporation and others between England, Sweden, and the United States
  • Packet Radio Network: Designed and implemented by BBN, Collins Radio, SRI, and University of California, Los Angeles, with system integration and technical direction by SRI
  • ARPANET: First major packet-switched network consisting of landlines in the US with overseas nodes in Norway and England

Related Materials

Video interviews with key people done around 2007 internetworking anniversary:

TCP 3-network transmission pioneers interviewed separately:

Video interviews with other pioneers of internetworking:

Gallery


The post Born in a Van: Happy 40th Birthday to the Internet! appeared first on CHM.

]]>
https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/feed/ 0
2017 CHM Fellow Lawrence G. Roberts https://computerhistory.org/blog/2017-chm-fellow-lawrence-g-roberts/ https://computerhistory.org/blog/2017-chm-fellow-lawrence-g-roberts/#respond Thu, 27 Apr 2017 00:00:00 +0000 http://computerhistory.org/blog/2017-chm-fellow-lawrence-g-roberts/ 2017 CHM Fellow Larry Roberts (1937–2018) is honored for his seminal contributions to the evolution of our connected world. Following his early work in computer graphics and networking he was chief architect of the ARPANET, the US Department of Defense network that was a key building block of the later Internet. He was a champion of the X.25 networking standard, and a principal of the pioneering commercial networking corporation Telenet.

The post 2017 CHM Fellow Lawrence G. Roberts appeared first on CHM.

]]>
Larry Roberts (1937–2018) was honored as a CHM Fellow in 2017 for his seminal contributions to the evolution of our connected world. Following his early work in computer graphics and networking he was chief architect of the ARPANET, the US Department of Defense network that was a key building block of the later Internet. He was a champion of the X.25 networking standard, and a principal of the pioneering commercial networking corporation Telenet.


Larry Roberts (1937–2018) and his cofounders flew to New York for the IPO of their fast-growing Bay Area networking company, Telenet. They gave speeches on the stock exchange floor and in general savored the triumph of this recognition in dollars and shares for their five years of pioneering work, from raw startup to international communications carrier. There were thrilling possibilities on the horizon. Telecommunications giants like BT and Bell Canada as well as major manufacturing companies wanted to buy the innovative switches Telenet was manufacturing, based on a new international standard they had pushed through a normally glacial process in record time. Their own networking services were used by all manner of corporate customers.

The year was 1979. Telenet’s triumphant IPO was partly based on Larry’s track record of helping pioneer successive aspects of networking at just the right moments, from the mid-1960s onward. Networking itself was exploding. In less than a decade, it had gone from specialized military uses and a few experiments like the early ARPAnet (which Larry played a key role in), ALOHAnet, and the English NPL Mark I network, to an emerging industry.

By the mid-70s Telenet and competitors like Tymnet and CompuServe were offering network services that spanned the globe, for corporate customers and startup services alike. DIALOG and LexisNexis offered Google-like search to the few customers who could afford their punishing hourly fees. Every major computer maker had its own nascent networking standard, from IBM’s SNA to Xerox’s XNS to DEC’s DECNET.

 

X.25 was the networking standard Larry Roberts helped formalize with the CCITT standards body in record time. X.25 became the basis for Telenet’s technology, as well as a major international standard for well over a decade.

Every major telecommunication company was thinking about how to connect computers to each other, both to serve customers and for their own infrastructure. Some, like BT and France Telecom, were even poised to offer Web-like Videotex services to the general public. Large corporations were starting to try to network their own scattered computers and timesharing systems.

But within months of Telenet’s IPO, it was clear that even this audacious new public company was too small to get its arms around the revolution it had helped launch. It was a dilemma that would become familiar to other net entrepreneurs a generation later. Telenet had helped establish a market that was growing too fast for its own abilities to scale. Larry and his cofounders sold the firm to GTE in 1980 for $60 million.

The son of two PhD chemists, Larry Roberts had grown up tinkering with chemicals, electricity, and machines in the basement of their home in Westport, Connecticut. He made nitroglycerin and brought it to school when he was in first grade. He had read up on the needed steps in his father’s chemistry books. Luckily he didn’t cool it enough, and the chemical did nothing when he tried to set it off on the school grounds.

Though his parents had met in their graduate chemistry program, his mother followed the conventions of the time and stayed home with the children; she never used her degree professionally. But she volunteered for all manner of causes, including founding a number of Girl Scout camps. One of them needed a telephone system, so when he was in college Larry designed a switch for it using transistors, which were quite new at the time. He had also built a mechanical telephone switch in his dorm for communicating with friends, through which he found a way to get insider access to long-distance lines and talk to his family for free.

He had entered MIT as an Electrical Engineering major, and went on to a master’s and doctorate in the same subject. His early self-education in chemistry had almost been too good; he felt the field was “…sort of passé, it was sort of pretty well understood.” He thought electronics would have the biggest impact.

But he wasn’t thinking computers – yet. He used an IBM mainframe for a project in his senior year, but was underwhelmed by the batch processing and the punched cards. The epiphany came in the form of Wes Clark’s TX-0, one of the first interactive, single-user computers. At Lincoln Labs, the government research lab with strong ties to MIT, Larry got obsessed with that revolutionary machine. He put in over 700 hours in the first year, building an OCR program using neural network principles. That work became the basis of his first published paper.

 

MIT’s TX-0 (Transistorized EXperimental computer zero)

When Wes Clark abruptly left Lincoln in a disagreement over research priorities, his group, the TX-0 computer, and the nascent TX-2 were left leaderless. Larry in effect took over the effort, even though he was just a graduate student. He himself wrote an operating system, compilers, and other software for the TX-2.

 

TX-2 computer

With his classmate Ivan Sutherland, he began using the machine to explore both graphics and alternate input devices. The Lincoln Wand, as they called it, was an ultrasonic pointing device that looked like a magic wand and could freely manipulate three-dimensional virtual objects. A couple of years later, when Ivan began working on the first virtual-reality helmet with input from Larry, the wand let you directly see whatever you were pointing to. But at the start it was already useful for manipulating objects on the screen, and a convenient way to define buttons anywhere within arm’s reach. You could paste a piece of paper on a nearby wall, and pointing to it with the wand would activate whatever function you had set up.

Larry did his thesis on machine perception of shapes, using how humans perceive solid objects as a model for 3D computer graphics. He published the results. Ivan Sutherland’s thesis was Sketchpad, the graphics program that is the ancestor of all the thousands of others that exist today.

 

Ivan Sutherland

By writing machine code and having an intimate knowledge of the hardware of the TX-2, Larry and Ivan were able to explore graphics that would not become practical on other machines for years to come. The 3D objects for Larry’s thesis were wireframes, which was all even the TX-2 could handle. But they were fast, and he was fully calculating their surfaces and the intersections between them. The only obstacle between that and rendered 3D graphics like we see today was raw computing power.

But that obstacle proved too great for the young man’s patience. Says Larry, “…I realized at that point that I was at least twenty years away from anybody being able to do this commercially. And it was sort of a waste of time to start working on more work on the 3D display…[I didn’t think] it would get commercial in a timeframe that was useful.”

He needed a new direction. In 1964, Larry went to a conference in Virginia with a couple of MIT luminaries – Fernando Corbato, timesharing pioneer, and computing and cognitive science visionary J.C.R. Licklider. The latter was head of the computing division at the U.S. military’s Advanced Research Projects Agency (ARPA), and was talking about his idea for an “intergalactic computer network” to hook up various ARPA-funded research projects.

 

J.C.R. Licklider was the founding head of ARPA’s computer research effort. His “Intergalactic Computer Network” memo kicked off the idea that became ARPAnet

For Larry, networking wasn’t completely out of the blue. His MIT roommate, Len Kleinrock, had done his thesis on statistical and theoretical aspects of future computer networks. Larry himself had built those telephone switches as an undergraduate, and fiddled around with transmitting scanned images of Old Master paintings to other computers for AI pioneer Marvin Minsky as part of his graphics work. He’d even set up a phone line connection between the TX-2 and a distant living room so that Amar Bose, future founder of Bose Corporation, could run acoustics computations remotely.

Larry’s classmate Ivan Sutherland had just taken over Licklider’s old job heading the computing division at ARPA. He was happy to set Larry to work on experiments in networking the TX-2 at Lincoln Labs to computers in California.

The results were promising, and life was good. Larry was ensconced at Lincoln, with unlimited access to what he considered “…the best computer in the world.” His ex-classmate Ivan was funding him for exactly the research he wanted to do, and that continued even after Ivan left and was replaced by Licklider’s protegé and deputy Bob Taylor.

 

Bob Taylor

The one thing that could disturb Larry’s idyll turned out to be blackmail. With backing from the head of ARPA, Charles Herzfeld, Bob Taylor had decided to turn Licklider’s vague, someday idea of a computer network for ARPA researchers into copper and steel reality. He wanted Larry to oversee the technical architecture. But Larry was loath to leave Lincoln and the TX-2, especially to take what he suspected would be largely a management job. Worse in his eyes, it would be one working for a man without much technical knowledge or even an engineering degree, much less a doctorate. Bob was trained as a psychoacoustician like Licklider.

After several refusals from Larry, Bob remembered that Lincoln was a recipient of ARPA funding. He talked to Herzfeld, who asked the head of Lincoln to convey to Larry that it would be in both of their interests if he took the job.

The work didn’t turn out to be nearly as dull as he had thought. The first task was to convince a gaggle of mostly reluctant ARPA fundees that they should share some of their computer’s precious resources with others. For men who had fought endless bureaucratic battles to have a computer at all, this could be about as appealing as a request to share, say, 10 or 20% of your wife. It got worse. Their labs would also have to write the interface to the network themselves, with their own time and graduate students; the research-world equivalent of carrying one’s own cross. With a mix of rewards and threats, Larry and Bob managed to ram the project through.

One key technical decision made the crosses a little easier to bear. Instead of connecting all the researchers’ mainframe computers directly to each other, ARPA decided to give each one an intermediary – a dedicated minicomputer that would act as a standard interface to the network. These Interface Message Processors (IMPs) were the suggestion of Wes Clark, the designer of Larry’s beloved TX-0 and TX-2 and a champion of small computers. Each laboratory would only have to worry about writing software to get its mainframe connected to the IMP.

 

Interface Message Processor (IMP)

Another technical turning point was Larry’s decision to implement the newish idea of packet switching. Instead of the direct user-to-user circuits of a telephone network, a packet-switched network acts more like a postal system. It breaks information down into tiny packets that can then take their own individual routes to the destination, where they get reassembled. For computer communication, which tends to have bursts of activity followed by long silences, using packets can be far more efficient.
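To make the postal analogy concrete, here is a toy Python sketch of the idea just described: chop a message into numbered packets, let them arrive in any order, and reassemble them at the destination. It is a minimal illustration, not historical networking code; the packet size and helper names are invented.

```python
# Toy sketch of packet switching as described above -- illustrative only,
# not historical ARPAnet code. A message becomes numbered packets that may
# arrive out of order (simulated here by shuffling) and are reassembled.
import random

PACKET_SIZE = 8  # deliberately tiny, purely for demonstration

def packetize(message: str) -> list[tuple[int, str]]:
    """Split a message into (sequence_number, payload) packets."""
    chunks = [message[i:i + PACKET_SIZE] for i in range(0, len(message), PACKET_SIZE)]
    return list(enumerate(chunks))

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Sort packets by sequence number and rejoin the payloads."""
    return "".join(payload for _, payload in sorted(packets))

message = "Each packet can take its own route to the destination."
packets = packetize(message)
random.shuffle(packets)  # stand-in for packets taking different routes
assert reassemble(packets) == message
```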

Roberts and Taylor chose three centers to be responsible for critical parts of the new network, which they had prosaically named “ARPAnet.” Frank Heart’s team at Bolt, Beranek and Newman in Boston won the bid to create the IMPs and some of the basic software. Once the system was up, they would act as the overall Network Operations Center (NOC)—making them perhaps the world’s first formal network administrators.

The choice for the Network Information Center (NIC) was an easy one. Doug Engelbart’s lab at SRI had been one of the only ARPA-funded projects to show actual enthusiasm for joining the new net. Doug’s NLS (oNLine System) was a foretaste of the computing world of today, with users collaborating on documents, browsing remote documents, exchanging electronic mail, and clicking on hypertext links with a mouse. He saw a chance to expand the reach of NLS from dozens of users on individual timesharing systems to thousands, all over the country and beyond; the start of a true online world. The NIC would act as a central library and card catalog for all of the information about and available on the network. It would also host the central directory for all of the computers on the network, a function which later evolved into the familiar Domain Name System (.com, .org, etc.).
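For years that directory was, in essence, a single flat table of host names and addresses that every machine copied from the NIC (eventually formalized as the HOSTS.TXT file). The toy sketch below captures the idea; the entries and addresses are invented, not historical.

```python
# Toy illustration of a central host directory -- the job DNS later
# distributed. Names and addresses below are invented examples.
HOST_TABLE = {
    "SRI-NIC": "10.0.0.2",    # hypothetical entries, for illustration only
    "UCLA-NMC": "10.0.0.1",
    "BBN-TENEX": "10.0.0.5",
}

def resolve(hostname: str) -> str:
    """Look a host name up in the one central table."""
    try:
        return HOST_TABLE[hostname.upper()]
    except KeyError:
        raise LookupError(f"unknown host: {hostname}") from None

print(resolve("sri-nic"))  # -> 10.0.0.2
```

The weakness is obvious — one file, one bottleneck — which is why the directory function eventually became the distributed Domain Name System.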

Completing the triad was the Network Measurement Center (NMC) at UCLA, in the lab of Larry’s old MIT roommate Len Kleinrock. The NMC was in charge of both measuring and predicting the various theoretical issues about how data might flow – or collide – over the net. Their results would help shape the kinds of rules and “traffic signals” the new system might require to run smoothly. It was loosely through the NMC that a group of graduate students coalesced around the task of writing the basic protocols to make the ARPAnet work. Their informal group process would evolve into the system that still runs the Internet standards process today.

By the end of 1969 the ARPAnet was connecting its first two mainframes, at SRI and UCLA. It was around this time that Bob Taylor left ARPA and Larry took over as head of its computing division.

Within a year there were at least two other packet-switched networks booting up. One was the brilliant but under-funded Mark I network at England’s National Physical Laboratory, the brainchild of Donald Davies.

 

NPL Mark I network, switch box

Another was a project Larry and Bob had also funded from ARPA, but using radio waves instead of wires. ALOHAnet in Hawaii was the direct ancestor of the digital data networks that surround us like an invisible umbilical cord today, from mobile phones to Wi-Fi.

Larry was intrigued. He contributed a number of key technical ideas to ALOHAnet and its protocol. Along with some of the ALOHAnet folks he began thinking of a smartphone-like mobile terminal that would let you take the power of networking anywhere, and a wider network to support it. And beyond terrestrial radio, there was space. Larry began thinking about a packet-switched network that could bounce from ground stations to satellites and back, reaching across continents like skipping a stone on a pond.
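The contribution most often credited to Roberts here is slotted ALOHA: dividing time into fixed slots so that transmissions either collide completely or not at all. The classic textbook analysis below — a standard result, not material from this article, and assuming Poisson-distributed traffic — shows why slotting mattered.

```latex
% Classic ALOHA throughput analysis -- a standard textbook result.
% G: offered load (transmission attempts per packet time, Poisson);
% S: useful throughput. In pure ALOHA a packet is vulnerable for two
% packet times; slotting confines collisions to a single slot.
\[
  S_{\mathrm{pure}} = G\,e^{-2G}, \qquad S_{\mathrm{slotted}} = G\,e^{-G}
\]
% Maximizing over G gives peak utilizations of 1/(2e), about 18.4%,
% for pure ALOHA and 1/e, about 36.8%, for slotted ALOHA: slotting
% doubles the channel's usable capacity.
```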

 

ALOHAnet

It was Larry’s successor at ARPA, Bob Kahn, who would build out those interests into the Packet Radio Network (PRNET) and the Satellite Network (SATNET). Along with the ARPAnet these would be the components of ARPA’s first Internet, the one we use today. For Larry, the horizon that beckoned was the marketplace. He felt that the next phase of the networking revolution would not be in the research lab, but out in the world where corporate users and the public could begin to share the connectivity that was already a familiar friend to networking researchers.

One soon-to-be popular feature of that connectivity was electronic mail. While most timesharing systems had offered some form of mail since the early 1960s, each reached at best a few hundred users of that particular system. Following the lead of Ray Tomlinson and others, Larry wrote some of the code that helped adapt electronic mail to the ARPAnet.

He left ARPA not long after, in 1973, to be the founding CEO of Telenet. The firm had started as an offshoot of BBN (Bolt, Beranek and Newman), the company that built a good chunk of the original ARPAnet. They had recognized early on the need for commercial versions of that research network.

We’ve already seen at the start of this piece what happened, with the launch of the X.25 standard, and Telenet’s IPO and $60 million sale to GTE. Since then, Larry Roberts has served as CEO of DHL, and as founder and CEO of five network equipment startups: NetExpress, ATM Systems, Caspian Networks, Anagran, and lately FSA Technologies.

Honoring Larry Roberts

The Computer History Museum honored Larry Roberts in 2017 for his contributions to human and machine communications and for his role in the development of the ARPANET and the X.25 protocol…


The post 2017 CHM Fellow Lawrence G. Roberts appeared first on CHM.

]]>
https://computerhistory.org/blog/2017-chm-fellow-lawrence-g-roberts/feed/ 0