From vgcerf at gmail.com Sat Oct 1 16:30:02 2022 From: vgcerf at gmail.com (vinton cerf) Date: Sat, 1 Oct 2022 19:30:02 -0400 Subject: [ih] nice story about dave mills and NTP Message-ID: in the New Yorker https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time v From jack at 3kitty.org Sat Oct 1 21:35:29 2022 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 1 Oct 2022 21:35:29 -0700 Subject: [ih] nice story about dave mills and NTP In-Reply-To: References: Message-ID: On 10/1/22 16:30, vinton cerf via Internet-history wrote: > in the New Yorker > > https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time > > v Agree, nice story. Dave did a *lot* of good work. Reading the article reminded me of the genesis of NTP. IIRC.... Back in the early days circa 1980, Dave was the unabashed tinkerer, experimenter, and scientist. Like all good scientists, he wanted to run experiments to explore what the newfangled Internet was doing and test his theories. To do that required measurements and data. At the time, BBN was responsible for the "core gateways" that provided most of the long-haul Internet connectivity, e.g., between US west and east coasts and Europe. There were lots of ideas about how to do things - e.g., strategies for TCP retransmissions, techniques for maintaining dynamic tables of routing information, algorithms for dealing with limited bandwidth and memory, and other such stuff that was all intentionally very loosely defined within the protocols. The Internet was an Experiment. I remember talking with Dave back at the early Internet meetings, and his fervor to try things out, and his disappointment at the lack of the core gateways' ability to measure much of anything. In particular, it was difficult to measure how long things took in the Internet, since the gateways didn't even have real-time clocks.
This caused a lot of concern about protocol elements such as Time-To-Live, which were temporarily to be implemented purely as "hop counts", pending the introduction of some mechanism for measuring Time into the gateways. (AFAIK, we're still waiting....) Curiously, in the pre-Internet days of the ARPANET, the ARPANET IMPs did have a pretty good mechanism for measuring time, at least between pairs of IMPs at either end of a communications circuit, because such circuits ran at specific speeds. So one IMP could tell how long it was taking to communicate with one of its neighbors, and used such data to drive the ARPANET internal routing mechanisms. In the Internet, gateways couldn't tell how long it took to send a datagram over one of its attached networks. The networks of the day simply didn't make such information available to their "users" (e.g., a gateway). But experiments require data, and labs require instruments to collect that data, and Dave wanted to test out lots of ideas, and we (BBN) couldn't offer any hope of such instrumentation in the core gateways any time soon. So Dave built it. And that's how NTP got started. IIRC, the rest of us were all just trying to get the Internet to work at all. Dave was interested in understanding how and why it worked. So while he built NTP, that didn't really affect any other projects. Plus most (at least me) didn't understand how it was possible to get such accurate synchronization when the delays through the Internet mesh were so large and variable. (I still don't). But Dave thought it was possible, and that's why your computer, phone, laptop, or whatever knows what time it is today. Dave was responsible for another long-lived element of the Internet. Dave's experiments were sometimes disruptive to the "core" Internet that we were tasked to make a reliable 24x7 service. Where Dave The Scientist would say "I wonder what happens when I do this..." We The Engineers would say "Don't do that!"
That was the original motivation for creating the notion of "Autonomous Systems" and EGP - a way to insulate the "core" of the Internet from the antics of the Fuzzballs. I corralled Eric Rosen after one such Fuzzball-triggered incident and we sat down and created ASes, so that we could keep "our" AS running reliably. It was intended as an interim mechanism until all the experimentation revealed what should be the best algorithms and protocol features to put in the next generation, and the Internet Experiment advanced into a production network service. We defined ASes and EGP to protect the Internet from Dave's Fuzzball mania. AFAIK, that hasn't happened yet ... and from that article, Dave is still Experimenting..... and The Internet is still an Experiment. Fun times, Jack Haverty From jmamodio at gmail.com Sun Oct 2 06:09:39 2022 From: jmamodio at gmail.com (Jorge Amodio) Date: Sun, 2 Oct 2022 08:09:39 -0500 Subject: [ih] nice story about dave mills and NTP In-Reply-To: References: Message-ID: Excellent story! Thanks for sharing Jack. Cheers - Jorge (mobile)
From mfidelman at meetinghouse.net Sun Oct 2 06:50:51 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 2 Oct 2022 09:50:51 -0400 Subject: [ih] nice story about dave mills and NTP (off topic - re.
hop counts) In-Reply-To: References: Message-ID: <2df1c3ab-c2f0-d844-a408-3799e91c723a@meetinghouse.net> Jack Haverty via Internet-history wrote: > On 10/1/22 16:30, vinton cerf via Internet-history wrote: >> in the New Yorker >> >> https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time >> >> >> v > > Agree, nice story. Dave did a *lot* of good work. Reading the > article reminded me of the genesis of NTP. Yes... great story! > I remember talking with Dave back at the early Internet meetings, and his > fervor to try things out, and his disappointment at the lack of the > core gateway's ability to measure much of anything. In particular, > it was difficult to measure how long things took in the Internet, > since the gateways didn't even have real-time clocks. This caused a > lot of concern about protocol elements such as Time-To-Live, which > were temporarily to be implemented purely as "hop counts", pending the > introduction of some mechanism for measuring Time into the gateways. > (AFAIK, we're still waiting....) > This reminds me that "hop count" might well be a good measure of time in the real world - e.g., when thinking about frames of data propagating through parallel computing fabrics. Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From alejandroacostaalamo at gmail.com Sun Oct 2 07:45:28 2022 From: alejandroacostaalamo at gmail.com (Alejandro Acosta) Date: Sun, 2 Oct 2022 10:45:28 -0400 Subject: [ih] nice story about dave mills and NTP In-Reply-To: References: Message-ID: <8f98527b-f650-c12f-c038-87d2d10a9bfd@gmail.com> Hello Jack, Thanks a lot for sharing this, as usual, I enjoy this kind of story :-)
Jack/group, just a question regarding this topic. When you mentioned: "This caused a lot of concern about protocol elements such as Time-To-Live, which were temporarily to be implemented purely as "hop counts" Do you mean, the original idea was to really drop the packet at a certain time, a *real* Time-To-Live concept? Thanks, P.S. That's why it was important to change the field's name to hop count in v6 :-)
From jack at 3kitty.org Sun Oct 2 10:55:05 2022 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 2 Oct 2022 10:55:05 -0700 Subject: [ih] nice story about dave mills and NTP In-Reply-To: <8f98527b-f650-c12f-c038-87d2d10a9bfd@gmail.com> References: <8f98527b-f650-c12f-c038-87d2d10a9bfd@gmail.com> Message-ID: The short answer is "Yes". The Time-To-Live field was intended to count down actual transit time as a datagram proceeded through the Internet. A datagram was to be discarded as soon as some algorithm determined it wasn't going to get to its destination before its TTL ran to zero. But we didn't have the means to measure time, so hop-counts were the placeholder. I wasn't involved in the IPv6 work, but I suspect the change of the field to "hop count" reflected the reality of what the field actually was. But it would have been better to have actually made Time work. Much of these "original ideas" probably weren't ever written down in persistent media.
Most discussions in the 1980 time frame were done either in person or more extensively in email. Disk space was scarce and expensive, so much of such email was probably never archived - especially email not on the more "formal" mailing lists of the day. As I recall, Time was considered very important, for a number of reasons. So here's what I remember... ----- Like every project using computers, the Internet was constrained by too little memory, too slow processors, and too limited bandwidth. A typical, and expensive, system might have a few dozen kilobytes of memory, a processor running at perhaps 1 MHz, and "high speed" communications circuits carrying 56 kilobits per second. So there was strong incentive not to waste resources. At the time, the ARPANET had been running for about ten years, and quite a lot of experience had been gained through its operation and crises. Over that time, a lot of mechanisms had been put in place, internally in the IMP algorithms and hardware, to "protect" the network and keep it running despite what the user computers tried to do. So, for example, an IMP could regulate the flow of traffic from any of its "host" computers, and even shut it off completely if needed. (Google "ARPANET RFNM counting" if curious). In the Internet, the gateways had no such mechanisms available. We were especially concerned about the "impedance mismatch" that would occur at a gateway connecting a LAN to a much slower and "skinnier" long-haul network. All of the "flow control" mechanisms that were implemented inside an ARPANET IMP would instead be implemented inside TCP software in users' host computers. We didn't know how that would work. But something had to be in the code.... So the principle was that IP datagrams could be simply discarded when necessary, wherever necessary, and TCP would retransmit them so they would eventually get delivered.
We envisioned that approach could easily lead to "runaway" scenarios, with the Internet full of duplicate datagrams being dropped at any "impedance mismatch" point along the way. In fact, we saw exactly that at a gateway between ARPANET and SATNET - IIRC in one of Dave's transatlantic experiments ("Don't do that!!!") So, Source Quench was invented, as a way of telling some host to "slow down", and the gateways sent an SQ back to the source of any datagram it had to drop. Many of us didn't think that would work very well (e.g., a host might send one datagram and get back an SQ - what should it do to "slow down"...?). I recall that Dave knew exactly what to do. Since his machine's datagram had been dropped, it meant he should immediately retransmit it. Another "Don't do that!" moment.... But SQ was a placeholder too -- to be replaced by some "real" flow control mechanism as soon as the experimentation revealed what that should be. ----- TCP retransmissions were based on Time. If a TCP didn't receive a timely acknowledgement that data had been received, it could assume that someone along the way had dropped the datagram and it should retransmit it. SQ datagrams were also of course not guaranteed to get to their destination, so you couldn't count on them as a signal to retransmit. So Time was the only answer. But how to set the Timer in your TCP - that was subject to experimentation, with lots of ideas. If you sent a copy of your data too soon, it would just overload everything along the path through the Internet with superfluous data consuming those scarce resources. If you waited too long, your end-users would complain that the Internet was too slow. So the answer was to have each TCP estimate how long it was taking for a datagram to get to its destination, and set its own "retransmission timer" to slightly longer than that value. Of course, such a technique requires instrumentation and data.
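[Editor's note: the "estimate the transit time, then wait slightly longer" scheme described above is essentially the smoothed round-trip-time estimator that RFC 793 later spelled out. A minimal sketch, assuming an RFC 793-style exponential average; the constants and class name are illustrative, not taken from any early implementation:]

```python
# Sketch of an RFC 793-style smoothed RTT estimator: each new
# round-trip measurement nudges a running estimate, and the
# retransmission timeout is set somewhat above that estimate.
ALPHA = 0.875   # smoothing factor (RFC 793 suggests 0.8-0.9)
BETA = 2.0      # safety multiplier on the smoothed estimate

class RetransmitTimer:
    def __init__(self, initial_rtt=3.0):
        # A fixed guess such as Jack's "3 seconds" serves as the seed.
        self.srtt = initial_rtt

    def observe(self, measured_rtt):
        # Exponentially weighted moving average of measured RTTs.
        self.srtt = ALPHA * self.srtt + (1 - ALPHA) * measured_rtt
        return self.srtt

    def rto(self):
        # Wait "slightly longer" than the estimate before retransmitting.
        return BETA * self.srtt

timer = RetransmitTimer()
for sample in [1.0, 1.2, 0.9, 1.1]:
    timer.observe(sample)
```

The smoothing means one outlier sample nudges, rather than resets, the timer, which is why a hand-picked initial value was a workable starting point.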
Also, since the delays might depend on the direction of a datagram's travel, you needed synchronized clocks at the two endpoints of a TCP connection, so they could accurately measure one-way transit times. Meanwhile, inside the gateways, there were ideas about how to do even better by using Time. For example, if the routing protocols were actually based on Time (shortest transit time) rather than Hops (number of gateways between here and destination), the Internet would provide better user performance and be more efficient. Even better - if a gateway could "know" that a particular datagram wouldn't get to its destination before its TTL ran out, it could discard that datagram immediately, even though it still had time to live. No point in wasting network resources carrying a datagram already sentenced to death. We couldn't do all that. Didn't have the hardware, didn't have the algorithms, didn't have the protocols. So in the meantime, any computer handling an IP datagram should simply decrement the TTL value, and if it reached zero the datagram should be discarded. TTL effectively became a "hop count". When Dave got NTP running, and enough Time Servers were online and reliable, and the gateways and hosts had the needed hardware, Time could be measured, TTL could be set based on Time, and the Internet would be better. In the meanwhile, all of us TCP implementers just picked some value for our retransmission timers. I think I set mine to 3 seconds. No exhaustive analysis or sophisticated mathematics involved. It just felt right..... there was a lot of that going on in the early Internet. ----- While all the TCP work was going on, other uses were emerging. We knew that there was more to networking than just logging in to distant computers or transferring files between them - uses that had been common for years in the ARPANET. But the next "killer app" hadn't appeared yet, although there were lots of people trying to create one.
In particular, "Packet Voice" was popular, with a contingent of researchers figuring out how to do that on the fledgling Internet. There were visions that someday it might even be possible to do Video. In particular, *interactive* voice was the goal, i.e., the ability to have a conversation by voice over the Internet (I don't recall when the term VOIP emerged, probably much later). In a resource-constrained network, you don't want to waste resources on datagrams that aren't useful. In conversational voice, a datagram that arrives too late isn't useful. A fragment of audio that should have gone to the speaker 500 milliseconds ago can only be discarded. It would be better that it hadn't been sent at all, but at least discarding it along the way, as soon as it's known to be too late to arrive, would be appropriate. Of course, that needs Time. UDP was created as an adjunct to TCP, providing a different kind of network service. Where TCP got all of the data to its destination, no matter how long it took, UDP would get as much data as possible to the destination, as long as it got there in time to be useful. Time was important. UDP implementations, in host computers, didn't have to worry about retransmissions. But they did still have to worry about how long it would take for a datagram to get to its destination. With that knowledge, they could set their datagrams' TTL values to something appropriate for the network conditions at the time. Perhaps they might even tell their human users "Sorry, conversational use not available right now." - an Internet equivalent of the "busy signal" - if the current network transit times were too high to provide a good user experience. Within the world of gateways, the differing needs of TCP and UDP motivated different behaviors. That motivated the inclusion of the TOS - Type Of Service - field in the IP datagram header.
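[Editor's note: the "too late to be useful" rule for conversational audio is easy to state in code. A sketch, assuming synchronized clocks, sender timestamps, and an illustrative 500 ms playout budget; the names and numbers are hypothetical, not from the thread:]

```python
# Sketch: discard audio datagrams that have missed their playout deadline.
# With synchronized clocks, a sender timestamp plus a playout budget lets
# a receiver (or, in principle, any hop) decide whether a datagram is
# still worth carrying.
PLAYOUT_BUDGET = 0.5  # seconds of tolerable end-to-end delay (illustrative)

def still_useful(sent_at, now, budget=PLAYOUT_BUDGET):
    """True if the audio fragment can still reach the speaker in time."""
    return (now - sent_at) <= budget

# A fragment in flight for 0.2 s is playable; one in flight 0.6 s is not.
arrivals = [(10.0, 10.2), (11.0, 11.6)]   # (sent_at, now) pairs
decisions = [still_useful(s, n) for s, n in arrivals]  # [True, False]
```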
Perhaps UDP packets would receive higher priority, being placed at the head of queues so they got transmitted sooner. Perhaps they would be discarded immediately if the gateway knew, based on its routing mechanisms, that the datagram would never get delivered in time. Perhaps UDP would be routed differently, using a terrestrial but low-bandwidth network, while TCP traffic was directed over a high-bandwidth but long-delay satellite path. A gateway mesh might have two or more independent routing mechanisms, each using a "shortest path" approach, but with different metrics for determining "short" - e.g., UDP using the shortest time route, while some TCP traffic travelled a route with least ("shortest") usage at the time. We couldn't do all that either. We needed Time, hardware, algorithms, protocols, etc. But the placeholders were there, in the TCP, IP, and UDP formats, ready for experimentation to figure all that stuff out. ----- When Time was implemented, there could be much needed experimentation to figure out the right answers. Meanwhile, we had to keep the Internet working. By the early 1980s, the ARPANET had been in operation for more than a decade, and lots of operational experience had accrued. We knew, for example, that things could "go wrong" and generate a crisis for the network operators to quickly fix. TTL, even as just a hop count, was one mechanism to suppress problems. We knew that "routing loops" could occur. TTL would at least prevent situations where datagrams circulated forever, orbiting inside the Internet until someone discovered and fixed whatever was causing a routing loop to keep those datagrams speeding around. Since the Internet was an Experiment, there were mechanisms put in place to help run experiments. IIRC, in general things were put in the IP headers when we thought they were important and would be needed long after the experimental phase was over - things like TTL, SQ, TOS.
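[Editor's note: the "Perhaps UDP packets would receive higher priority" idea - TOS selecting queueing behavior at a gateway's output port - can be sketched as a two-queue strict-priority scheduler. This is an illustration of the concept only, not how any real gateway was coded; all names are hypothetical:]

```python
from collections import deque

# Sketch: a gateway output port with two queues keyed on a TOS-like
# low-delay bit. Low-delay traffic (e.g., voice over UDP) drains first;
# bulk traffic (e.g., TCP file transfer) waits its turn.
class OutputPort:
    def __init__(self):
        self.low_delay = deque()
        self.bulk = deque()

    def enqueue(self, datagram, low_delay_tos):
        (self.low_delay if low_delay_tos else self.bulk).append(datagram)

    def dequeue(self):
        # Strict priority: serve the low-delay queue whenever it's nonempty.
        if self.low_delay:
            return self.low_delay.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None

port = OutputPort()
port.enqueue("tcp-1", low_delay_tos=False)
port.enqueue("voice-1", low_delay_tos=True)
# voice-1 is transmitted before tcp-1 despite arriving later.
```

Strict priority is the simplest discipline that fits the description; real schedulers add safeguards so bulk traffic cannot be starved.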
Essentially every field in the IP header, and every type of datagram, was there for some good reason, even though its initial implementation was known to be inadequate. The Internet was built on Placeholders.... Other mechanisms were put into the "Options" mechanism of the IP format. A lot of that was targeted towards supporting experiments, or as occasional tools to be used to debug problems in crises during Internet operations. E.g., all of the "Source Routing" mechanisms might be used to route traffic in particular paths that the current gateways wouldn't otherwise use. An example would be routing voice traffic over specific paths, which the normal gateway routing wouldn't use. The Voice experimenters could use those mechanisms to try out their ideas in a controlled experiment. Similarly, Source Routing might be used to debug network problems. A network analyst might use Source Routing to probe a particular remote computer interface, where the regular gateway mechanisms would avoid that path. So a general rule was that IP headers contained important mechanisms, often just as placeholders, while Options contained things useful only in particular circumstances. But all of these "original ideas" needed Time. We knew Dave was "on it".... ----- Hopefully this helps... I (and many others) probably should have written these "original ideas" down 40 years ago. We did, but I suspect all in the form of emails which have now been lost. There was always so much code to write. And we didn't have the answers yet to motivate creating RFCs, which were viewed as more permanent repositories of the solved problems. Sorry about that..... Jack Haverty On 10/2/22 07:45, Alejandro Acosta via Internet-history wrote: > Hello Jack, > > Thanks a lot for sharing this, as usual, I enjoy this kind of > story :-) > > Jack/group, just a question regarding this topic.
When you mentioned: > > "This caused a lot of concern about protocol elements such as > Time-To-Live, which were temporarily to be implemented purely as "hop > counts" > > Do you mean, the original idea was to really drop the packet at a > certain time, a *real* Time-To-Live concept? > > Thanks, > > P.S. That's why it was important to change the field's name to hop > count in v6 :-)
>> Where Dave The Scientist would say "I wonder what happens when I do >> this..." We The Engineers would say "Don't do that!" >> >> That was the original motivation for creating the notion of >> "Autonomous Systems" and EGP - a way to insulate the "core" of the >> Internet from the antics of the Fuzzballs.? I corralled Eric Rosen >> after one such Fuzzball-triggered incident and we sat down and >> created ASes, so that we could keep "our" AS running reliably.? It >> was intended as an interim mechanism until all the experimentation >> revealed what should be the best algorithms and protocol features to >> put in the next generation, and the Internet Experiment advanced into >> a production network service.?? We defined ASes and EGP to protect >> the Internet from Dave's Fuzzball mania. >> >> AFAIK, that hasn't happened yet ... and from that article, Dave is >> still Experimenting..... and The Internet is still an Experiment. >> >> Fun times, >> Jack Haverty >> From brian.e.carpenter at gmail.com Sun Oct 2 12:50:34 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 3 Oct 2022 08:50:34 +1300 Subject: [ih] nice story about dave mills and NTP In-Reply-To: References: <8f98527b-f650-c12f-c038-87d2d10a9bfd@gmail.com> Message-ID: <5f7caf24-95ab-6ba7-b3ca-6d7a7b2b8e36@gmail.com> Jack, On 03-Oct-22 06:55, Jack Haverty via Internet-history wrote: > The short answer is "Yes".? The Time-To-Live field was intended to count > down actual transit time as a datagram proceeded through the Internet. > A datagram was to be discarded as soon as some algorithm determined it > wasn't going to get to its destination before its TTL ran to zero.?? But > we didn't have the means to measure time, so hop-counts were the > placeholder. > > I wasn't involved in the IPV6 work, but I suspect the change of the > field to "hop count" reflected the reality of what the field actually > was.?? But it would have been better to have actually made Time work. To be blunt, why? 
There was no promise of guaranteed latency in those days, was there? As soon as queueing theory entered the game, that wasn't an option. So it wasn't just the absence of precise time, it was the presence of random delays that made a hop count the right answer, not just the convenient answer. I think that's why IPv6 never even considered anything but a hop count. The same lies behind the original TOS bits and their rebranding as the Differentiated Services Code Point many years later. My motto during the diffserv debates was "You can't beat queueing theory." There are people in the IETF working hard on Detnet ("deterministic networking") today. Maybe they have worked out how to beat queueing theory, but I doubt it. What I learned from working on real-time control systems is that you can't guarantee timing outside a very limited and tightly managed set of resources, where unbounded queues cannot occur. Brian > > Much of these "original ideas" probably weren't ever written down in > persistent media. Most discussions in the 1980 time frame were done > either in person or more extensively in email. Disk space was scarce > and expensive, so much of such email was probably never archived - > especially email not on the more "formal" mailing lists of the day. > > As I recall, Time was considered very important, for a number of > reasons. So here's what I remember... > ----- > > Like every project using computers, the Internet was constrained by too > little memory, too slow processors, and too limited bandwidth. A > typical, and expensive, system might have a few dozen kilobytes of > memory, a processor running at perhaps 1 MHz, and "high speed" > communications circuits carrying 56 kilobits per second. So there was > strong incentive not to waste resources. > > At the time, the ARPANET had been running for about ten years, and quite > a lot of experience had been gained through its operation and crises. 
> Over that time, a lot of mechanisms had been put in place, internally in > the IMP algorithms and hardware, to "protect" the network and keep it > running despite what the user computers tried to do. So, for example, > an IMP could regulate the flow of traffic from any of its "host" > computers, and even shut it off completely if needed. (Google "ARPANET > RFNM counting" if curious). > > In the Internet, the gateways had no such mechanisms available. We were > especially concerned about the "impedance mismatch" that would occur at > a gateway connecting a LAN to a much slower and "skinnier" long-haul > network. All of the "flow control" mechanisms that were implemented > inside an ARPANET IMP would be instead implemented inside TCP software > in users' host computers. > > We didn't know how that would work. But something had to be in the > code.... So the principle was that IP datagrams could be simply > discarded when necessary, wherever necessary, and TCP would retransmit > them so they would eventually get delivered. > > We envisioned that approach could easily lead to "runaway" scenarios, > with the Internet full of duplicate datagrams being dropped at any > "impedance mismatch" point along the way. In fact, we saw exactly that > at a gateway between ARPANET and SATNET - IIRC in one of Dave's > transatlantic experiments ("Don't do that!!!") > > So, Source Quench was invented, as a way of telling some host to "slow > down", and the gateways sent an SQ back to the source of any datagram it > had to drop. Many of us didn't think that would work very well (e.g., a > host might send one datagram and get back an SQ - what should it do to > "slow down"...?). I recall that Dave knew exactly what to do. Since > his machine's datagram had been dropped, it meant he should immediately > retransmit it. Another "Don't do that!" moment.... 
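[Editor's note: the drop-and-quench behavior described above can be sketched roughly as follows. This is purely an illustration; the `Gateway` class and its queue limit are hypothetical, though ICMP Source Quench really was message type 4 in RFC 792, since deprecated.]

```python
from collections import deque

ICMP_SOURCE_QUENCH = 4  # ICMP message type 4 (RFC 792), long since deprecated

class Gateway:
    """Toy gateway: queue datagrams until the buffer is full, then drop and quench."""

    def __init__(self, queue_limit):
        self.queue = deque()
        self.queue_limit = queue_limit
        self.quenches = []  # (source, icmp_type) pairs "sent" back toward hosts

    def receive(self, datagram):
        if len(self.queue) >= self.queue_limit:
            # No buffer space: discard the datagram and tell the source
            # to "slow down" -- however the source chooses to interpret that.
            self.quenches.append((datagram["src"], ICMP_SOURCE_QUENCH))
            return False
        self.queue.append(datagram)
        return True

gw = Gateway(queue_limit=2)
results = [gw.receive({"src": "10.0.0.1", "seq": n}) for n in range(4)]
# The first two datagrams are queued; the next two are dropped, each
# drop triggering a Source Quench back toward 10.0.0.1.
```

Note that nothing in the sketch says *how much* to slow down, which is exactly the ambiguity the paragraph above describes.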
> > But SQ was a placeholder too -- to be replaced by some "real" flow > control mechanism as soon as the experimentation revealed what that > should be. > > ----- > > TCP retransmissions were based on Time. If a TCP didn't receive a > timely acknowledgement that data had been received, it could assume that > someone along the way had dropped the datagram and it should retransmit > it. SQ datagrams were also of course not guaranteed to get to their > destination, so you couldn't count on them as a signal to retransmit. > So Time was the only answer. > > But how to set the Timer in your TCP - that was subject to > experimentation, with lots of ideas. If you sent a copy of your data > too soon, it would just overload everything along the path through the > Internet with superfluous data consuming those scarce resources. If you > waited too long, your end-users would complain that the Internet was too > slow. So the answer was to have each TCP estimate how long it was > taking for a datagram to get to its destination, and set its own > "retransmission timer" to slightly longer than that value. > > Of course, such a technique requires instrumentation and data. Also, > since the delays might depend on the direction of a datagram's travel, > you needed synchronized clocks at the two endpoints of a TCP connection, > so they could accurately measure one-way transit times. > > Meanwhile, inside the gateways, there were ideas about how to do even > better by using Time. For example, if the routing protocols were > actually based on Time (shortest transit time) rather than Hops (number > of gateways between here and destination), the Internet would provide > better user performance and be more efficient. Even better - if a > gateway could "know" that a particular datagram wouldn't get to its > destination before its TTL ran out, it could discard that datagram > immediately, even though it still had time to live. 
No point in wasting > network resources carrying a datagram already sentenced to death. > > We couldn't do all that. Didn't have the hardware, didn't have the > algorithms, didn't have the protocols. So in the meantime, any computer > handling an IP datagram should simply decrement the TTL value, and if it > reached zero the datagram should be discarded. TTL effectively became a > "hop count". > > When Dave got NTP running, and enough Time Servers were online and > reliable, and the gateways and hosts had the needed hardware, Time could > be measured, TTL could be set based on Time, and the Internet would be > better. > > In the meanwhile, all of us TCP implementers just picked some value for > our retransmission timers. I think I set mine to 3 seconds. No > exhaustive analysis or sophisticated mathematics involved. It just felt > right.....there was a lot of that going on in the early Internet. > > ----- > > While all the TCP work was going on, other uses were emerging. We knew > that there was more to networking than just logging in to distant > computers or transferring files between them - uses that had been common > for years in the ARPANET. But the next "killer app" hadn't appeared > yet, although there were lots of people trying to create one. > > In particular, "Packet Voice" was popular, with a contingent of > researchers figuring out how to do that on the fledgling Internet. There > were visions that someday it might even be possible to do Video. In > particular, *interactive* voice was the goal, i.e., the ability to have > a conversation by voice over the Internet (I don't recall when the term > VOIP emerged, probably much later). > > In a resource-constrained network, you don't want to waste resources on > datagrams that aren't useful. In conversational voice, a datagram that > arrives too late isn't useful. A fragment of audio that should have > gone to the speaker 500 milliseconds ago can only be discarded. 
It > would be better that it hadn't been sent at all, but at least discarding > it along the way, as soon as it's known to be too late to arrive, would > be appropriate. > > Of course, that needs Time. UDP was created as an adjunct to TCP, > providing a different kind of network service. Where TCP got all of > the data to its destination, no matter how long it took, UDP would get > as much data as possible to the destination, as long as it got there in > time to be useful. Time was important. > > UDP implementations, in host computers, didn't have to worry about > retransmissions. But they did still have to worry about how long it > would take for a datagram to get to its destination. With that > knowledge, they could set their datagrams' TTL values to something > appropriate for the network conditions at the time. Perhaps they might > even tell their human users "Sorry, conversational use not available > right now." -- an Internet equivalent of the "busy signal" - if the > current network transit times were too high to provide a good user > experience. > > Within the world of gateways, the differing needs of TCP and UDP > motivated different behaviors. That motivated the inclusion of the TOS > - Type Of Service - field in the IP datagram header. Perhaps UDP > packets would receive higher priority, being placed at the head of > queues so they got transmitted sooner. Perhaps they would be discarded > immediately if the gateway knew, based on its routing mechanisms, that > the datagram would never get delivered in time. Perhaps UDP would be > routed differently, using a terrestrial but low-bandwidth network, while > TCP traffic was directed over a high-bandwidth but long-delay satellite > path. 
A gateway mesh might have two or more independent routing > mechanisms, each using a "shortest path" approach, but with different > metrics for determining "short" - e.g., UDP using the shortest time > route, while some TCP traffic travelled a route with least ("shortest") > usage at the time. > > We couldn't do all that either. We needed Time, hardware, algorithms, > protocols, etc. But the placeholders were there, in the TCP, IP, and > UDP formats, ready for experimentation to figure all that stuff out. > > ----- > > When Time was implemented, there could be much needed experimentation to > figure out the right answers. Meanwhile, we had to keep the Internet > working. By the early 1980s, the ARPANET had been in operation for more > than a decade, and lots of operational experience had accrued. We knew, > for example, that things could "go wrong" and generate a crisis for the > network operators to quickly fix. TTL, even as just a hop count, was > one mechanism to suppress problems. We knew that "routing loops" could > occur. TTL would at least prevent situations where datagrams > circulated forever, orbiting inside the Internet until someone > discovered and fixed whatever was causing a routing loop to keep those > datagrams speeding around. > > Since the Internet was an Experiment, there were mechanisms put in place > to help run experiments. IIRC, in general things were put in the IP > headers when we thought they were important and would be needed long > after the experimental phase was over - things like TTL, SQ, TOS. > > Essentially every field in the IP header, and every type of datagram, > was there for some good reason, even though its initial implementation > was known to be inadequate. The Internet was built on Placeholders.... > > Other mechanisms were put into the "Options" mechanism of the IP > format. 
A lot of that was targeted towards supporting experiments, or > as occasional tools to be used to debug problems in crises during > Internet operations. > > E.g., all of the "Source Routing" mechanisms might be used to route > traffic in particular paths that the current gateways wouldn't otherwise > use. An example would be routing voice traffic over specific paths, > which the normal gateway routing wouldn't use. The Voice experimenters > could use those mechanisms to try out their ideas in a controlled > experiment. > > Similarly, Source Routing might be used to debug network problems. A > network analyst might use Source Routing to probe a particular remote > computer interface, where the regular gateway mechanisms would avoid > that path. > > So a general rule was that IP headers contained important mechanisms, > often just as placeholders, while Options contained things useful only > in particular circumstances. > > But all of these "original ideas" needed Time. We knew Dave was "on > it".... > > ----- > > Hopefully this helps... I (and many others) probably should have > written these "original ideas" down 40 years ago. We did, but I > suspect all in the form of emails which have now been lost. Sorry > about that. There was always so much code to write. And we didn't > have the answers yet to motivate creating RFCs which were viewed as more > permanent repositories of the solved problems. > > Sorry about that..... > > Jack Haverty > > > > On 10/2/22 07:45, Alejandro Acosta via Internet-history wrote: >> Hello Jack, >> >> Thanks a lot for sharing this, as usual, I enjoy this kind of >> stories :-) >> >> Jack/group, just a question regarding this topic. When you mentioned: >> >> "This caused a lot of concern about protocol elements such as >> Time-To-Live, which were temporarily to be implemented purely as "hop >> counts" >> >> >> Do you mean, the original idea was to really drop the packet at >> a certain time, a *real* Time-To-Live concept? 
>> >> >> Thanks, >> P.S. That's why it was important to change the field's name to hop >> count in v6 :-) From jeanjour at comcast.net Sun Oct 2 13:35:33 2022 From: jeanjour at comcast.net (John Day) Date: Sun, 2 Oct 2022 16:35:33 -0400 Subject: [ih] nice story about dave mills and NTP In-Reply-To: <5f7caf24-95ab-6ba7-b3ca-6d7a7b2b8e36@gmail.com> References: <8f98527b-f650-c12f-c038-87d2d10a9bfd@gmail.com> <5f7caf24-95ab-6ba7-b3ca-6d7a7b2b8e36@gmail.com> Message-ID: <9952DC83-62AD-452E-9077-6E2A33E2F501@comcast.net> I thought Jack was pretty much correct. It was not just that it was hard to get a good measurement; it was also a lot of overhead to do it. At the time, the time to relay was more significant than propagation time, so hop-count was a reasonable substitute. That has changed, and now propagation time is dominant. I thought IPv6 would go back to making TTL a time, especially since propagation time can be locally measured. But given their track record, that was naive. Also, the reason for TTL was not latency, but that we had packets looping for hours, sometimes days. Note that TTL doesn't fix that; it merely makes it go away. (IEEE actually fixed it.) 
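[Editor's note: the distinction John draws can be sketched as follows. This is a hypothetical illustration, not anyone's actual implementation: a hop-count TTL loses one unit per relay, while a time-based TTL would subtract the locally measured delay across each hop.]

```python
def forward_hop_count(ttl):
    """What IPv4 TTL / IPv6 Hop Limit actually do: lose one per hop, drop at zero."""
    ttl -= 1
    return ttl if ttl > 0 else None  # None means: discard the datagram

def forward_time_based(ttl_ms, hop_delay_ms):
    """The original Time-To-Live idea (hypothetical sketch): subtract the
    locally measured delay across this hop; drop when the time runs out."""
    ttl_ms -= hop_delay_ms
    return ttl_ms if ttl_ms > 0 else None

# Three hops of life, versus 100 ms of life crossing 60 ms hops:
assert forward_hop_count(3) == 2
assert forward_time_based(100, 60) == 40
assert forward_time_based(40, 60) is None  # would expire in transit: discard early
```

The hop-count version bounds looping (the concern above) but says nothing about latency; only the time-based version could discard a datagram that is already too late.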
Also, we have known since the late 70s that the necessary and sufficient condition for synchronization for reliable data transfer requires an upper bound on maximum packet lifetime and two other times. TTL provides that. And since the 1970s, it has been obvious that queuing theory is next to useless: it can basically only handle Poisson traffic, and only solve for the steady state. Traffic is bursty, not Poisson (or self-similar), and it is the transients that are interesting. Burstiness was the whole idea with packet switching and datagrams. I have been waiting for someone to develop the tools to get at what is really going on. Take care, John > On Oct 2, 2022, at 15:50, Brian E Carpenter via Internet-history wrote: > > Jack, > On 03-Oct-22 06:55, Jack Haverty via Internet-history wrote: >> The short answer is "Yes". The Time-To-Live field was intended to count >> down actual transit time as a datagram proceeded through the Internet. >> A datagram was to be discarded as soon as some algorithm determined it >> wasn't going to get to its destination before its TTL ran to zero. But >> we didn't have the means to measure time, so hop-counts were the >> placeholder. >> I wasn't involved in the IPV6 work, but I suspect the change of the >> field to "hop count" reflected the reality of what the field actually >> was. But it would have been better to have actually made Time work. > > To be blunt, why? > > There was no promise of guaranteed latency in those days, was there? > As soon as queueing theory entered the game, that wasn't an option. > So it wasn't just the absence of precise time, it was the presence of > random delays that made a hop count the right answer, not just the > convenient answer. > > I think that's why IPv6 never even considered anything but a hop count. > The same lies behind the original TOS bits and their rebranding as > the Differentiated Services Code Point many years later. 
My motto > during the diffserv debates was "You can't beat queueing theory." > > There are people in the IETF working hard on Detnet ("deterministic > networking") today. Maybe they have worked out how to beat queueing > theory, but I doubt it. What I learned from working on real-time > control systems is that you can't guarantee timing outside a very > limited and tightly managed set of resources, where unbounded > queues cannot occur. > > Brian > >> Much of these "original ideas" probably weren't ever written down in >> persistent media. Most discussions in the 1980 time frame were done >> either in person or more extensively in email. Disk space was scarce >> and expensive, so much of such email was probably never archived - >> especially email not on the more "formal" mailing lists of the day. >> As I recall, Time was considered very important, for a number of >> reasons. So here's what I remember... >> ----- >> Like every project using computers, the Internet was constrained by too >> little memory, too slow processors, and too limited bandwidth. A >> typical, and expensive, system might have a few dozen kilobytes of >> memory, a processor running at perhaps 1 MHz, and "high speed" >> communications circuits carrying 56 kilobits per second. So there was >> strong incentive not to waste resources. >> At the time, the ARPANET had been running for about ten years, and quite >> a lot of experience had been gained through its operation and crises. >> Over that time, a lot of mechanisms had been put in place, internally in >> the IMP algorithms and hardware, to "protect" the network and keep it >> running despite what the user computers tried to do. So, for example, >> an IMP could regulate the flow of traffic from any of its "host" >> computers, and even shut it off completely if needed. (Google "ARPANET >> RFNM counting" if curious). >> In the Internet, the gateways had no such mechanisms available. 
We were >> especially concerned about the "impedance mismatch" that would occur at >> a gateway connecting a LAN to a much slower and "skinnier" long-haul >> network. All of the "flow control" mechanisms that were implemented >> inside an ARPANET IMP would be instead implemented inside TCP software >> in users' host computers. >> We didn't know how that would work. But something had to be in the >> code.... So the principle was that IP datagrams could be simply >> discarded when necessary, wherever necessary, and TCP would retransmit >> them so they would eventually get delivered. >> We envisioned that approach could easily lead to "runaway" scenarios, >> with the Internet full of duplicate datagrams being dropped at any >> "impedance mismatch" point along the way. In fact, we saw exactly that >> at a gateway between ARPANET and SATNET - IIRC in one of Dave's >> transatlantic experiments ("Don't do that!!!") >> So, Source Quench was invented, as a way of telling some host to "slow >> down", and the gateways sent an SQ back to the source of any datagram it >> had to drop. Many of us didn't think that would work very well (e.g., a >> host might send one datagram and get back an SQ - what should it do to >> "slow down"...?). I recall that Dave knew exactly what to do. Since >> his machine's datagram had been dropped, it meant he should immediately >> retransmit it. Another "Don't do that!" moment.... >> But SQ was a placeholder too -- to be replaced by some "real" flow >> control mechanism as soon as the experimentation revealed what that >> should be. >> ----- >> TCP retransmissions were based on Time. If a TCP didn't receive a >> timely acknowledgement that data had been received, it could assume that >> someone along the way had dropped the datagram and it should retransmit >> it. SQ datagrams were also of course not guaranteed to get to their >> destination, so you couldn't count on them as a signal to retransmit. >> So Time was the only answer. 
>> But how to set the Timer in your TCP - that was subject to >> experimentation, with lots of ideas. If you sent a copy of your data >> too soon, it would just overload everything along the path through the >> Internet with superfluous data consuming those scarce resources. If you >> waited too long, your end-users would complain that the Internet was too >> slow. So the answer was to have each TCP estimate how long it was >> taking for a datagram to get to its destination, and set its own >> "retransmission timer" to slightly longer than that value. >> Of course, such a technique requires instrumentation and data. Also, >> since the delays might depend on the direction of a datagram's travel, >> you needed synchronized clocks at the two endpoint of a TCP connection, >> so they could accurately measure one-way transit times. >> Meanwhile, inside the gateways, there were ideas about how to do even >> better by using Time. For example, if the routing protocols were >> actually based on Time (shortest transit time) rather than Hops (number >> of gateways between here and destination), the Internet would provide >> better user performance and be more efficient. Even better - if a >> gateway could "know" that a particular datagram wouldn't get to its >> destination before it's TTL ran out, it could discard that datagram >> immediately, even though it still had time to live. No point in wasting >> network resources carrying a datagram already sentenced to death. >> We couldn't do all that. Didn't have the hardware, didn't have the >> algorithms, didn't have the protocols. So in the meantime, any computer >> handling an IP datagram should simply decrement the TTL value, and if it >> reached zero the datagram should be discarded. TTL effectively became a >> "hop count". 
>> When Dave got NTP running, and enough Time Servers were online and >> reliable, and the gateways and hosts had the needed hardware, Time could >> be measured, TTL could be set based on Time, and the Internet would be >> better. >> In the meanwhile, all of us TCP implementers just picked some value for >> our retransmission timers. I think I set mine to 3 seconds. No >> exhaustive analysis or sophisticated mathematics involved. It just felt >> right.....there was a lot of that going on in the early Internet. >> ----- >> While all the TCP work was going on, other uses were emerging. We knew >> that there was more to networking than just logging in to distant >> computers or transferring files between them - uses that had been common >> for years in the ARPANET. But the next "killer app" hadn't appeared >> yet, although there were lots of people trying to create one. >> In particular, "Packet Voice" was popular, with a contingent of >> researchers figuring out how to do that on the fledgling Internet. There >> were visions that someday it might even be possible to do Video. In >> particular, *interactive* voice was the goal, i.e., the ability to have >> a conversation by voice over the Internet (I don't recall when the term >> VOIP emerged, probably much later). >> In a resource-constrained network, you don't want to waste resources on >> datagrams that aren't useful. In conversational voice, a datagram that >> arrives too late isn't useful. A fragment of audio that should have >> gone to the speaker 500 milliseconds ago can only be discarded. It >> would be better that it hadn't been sent at all, but at least discarding >> it along the way, as soon as it's known to be too late to arrive, would >> be appropriate. >> Of course, that needs Time. UDP was created as an adjunct to TCP, >> providing a different kind of network service. 
Where TCP got all of >> the data to its destination, no matter how long it took, UDP would get >> as much data as possible to the destination, as long as it got there in >> time to be useful. Time was important. >> UDP implementations, in host computers, didn't have to worry about >> retransmissions. But they did still have to worry about how long it >> would take for a datagram to get to its destination. With that >> knowledge, they could set their datagrams' TTL values to something >> appropriate for the network conditions at the time. Perhaps they might >> even tell their human users "Sorry, conversational use not available >> right now." -- an Internet equivalent of the "busy signal" - if the >> current network transit times were too high to provide a good user >> experience. >> Within the world of gateways, the differing needs of TCP and UDP >> motivated different behaviors. That motivated the inclusion of the TOS >> - Type Of Service - field in the IP datagram header. Perhaps UDP >> packets would receive higher priority, being placed at the head of >> queues so they got transmitted sooner. Perhaps they would be discarded >> immediately if the gateway knew, based on its routing mechanisms, that >> the datagram would never get delivered in time. Perhaps UDP would be >> routed differently, using a terrestrial but low-bandwidth network, while >> TCP traffic was directed over a high-bandwidth but long-delay satellite >> path. A gateway mesh might have two or more independent routing >> mechanisms, each using a "shortest path" approach, but with different >> metrics for determining "short" - e.g., UDP using the shortest time >> route, while some TCP traffic travelled a route with least ("shortest") >> usage at the time. >> We couldn't do all that either. We needed Time, hardware, algorithms, >> protocols, etc. But the placeholders were there, in the TCP, IP, and >> UDP formats, ready for experimentation to figure all that stuff out. 
>> ----- >> When Time was implemented, there could be much needed experimentation to >> figure out the right answers. Meanwhile, we had to keep the Internet >> working. By the early 1980s, the ARPANET had been in operation for more >> than a decade, and lots of operational experience had accrued. We knew, >> for example, that things could "go wrong" and generate a crisis for the >> network operators to quickly fix. TTL, even as just a hop count, was >> one mechanism to suppress problems. We knew that "routing loops" could >> occur. TTL would at least prevent situations where datagrams >> circulated forever, orbiting inside the Internet until someone >> discovered and fixed whatever was causing a routing loop to keep those >> datagrams speeding around. >> Since the Internet was an Experiment, there were mechanisms put in place >> to help run experiments. IIRC, in general things were put in the IP >> headers when we thought they were important and would be needed long >> after the experimental phase was over - things like TTL, SQ, TOS. >> Essentially every field in the IP header, and every type of datagram, >> was there for some good reason, even though its initial implementation >> was known to be inadequate. The Internet was built on Placeholders.... >> Other mechanisms were put into the "Options" mechanism of the IP >> format. A lot of that was targeted towards supporting experiments, or >> as occasional tools to be used to debug problems in crises during >> Internet operations. >> E.g., all of the "Source Routing" mechanisms might be used to route >> traffic in particular paths that the current gateways wouldn't otherwise >> use. An example would be routing voice traffic over specific paths, >> which the normal gateway routing wouldn't use. The Voice experimenters >> could use those mechanisms to try out their ideas in a controlled >> experiment. >> Similarly, Source Routing might be used to debug network problems. 
A >> network analyst might use Source Routing to probe a particular remote >> computer interface, where the regular gateway mechanisms would avoid >> that path. >> So a general rule was that IP headers contained important mechanisms, >> often just as placeholders, while Options contained things useful only >> in particular circumstances. >> But all of these "original ideas" needed Time.  We knew Dave was "on >> it".... >> ----- >> Hopefully this helps...  I (and many others) probably should have >> written these "original ideas" down 40 years ago.  We did, but I >> suspect all in the form of emails which have now been lost.  Sorry >> about that.  There was always so much code to write.  And we didn't >> have the answers yet to motivate creating RFCs which were viewed as more >> permanent repositories of the solved problems. >> Sorry about that..... >> Jack Haverty >> On 10/2/22 07:45, Alejandro Acosta via Internet-history wrote: >>> Hello Jack, >>> >>> Thanks a lot for sharing this, as usual, I enjoy these kinds of >>> stories :-) >>> >>> Jack/group, just a question regarding this topic. When you mentioned: >>> >>> "This caused a lot of concern about protocol elements such as >>> Time-To-Live, which were temporarily to be implemented purely as "hop >>> counts" >>> >>> >>> Do you mean, the original idea was to really drop the packet at a >>> certain time, a *real* Time-To-Live concept? >>> >>> >>> Thanks, >>> >>> P.S. That's why it was important to change the field's name to hop >>> count in v6 :-) >>> >>> >>> >>> On 2/10/22 12:35 AM, Jack Haverty via Internet-history wrote: >>>> On 10/1/22 16:30, vinton cerf via Internet-history wrote: >>>>> in the New Yorker >>>>> >>>>> https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time >>>>> >>>>> >>>>> v >>>> >>>> Agree, nice story.  Dave did a *lot* of good work.  Reading the >>>> article reminded me of the genesis of NTP. >>>> >>>> IIRC....
>>>> >>>> Back in the early days circa 1980, Dave was the unabashed tinkerer, >>>> experimenter, and scientist. Like all good scientists, he wanted to >>>> run experiments to explore what the newfangled Internet was doing and >>>> test his theories. To do that required measurements and data. >>>> >>>> At the time, BBN was responsible for the "core gateways" that >>>> provided most of the long-haul Internet connectivity, e.g., between >>>> US west and east coasts and Europe. There were lots of ideas about >>>> how to do things - e.g., strategies for TCP retransmissions, >>>> techniques for maintaining dynamic tables of routing information, >>>> algorithms for dealing with limited bandwidth and memory, and other >>>> such stuff that was all intentionally very loosely defined within the >>>> protocols. The Internet was an Experiment. >>>> >>>> I remember talking with Dave back at the early Internet meetings, and >>>> his fervor to try things out, and his disappointment at the lack of >>>> the core gateway's ability to measure much of anything. In >>>> particular, it was difficult to measure how long things took in the >>>> Internet, since the gateways didn't even have real-time clocks. This >>>> caused a lot of concern about protocol elements such as Time-To-Live, >>>> which were temporarily to be implemented purely as "hop counts", >>>> pending the introduction of some mechanism for measuring Time into >>>> the gateways. (AFAIK, we're still waiting....) >>>> >>>> Curiously, in the pre-Internet days of the ARPANET, the ARPANET IMPs >>>> did have a pretty good mechanism for measuring time, at least between >>>> pairs of IMPs at either end of a communications circuit, because such >>>> circuits ran at specific speeds. So one IMP could tell how long it >>>> was taking to communicate with one of its neighbors, and used such >>>> data to drive the ARPANET internal routing mechanisms. 
>>>> >>>> In the Internet, gateways couldn't tell how long it took to send a >>>> datagram over one of its attached networks. The networks of the day >>>> simply didn't make such information available to its "users" (e.g., a >>>> gateway). >>>> >>>> But experiments require data, and labs require instruments to collect >>>> that data, and Dave wanted to test out lots of ideas, and we (BBN) >>>> couldn't offer any hope of such instrumentation in the core gateways >>>> any time soon. >>>> >>>> So Dave built it. >>>> >>>> And that's how NTP got started. IIRC, the rest of us were all just >>>> trying to get the Internet to work at all. Dave was interested in >>>> understanding how and why it worked. So while he built NTP, that >>>> didn't really affect any other projects. Plus most (at least me) >>>> didn't understand how it was possible to get such accurate >>>> synchronization when the delays through the Internet mesh were so >>>> large and variable. (I still don't). But Dave thought it was >>>> possible, and that's why your computer, phone, laptop, or whatever >>>> know what time it is today. >>>> >>>> Dave was responsible for another long-lived element of the >>>> Internet. Dave's experiments were sometimes disruptive to the >>>> "core" Internet that we were tasked to make a reliable 24x7 service. >>>> Where Dave The Scientist would say "I wonder what happens when I do >>>> this..." We The Engineers would say "Don't do that!" >>>> >>>> That was the original motivation for creating the notion of >>>> "Autonomous Systems" and EGP - a way to insulate the "core" of the >>>> Internet from the antics of the Fuzzballs. I corralled Eric Rosen >>>> after one such Fuzzball-triggered incident and we sat down and >>>> created ASes, so that we could keep "our" AS running reliably. 
It >>>> was intended as an interim mechanism until all the experimentation >>>> revealed what should be the best algorithms and protocol features to >>>> put in the next generation, and the Internet Experiment advanced into >>>> a production network service.   We defined ASes and EGP to protect >>>> the Internet from Dave's Fuzzball mania. >>>> >>>> AFAIK, that hasn't happened yet ... and from that article, Dave is >>>> still Experimenting..... and The Internet is still an Experiment. >>>> >>>> Fun times, >>>> Jack Haverty >>>> > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From brian.e.carpenter at gmail.com  Sun Oct  2 14:16:29 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 3 Oct 2022 10:16:29 +1300 Subject: [ih] nice story about dave mills and NTP In-Reply-To: <9952DC83-62AD-452E-9077-6E2A33E2F501@comcast.net> References: <8f98527b-f650-c12f-c038-87d2d10a9bfd@gmail.com> <5f7caf24-95ab-6ba7-b3ca-6d7a7b2b8e36@gmail.com> <9952DC83-62AD-452E-9077-6E2A33E2F501@comcast.net> Message-ID: <15bcabd3-58bd-9603-6dc6-64f9673fd92e@gmail.com> On 03-Oct-22 09:35, John Day wrote: > I thought Jack was pretty much correct. It was not just that it was hard to get a good measurement, but that it was a lot of overhead to do it. > > At the time, the time to relay was more significant than propagation time, so hop-count was a reasonable substitute. That has changed and now propagation time is dominant. I thought IPv6 would go back to making TTL a time. Especially since propagation time can be locally measured. But given their track record, that was naive. > > Also, the reason for TTL was not latency, but that we had packets looping for hours, sometimes days. Note that TTL doesn't fix that; it merely makes it go away. (IEEE actually fixed it.)
Also, we have known since the late 70s that the necessary and sufficient condition for synchronization for reliable data transfer is an upper bound on maximum packet lifetime and two other times. TTL provides that. > > And since the 1970s, it was obvious that queuing theory is next to useless. > > It basically can only do Poisson and only solve for the steady-state. Traffic is bursty, not Poisson (or self-similar) and it is transients that are interesting. Yes, the distributions look very different at different points in the network. You see both Poisson distributions and self-similarity in some places, but certainly not everywhere. > Traffic is bursty. That was the whole idea with packet switching and datagrams. I have been waiting for someone to develop the tools to get at what is really going on. Of course there's burstiness at the edges, which is why we have buffer bloat. I once read Volume 2 of Kleinrock. I can't say I understood it. However, all queues are ultimately G/G/n queues, and what that really tells us is that the theory is too complex to be much use. Nevertheless it governs what we see in the network. Brian > > Take care, > John > >> On Oct 2, 2022, at 15:50, Brian E Carpenter via Internet-history wrote: >> >> Jack, >> On 03-Oct-22 06:55, Jack Haverty via Internet-history wrote: >>> The short answer is "Yes".  The Time-To-Live field was intended to count >>> down actual transit time as a datagram proceeded through the Internet. >>> A datagram was to be discarded as soon as some algorithm determined it >>> wasn't going to get to its destination before its TTL ran to zero.   But >>> we didn't have the means to measure time, so hop-counts were the >>> placeholder. >>> I wasn't involved in the IPV6 work, but I suspect the change of the >>> field to "hop count" reflected the reality of what the field actually >>> was.  But it would have been better to have actually made Time work. >> >> To be blunt, why?
>> >> There was no promise of guaranteed latency in those days, was there? >> As soon as queueing theory entered the game, that wasn't an option. >> So it wasn't just the absence of precise time, it was the presence of >> random delays that made a hop count the right answer, not just the >> convenient answer. >> >> I think that's why IPv6 never even considered anything but a hop count. >> The same lies behind the original TOS bits and their rebranding as >> the Differentiated Services Code Point many years later. My motto >> during the diffserv debates was "You can't beat queueing theory." >> >> There are people in the IETF working hard on Detnet ("deterministic >> networking") today. Maybe they have worked out how to beat queueing >> theory, but I doubt it. What I learned from working on real-time >> control systems is that you can't guarantee timing outside a very >> limited and tightly managed set of resources, where unbounded >> queues cannot occur. >> >> Brian >> >>> Much of these "original ideas" probably weren't ever written down in >>> persistent media. Most discussions in the 1980 time frame were done >>> either in person or more extensively in email. Disk space was scarce >>> and expensive, so much of such email was probably never archived - >>> especially email not on the more "formal" mailing lists of the day. >>> As I recall, Time was considered very important, for a number of >>> reasons. So here's what I remember... >>> ----- >>> Like every project using computers, the Internet was constrained by too >>> little memory, too slow processors, and too limited bandwidth. A >>> typical, and expensive, system might have a few dozen kilobytes of >>> memory, a processor running at perhaps 1 MHz, and "high speed" >>> communications circuits carrying 56 kilobits per second. So there was >>> strong incentive not to waste resources. 
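Brian's "you can't beat queueing theory" motto can be illustrated with the simplest model there is. In an M/M/1 queue the mean time in system is 1/(mu - lambda), which grows without bound as load approaches capacity - one reason no header field can promise a latency. (A sketch only; as John notes, real traffic is bursty rather than Poisson, which makes delays harder to bound, not easier.)

```python
def mm1_mean_sojourn(arrival_rate: float, service_rate: float) -> float:
    """Mean time in an M/M/1 queue (waiting + service): 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        return float("inf")  # unstable: the queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

# Service rate 100 packets/s; watch delay explode as utilization nears 1:
for utilization in (0.5, 0.9, 0.99):
    print(utilization, mm1_mean_sojourn(utilization * 100, 100))
```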
>>> At the time, the ARPANET had been running for about ten years, and quite >>> a lot of experience had been gained through its operation and crises. >>> Over that time, a lot of mechanisms had been put in place, internally in >>> the IMP algorithms and hardware, to "protect" the network and keep it >>> running despite what the user computers tried to do.  So, for example, >>> an IMP could regulate the flow of traffic from any of its "host" >>> computers, and even shut it off completely if needed. (Google "ARPANET >>> RFNM counting" if curious). >>> In the Internet, the gateways had no such mechanisms available. We were >>> especially concerned about the "impedance mismatch" that would occur at >>> a gateway connecting a LAN to a much slower and "skinnier" long-haul >>> network.  All of the "flow control" mechanisms that were implemented >>> inside an ARPANET IMP would be instead implemented inside TCP software >>> in users' host computers. >>> We didn't know how that would work.  But something had to be in the >>> code.... So the principle was that IP datagrams could be simply >>> discarded when necessary, wherever necessary, and TCP would retransmit >>> them so they would eventually get delivered. >>> We envisioned that approach could easily lead to "runaway" scenarios, >>> with the Internet full of duplicate datagrams being dropped at any >>> "impedance mismatch" point along the way.   In fact, we saw exactly that >>> at a gateway between ARPANET and SATNET - IIRC in one of Dave's >>> transatlantic experiments ("Don't do that!!!") >>> So, Source Quench was invented, as a way of telling some host to "slow >>> down", and the gateways sent an SQ back to the source of any datagram they >>> had to drop.  Many of us didn't think that would work very well (e.g., a >>> host might send one datagram and get back an SQ - what should it do to >>> "slow down"...?).  I recall that Dave knew exactly what to do.
Since >>> his machine's datagram had been dropped, it meant he should immediately >>> retransmit it.  Another "Don't do that!" moment.... >>> But SQ was a placeholder too -- to be replaced by some "real" flow >>> control mechanism as soon as the experimentation revealed what that >>> should be. >>> ----- >>> TCP retransmissions were based on Time.  If a TCP didn't receive a >>> timely acknowledgement that data had been received, it could assume that >>> someone along the way had dropped the datagram and it should retransmit >>> it.  SQ datagrams were also of course not guaranteed to get to their >>> destination, so you couldn't count on them as a signal to retransmit. >>> So Time was the only answer. >>> But how to set the Timer in your TCP - that was subject to >>> experimentation, with lots of ideas.  If you sent a copy of your data >>> too soon, it would just overload everything along the path through the >>> Internet with superfluous data consuming those scarce resources. If you >>> waited too long, your end-users would complain that the Internet was too >>> slow.  So the answer was to have each TCP estimate how long it was >>> taking for a datagram to get to its destination, and set its own >>> "retransmission timer" to slightly longer than that value. >>> Of course, such a technique requires instrumentation and data. Also, >>> since the delays might depend on the direction of a datagram's travel, >>> you needed synchronized clocks at the two endpoints of a TCP connection, >>> so they could accurately measure one-way transit times. >>> Meanwhile, inside the gateways, there were ideas about how to do even >>> better by using Time.  For example, if the routing protocols were >>> actually based on Time (shortest transit time) rather than Hops (number >>> of gateways between here and destination), the Internet would provide >>> better user performance and be more efficient.
Even better - if a >>> gateway could "know" that a particular datagram wouldn't get to its >>> destination before its TTL ran out, it could discard that datagram >>> immediately, even though it still had time to live. No point in wasting >>> network resources carrying a datagram already sentenced to death. >>> We couldn't do all that.  Didn't have the hardware, didn't have the >>> algorithms, didn't have the protocols.  So in the meantime, any computer >>> handling an IP datagram should simply decrement the TTL value, and if it >>> reached zero the datagram should be discarded.  TTL effectively became a >>> "hop count". >>> When Dave got NTP running, and enough Time Servers were online and >>> reliable, and the gateways and hosts had the needed hardware, Time could >>> be measured, TTL could be set based on Time, and the Internet would be >>> better. >>> In the meanwhile, all of us TCP implementers just picked some value for >>> our retransmission timers.  I think I set mine to 3 seconds.  No >>> exhaustive analysis or sophisticated mathematics involved.  It just felt >>> right.....there was a lot of that going on in the early Internet. >>> ----- >>> While all the TCP work was going on, other uses were emerging.  We knew >>> that there was more to networking than just logging in to distant >>> computers or transferring files between them - uses that had been common >>> for years in the ARPANET.   But the next "killer app" hadn't appeared >>> yet, although there were lots of people trying to create one. >>> In particular, "Packet Voice" was popular, with a contingent of >>> researchers figuring out how to do that on the fledgling Internet. There >>> were visions that someday it might even be possible to do Video. In >>> particular, *interactive* voice was the goal, i.e., the ability to have >>> a conversation by voice over the Internet (I don't recall when the term >>> VOIP emerged, probably much later).
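The "slightly longer than the measured transit time" idea eventually became the smoothed round-trip-time estimator in RFC 793 - a modest step up from a fixed 3-second guess. A sketch in that style (the smoothing constants are within RFC 793's suggested ranges; the RTT samples are invented):

```python
def update_rto(srtt: float, sample_rtt: float,
               alpha: float = 0.875, beta: float = 2.0,
               lbound: float = 1.0, ubound: float = 60.0):
    """RFC 793-style retransmission timeout: exponentially smooth the RTT
    samples, then back off by beta, clamped to [lbound, ubound] seconds."""
    srtt = alpha * srtt + (1.0 - alpha) * sample_rtt
    rto = min(ubound, max(lbound, beta * srtt))
    return srtt, rto

srtt = 3.0  # start from the "felt right" three-second guess
for sample in (0.8, 0.7, 0.9, 0.75):
    srtt, rto = update_rto(srtt, sample)
print(round(srtt, 2), round(rto, 2))  # -> 2.08 4.17
```

Jacobson's 1988 refinement added a variance term to the estimate, since smoothing the mean alone under-predicts timeouts on paths with highly variable delay - exactly the paths described here.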
>>> In a resource-constrained network, you don't want to waste resources on >>> datagrams that aren't useful. In conversational voice, a datagram that >>> arrives too late isn't useful. A fragment of audio that should have >>> gone to the speaker 500 milliseconds ago can only be discarded. It >>> would be better that it hadn't been sent at all, but at least discarding >>> it along the way, as soon as it's known to be too late to arrive, would >>> be appropriate. >>> Of course, that needs Time. UDP was created as an adjunct to TCP, >>> providing a different kind of network service. Where TCP got all of >>> the data to its destination, no matter how long it took, UDP would get >>> as much data as possible to the destination, as long as it got there in >>> time to be useful. Time was important. >>> UDP implementations, in host computers, didn't have to worry about >>> retransmissions. But they did still have to worry about how long it >>> would take for a datagram to get to its destination. With that >>> knowledge, they could set their datagrams' TTL values to something >>> appropriate for the network conditions at the time. Perhaps they might >>> even tell their human users "Sorry, conversational use not available >>> right now." -- an Internet equivalent of the "busy signal" - if the >>> current network transit times were too high to provide a good user >>> experience. >>> Within the world of gateways, the differing needs of TCP and UDP >>> motivated different behaviors. That motivated the inclusion of the TOS >>> - Type Of Service - field in the IP datagram header. Perhaps UDP >>> packets would receive higher priority, being placed at the head of >>> queues so they got transmitted sooner. Perhaps they would be discarded >>> immediately if the gateway knew, based on its routing mechanisms, that >>> the datagram would never get delivered in time. 
Perhaps UDP would be >>> routed differently, using a terrestrial but low-bandwidth network, while >>> TCP traffic was directed over a high-bandwidth but long-delay satellite >>> path. A gateway mesh might have two or more independent routing >>> mechanisms, each using a "shortest path" approach, but with different >>> metrics for determining "short" - e.g., UDP using the shortest time >>> route, while some TCP traffic travelled a route with least ("shortest") >>> usage at the time. >>> We couldn't do all that either. We needed Time, hardware, algorithms, >>> protocols, etc. But the placeholders were there, in the TCP, IP, and >>> UDP formats, ready for experimentation to figure all that stuff out. >>> ----- >>> When Time was implemented, there could be much needed experimentation to >>> figure out the right answers. Meanwhile, we had to keep the Internet >>> working. By the early 1980s, the ARPANET had been in operation for more >>> than a decade, and lots of operational experience had accrued. We knew, >>> for example, that things could "go wrong" and generate a crisis for the >>> network operators to quickly fix. TTL, even as just a hop count, was >>> one mechanism to suppress problems. We knew that "routing loops" could >>> occur. TTL would at least prevent situations where datagrams >>> circulated forever, orbiting inside the Internet until someone >>> discovered and fixed whatever was causing a routing loop to keep those >>> datagrams speeding around. >>> Since the Internet was an Experiment, there were mechanisms put in place >>> to help run experiments. IIRC, in general things were put in the IP >>> headers when we thought they were important and would be needed long >>> after the experimental phase was over - things like TTL, SQ, TOS. >>> Essentially every field in the IP header, and every type of datagram, >>> was there for some good reason, even though its initial implementation >>> was known to be inadequate. The Internet was built on Placeholders.... 
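Those placeholders are still sitting in every IPv4 header at the offsets RFC 791 gave them: TOS in byte 1, TTL in byte 8. A minimal sketch of reading them back out of a raw header (the header bytes are hand-built for the example, with addresses and checksum zeroed):

```python
import struct

def parse_ipv4_fields(header: bytes) -> dict:
    """Pull the 'placeholder' fields out of a 20-byte IPv4 header (RFC 791 layout)."""
    version_ihl, tos = struct.unpack_from("!BB", header, 0)
    ttl, proto = struct.unpack_from("!BB", header, 8)
    return {"version": version_ihl >> 4, "tos": tos, "ttl": ttl, "proto": proto}

# version 4, IHL 5, TOS 0x10, length 20, id/flags 0, TTL 64, protocol 17 (UDP):
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0x10, 20, 0, 0, 64, 17, 0, b"\0" * 4, b"\0" * 4)
print(parse_ipv4_fields(hdr))  # -> {'version': 4, 'tos': 16, 'ttl': 64, 'proto': 17}
```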
>>> Other mechanisms were put into the "Options" mechanism of the IP >>> format.  A lot of that was targeted towards supporting experiments, or >>> as occasional tools to be used to debug problems in crises during >>> Internet operations.   >>> E.g., all of the "Source Routing" mechanisms might be used to route >>> traffic in particular paths that the current gateways wouldn't otherwise >>> use.  An example would be routing voice traffic over specific paths, >>> which the normal gateway routing wouldn't use.  The Voice experimenters >>> could use those mechanisms to try out their ideas in a controlled >>> experiment. >>> Similarly, Source Routing might be used to debug network problems.  A >>> network analyst might use Source Routing to probe a particular remote >>> computer interface, where the regular gateway mechanisms would avoid >>> that path. >>> So a general rule was that IP headers contained important mechanisms, >>> often just as placeholders, while Options contained things useful only >>> in particular circumstances. >>> But all of these "original ideas" needed Time.  We knew Dave was "on >>> it".... >>> ----- >>> Hopefully this helps...  I (and many others) probably should have >>> written these "original ideas" down 40 years ago.  We did, but I >>> suspect all in the form of emails which have now been lost.  Sorry >>> about that.  There was always so much code to write.  And we didn't >>> have the answers yet to motivate creating RFCs which were viewed as more >>> permanent repositories of the solved problems. >>> Sorry about that..... >>> Jack Haverty >>> On 10/2/22 07:45, Alejandro Acosta via Internet-history wrote: >>>> Hello Jack, >>>> >>>> Thanks a lot for sharing this, as usual, I enjoy these kinds of >>>> stories :-) >>>> >>>> Jack/group, just a question regarding this topic.
When you mentioned: >>>> >>>> "This caused a lot of concern about protocol elements such as >>>> Time-To-Live, which were temporarily to be implemented purely as "hop >>>> counts" >>>> >>>> >>>> Do you mean, the original idea was to really drop the packet at a >>>> certain time, a *real* Time-To-Live concept? >>>> >>>> >>>> Thanks, >>>> >>>> P.S. That's why it was important to change the field's name to hop >>>> count in v6 :-) >>>> >>>> >>>> >>>> On 2/10/22 12:35 AM, Jack Haverty via Internet-history wrote: >>>>> On 10/1/22 16:30, vinton cerf via Internet-history wrote: >>>>>> in the New Yorker >>>>>> >>>>>> https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time >>>>>> >>>>>> >>>>>> v >>>>> >>>>> Agree, nice story.  Dave did a *lot* of good work.  Reading the >>>>> article reminded me of the genesis of NTP. >>>>> >>>>> IIRC.... >>>>> >>>>> Back in the early days circa 1980, Dave was the unabashed tinkerer, >>>>> experimenter, and scientist.  Like all good scientists, he wanted to >>>>> run experiments to explore what the newfangled Internet was doing and >>>>> test his theories.   To do that required measurements and data. >>>>> >>>>> At the time, BBN was responsible for the "core gateways" that >>>>> provided most of the long-haul Internet connectivity, e.g., between >>>>> US west and east coasts and Europe.  There were lots of ideas about >>>>> how to do things - e.g., strategies for TCP retransmissions, >>>>> techniques for maintaining dynamic tables of routing information, >>>>> algorithms for dealing with limited bandwidth and memory, and other >>>>> such stuff that was all intentionally very loosely defined within the >>>>> protocols.   The Internet was an Experiment. >>>>> >>>>> I remember talking with Dave back at the early Internet meetings, and >>>>> his fervor to try things out, and his disappointment at the lack of >>>>> the core gateway's ability to measure much of anything.
In >>>>> particular, it was difficult to measure how long things took in the >>>>> Internet, since the gateways didn't even have real-time clocks. This >>>>> caused a lot of concern about protocol elements such as Time-To-Live, >>>>> which were temporarily to be implemented purely as "hop counts", >>>>> pending the introduction of some mechanism for measuring Time into >>>>> the gateways. (AFAIK, we're still waiting....) >>>>> >>>>> Curiously, in the pre-Internet days of the ARPANET, the ARPANET IMPs >>>>> did have a pretty good mechanism for measuring time, at least between >>>>> pairs of IMPs at either end of a communications circuit, because such >>>>> circuits ran at specific speeds. So one IMP could tell how long it >>>>> was taking to communicate with one of its neighbors, and used such >>>>> data to drive the ARPANET internal routing mechanisms. >>>>> >>>>> In the Internet, gateways couldn't tell how long it took to send a >>>>> datagram over one of its attached networks. The networks of the day >>>>> simply didn't make such information available to its "users" (e.g., a >>>>> gateway). >>>>> >>>>> But experiments require data, and labs require instruments to collect >>>>> that data, and Dave wanted to test out lots of ideas, and we (BBN) >>>>> couldn't offer any hope of such instrumentation in the core gateways >>>>> any time soon. >>>>> >>>>> So Dave built it. >>>>> >>>>> And that's how NTP got started. IIRC, the rest of us were all just >>>>> trying to get the Internet to work at all. Dave was interested in >>>>> understanding how and why it worked. So while he built NTP, that >>>>> didn't really affect any other projects. Plus most (at least me) >>>>> didn't understand how it was possible to get such accurate >>>>> synchronization when the delays through the Internet mesh were so >>>>> large and variable. (I still don't). But Dave thought it was >>>>> possible, and that's why your computer, phone, laptop, or whatever >>>>> know what time it is today. 
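The synchronization trick that seemed impossible rests on one observation: from the four timestamps of a single request/response exchange, a client can estimate both its clock offset and the round-trip delay, and the offset estimate is exact whenever the two legs of the path have equal delay. The core arithmetic, per the NTP specification (the timestamp values here are invented):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """One NTP exchange: t1 = client send, t2 = server receive,
    t3 = server send, t4 = client receive (t1, t4 on the client's clock;
    t2, t3 on the server's). Returns (clock offset, round-trip delay)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Server clock 0.5 s ahead, 80 ms each way on the path, 10 ms server turnaround:
off, dly = ntp_offset_delay(t1=100.00, t2=100.58, t3=100.59, t4=100.17)
print(round(off, 6), round(dly, 6))  # -> 0.5 0.16
```

Asymmetric paths bias the offset by half the asymmetry; filtering many such samples, preferring low-delay exchanges, is much of what Dave spent decades refining.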
>>>>> >>>>> Dave was responsible for another long-lived element of the >>>>> Internet. Dave's experiments were sometimes disruptive to the >>>>> "core" Internet that we were tasked to make a reliable 24x7 service. >>>>> Where Dave The Scientist would say "I wonder what happens when I do >>>>> this..." We The Engineers would say "Don't do that!" >>>>> >>>>> That was the original motivation for creating the notion of >>>>> "Autonomous Systems" and EGP - a way to insulate the "core" of the >>>>> Internet from the antics of the Fuzzballs. I corralled Eric Rosen >>>>> after one such Fuzzball-triggered incident and we sat down and >>>>> created ASes, so that we could keep "our" AS running reliably. It >>>>> was intended as an interim mechanism until all the experimentation >>>>> revealed what should be the best algorithms and protocol features to >>>>> put in the next generation, and the Internet Experiment advanced into >>>>> a production network service. We defined ASes and EGP to protect >>>>> the Internet from Dave's Fuzzball mania. >>>>> >>>>> AFAIK, that hasn't happened yet ... and from that article, Dave is >>>>> still Experimenting..... and The Internet is still an Experiment. >>>>> >>>>> Fun times, >>>>> Jack Haverty >>>>> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From louie at transsys.com Sun Oct 2 20:44:07 2022 From: louie at transsys.com (Louis Mamakos) Date: Sun, 02 Oct 2022 23:44:07 -0400 Subject: [ih] nice story about dave mills and NTP In-Reply-To: <5f7caf24-95ab-6ba7-b3ca-6d7a7b2b8e36@gmail.com> References: <8f98527b-f650-c12f-c038-87d2d10a9bfd@gmail.com> <5f7caf24-95ab-6ba7-b3ca-6d7a7b2b8e36@gmail.com> Message-ID: <7D395EAB-48C4-4B89-B36C-DBB5CC6C4C7E@transsys.com> On 2 Oct 2022, at 15:50, Brian E Carpenter via Internet-history wrote: > I think that's why IPv6 never even considered anything but a hop count. 
> The same lies behind the original TOS bits and their rebranding as > the Differentiated Services Code Point many years later. My motto > during the diffserv debates was "You can't beat queueing theory." The IPv4 TTL being a hop count is what enabled one of the most essential and effective debugging tools for the public Internet: traceroute.  Sure, and eventually killing off looping packets from those routing loops that "don't ever happen" - the ones we'd use traceroute to discover. What's interesting these days is that some network elements actually do have very precise means to measure how long a packet is queued between the ingress and egress interface; it's there to support PTP. Which sort of brings this back around to Mills and his NTP. I think the article got it a little wrong; the genesis of NTP was the HELLO routing protocol the fuzzball used.  It was a distance vector routing protocol that did minimum delay routing to the destination.  Eventually, it got pulled out into NTP.  Other fun fact: the first NTP RFC was also the first one published in PostScript, so you could more fully enjoy the mathematics. I ended up implementing HELLO for an IP stack I wrote for a UNIVAC 1108, coincidentally as a class project for a "Special Topics in Networking" course Mills taught at the University of Maryland in the early 1980's while he was still at Linkabit, before going to UDEL.  That was the start of a small Fuzzball infestation at UMD for some years, eventually including a stratum-1 NTP clock. I'm sure that I was one of many whose careers were directly influenced by Dave, and I have really fond memories of a couple of classes he taught, and later work with him.
Louis Mamakos From bob.hinden at gmail.com Mon Oct 3 08:32:48 2022 From: bob.hinden at gmail.com (Bob Hinden) Date: Mon, 3 Oct 2022 08:32:48 -0700 Subject: [ih] nice story about dave mills and NTP In-Reply-To: References: <8f98527b-f650-c12f-c038-87d2d10a9bfd@gmail.com> Message-ID: Jack, > On Oct 2, 2022, at 10:55 AM, Jack Haverty via Internet-history wrote: > > The short answer is "Yes". The Time-To-Live field was intended to count down actual transit time as a datagram proceeded through the Internet. A datagram was to be discarded as soon as some algorithm determined it wasn't going to get to its destination before its TTL ran to zero. But we didn't have the means to measure time, so hop-counts were the placeholder. > > I wasn't involved in the IPV6 work, but I suspect the change of the field to "hop count" reflected the reality of what the field actually was. But it would have been better to have actually made Time work. That is correct, the usage of the TTL field in IPv4 was a hop count, so we called the field in IPv6 Hop Limit to better describe its function. From RFC8200: Hop Limit 8-bit unsigned integer. Decremented by 1 by each node that forwards the packet. When forwarding, the packet is discarded if Hop Limit was zero when received or is decremented to zero. A node that is the destination of a packet should not discard a packet with Hop Limit equal to zero; it should process the packet normally. Like Brian, I am not sure that making a time field work would be worthwhile. It would require a lot of complexity at the IP layer and I assume would require synchronized time stamps in all of the packets. Bob > > Much of these "original ideas" probably weren't ever written down in persistent media. Most discussions in the 1980 time frame were done either in person or more extensively in email. Disk space was scarce and expensive, so much of such email was probably never archived - especially email not on the more "formal" mailing lists of the day. 
> > As I recall, Time was considered very important, for a number of reasons. So here's what I remember... > ----- > > Like every project using computers, the Internet was constrained by too little memory, too slow processors, and too limited bandwidth. A typical, and expensive, system might have a few dozen kilobytes of memory, a processor running at perhaps 1 MHz, and "high speed" communications circuits carrying 56 kilobits per second. So there was strong incentive not to waste resources. > > At the time, the ARPANET had been running for about ten years, and quite a lot of experience had been gained through its operation and crises. Over that time, a lot of mechanisms had been put in place, internally in the IMP algorithms and hardware, to "protect" the network and keep it running despite what the user computers tried to do. So, for example, an IMP could regulate the flow of traffic from any of its "host" computers, and even shut it off completely if needed. (Google "ARPANET RFNM counting" if curious). > > In the Internet, the gateways had no such mechanisms available. We were especially concerned about the "impedance mismatch" that would occur at a gateway connecting a LAN to a much slower and "skinnier" long-haul network. All of the "flow control" mechanisms that were implemented inside an ARPANET IMP would instead be implemented inside TCP software in users' host computers. > > We didn't know how that would work. But something had to be in the code.... So the principle was that IP datagrams could be simply discarded when necessary, wherever necessary, and TCP would retransmit them so they would eventually get delivered. > > We envisioned that approach could easily lead to "runaway" scenarios, with the Internet full of duplicate datagrams being dropped at any "impedance mismatch" point along the way. 
In fact, we saw exactly that at a gateway between ARPANET and SATNET - IIRC in one of Dave's transatlantic experiments ("Don't do that!!!") > > So, Source Quench was invented, as a way of telling some host to "slow down", and the gateways sent an SQ back to the source of any datagram it had to drop. Many of us didn't think that would work very well (e.g., a host might send one datagram and get back an SQ - what should it do to "slow down"...?). I recall that Dave knew exactly what to do. Since his machine's datagram had been dropped, it meant he should immediately retransmit it. Another "Don't do that!" moment.... > > But SQ was a placeholder too -- to be replaced by some "real" flow control mechanism as soon as the experimentation revealed what that should be. > > ----- > > TCP retransmissions were based on Time. If a TCP didn't receive a timely acknowledgement that data had been received, it could assume that someone along the way had dropped the datagram and it should retransmit it. SQ datagrams were also of course not guaranteed to get to their destination, so you couldn't count on them as a signal to retransmit. So Time was the only answer. > > But how to set the Timer in your TCP - that was subject to experimentation, with lots of ideas. If you sent a copy of your data too soon, it would just overload everything along the path through the Internet with superfluous data consuming those scarce resources. If you waited too long, your end-users would complain that the Internet was too slow. So the answer was to have each TCP estimate how long it was taking for a datagram to get to its destination, and set its own "retransmission timer" to slightly longer than that value. > > Of course, such a technique requires instrumentation and data. Also, since the delays might depend on the direction of a datagram's travel, you needed synchronized clocks at the two endpoints of a TCP connection, so they could accurately measure one-way transit times. 
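The estimate-and-pad scheme Jack describes here was eventually written into the TCP specification as the smoothed round-trip-time estimator. A minimal sketch, using the exponential average and the ALPHA/BETA constants from RFC 793 - not necessarily what any circa-1980 implementation actually did; the RTT samples and starting guess below are invented for illustration:

```python
# Sketch of "estimate the RTT, set the retransmission timer slightly
# longer", per the exponential average later codified in RFC 793.
# Constants and samples are illustrative; early TCPs often just
# hard-coded the timer (e.g. Jack's 3 seconds).

ALPHA = 0.9   # weight kept on the old estimate (RFC 793 suggests 0.8-0.9)
BETA = 2.0    # padding factor above the smoothed RTT (RFC 793: 1.3-2.0)

def update_srtt(srtt, measured_rtt):
    """Fold one new round-trip measurement into the smoothed estimate."""
    return ALPHA * srtt + (1 - ALPHA) * measured_rtt

def retransmission_timeout(srtt):
    """Set the timer 'slightly longer' than the estimated RTT."""
    return BETA * srtt

srtt = 3.0                       # initial guess, a la Jack's 3 seconds
for sample in [1.0, 1.2, 0.9]:   # measured RTTs, in seconds
    srtt = update_srtt(srtt, sample)
print(round(retransmission_timeout(srtt), 3))   # prints 4.932
```

Picking a fixed 3 seconds, as Jack did, amounts to skipping the update step entirely and living with the initial guess.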
> > Meanwhile, inside the gateways, there were ideas about how to do even better by using Time. For example, if the routing protocols were actually based on Time (shortest transit time) rather than Hops (number of gateways between here and destination), the Internet would provide better user performance and be more efficient. Even better - if a gateway could "know" that a particular datagram wouldn't get to its destination before its TTL ran out, it could discard that datagram immediately, even though it still had time to live. No point in wasting network resources carrying a datagram already sentenced to death. > > We couldn't do all that. Didn't have the hardware, didn't have the algorithms, didn't have the protocols. So in the meantime, any computer handling an IP datagram should simply decrement the TTL value, and if it reached zero the datagram should be discarded. TTL effectively became a "hop count". > > When Dave got NTP running, and enough Time Servers were online and reliable, and the gateways and hosts had the needed hardware, Time could be measured, TTL could be set based on Time, and the Internet would be better. > > In the meanwhile, all of us TCP implementers just picked some value for our retransmission timers. I think I set mine to 3 seconds. No exhaustive analysis or sophisticated mathematics involved. It just felt right.....there was a lot of that going on in the early Internet. > > ----- > > While all the TCP work was going on, other uses were emerging. We knew that there was more to networking than just logging in to distant computers or transferring files between them - uses that had been common for years in the ARPANET. But the next "killer app" hadn't appeared yet, although there were lots of people trying to create one. > > In particular, "Packet Voice" was popular, with a contingent of researchers figuring out how to do that on the fledgling Internet. There were visions that someday it might even be possible to do Video. 
In particular, *interactive* voice was the goal, i.e., the ability to have a conversation by voice over the Internet (I don't recall when the term VOIP emerged, probably much later). > > In a resource-constrained network, you don't want to waste resources on datagrams that aren't useful. In conversational voice, a datagram that arrives too late isn't useful. A fragment of audio that should have gone to the speaker 500 milliseconds ago can only be discarded. It would be better that it hadn't been sent at all, but at least discarding it along the way, as soon as it's known to be too late to arrive, would be appropriate. > > Of course, that needs Time. UDP was created as an adjunct to TCP, providing a different kind of network service. Where TCP got all of the data to its destination, no matter how long it took, UDP would get as much data as possible to the destination, as long as it got there in time to be useful. Time was important. > > UDP implementations, in host computers, didn't have to worry about retransmissions. But they did still have to worry about how long it would take for a datagram to get to its destination. With that knowledge, they could set their datagrams' TTL values to something appropriate for the network conditions at the time. Perhaps they might even tell their human users "Sorry, conversational use not available right now." -- an Internet equivalent of the "busy signal" - if the current network transit times were too high to provide a good user experience. > > Within the world of gateways, the differing needs of TCP and UDP motivated different behaviors. That motivated the inclusion of the TOS - Type Of Service - field in the IP datagram header. Perhaps UDP packets would receive higher priority, being placed at the head of queues so they got transmitted sooner. Perhaps they would be discarded immediately if the gateway knew, based on its routing mechanisms, that the datagram would never get delivered in time. 
Perhaps UDP would be routed differently, using a terrestrial but low-bandwidth network, while TCP traffic was directed over a high-bandwidth but long-delay satellite path. A gateway mesh might have two or more independent routing mechanisms, each using a "shortest path" approach, but with different metrics for determining "short" - e.g., UDP using the shortest time route, while some TCP traffic travelled a route with least ("shortest") usage at the time. > > We couldn't do all that either. We needed Time, hardware, algorithms, protocols, etc. But the placeholders were there, in the TCP, IP, and UDP formats, ready for experimentation to figure all that stuff out. > > ----- > > When Time was implemented, there could be much needed experimentation to figure out the right answers. Meanwhile, we had to keep the Internet working. By the early 1980s, the ARPANET had been in operation for more than a decade, and lots of operational experience had accrued. We knew, for example, that things could "go wrong" and generate a crisis for the network operators to quickly fix. TTL, even as just a hop count, was one mechanism to suppress problems. We knew that "routing loops" could occur. TTL would at least prevent situations where datagrams circulated forever, orbiting inside the Internet until someone discovered and fixed whatever was causing a routing loop to keep those datagrams speeding around. > > Since the Internet was an Experiment, there were mechanisms put in place to help run experiments. IIRC, in general things were put in the IP headers when we thought they were important and would be needed long after the experimental phase was over - things like TTL, SQ, TOS. > > Essentially every field in the IP header, and every type of datagram, was there for some good reason, even though its initial implementation was known to be inadequate. The Internet was built on Placeholders.... > > Other mechanisms were put into the "Options" mechanism of the IP format. 
A lot of that was targeted towards supporting experiments, or as occasional tools to be used to debug problems in crises during Internet operations. > > E.g., all of the "Source Routing" mechanisms might be used to route traffic in particular paths that the current gateways wouldn't otherwise use. An example would be routing voice traffic over specific paths, which the normal gateway routing wouldn't use. The Voice experimenters could use those mechanisms to try out their ideas in a controlled experiment. > > Similarly, Source Routing might be used to debug network problems. A network analyst might use Source Routing to probe a particular remote computer interface, where the regular gateway mechanisms would avoid that path. > > So a general rule was that IP headers contained important mechanisms, often just as placeholders, while Options contained things useful only in particular circumstances. > > But all of these "original ideas" needed Time. We knew Dave was "on it".... > > ----- > > Hopefully this helps... I (and many others) probably should have written these "original ideas" down 40 years ago. We did, but I suspect all in the form of emails which have now been lost. Sorry about that. There was always so much code to write. And we didn't have the answers yet to motivate creating RFCs which were viewed as more permanent repositories of the solved problems. > > Sorry about that..... > > Jack Haverty > > > > On 10/2/22 07:45, Alejandro Acosta via Internet-history wrote: >> Hello Jack, >> >> Thanks a lot for sharing this, as usual, I enjoy this kind of stories :-) >> >> Jack/group, just a question regarding this topic. When you mentioned: >> >> "This caused a lot of concern about protocol elements such as Time-To-Live, which were temporarily to be implemented purely as "hop counts" >> >> >> Do you mean, the original idea was to really drop the packet at certain time, a *real* Time-To-Live concept?. >> >> >> Thanks, >> >> P.S. 
That's why it was important to change the field's name to hop count in v6 :-) >> >> >> >> On 2/10/22 12:35 AM, Jack Haverty via Internet-history wrote: >>> On 10/1/22 16:30, vinton cerf via Internet-history wrote: >>>> in the New Yorker >>>> >>>> https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time >>>> >>>> v >>> >>> Agree, nice story. Dave did a *lot* of good work. Reading the article reminded me of the genesis of NTP. >>> >>> IIRC.... >>> >>> Back in the early days circa 1980, Dave was the unabashed tinkerer, experimenter, and scientist. Like all good scientists, he wanted to run experiments to explore what the newfangled Internet was doing and test his theories. To do that required measurements and data. >>> >>> At the time, BBN was responsible for the "core gateways" that provided most of the long-haul Internet connectivity, e.g., between US west and east coasts and Europe. There were lots of ideas about how to do things - e.g., strategies for TCP retransmissions, techniques for maintaining dynamic tables of routing information, algorithms for dealing with limited bandwidth and memory, and other such stuff that was all intentionally very loosely defined within the protocols. The Internet was an Experiment. >>> >>> I remember talking with Dave back at the early Internet meetings, and his fervor to try things out, and his disappointment at the lack of the core gateway's ability to measure much of anything. In particular, it was difficult to measure how long things took in the Internet, since the gateways didn't even have real-time clocks. This caused a lot of concern about protocol elements such as Time-To-Live, which were temporarily to be implemented purely as "hop counts", pending the introduction of some mechanism for measuring Time into the gateways. (AFAIK, we're still waiting....) 
>>> >>> Curiously, in the pre-Internet days of the ARPANET, the ARPANET IMPs did have a pretty good mechanism for measuring time, at least between pairs of IMPs at either end of a communications circuit, because such circuits ran at specific speeds. So one IMP could tell how long it was taking to communicate with one of its neighbors, and used such data to drive the ARPANET internal routing mechanisms. >>> >>> In the Internet, gateways couldn't tell how long it took to send a datagram over one of its attached networks. The networks of the day simply didn't make such information available to its "users" (e.g., a gateway). >>> >>> But experiments require data, and labs require instruments to collect that data, and Dave wanted to test out lots of ideas, and we (BBN) couldn't offer any hope of such instrumentation in the core gateways any time soon. >>> >>> So Dave built it. >>> >>> And that's how NTP got started. IIRC, the rest of us were all just trying to get the Internet to work at all. Dave was interested in understanding how and why it worked. So while he built NTP, that didn't really affect any other projects. Plus most (at least me) didn't understand how it was possible to get such accurate synchronization when the delays through the Internet mesh were so large and variable. (I still don't). But Dave thought it was possible, and that's why your computer, phone, laptop, or whatever know what time it is today. >>> >>> Dave was responsible for another long-lived element of the Internet. Dave's experiments were sometimes disruptive to the "core" Internet that we were tasked to make a reliable 24x7 service. Where Dave The Scientist would say "I wonder what happens when I do this..." We The Engineers would say "Don't do that!" >>> >>> That was the original motivation for creating the notion of "Autonomous Systems" and EGP - a way to insulate the "core" of the Internet from the antics of the Fuzzballs. 
I corralled Eric Rosen after one such Fuzzball-triggered incident and we sat down and created ASes, so that we could keep "our" AS running reliably. It was intended as an interim mechanism until all the experimentation revealed what should be the best algorithms and protocol features to put in the next generation, and the Internet Experiment advanced into a production network service. We defined ASes and EGP to protect the Internet from Dave's Fuzzball mania. >>> >>> AFAIK, that hasn't happened yet ... and from that article, Dave is still Experimenting..... and The Internet is still an Experiment. >>> >>> Fun times, >>> Jack Haverty >>> > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From alejandroacostaalamo at gmail.com Tue Oct 4 06:33:00 2022 From: alejandroacostaalamo at gmail.com (Alejandro Acosta) Date: Tue, 4 Oct 2022 09:33:00 -0400 Subject: [ih] nice story about dave mills and NTP In-Reply-To: References: <8f98527b-f650-c12f-c038-87d2d10a9bfd@gmail.com> Message-ID: <0ce4c369-5c1e-e564-1ade-10b7caa8dbc1@gmail.com> Thanks! On 2/10/22 1:55 PM, Jack Haverty via Internet-history wrote: > The short answer is "Yes".? The Time-To-Live field was intended to > count down actual transit time as a datagram proceeded through the > Internet.?? A datagram was to be discarded as soon as some algorithm > determined it wasn't going to get to its destination before its TTL > ran to zero.?? But we didn't have the means to measure time, so > hop-counts were the placeholder. > > I wasn't involved in the IPV6 work, but I suspect the change of the > field to "hop count" reflected the reality of what the field actually > was.?? But it would have been better to have actually made Time work. 
> > Much of these "original ideas" probably weren't ever written down in > persistent media.? Most discussions in the 1980 time frame were done > either in person or more extensively in email.?? Disk space was scarce > and expensive, so much of such email was probably never archived - > especially email not on the more "formal" mailing lists of the day. > > As I recall, Time was considered very important, for a number of > reasons.? So here's what I remember... > ----- > > Like every project using computers, the Internet was constrained by > too little memory, too slow processors, and too limited bandwidth. A > typical, and expensive, system might have a few dozen kilobytes of > memory, a processor running at perhaps 1 MHz, and "high speed" > communications circuits carrying 56 kilobits per second.?? So there > was strong incentive not to waste resources. > > At the time, the ARPANET had been running for about ten years, and > quite a lot of experience had been gained through its operation and > crises.? Over that time, a lot of mechanisms had been put in place, > internally in the IMP algorithms and hardware, to "protect" the > network and keep it running despite what the user computers tried to > do.? So, for example, an IMP could regulate the flow of traffic from > any of its "host" computers, and even shut it off completely if > needed.? (Google "ARPANET RFNM counting" if curious). > > In the Internet, the gateways had no such mechanisms available. We > were especially concerned about the "impedance mismatch" that would > occur at a gateway connecting a LAN to a much slower and "skinnier" > long-haul network.? All of the "flow control" mechanisms that were > implemented inside an ARPANET IMP would be instead implemented inside > TCP software in users' host computers. > > We didn't know how that would work.?? But something had to be in the > code....? 
So the principle was that IP datagrams could be simply > discarded when necessary, wherever necessary, and TCP would retransmit > them so they would eventually get delivered. > > We envisioned that approach could easily lead to "runaway" scenarios, > with the Internet full of duplicate datagrams being dropped at any > "impedance mismatch" point along the way.?? In fact, we saw exactly > that at a gateway between ARPANET and SATNET - IIRC in one of Dave's > transatlantic experiments ("Don't do that!!!") > > So, Source Quench was invented, as a way of telling some host to "slow > down", and the gateways sent an SQ back to the source of any datagram > it had to drop.? Many of us didn't think that would work very well > (e.g., a host might send one datagram and get back an SQ - what should > it do to "slow down"...?).?? I recall that Dave knew exactly what to > do.? Since his machine's datagram had been dropped, it meant he should > immediately retransmit it.?? Another "Don't do that!" moment.... > > But SQ was a placeholder too -- to be replaced by some "real" flow > control mechanism as soon as the experimentation revealed what that > should be. > > ----- > > TCP retransmissions were based on Time.? If a TCP didn't receive a > timely acknowledgement that data had been received, it could assume > that someone along the way had dropped the datagram and it should > retransmit it.? SQ datagrams were also of course not guaranteed to get > to their destination, so you couldn't count on them as a signal to > retransmit.? So Time was the only answer. > > But how to set the Timer in your TCP - that was subject to > experimentation, with lots of ideas.? If you sent a copy of your data > too soon, it would just overload everything along the path through the > Internet with superfluous data consuming those scarce resources.? If > you waited too long, your end-users would complain that the Internet > was too slow.?? 
So the answer was to have each TCP estimate how long > it was taking for a datagram to get to its destination, and set its > own "retransmission timer" to slightly longer than that value. > > Of course, such a technique requires instrumentation and data. Also, > since the delays might depend on the direction of a datagram's travel, > you needed synchronized clocks at the two endpoint of a TCP > connection, so they could accurately measure one-way transit times. > > Meanwhile, inside the gateways, there were ideas about how to do even > better by using Time.? For example, if the routing protocols were > actually based on Time (shortest transit time) rather than Hops > (number of gateways between here and destination), the Internet would > provide better user performance and be more efficient.? Even better - > if a gateway could "know" that a particular datagram wouldn't get to > its destination before it's TTL ran out, it could discard that > datagram immediately, even though it still had time to live.? No point > in wasting network resources carrying a datagram already sentenced to > death. > > We couldn't do all that.?? Didn't have the hardware, didn't have the > algorithms, didn't have the protocols.? So in the meantime, any > computer handling an IP datagram should simply decrement the TTL > value, and if it reached zero the datagram should be discarded. TTL > effectively became a "hop count". > > When Dave got NTP running, and enough Time Servers were online and > reliable, and the gateways and hosts had the needed hardware, Time > could be measured, TTL could be set based on Time, and the Internet > would be better. > > In the meanwhile, all of us TCP implementers just picked some value > for our retransmission timers.? I think I set mine to 3 seconds. No > exhaustive analysis or sophisticated mathematics involved.? It just > felt right.....there was a lot of that going on in the early Internet. 
> > ----- > > While all the TCP work was going on, other uses were emerging.? We > knew that there was more to networking than just logging in to distant > computers or transferring files between them - uses that had been > common for years in the ARPANET.?? But the next "killer app" hadn't > appeared yet, although there were lots of people trying to create one. > > In particular, "Packet Voice" was popular, with a contingent of > researchers figuring out how to do that on the fledgling Internet. > There were visions that someday it might even be possible to do > Video.? In particular, *interactive* voice was the goal, i.e., the > ability to have a conversation by voice over the Internet (I don't > recall when the term VOIP emerged, probably much later). > > In a resource-constrained network, you don't want to waste resources > on datagrams that aren't useful.? In conversational voice, a datagram > that arrives too late isn't useful.? A fragment of audio that should > have gone to the speaker 500 milliseconds ago can only be discarded.? > It would be better that it hadn't been sent at all, but at least > discarding it along the way, as soon as it's known to be too late to > arrive, would be appropriate. > > Of course, that needs Time.? UDP was created as an adjunct to TCP, > providing a different kind of network service.?? Where TCP got all of > the data to its destination, no matter how long it took, UDP would get > as much data as possible to the destination, as long as it got there > in time to be useful.?? Time was important. > > UDP implementations, in host computers, didn't have to worry about > retransmissions.? But they did still have to worry about how long it > would take for a datagram to get to its destination.? With that > knowledge, they could set their datagrams' TTL values to something > appropriate for the network conditions at the time.? Perhaps they > might even tell their human users "Sorry, conversational use not > available right now." 
-- an Internet equivalent of the "busy signal" - > if the current network transit times were too high to provide a good > user experience. > > Within the world of gateways, the differing needs of TCP and UDP > motivated different behaviors.? That motivated the inclusion of the > TOS - Type Of Service - field in the IP datagram header. Perhaps UDP > packets would receive higher priority, being placed at the head of > queues so they got transmitted sooner.? Perhaps they would be > discarded immediately if the gateway knew, based on its routing > mechanisms, that the datagram would never get delivered in time. > Perhaps UDP would be routed differently, using a terrestrial but > low-bandwidth network, while TCP traffic was directed over a > high-bandwidth but long-delay satellite path.?? A gateway mesh might > have two or more independent routing mechanisms, each using a > "shortest path" approach, but with different metrics for determining > "short" - e.g., UDP using the shortest time route, while some TCP > traffic travelled a route with least ("shortest") usage at the time. > > We couldn't do all that either.? We needed Time, hardware, algorithms, > protocols, etc.? But the placeholders were there, in the TCP, IP, and > UDP formats, ready for experimentation to figure all that stuff out. > > ----- > > When Time was implemented, there could be much needed experimentation > to figure out the right answers.? Meanwhile, we had to keep the > Internet working.? By the early 1980s, the ARPANET had been in > operation for more than a decade, and lots of operational experience > had accrued.? We knew, for example, that things could "go wrong" and > generate a crisis for the network operators to quickly fix.??? TTL, > even as just a hop count, was one mechanism to suppress problems.? We > knew that "routing loops" could occur.?? 
TTL would at least prevent > situations where datagrams circulated forever, orbiting inside the > Internet until someone discovered and fixed whatever was causing a > routing loop to keep those datagrams speeding around. > > Since the Internet was an Experiment, there were mechanisms put in > place to help run experiments.? IIRC, in general things were put in > the IP headers when we thought they were important and would be needed > long after the experimental phase was over - things like TTL, SQ, TOS. > > Essentially every field in the IP header, and every type of datagram, > was there for some good reason, even though its initial implementation > was known to be inadequate.?? The Internet was built on Placeholders.... > > Other mechanisms were put into the "Options" mechanism of the IP > format.?? A lot of that was targeted towards supporting experiments, > or as occasional tools to be used to debug problems in crises during > Internet operations. > > E.g., all of the "Source Routing" mechanisms might be used to route > traffic in particular paths that the current gateways wouldn't > otherwise use.? An example would be routing voice traffic over > specific paths, which the normal gateway routing wouldn't use.?? The > Voice experimenters could use those mechanisms to try out their ideas > in a controlled experiment. > > Similarly, Source Routing might be used to debug network problems. A > network analyst might use Source Routing to probe a particular remote > computer interface, where the regular gateway mechanisms would avoid > that path. > > So a general rule was that IP headers contained important mechanisms, > often just as placeholders, while Options contained things useful only > in particular circumstances. > > But all of these "original ideas" needed Time.?? We knew Dave was "on > it".... > > ----- > > Hopefully this helps...? I (and many others) probably should have > written these "original ideas" down 40 years ago.?? 
We did, but I > suspect all in the form of emails which have now been lost. Sorry > about that.?? There was always so much code to write.? And we didn't > have the answers yet to motivate creating RFCs which were viewed as > more permanent repositories of the solved problems. > > Sorry about that..... > > Jack Haverty > > > > On 10/2/22 07:45, Alejandro Acosta via Internet-history wrote: >> Hello Jack, >> >> ? Thanks a lot for sharing this, as usual, I enjoy this kind of >> stories :-) >> >> ? Jack/group, just a question regarding this topic. When you mentioned: >> >> "This caused a lot of concern about protocol elements such as >> Time-To-Live, which were temporarily to be implemented purely as "hop >> counts" >> >> >> ? Do you mean, the original idea was to really drop the packet at >> certain time, a *real* Time-To-Live concept?. >> >> >> Thanks, >> >> P.S. That's why it was important to change the field's name to hop >> count in v6 :-) >> >> >> >> On 2/10/22 12:35 AM, Jack Haverty via Internet-history wrote: >>> On 10/1/22 16:30, vinton cerf via Internet-history wrote: >>>> in the New Yorker >>>> >>>> https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time >>>> >>>> >>>> v >>> >>> Agree, nice story.?? Dave did a *lot* of good work.? Reading the >>> article reminded me of the genesis of NTP. >>> >>> IIRC.... >>> >>> Back in the early days circa 1980, Dave was the unabashed tinkerer, >>> experimenter, and scientist.? Like all good scientists, he wanted to >>> run experiments to explore what the newfangled Internet was doing >>> and test his theories.?? To do that required measurements and data. >>> >>> At the time, BBN was responsible for the "core gateways" that >>> provided most of the long-haul Internet connectivity, e.g., between >>> US west and east coasts and Europe.? 
>>> There were lots of ideas about how to do things - e.g., strategies for TCP retransmissions, techniques for maintaining dynamic tables of routing information, algorithms for dealing with limited bandwidth and memory, and other such stuff that was all intentionally very loosely defined within the protocols.  The Internet was an Experiment.
>>>
>>> I remember talking with Dave back at the early Internet meetings, his fervor to try things out, and his disappointment at the lack of the core gateways' ability to measure much of anything.  In particular, it was difficult to measure how long things took in the Internet, since the gateways didn't even have real-time clocks.  This caused a lot of concern about protocol elements such as Time-To-Live, which were temporarily to be implemented purely as "hop counts", pending the introduction of some mechanism for measuring Time into the gateways.  (AFAIK, we're still waiting....)
>>>
>>> Curiously, in the pre-Internet days of the ARPANET, the ARPANET IMPs did have a pretty good mechanism for measuring time, at least between pairs of IMPs at either end of a communications circuit, because such circuits ran at specific speeds.  So one IMP could tell how long it was taking to communicate with one of its neighbors, and used such data to drive the ARPANET internal routing mechanisms.
>>>
>>> In the Internet, gateways couldn't tell how long it took to send a datagram over one of their attached networks.  The networks of the day simply didn't make such information available to their "users" (e.g., a gateway).
>>>
>>> But experiments require data, and labs require instruments to collect that data, and Dave wanted to test out lots of ideas, and we (BBN) couldn't offer any hope of such instrumentation in the core gateways any time soon.
>>>
>>> So Dave built it.
>>>
>>> And that's how NTP got started.
>>> IIRC, the rest of us were all just trying to get the Internet to work at all.  Dave was interested in understanding how and why it worked.  So while he built NTP, that didn't really affect any other projects.  Plus most of us (at least me) didn't understand how it was possible to get such accurate synchronization when the delays through the Internet mesh were so large and variable.  (I still don't.)  But Dave thought it was possible, and that's why your computer, phone, laptop, or whatever knows what time it is today.
>>>
>>> Dave was responsible for another long-lived element of the Internet.  Dave's experiments were sometimes disruptive to the "core" Internet that we were tasked to make a reliable 24x7 service.  Where Dave The Scientist would say "I wonder what happens when I do this..." We The Engineers would say "Don't do that!"
>>>
>>> That was the original motivation for creating the notion of "Autonomous Systems" and EGP - a way to insulate the "core" of the Internet from the antics of the Fuzzballs.  I corralled Eric Rosen after one such Fuzzball-triggered incident and we sat down and created ASes, so that we could keep "our" AS running reliably.  It was intended as an interim mechanism until all the experimentation revealed what should be the best algorithms and protocol features to put in the next generation, and the Internet Experiment advanced into a production network service.  We defined ASes and EGP to protect the Internet from Dave's Fuzzball mania.
>>>
>>> AFAIK, that hasn't happened yet ... and from that article, Dave is still Experimenting..... and The Internet is still an Experiment.
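The synchronization that seemed impossible over such large and variable delays rests on a simple four-timestamp exchange.  As a minimal sketch (the classic offset/delay arithmetic in RFC 5905 notation, not Mills's full algorithm, which adds filtering, selection, and discipline loops; the timestamp values below are invented for illustration):

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Classic NTP clock arithmetic (RFC 5905 notation).

    t0: client transmit time (client clock)
    t1: server receive time  (server clock)
    t2: server transmit time (server clock)
    t3: client receive time  (client clock)
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2   # estimated client clock error
    delay = (t3 - t0) - (t2 - t1)          # round-trip network delay
    return offset, delay

# Invented example: a client whose clock runs 5 units slow, talking to a
# correct server over a symmetric 2-unit path.  The variable network delay
# cancels out of the offset estimate as long as the path is symmetric.
offset, delay = ntp_offset_delay(100, 107, 108, 105)
```

The key point is that the client never needs to know the one-way delay; symmetry assumptions plus repeated sampling let NTP average out the variability that made synchronization look hopeless.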
>>>
>>> Fun times,
>>> Jack Haverty
>>>

From jack at 3kitty.org Tue Oct 4 14:26:10 2022
From: jack at 3kitty.org (Jack Haverty)
Date: Tue, 4 Oct 2022 14:26:10 -0700
Subject: [ih] The Importance of Time in the Internet
In-Reply-To: <5f7caf24-95ab-6ba7-b3ca-6d7a7b2b8e36@gmail.com>
References: <8f98527b-f650-c12f-c038-87d2d10a9bfd@gmail.com> <5f7caf24-95ab-6ba7-b3ca-6d7a7b2b8e36@gmail.com>
Message-ID: <522b0724-8d6f-b15b-142b-9b5dca6aaad5@3kitty.org>

Brian asked: "To be blunt, why?" - from -- [ih] nice story about dave mills and NTP

OK, I'll try to explain why I believed Time was so important to The Internet back in the 1980s.  Or at least what I remember.... changing the subject line to be more relevant.

Basically, the "why" is "to provide the network services that the Users need."  In other words, to keep the customers happy.  That's the short answer.  Here's the longer story:

---------------------------

As far as I can remember, there wasn't any "specifications" document of The Internet back in the early 80s when IPV4 et al were congealing.  Nothing like a "Requirements" document that you'd typically find for major government projects, detailing what the resultant system had to be able to do.

Yes, there have been lots of documents, e.g., RFCs, detailing the formats, protocols, algorithms, and myriad technical details of the evolving design.  But I can't remember any document specifying what The Internet was expected to provide as services to its Users.

IIRC, even the seminal 1974 Cerf/Kahn paper on "A Protocol for Packet Network Interconnection" that created TCP says nothing about what such an aggregate of networks would provide as services to its users' attached computers.  In other words, what should "customers" of The Internet expect to be able to do with it?

That's understandable for a research environment.
But to actually build the early Internet, we had to have some idea of what the thing being built should do, in order to figure out what's still missing, what might or might not work, what someone should think about for the future, and so on.

I believe ARPA's strategy, at least in the 80s, was to define what The Internet had to be able to do by using a handful of "scenarios" of how The Internet might be used in a real-world (customer) situation.  In addition, it was important to have concrete physical demonstrations in order to show that the ideas actually worked.  Such demonstrations showed how the technology might actually be useful in the real world, and that theory and research had connections to practice and real-world situations.

The "customer" of the early Internet was the government(s) - largely the US, but several countries in Europe were also involved.  Specifically, the military world was the customer.  Keeping the customer happy, by seeing working demonstrations that related to real-world situations, was crucial to keeping the funding flowing.  Generals and government VIPs care about what they can envision using.  Generals don't read RFCs.  But they do open their wallets when they see something that will be useful to them.

At the early Internet meetings, and especially at the initial meetings of the ICCB (now IAB), I remember Vint often describing one such scenario, which we used to drive thought experiments to imagine how some technical idea would behave in the real world.  It was of course a military scenario, in which a battlefield commander is in contact with the chain of command up to the President, as well as with diverse military elements in the air, on ships, in moving vehicles on the ground, in intelligence centers, and everything else you can imagine is used in a military scenario.  Allies too.

That's what the customer wanted to do.
In that 1980s scenario, a "command and control" conference is being held, using The Internet to connect the widely scattered participants.  A general might be using a shared multimedia display (think of a static graphical map with a cursor/pointer - no thought of interactive video in the 80s...) to understand what was happening "in the field", consult with advisors and other command staffs, and order appropriate actions.  While pointing at the map, the orders are given.

Soldier in a Jeep: "The enemy supply depot is here, and a large body of infantry is here" ...

... General: "OK, send the third Division here, and have that bomber squadron hit here."

While speaking, the field commanders and General draw a cursor on their screen, indicating the various locations.  Everyone else sees a similar screen.  Questions and clarifications happen quickly, in a conversational manner familiar to military members from their long experience using radios.  But it's all online, through the Internet.

So what can go wrong?

Most obvious is that the datagrams supporting the interactive conversations need to get to their destinations in time to be useful in delivering the audio, graphics, etc., to all the members of the conversation, and properly synchronized.  That need related directly to lots of mechanisms we put into the Internet IPV4 technology - TTL, TOS, Multicast, etc.  If the data doesn't arrive soon enough, the conversation will be painful and prone to errors and misinterpretation.

But there was also a need to be able to synchronize diverse data streams, so that the content delivered by a voice transmission, perhaps flowing over UDP, was "in sync" with graphical information carried by a TCP connection.  Those applications needed to know how The Internet was handling their datagrams, and how long it was taking for them to get delivered through whatever path of networks was still functioning at the time.
Does this speech fragment coincide in time with that graphics update - that kind of situation.

In the scenario, it was crucial that the field reports and General's commands were in sync with the cursor movements on the shared graphics screens.  Otherwise very bad things could happen.  (Think about it...)

Time was important.

Within the physical Internet of the 80s, there were enough implementations of the pieces to demonstrate such capabilities.  The ARPANET provided connectivity among fixed locations in the US and some other places, including governmental sites such as the Pentagon.  SATNET provided transatlantic connectivity.  A clone of SATNET, called MATNET, was deployed by the Navy.  One MATNET node was on an aircraft carrier (USS Carl Vinson), which could have been where that squadron of bombers in the Scenario came from.  Army personnel were moving around a battlefield in Jeeps and helicopters, in field exercises with Packet Radios in their vehicles.  They could move quickly wherever the orders told them to go, and the Packet Radio networks would keep them in contact with all the other players in a demo of that Scenario.

Networks were slow in those days, with 56 kilobits/second considered "fast".  ARPA had deployed a "Wideband Net" using satellite technology, that used a 3 megabits/second channel.  That could obviously carry much more traffic than other networks.  But the Wideband Net (aka WBNET) was connected only to the ARPANET.  Like the ARPANET, the WBNET spanned the continental US, able to carry perhaps 10 times the traffic that the ARPANET could support.  But how to actually use the WBNET - that was the problem.

Since routing in the 1980s Internet was effectively based on "hop count", despite the name given to the TTL field, the gateways, and the "host" computers on the ARPANET, would never send any traffic towards the WBNET.  Such traffic would always be two "hops" longer through a WBNET path than if it travelled directly through the ARPANET.
The WBNET was never going to be the chosen route from anywhere to anywhere else in The Internet.

In the scenario, if the WBNET was somehow effectively utilized, perhaps it would be possible to convey much more detailed maps and other graphics.  Maybe even video.  But there was no way to use WBNET.

So we put "Source Routing" mechanisms into the IPV4 headers, as a way for experimenters to force traffic over the WBNET, despite the gateways' belief that such a path was never the best way to go.  In effect, the "host" computers were making their own decisions about how their traffic should be carried through the Internet, likely contradicting the decisions made by the routing mechanisms in the Gateways.  There was even a term for the necessary algorithms and code in those "host" computers - they had to act as "Half Gateways".  To make decisions about where to send their datagrams, the hosts had to somehow participate in the exchange of routing information with the networks' Gateways.  At the time that was only done by hand, configuring the host code to send appropriate packets with Source Routing to perform particular experiments.  No design of a "Half Gateway" was ever developed, AFAIK.

In the ICCB's list of "Things that need to be done", this was part of the "Expressway Routing" issue.  The analogy we used came from everyone's familiarity with driving in urban areas.  Even though you can get from point A to point B by using just the city streets "network", it's often better and faster to head for the nearest freeway entrance, even though it involves going a short distance in the "wrong direction".  The route may be longer with three hops through Streets/Freeway/Streets, but it's the fastest way to get there, much better than just travelling on Streets.  Datagrams have needs just like travellers in cars; their passengers need to get to the destination before the event starts.  Time matters.
So does achievable bandwidth, to get enough information delivered so that good decisions can be made.  You can't always count on getting both.

We thought gateways should be smart about Expressway Routing, and offer different types of service for different user needs, but didn't know how to do it.  Meanwhile, I don't know the details, but I believe there was quite a lot of such experimentation using the WBNET.  The expectation was that such experiments could work out how best to transport voice, graphical, and other such "non-traditional" network traffic.  Later the gateways would know how to better use all the available resources and match their routes to the particular traffic's needs, and Source Routing would no longer be needed (at least for that situation).

All of what I just wrote happened almost 40 years ago, so things have changed.  A lot.  Maybe Time is no longer important, and notions such as TOS are no longer needed.  But today, in 2022, I see the talking heads on TV interviewing reporters, experts, or random people "out there" somewhere in the world.  The Internet seems to be everywhere (even active battlefields!) and it's used a lot.  I've been amazed at how well it works -- usually.  But you still sometimes see video breaking up, fragments of conversations being lost, and sometimes it gets bad enough that the anchor person apologizes for the "technical difficulties" and promises to get the interviewee back as soon as they can.

Perhaps that's caused by a loose cable somewhere.  Or perhaps it's caused by "buffer bloat" somewhere, which may have disappeared if you try later.  Perhaps it would work better if the Internet had TTL, TOS, and other such stuff that was envisioned in the 80s.

Meanwhile, the Users (like me) have just become used to the fact that such things happen, you have to expect them, and just try again.

The General would not be happy.
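The Expressway Routing problem above is easy to see in a toy shortest-path computation: under a hop-count metric the "Streets" shortcut always wins, while under a time metric the three-hop "Freeway" path wins.  A hedged sketch (the topology and delay numbers are invented for illustration, not historical measurements):

```python
import heapq

def shortest_path(graph, src, dst, metric):
    """Dijkstra's algorithm; `metric` selects which edge weight to sum."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weights in graph[node].items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + weights[metric], nxt, path + [nxt]))
    return None

# Invented topology: a direct 1-hop "Streets" link with high delay, and a
# 3-hop Streets/Freeway/Streets path with much lower total delay.
graph = {
    "A":       {"B": {"hops": 1, "delay": 10}, "OnRamp": {"hops": 1, "delay": 1}},
    "OnRamp":  {"A": {"hops": 1, "delay": 1}, "OffRamp": {"hops": 1, "delay": 2}},
    "OffRamp": {"OnRamp": {"hops": 1, "delay": 2}, "B": {"hops": 1, "delay": 1}},
    "B":       {"A": {"hops": 1, "delay": 10}, "OffRamp": {"hops": 1, "delay": 1}},
}

hop_cost, hop_path = shortest_path(graph, "A", "B", "hops")       # direct route
delay_cost, delay_path = shortest_path(graph, "A", "B", "delay")  # via freeway
```

Hop-count routing picks the direct A-B link and never touches the freeway, exactly the WBNET situation: the higher-capacity path is always "two hops longer" and so is never chosen unless the metric is Time.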
I hope I'm wrong, but I fear "technical difficulties" has become a de facto feature of the Internet technology, now baked into the technical design.

Anyway, I hope I've explained why I (still) think Time is important.  It's all about The Internet providing the services that the customers need to do what they need to do.

-------

One last thing while I'm remembering it, just to capture a bit more of the 80s Internet history for the historians.  At the time, we had some ideas about how to solve these "Time" problems.  One idea was somewhat heretical.  I don't remember who was in the "We" group of heretics who were pursuing that idea.  But I admit to being such a heretic.

The gist of the Idea was "Packet Switching is Not Always the Right Answer!"

Pure Heresy! in the 1980s' Internet Community.

The core observation was that if you had a fairly consistent flow of data (bits, not packets) between point A and point B, the best way to carry that traffic was to simply have an appropriately sized circuit between A and B.  If you had some traffic that needed low-latency service, you'd route it over that circuit.  Other traffic, that wouldn't "fit" in the circuit, could be routed over longer paths using classic packet switching.  Clever routing algorithms could make such decisions, selecting paths appropriate for each type of traffic using the settings conveyed in the TOS and TTL fields.  A heavy flow of traffic between two points might even utilize several distinct pathways through the Internet, and achieve throughput from A to B greater than what any single "best route" could accomplish.

In the ICCB, this was called the "Multipath Routing" issue.  It wasn't a new issue; the same situation existed in the ARPANET, and solutions were being researched for introduction into the IMP software.
There was quite a lot of such research going on, exploring how to improve the behavior of the ARPANET and its clones (the DDN, Defense Data Network, being a prime example of where new techniques would be very useful).

In the ARPANET, ten years of operations had led to the development of machinery to change the topology of the network as traffic patterns changed.  Analysts would look at traffic statistics, and at network performance data such as packet transit times, and run mathematical models to decide where it would be appropriate to have telephone circuits between pairs of IMPs.  Collecting such data, doing the analysis, and "provisioning" the circuits (getting the appropriate phone company to install them) took time - months at least, perhaps sometimes even years.

In the telephony network, there were even more years of experience using Circuit Switches - the technology of traditional phone calls, where the network switches allocated a specific quantity of bandwidth along circuits between switching centers, dedicating some bandwidth to each call and patching them all together in series so the end users thought they had a simple wire connecting the two ends of the call.  Packet switching provided Virtual Circuits and would try its best to handle whatever the Users gave it.  Circuit Switching provided real Circuits that delivered stable bandwidth and delay, or told you it couldn't ("busy signal").

In the 80s ARPANET, we had experimented with faster ways to add or subtract bandwidth, by simply using dial-up modems.  An IMP could "add a circuit" to another IMP by using the dial-up telephony network to "make a call" to the other IMP, and the routing mechanisms would notice that that circuit had "come up", and simply incorporate it into the traffic flows.  Such mechanisms were manually triggered, since the IMP software didn't know how to make decisions about such "dynamic topology".
We used it successfully to enable a new IMP to join an existing network by simply "dialing in" to a modem on some IMP already running in that network.  The new IMP would quickly become just another operating node in the existing network, and its attached host computers could then make connections to other sites on the network.

The heretical idea in the Internet arena was that a similar "dynamic topology" mechanism could be introduced, where bandwidth between points A and B could be added and subtracted on the fly between pairs of Gateways, as some human operator, or very clever algorithm, determined it was appropriate.

With such a mechanism, (we hoped that) different types of service could be supported on the Internet.  Gateways might determine that there was need for a low-latency pathway between points A and B, and that they were unable to provide such service with the current number of "hops" (more specifically, Time) involved in the current best route.  So they could "dial up" more bandwidth directly between A and B, thereby eliminating multiple hops through intermediate gateways and the associated packet transmission delays, buffering, etc.

So, Packet Switching was not always the right answer.  When you need a Circuit, you should use Circuit Switching....  Heresy!

There were all sorts of ideas floating around about how that might work.  One example I remember was called something like "Cut-Through Routing".  The basic idea was that a Gateway, when it started to receive a datagram, could look at the header and identify that datagram as being high priority, and associated with an ongoing traffic flow that needed low latency.  The gateway could then start transmitting that same datagram on the way to its next outbound destination -- even before the datagram had been completely received from the incoming circuit.
This would reduce transit time through that node to possibly just a handful of "bit times", rather than however long it would take to receive and then retransmit the entire datagram.  But there were problems with such a scheme - what do you do about checksums?

Obviously such a system would require a lot of new work.  In the interim, to gain experience from operations and hopefully figure out what those clever routing algorithms should do, we envisioned a network in which a "node" contained two separate pieces of equipment - a typical Gateway (now called a Router), and a typical Circuit Switch (as you would find in an 80s telephony network).  Until the algorithms were figured out, a human operator/analyst would make the decisions about how to use the packet and circuit capabilities, much as the dial-up modems were being used, and hopefully figure out how such things should work so it could be transformed into algorithms, protocols, and code.

At BBN, we actually proposed such a network project to one client (not ARPA), using off-the-shelf IMPs, Circuit Switches, and Gateways to create each network node.  The Circuit network would provide circuits to be used by the Packet Network, and such Circuits could be reconfigured on demand as needed.  If two Gateways really needed a circuit connecting them, it could be "provisioned" by simply issuing commands to the Circuit Switches.  The Gateways would (eventually) realize that they had a new circuit available, and it would become the shortest route between A and B.

BBN even bought a small company that had been making Circuit Switches for the Telephony market.  AFAIK, that project didn't happen.  I suspect the client realized that there was a bit too much "research" that still needed to be done before such a system would be ready for production use.

Anyway, I thought this recollection of 1980s networking might be of historical interest.  After 40 years, things have no doubt changed a lot.
I don't know much about how modern network nodes actually work.  Perhaps they now do use a hybrid of packet and circuit switching and dynamic topology?  Perhaps it's all now in silicon, deep inside where the fiber light is transformed back and forth into electrons.  Perhaps it's all done optically using some kind of quantum technique...  Or perhaps they've just added more memory everywhere and hoped that lots of buffering would be enough to meet the Users' needs.  Memory is cheaper to get than new algorithms and protocols.

In any event, I hope this explains why I think Time was, and still is, important to The Internet.  It's not an easy problem.  And my own empirical and anecdotal observation, as just a User now, is that bad things still seem to happen far too frequently to explain away as technical difficulties.

Although many people use The Internet today, there are some communities that find it unusable.  Serious Gamers I've talked with struggle to find places to plug in to The Internet where they can enjoy their games.  I also wonder, as we watch the news from "the front", wherever that is today, whether today's military actually uses The Internet as that 1980s scenario envisioned.  Or perhaps they have their own private internet now, tuned to do what they need it to do?

Hope this helps some Historians.  Someone should have written it down 40 years ago, in a form more permanent than emails.  Sorry about that....

Thanks for getting this far,
Jack Haverty

On 10/2/22 12:50, Brian E Carpenter wrote:
> Jack,
> On 03-Oct-22 06:55, Jack Haverty via Internet-history wrote:
>> The short answer is "Yes".  The Time-To-Live field was intended to count down actual transit time as a datagram proceeded through the Internet.  A datagram was to be discarded as soon as some algorithm determined it wasn't going to get to its destination before its TTL ran to zero.  But we didn't have the means to measure time, so hop-counts were the placeholder.
>>
>> I wasn't involved in the IPV6 work, but I suspect the change of the field to "hop count" reflected the reality of what the field actually was.  But it would have been better to have actually made Time work.
>
> To be blunt, why?
>
> There was no promise of guaranteed latency in those days, was there?  As soon as queueing theory entered the game, that wasn't an option.  So it wasn't just the absence of precise time, it was the presence of random delays that made a hop count the right answer, not just the convenient answer.
>
> I think that's why IPv6 never even considered anything but a hop count.  The same lies behind the original TOS bits and their rebranding as the Differentiated Services Code Point many years later.  My motto during the diffserv debates was "You can't beat queueing theory."
>
> There are people in the IETF working hard on Detnet ("deterministic networking") today.  Maybe they have worked out how to beat queueing theory, but I doubt it.  What I learned from working on real-time control systems is that you can't guarantee timing outside a very limited and tightly managed set of resources, where unbounded queues cannot occur.
>
>    Brian
>
>> Much of these "original ideas" probably weren't ever written down in persistent media.  Most discussions in the 1980 time frame were done either in person or more extensively in email.  Disk space was scarce and expensive, so much of such email was probably never archived - especially email not on the more "formal" mailing lists of the day.
>>
>> As I recall, Time was considered very important, for a number of reasons.  So here's what I remember...
>>
>> -----
>>
>> Like every project using computers, the Internet was constrained by too little memory, too slow processors, and too limited bandwidth.
>> A typical, and expensive, system might have a few dozen kilobytes of memory, a processor running at perhaps 1 MHz, and "high speed" communications circuits carrying 56 kilobits per second.  So there was strong incentive not to waste resources.
>>
>> At the time, the ARPANET had been running for about ten years, and quite a lot of experience had been gained through its operation and crises.  Over that time, a lot of mechanisms had been put in place, internally in the IMP algorithms and hardware, to "protect" the network and keep it running despite what the user computers tried to do.  So, for example, an IMP could regulate the flow of traffic from any of its "host" computers, and even shut it off completely if needed.  (Google "ARPANET RFNM counting" if curious.)
>>
>> In the Internet, the gateways had no such mechanisms available.  We were especially concerned about the "impedance mismatch" that would occur at a gateway connecting a LAN to a much slower and "skinnier" long-haul network.  All of the "flow control" mechanisms that were implemented inside an ARPANET IMP would instead be implemented inside TCP software in users' host computers.
>>
>> We didn't know how that would work.  But something had to be in the code....  So the principle was that IP datagrams could be simply discarded when necessary, wherever necessary, and TCP would retransmit them so they would eventually get delivered.
>>
>> We envisioned that approach could easily lead to "runaway" scenarios, with the Internet full of duplicate datagrams being dropped at any "impedance mismatch" point along the way.  In fact, we saw exactly that at a gateway between ARPANET and SATNET - IIRC in one of Dave's transatlantic experiments ("Don't do that!!!")
>>
>> So, Source Quench was invented, as a way of telling some host to "slow down", and the gateways sent an SQ back to the source of any datagram they had to drop.
>> Many of us didn't think that would work very well (e.g., a host might send one datagram and get back an SQ - what should it do to "slow down"...?).  I recall that Dave knew exactly what to do.  Since his machine's datagram had been dropped, it meant he should immediately retransmit it.  Another "Don't do that!" moment....
>>
>> But SQ was a placeholder too -- to be replaced by some "real" flow control mechanism as soon as the experimentation revealed what that should be.
>>
>> -----
>>
>> TCP retransmissions were based on Time.  If a TCP didn't receive a timely acknowledgement that data had been received, it could assume that someone along the way had dropped the datagram and it should retransmit it.  SQ datagrams were also of course not guaranteed to get to their destination, so you couldn't count on them as a signal to retransmit.  So Time was the only answer.
>>
>> But how to set the Timer in your TCP - that was subject to experimentation, with lots of ideas.  If you sent a copy of your data too soon, it would just overload everything along the path through the Internet with superfluous data consuming those scarce resources.  If you waited too long, your end-users would complain that the Internet was too slow.  So the answer was to have each TCP estimate how long it was taking for a datagram to get to its destination, and set its own "retransmission timer" to slightly longer than that value.
>>
>> Of course, such a technique requires instrumentation and data.  Also, since the delays might depend on the direction of a datagram's travel, you needed synchronized clocks at the two endpoints of a TCP connection, so they could accurately measure one-way transit times.
>>
>> Meanwhile, inside the gateways, there were ideas about how to do even better by using Time.
>> For example, if the routing protocols were actually based on Time (shortest transit time) rather than Hops (number of gateways between here and destination), the Internet would provide better user performance and be more efficient.  Even better - if a gateway could "know" that a particular datagram wouldn't get to its destination before its TTL ran out, it could discard that datagram immediately, even though it still had time to live.  No point in wasting network resources carrying a datagram already sentenced to death.
>>
>> We couldn't do all that.  Didn't have the hardware, didn't have the algorithms, didn't have the protocols.  So in the meantime, any computer handling an IP datagram should simply decrement the TTL value, and if it reached zero the datagram should be discarded.  TTL effectively became a "hop count".
>>
>> When Dave got NTP running, and enough Time Servers were online and reliable, and the gateways and hosts had the needed hardware, Time could be measured, TTL could be set based on Time, and the Internet would be better.
>>
>> In the meanwhile, all of us TCP implementers just picked some value for our retransmission timers.  I think I set mine to 3 seconds.  No exhaustive analysis or sophisticated mathematics involved.  It just felt right..... there was a lot of that going on in the early Internet.
>>
>> -----
>>
>> While all the TCP work was going on, other uses were emerging.  We knew that there was more to networking than just logging in to distant computers or transferring files between them - uses that had been common for years in the ARPANET.  But the next "killer app" hadn't appeared yet, although there were lots of people trying to create one.
>>
>> In particular, "Packet Voice" was popular, with a contingent of researchers figuring out how to do that on the fledgling Internet.  There were visions that someday it might even be possible to do Video.
In particular, *interactive* voice was the goal, i.e., the ability to have a conversation by voice over the Internet (I don't recall when the term VOIP emerged, probably much later).
>>
>> In a resource-constrained network, you don't want to waste resources on datagrams that aren't useful. In conversational voice, a datagram that arrives too late isn't useful. A fragment of audio that should have gone to the speaker 500 milliseconds ago can only be discarded. It would be better that it hadn't been sent at all, but at least discarding it along the way, as soon as it's known to be too late to arrive, would be appropriate.
>>
>> Of course, that needs Time. UDP was created as an adjunct to TCP, providing a different kind of network service. Where TCP got all of the data to its destination, no matter how long it took, UDP would get as much data as possible to the destination, as long as it got there in time to be useful. Time was important.
>>
>> UDP implementations, in host computers, didn't have to worry about retransmissions. But they did still have to worry about how long it would take for a datagram to get to its destination. With that knowledge, they could set their datagrams' TTL values to something appropriate for the network conditions at the time. Perhaps they might even tell their human users "Sorry, conversational use not available right now." -- an Internet equivalent of the "busy signal" - if the current network transit times were too high to provide a good user experience.
>>
>> Within the world of gateways, the differing needs of TCP and UDP motivated different behaviors. That motivated the inclusion of the TOS - Type Of Service - field in the IP datagram header. Perhaps UDP packets would receive higher priority, being placed at the head of queues so they got transmitted sooner.
Perhaps they would be discarded immediately if the gateway knew, based on its routing mechanisms, that the datagram would never get delivered in time. Perhaps UDP would be routed differently, using a terrestrial but low-bandwidth network, while TCP traffic was directed over a high-bandwidth but long-delay satellite path. A gateway mesh might have two or more independent routing mechanisms, each using a "shortest path" approach, but with different metrics for determining "short" - e.g., UDP using the shortest time route, while some TCP traffic travelled a route with least ("shortest") usage at the time.
>>
>> We couldn't do all that either. We needed Time, hardware, algorithms, protocols, etc. But the placeholders were there, in the TCP, IP, and UDP formats, ready for experimentation to figure all that stuff out.
>>
>> -----
>>
>> When Time was implemented, there could be much needed experimentation to figure out the right answers. Meanwhile, we had to keep the Internet working. By the early 1980s, the ARPANET had been in operation for more than a decade, and lots of operational experience had accrued. We knew, for example, that things could "go wrong" and generate a crisis for the network operators to quickly fix. TTL, even as just a hop count, was one mechanism to suppress problems. We knew that "routing loops" could occur. TTL would at least prevent situations where datagrams circulated forever, orbiting inside the Internet until someone discovered and fixed whatever was causing a routing loop to keep those datagrams speeding around.
>>
>> Since the Internet was an Experiment, there were mechanisms put in place to help run experiments. IIRC, in general things were put in the IP headers when we thought they were important and would be needed long after the experimental phase was over - things like TTL, SQ, TOS.
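[The adaptive retransmission-timer idea described in this message - estimate the transit time, then set the timer slightly longer - is essentially what TCP later standardized. A minimal illustrative sketch of that scheme, using the smoothed-RTT/variance rules from RFC 6298 (building on Jacobson's 1988 work) rather than any historical BBN code:]

```python
# Sketch of an adaptive TCP retransmission timer (RFC 6298 style):
# each acknowledged segment yields one round-trip-time sample, which is
# folded into a smoothed estimate; the retransmission timeout (RTO) is
# set "slightly longer" than the estimated delay.

class RetransmitTimer:
    def __init__(self, alpha=0.125, beta=0.25, initial_rto=3.0):
        # 3 seconds: the same initial value the message above says
        # "just felt right" in the early Internet.
        self.srtt = None       # smoothed round-trip time estimate
        self.rttvar = None     # smoothed RTT variance
        self.alpha = alpha     # gain for the RTT mean
        self.beta = beta       # gain for the RTT variance
        self.rto = initial_rto

    def on_rtt_sample(self, rtt):
        """Fold one measured round-trip time (seconds) into the estimate."""
        if self.srtt is None:
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        # Retransmit "slightly longer than" the estimated delay:
        self.rto = self.srtt + 4 * self.rttvar
        return self.rto

timer = RetransmitTimer()
for sample in [1.0, 1.2, 0.9, 3.0, 1.1]:   # RTT samples in seconds
    timer.on_rtt_sample(sample)
print(round(timer.rto, 3))  # prints 3.677
```

[Note how the 3.0-second outlier inflates the variance term, pushing the timeout well above the mean RTT - exactly the conservatism needed to avoid flooding a scarce-bandwidth path with superfluous retransmissions.]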
>>
>> Essentially every field in the IP header, and every type of datagram, was there for some good reason, even though its initial implementation was known to be inadequate. The Internet was built on Placeholders....
>>
>> Other mechanisms were put into the "Options" mechanism of the IP format. A lot of that was targeted towards supporting experiments, or as occasional tools to be used to debug problems in crises during Internet operations.
>>
>> E.g., all of the "Source Routing" mechanisms might be used to route traffic in particular paths that the current gateways wouldn't otherwise use. An example would be routing voice traffic over specific paths, which the normal gateway routing wouldn't use. The Voice experimenters could use those mechanisms to try out their ideas in a controlled experiment.
>>
>> Similarly, Source Routing might be used to debug network problems. A network analyst might use Source Routing to probe a particular remote computer interface, where the regular gateway mechanisms would avoid that path.
>>
>> So a general rule was that IP headers contained important mechanisms, often just as placeholders, while Options contained things useful only in particular circumstances.
>>
>> But all of these "original ideas" needed Time. We knew Dave was "on it"....
>>
>> -----
>>
>> Hopefully this helps... I (and many others) probably should have written these "original ideas" down 40 years ago. We did, but I suspect all in the form of emails which have now been lost. Sorry about that. There was always so much code to write. And we didn't have the answers yet to motivate creating RFCs, which were viewed as more permanent repositories of the solved problems.
>>
>> Sorry about that.....
>>
>> Jack Haverty
>>
>> On 10/2/22 07:45, Alejandro Acosta via Internet-history wrote:
>>> Hello Jack,
>>>
>>> Thanks a lot for sharing this, as usual, I enjoy this kind of story :-)
>>>
>>> Jack/group, just a question regarding this topic. When you mentioned:
>>>
>>> "This caused a lot of concern about protocol elements such as Time-To-Live, which were temporarily to be implemented purely as "hop counts""
>>>
>>> Do you mean, the original idea was to really drop the packet at a certain time, a *real* Time-To-Live concept?
>>>
>>> Thanks,
>>>
>>> P.S. That's why it was important to change the field's name to hop count in v6 :-)
>>>
>>> On 2/10/22 12:35 AM, Jack Haverty via Internet-history wrote:
>>>> On 10/1/22 16:30, vinton cerf via Internet-history wrote:
>>>>> in the New Yorker
>>>>>
>>>>> https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time
>>>>>
>>>>> v
>>>>
>>>> Agree, nice story. Dave did a *lot* of good work. Reading the article reminded me of the genesis of NTP.
>>>>
>>>> IIRC....
>>>>
>>>> Back in the early days circa 1980, Dave was the unabashed tinkerer, experimenter, and scientist. Like all good scientists, he wanted to run experiments to explore what the newfangled Internet was doing and test his theories. To do that required measurements and data.
>>>>
>>>> At the time, BBN was responsible for the "core gateways" that provided most of the long-haul Internet connectivity, e.g., between US west and east coasts and Europe. There were lots of ideas about how to do things - e.g., strategies for TCP retransmissions, techniques for maintaining dynamic tables of routing information, algorithms for dealing with limited bandwidth and memory, and other such stuff that was all intentionally very loosely defined within the protocols. The Internet was an Experiment.
>>>>
>>>> I remember talking with Dave back at the early Internet meetings, and his fervor to try things out, and his disappointment at the lack of the core gateway's ability to measure much of anything. In particular, it was difficult to measure how long things took in the Internet, since the gateways didn't even have real-time clocks. This caused a lot of concern about protocol elements such as Time-To-Live, which were temporarily to be implemented purely as "hop counts", pending the introduction of some mechanism for measuring Time into the gateways. (AFAIK, we're still waiting....)
>>>>
>>>> Curiously, in the pre-Internet days of the ARPANET, the ARPANET IMPs did have a pretty good mechanism for measuring time, at least between pairs of IMPs at either end of a communications circuit, because such circuits ran at specific speeds. So one IMP could tell how long it was taking to communicate with one of its neighbors, and used such data to drive the ARPANET internal routing mechanisms.
>>>>
>>>> In the Internet, gateways couldn't tell how long it took to send a datagram over one of its attached networks. The networks of the day simply didn't make such information available to its "users" (e.g., a gateway).
>>>>
>>>> But experiments require data, and labs require instruments to collect that data, and Dave wanted to test out lots of ideas, and we (BBN) couldn't offer any hope of such instrumentation in the core gateways any time soon.
>>>>
>>>> So Dave built it.
>>>>
>>>> And that's how NTP got started. IIRC, the rest of us were all just trying to get the Internet to work at all. Dave was interested in understanding how and why it worked. So while he built NTP, that didn't really affect any other projects.
>>>> Plus most (at least me) didn't understand how it was possible to get such accurate synchronization when the delays through the Internet mesh were so large and variable. (I still don't). But Dave thought it was possible, and that's why your computer, phone, laptop, or whatever know what time it is today.
>>>>
>>>> Dave was responsible for another long-lived element of the Internet. Dave's experiments were sometimes disruptive to the "core" Internet that we were tasked to make a reliable 24x7 service. Where Dave The Scientist would say "I wonder what happens when I do this..." We The Engineers would say "Don't do that!"
>>>>
>>>> That was the original motivation for creating the notion of "Autonomous Systems" and EGP - a way to insulate the "core" of the Internet from the antics of the Fuzzballs. I corralled Eric Rosen after one such Fuzzball-triggered incident and we sat down and created ASes, so that we could keep "our" AS running reliably. It was intended as an interim mechanism until all the experimentation revealed what should be the best algorithms and protocol features to put in the next generation, and the Internet Experiment advanced into a production network service. We defined ASes and EGP to protect the Internet from Dave's Fuzzball mania.
>>>>
>>>> AFAIK, that hasn't happened yet ... and from that article, Dave is still Experimenting..... and The Internet is still an Experiment.
>>>>
>>>> Fun times,
>>>> Jack Haverty
>>

From jhlowry at mac.com Tue Oct 4 14:51:35 2022
From: jhlowry at mac.com (John Lowry)
Date: Tue, 4 Oct 2022 17:51:35 -0400
Subject: [ih] The Importance of Time in the Internet
In-Reply-To: <522b0724-8d6f-b15b-142b-9b5dca6aaad5@3kitty.org>
References: <522b0724-8d6f-b15b-142b-9b5dca6aaad5@3kitty.org>
Message-ID:

Jack,

As an "adversarial architect", I agree. But that is my point. Physics rules.
Please PLEASE give me a variable like time-to-target as a critical asset. I will destroy you. What is the trusted source for time? Countdown is harder to manipulate. If I want to control the outcome, and time is what you used, then control of time will rule. I don't care about the domain. Take a look at phasor requirements and why they refuse to rely on "the internet" for synchronizing. You're better off with "indeterminacies" like countdowns and physical sensors. Remember that we live in a physical universe.

Sent from my iPad

> On Oct 4, 2022, at 5:26 PM, Jack Haverty via Internet-history wrote:
>
> Brian asked: "To be blunt, why?" - from -- [ih] nice story about dave mills and NTP
>
> OK, I'll try to explain why I believed Time was so important to The Internet back in the 1980s. Or at least what I remember.... changing the subject line to be more relevant.
>
> Basically, the "why" is "to provide the network services that the Users need." In other words, to keep the customers happy. That's the short answer. Here's the longer story:
>
> ---------------------------
>
> As far as I can remember, there wasn't any "specifications" document of The Internet back in the early 80s when IPV4 et al were congealing. Nothing like a "Requirements" document that you'd typically find for major government projects that detailed what the resultant system had to be able to do.
>
> Yes, there have been lots of documents, e.g., RFCs, detailing the formats, protocols, algorithms, and myriad technical details of the evolving design. But I can't remember any document specifying what The Internet was expected to provide as services to its Users. IIRC, even the seminal 1974 Cerf/Kahn paper on "A Protocol for Packet Network Interconnection" that created TCP says nothing about what such an aggregate of networks would provide as services to its users' attached computers. In other words, what should "customers" of The Internet expect to be able to do with it?
>
> That's understandable for a research environment. But to actually build the early Internet, we had to have some idea of what the thing being built should do, in order to figure out what's still missing, what might or might not work, what someone should think about for the future, and so on.
>
> I believe ARPA's strategy, at least in the 80s, was to define what The Internet had to be able to do by using a handful of "scenarios" of how The Internet might be used in a real-world (customer) situation. In addition, it was important to have concrete physical demonstrations in order to show that the ideas actually worked. Such demonstrations showed how the technology might actually be useful in the real world, and that theory and research had connections to practice and real-world situations.
>
> The "customer" of the early Internet was the government(s) - largely the US, but several countries in Europe were also involved. Specifically, the military world was the customer. Keeping the customer happy, by seeing working demonstrations that related to real-world situations, was crucial to keeping the funding flowing. Generals and government VIPs care about what they can envision using. Generals don't read RFCs. But they do open their wallets when they see something that will be useful to them.
>
> At the early Internet meetings, and especially at the ICCB (now IAB) initial meetings, I remember Vint often describing one such scenario, which we used to drive thought experiments to imagine how some technical idea would behave in the real world. It was of course a military scenario, in which a battlefield commander is in contact with the chain of command up to the President, as well as with diverse military elements in the air, on ships, in moving vehicles on the ground, in intelligence centers, and everything else you can imagine is used in a military scenario. Allies too. That's what the customer wanted to do.
>
> In that 1980s scenario, a "command and control" conference is being held, using The Internet to connect the widely scattered participants. A general might be using a shared multimedia display (think of a static graphical map with a cursor/pointer - no thought of interactive video in the 80s...) to understand what was happening "in the field", consult with advisors and other command staffs, and order appropriate actions. While pointing at the map, the orders are given.
>
> Soldier in a Jeep: "The enemy supply depot is here, and a large body of infantry is here"
> ...
> ...
> General: "OK, send the third Division here, and have that bomber squadron hit here."
>
> While speaking, the field commanders and General draw a cursor on their screen, indicating the various locations. Everyone else sees a similar screen. Questions and clarifications happen quickly, in a conversational manner familiar to military members from their long experience using radios. But it's all online, through the Internet.
>
> So what can go wrong?
>
> Most obvious is that the datagrams supporting the interactive conversations need to get to their destinations in time to be useful in delivering the audio, graphics, etc., to all the members of the conversation, and properly synchronized. That need related directly to lots of mechanisms we put into the Internet IPV4 technology - TTL, TOS, Multicast, etc. If the data doesn't arrive soon enough, the conversation will be painful and prone to errors and misinterpretation.
>
> But there was also a need to be able to synchronize diverse data streams, so that the content delivered by a voice transmission, perhaps flowing over UDP, was "in sync" with graphical information carried by a TCP connection. Those applications needed to know how The Internet was handling their datagrams, and how long it was taking for them to get delivered through whatever path of networks was still functioning at the time.
Does this speech fragment coincide in time with that graphics update - that kind of situation.
>
> In the scenario, it was crucial that the field reports and General's commands were in sync with the cursor movements on the shared graphics screens. Otherwise very bad things could happen. (think about it...)
>
> Time was important.
>
> Within the physical Internet of the 80s, there were enough implementations of the pieces to demonstrate such capabilities. The ARPANET provided connectivity among fixed locations in the US and some other places, including governmental sites such as the Pentagon. SATNET provided transatlantic connectivity. A clone of SATNET, called MATNET, was deployed by the Navy. One MATNET node was on an aircraft carrier (USS Carl Vinson), which could have been where that squadron of bombers in the Scenario came from. Army personnel were moving around a battlefield in Jeeps and helicopters, in field exercises with Packet Radios in their vehicles. They could move quickly wherever the orders told them to go, and the Packet Radio networks would keep them in contact with all the other players in a demo of that Scenario.
>
> Networks were slow in those days, with 56 kilobits/second considered "fast". ARPA had deployed a "Wideband Net" using satellite technology, that used a 3 megabits/second channel. That could obviously carry much more traffic than other networks. But the Wideband Net (aka WBNET) was connected only to the ARPANET. Like the ARPANET, the WBNET spanned the continental US, able to carry perhaps 10 times the traffic that the ARPANET could support. But how to actually use the WBNET - that was the problem.
>
> Since routing in the 1980s Internet was effectively based on "hop count", despite the name given to the TTL field, the gateways, and the "host" computers on the ARPANET, would never send any traffic towards the WBNET. Such traffic would always be two "hops" longer through a WBNET path than if it travelled directly through the ARPANET.
The WBNET was never going to be the chosen route from anywhere to anywhere else in The Internet.
>
> In the scenario, if the WBNET was somehow effectively utilized, perhaps it would be possible to convey much more detailed maps and other graphics. Maybe even video.
>
> But there was no way to use WBNET. So we put "Source Routing" mechanisms into the IPV4 headers, as a way for experimenters to force traffic over the WBNET, despite the gateways' belief that such a path was never the best way to go. In effect, the "host" computers were making their own decision about how their traffic should be carried through the Internet, likely contradicting the decision made by the routing mechanisms in the Gateways. There was even a term for the necessary algorithms and code in those "host" computers - they had to act as "Half Gateways". To make decisions about where to send their datagrams, the hosts had to somehow participate in the exchange of routing information with the networks' Gateways. At the time that was only done by hand, configuring the host code to send appropriate packets with Source Routing to perform particular experiments. No design of a "Half Gateway" was developed AFAIK.
>
> In the ICCB's list of "Things that need to be done", this was part of the "Expressway Routing" issue. The analogy we used was from everyone's familiarity driving in urban areas. Even though you can get from point A to point B by using just the city streets "network", it's often better and faster to head for the nearest freeway entrance, even though it involves going a short distance in the "wrong direction". The route may be longer with three hops through Streets/Freeway/Streets, but it's the fastest way to get there, much better than just travelling on Streets. Datagrams have needs just like travellers in cars; their passengers need to get to the destination before the event starts. Time matters.
So does achievable bandwidth, to get enough information delivered so that good decisions can be made. You can't always count on getting both.
>
> We thought gateways should be smart about Expressway Routing, and offer different types of service for different user needs, but didn't know how to do it. Meanwhile, I don't know the details, but I believe there was quite a lot of such experimentation using the WBNET. The expectation was that such experiments could work out how to best transport voice, graphical, and other such "non traditional" network traffic. Later the gateways would know how to better use all the available resources and match their routes to the particular traffic's needs, and Source Routing would no longer be needed (at least for that situation).
>
> All of what I just wrote happened almost 40 years ago, so things have changed. A lot. Maybe Time is no longer important, and notions such as TOS are no longer needed. But today, in 2022, I see the talking heads on TV interviewing reporters, experts, or random people "out there" somewhere in the world. The Internet seems to be everywhere (even active battlefields!) and it's used a lot. I've been amazed at how well it works -- usually. But you still sometimes see video breaking up, fragments of conversations being lost, and sometimes it gets bad enough that the anchor person apologizes for the "technical difficulties" and promises to get the interviewee back as soon as they can.
>
> Perhaps that's caused by a loose cable somewhere. Or perhaps it's caused by "buffer bloat" somewhere, which may have disappeared if you try later. Perhaps it would work better if the Internet had TTL, TOS, and other such stuff that was envisioned in the 80s. Meanwhile, the Users (like me) have just become used to the fact that such things happen, you have to expect them, and just try again.
>
> The General would not be happy.
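[The "Expressway Routing" analogy above can be made concrete. In the hypothetical four-node topology below (illustrative numbers, not historical data), a minimum-hop metric always picks the slow direct link - just as 1980s hop-count routing could never choose a WBNET path that was two "hops" longer - while a minimum-delay metric happily takes the extra hops through the "freeway":]

```python
# Sketch: the same Dijkstra shortest-path search run twice over one
# topology, once counting hops and once summing per-link delay, to show
# why a hop-count metric never routes traffic via a fast detour.
import heapq

def shortest_path(graph, src, dst, weight):
    """Dijkstra's algorithm; `weight` maps an edge dict to its cost."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, edge in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight(edge), nbr, path + [nbr]))
    return None

# Edge attribute: delay in milliseconds. A-B is the direct "city streets"
# link; A-F-G-B is a fast "freeway" (WBNET-like) route with an extra hop
# at each end.
graph = {
    "A": {"B": {"delay": 500}, "F": {"delay": 20}},
    "B": {"A": {"delay": 500}, "G": {"delay": 20}},
    "F": {"A": {"delay": 20}, "G": {"delay": 50}},
    "G": {"B": {"delay": 20}, "F": {"delay": 50}},
}

hops, hop_path = shortest_path(graph, "A", "B", weight=lambda e: 1)
delay, delay_path = shortest_path(graph, "A", "B", weight=lambda e: e["delay"])
print(hop_path)    # ['A', 'B'] -- minimum hops ignores the freeway
print(delay_path)  # ['A', 'F', 'G', 'B'] -- minimum delay uses it (90 ms vs 500 ms)
```

[Same search, same topology; only the metric changes. That is the whole point of the paragraph above: the placeholder hop-count metric, not the routing algorithm, was what kept traffic off the Wideband Net.]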
>
> I hope I'm wrong, but I fear "technical difficulties" has become a de facto feature of the Internet technology, now baked into the technical design. Anyway, I hope I've explained why I (still) think Time is important. It's all about The Internet providing the services that the customers need to do what they need to do.
>
> -------
>
> One last thing while I'm remembering it, just to capture a bit more of the 80s Internet history for the historians. At the time, we had some ideas about how to solve these "Time" problems. One idea was somewhat heretical. I don't remember who was in the "We" group of heretics who were pursuing that idea. But I admit to being such a heretic.
>
> The gist of the Idea was "Packet Switching is Not Always the Right Answer!"
>
> Pure Heresy! in the 1980s' Internet Community.
>
> The core observation was that if you had a fairly consistent flow of data (bits, not packets) between point A and point B, the best way to carry that traffic was to simply have an appropriately sized circuit between A and B. If you had some traffic that needed low-latency service, you'd route it over that circuit. Other traffic, that wouldn't "fit" in the circuit, could be routed over longer paths using classic packet switching. Clever routing algorithms could make such decisions, selecting paths appropriate for each type of traffic using the settings conveyed in the TOS and TTL fields. A heavy flow of traffic between two points might even utilize several distinct pathways through the Internet, and achieve throughput from A to B greater than what any single "best route" could accomplish.
>
> In the ICCB, this was called the "Multipath Routing" issue. It wasn't a new issue; the same situation existed in the ARPANET and solutions were being researched for introduction into the IMP software.
There was quite a lot of such research going on, exploring how to improve the behavior of the ARPANET and its clones (the DDN, Defense Data Network, being a prime example of where new techniques would be very useful).
>
> In the ARPANET, ten years of operations had led to the development of machinery to change the topology of the network as traffic patterns changed. Analysts would look at traffic statistics, and at network performance data such as packet transit times, and run mathematical models to decide where it would be appropriate to have telephone circuits between pairs of IMPs. Collecting such data, doing the analysis, and "provisioning" the circuits (getting the appropriate phone company to install them) took time - months at least, perhaps sometimes even years.
>
> In the telephony network, there were even more years of experience using Circuit Switches - the technology of traditional phone calls, where the network switches allocated a specific quantity of bandwidth along circuits between switching centers, dedicating some bandwidth to each call and patching them all together in series so the end users thought that they had a simple wire connecting the two ends of the call. Packet switching provided Virtual Circuits and would try its best to handle whatever the Users gave it. Circuit Switching provided real Circuits that provided stable bandwidth and delay, or told you it couldn't ("busy signal").
>
> In the 80s ARPANET, we had experimented with faster ways to add or subtract bandwidth, by simply using dial-up modems. An IMP could "add a circuit" to another IMP by using the dial-up telephony network to "make a call" to the other IMP, and the routing mechanisms would notice that that circuit had "come up", and simply incorporate it into the traffic flows. Such mechanisms were manually triggered, since the IMP software didn't know how to make decisions about such "dynamic topology".
We used it successfully to enable a new IMP to join an existing network by simply "dialing in" to a modem on some IMP already running in that network. The new IMP would quickly become just another operating node in the existing network, and its attached host computers could then make connections to other sites on the network.
>
> The heretical idea in the Internet arena was that a similar "dynamic topology" mechanism could be introduced, where bandwidth between points A and B could be added and subtracted on the fly between pairs of Gateways, as some human operator, or very clever algorithm, determined it was appropriate.
>
> With such a mechanism, (we hoped that) different types of service could be supported on the Internet. Gateways might determine that there was need for a low-latency pathway between points A and B, and that they were unable to provide such service with the current number of "hops" (more specifically Time) involved in the current best route. So they could "dial up" more bandwidth directly between A and B, thereby eliminating multiple hops through intermediate gateways and associated packet transmission delays, buffering, etc.
>
> So, Packet Switching was not always the right answer. When you need a Circuit, you should use Circuit Switching.... Heresy!
>
> There were all sorts of ideas floating around about how that might work. One example I remember was called something like "Cut Through Routing". The basic idea was that a Gateway, when it started to receive a datagram, could look at the header and identify that datagram as being high priority, and associated with an ongoing traffic flow that needed low latency. The gateway could then start transmitting that same datagram on the way to its next outbound destination -- even before the datagram had been completely received from the incoming circuit.
This would reduce transit time through that node to possibly just a handful of "bit times", rather than however long it would take to receive and then retransmit the entire datagram. But there were problems with such a scheme - what do you do about checksums?
>
> Obviously such a system would require a lot of new work. In the interim, to gain experience from operations and hopefully figure out what those clever routing algorithms should do, we envisioned a network in which a "node" contained two separate pieces of equipment - a typical Gateway (now called a Router), and a typical Circuit Switch (as you would find in an 80s telephony network). Until the algorithms were figured out, a human operator/analyst would make the decisions about how to use the packet and circuit capabilities, much as the dial-up modems were being used, and hopefully figure out how such things should work so it could be transformed into algorithms, protocols, and code.
>
> At BBN, we actually proposed such a network project to one client (not ARPA), using off-the-shelf IMPs, Circuit Switches, and Gateways to create each network node. The Circuit network would provide circuits to be used by the Packet Network, and such Circuits could be reconfigured on demand as needed. If two Gateways really needed a circuit connecting them, it could be "provisioned" by simply issuing commands to the Circuit Switches. The Gateways would (eventually) realize that they had a new circuit available, and it would become the shortest route between A and B.
>
> BBN even bought a small company that had been making Circuit Switches for the Telephony market. AFAIK, that project didn't happen. I suspect the client realized that there was a bit too much "research" that still needed to be done before such a system would be ready for production use.
>
> Anyway, I thought this recollection of 1980s networking might be of historical interest. After 40 years, things have no doubt changed a lot.
I don't know much about how modern network nodes actually work. Perhaps they now do use a hybrid of packet and circuit switching and use dynamic topology? Perhaps it's all now in silicon deep inside where the fiber light is transformed back and forth into electrons. Perhaps it's all done optically using some kind of quantum technique...? Or perhaps they just have added more memory everywhere and hoped that lots of buffering would be enough to meet the Users' needs. Memory is cheaper to get than new algorithms and protocols.
>
> In any event, I hope this explains why I think Time was, and is still, important to The Internet. It's not an easy problem. And my own empirical and anecdotal observation, as just a User now, is that bad things still seem to happen far too frequently to explain as technical difficulties.
>
> Although many people use The Internet today, there are some communities that find it unusable. Serious Gamers I've talked with struggle to find places to plug in to The Internet where they can enjoy their games. I also wonder, as we watch the news from "the front", wherever that is today, whether today's military actually uses The Internet as that 1980s scenario envisioned. Or perhaps they have their own private internet now, tuned to do what they need it to do?
>
> Hope this helps some Historians. Someone should have written it down 40 years ago, in a form more permanent than emails. Sorry about that....
>
> Thanks for getting this far,
> Jack Haverty
>
>> On 10/2/22 12:50, Brian E Carpenter wrote:
>> Jack,
>>> On 03-Oct-22 06:55, Jack Haverty via Internet-history wrote:
>>> The short answer is "Yes". The Time-To-Live field was intended to count down actual transit time as a datagram proceeded through the Internet. A datagram was to be discarded as soon as some algorithm determined it wasn't going to get to its destination before its TTL ran to zero. But we didn't have the means to measure time, so hop-counts were the placeholder.
>>> >>> I wasn't involved in the IPV6 work, but I suspect the change of the >>> field to "hop count" reflected the reality of what the field actually >>> was. But it would have been better to have actually made Time work. >> >> To be blunt, why? >> >> There was no promise of guaranteed latency in those days, was there? >> As soon as queueing theory entered the game, that wasn't an option. >> So it wasn't just the absence of precise time, it was the presence of >> random delays that made a hop count the right answer, not just the >> convenient answer. >> >> I think that's why IPv6 never even considered anything but a hop count. >> The same lies behind the original TOS bits and their rebranding as >> the Differentiated Services Code Point many years later. My motto >> during the diffserv debates was "You can't beat queueing theory." >> >> There are people in the IETF working hard on Detnet ("deterministic >> networking") today. Maybe they have worked out how to beat queueing >> theory, but I doubt it. What I learned from working on real-time >> control systems is that you can't guarantee timing outside a very >> limited and tightly managed set of resources, where unbounded >> queues cannot occur. >> >> Brian >> >>> >>> Much of these "original ideas" probably weren't ever written down in >>> persistent media. Most discussions in the 1980 time frame were done >>> either in person or more extensively in email. Disk space was scarce >>> and expensive, so much of such email was probably never archived - >>> especially email not on the more "formal" mailing lists of the day. >>> >>> As I recall, Time was considered very important, for a number of >>> reasons. So here's what I remember... >>> ----- >>> >>> Like every project using computers, the Internet was constrained by too >>> little memory, too slow processors, and too limited bandwidth. 
A >>> typical, and expensive, system might have a few dozen kilobytes of >>> memory, a processor running at perhaps 1 MHz, and "high speed" >>> communications circuits carrying 56 kilobits per second. So there was >>> strong incentive not to waste resources. >>> >>> At the time, the ARPANET had been running for about ten years, and quite >>> a lot of experience had been gained through its operation and crises. >>> Over that time, a lot of mechanisms had been put in place, internally in >>> the IMP algorithms and hardware, to "protect" the network and keep it >>> running despite what the user computers tried to do. So, for example, >>> an IMP could regulate the flow of traffic from any of its "host" >>> computers, and even shut it off completely if needed. (Google "ARPANET >>> RFNM counting" if curious). >>> >>> In the Internet, the gateways had no such mechanisms available. We were >>> especially concerned about the "impedance mismatch" that would occur at >>> a gateway connecting a LAN to a much slower and "skinnier" long-haul >>> network. All of the "flow control" mechanisms that were implemented >>> inside an ARPANET IMP would be instead implemented inside TCP software >>> in users' host computers. >>> >>> We didn't know how that would work. But something had to be in the >>> code.... So the principle was that IP datagrams could be simply >>> discarded when necessary, wherever necessary, and TCP would retransmit >>> them so they would eventually get delivered. >>> >>> We envisioned that approach could easily lead to "runaway" scenarios, >>> with the Internet full of duplicate datagrams being dropped at any >>> "impedance mismatch" point along the way. 
In fact, we saw exactly that >>> at a gateway between ARPANET and SATNET - IIRC in one of Dave's >>> transatlantic experiments ("Don't do that!!!") >>> >>> So, Source Quench was invented, as a way of telling some host to "slow >>> down", and the gateways sent an SQ back to the source of any datagram it >>> had to drop. Many of us didn't think that would work very well (e.g., a >>> host might send one datagram and get back an SQ - what should it do to >>> "slow down"...?). I recall that Dave knew exactly what to do. Since >>> his machine's datagram had been dropped, it meant he should immediately >>> retransmit it. Another "Don't do that!" moment.... >>> >>> But SQ was a placeholder too -- to be replaced by some "real" flow >>> control mechanism as soon as the experimentation revealed what that >>> should be. >>> >>> ----- >>> >>> TCP retransmissions were based on Time. If a TCP didn't receive a >>> timely acknowledgement that data had been received, it could assume that >>> someone along the way had dropped the datagram and it should retransmit >>> it. SQ datagrams were also of course not guaranteed to get to their >>> destination, so you couldn't count on them as a signal to retransmit. >>> So Time was the only answer. >>> >>> But how to set the Timer in your TCP - that was subject to >>> experimentation, with lots of ideas. If you sent a copy of your data >>> too soon, it would just overload everything along the path through the >>> Internet with superfluous data consuming those scarce resources. If you >>> waited too long, your end-users would complain that the Internet was too >>> slow. So the answer was to have each TCP estimate how long it was >>> taking for a datagram to get to its destination, and set its own >>> "retransmission timer" to slightly longer than that value. >>> >>> Of course, such a technique requires instrumentation and data. 
Also, >>> since the delays might depend on the direction of a datagram's travel, >>> you needed synchronized clocks at the two endpoints of a TCP connection, >>> so they could accurately measure one-way transit times. >>> >>> Meanwhile, inside the gateways, there were ideas about how to do even >>> better by using Time. For example, if the routing protocols were >>> actually based on Time (shortest transit time) rather than Hops (number >>> of gateways between here and destination), the Internet would provide >>> better user performance and be more efficient. Even better - if a >>> gateway could "know" that a particular datagram wouldn't get to its >>> destination before its TTL ran out, it could discard that datagram >>> immediately, even though it still had time to live. No point in wasting >>> network resources carrying a datagram already sentenced to death. >>> >>> We couldn't do all that. Didn't have the hardware, didn't have the >>> algorithms, didn't have the protocols. So in the meantime, any computer >>> handling an IP datagram should simply decrement the TTL value, and if it >>> reached zero the datagram should be discarded. TTL effectively became a >>> "hop count". >>> >>> When Dave got NTP running, and enough Time Servers were online and >>> reliable, and the gateways and hosts had the needed hardware, Time could >>> be measured, TTL could be set based on Time, and the Internet would be >>> better. >>> >>> In the meanwhile, all of us TCP implementers just picked some value for >>> our retransmission timers. I think I set mine to 3 seconds. No >>> exhaustive analysis or sophisticated mathematics involved. It just felt >>> right.....there was a lot of that going on in the early Internet. >>> >>> ----- >>> >>> While all the TCP work was going on, other uses were emerging. We knew >>> that there was more to networking than just logging in to distant >>> computers or transferring files between them - uses that had been common >>> for years in the ARPANET.
But the next "killer app" hadn't appeared >>> yet, although there were lots of people trying to create one. >>> >>> In particular, "Packet Voice" was popular, with a contingent of >>> researchers figuring out how to do that on the fledgling Internet. There >>> were visions that someday it might even be possible to do Video. In >>> particular, *interactive* voice was the goal, i.e., the ability to have >>> a conversation by voice over the Internet (I don't recall when the term >>> VOIP emerged, probably much later). >>> >>> In a resource-constrained network, you don't want to waste resources on >>> datagrams that aren't useful. In conversational voice, a datagram that >>> arrives too late isn't useful. A fragment of audio that should have >>> gone to the speaker 500 milliseconds ago can only be discarded. It >>> would be better that it hadn't been sent at all, but at least discarding >>> it along the way, as soon as it's known to be too late to arrive, would >>> be appropriate. >>> >>> Of course, that needs Time. UDP was created as an adjunct to TCP, >>> providing a different kind of network service. Where TCP got all of >>> the data to its destination, no matter how long it took, UDP would get >>> as much data as possible to the destination, as long as it got there in >>> time to be useful. Time was important. >>> >>> UDP implementations, in host computers, didn't have to worry about >>> retransmissions. But they did still have to worry about how long it >>> would take for a datagram to get to its destination. With that >>> knowledge, they could set their datagrams' TTL values to something >>> appropriate for the network conditions at the time. Perhaps they might >>> even tell their human users "Sorry, conversational use not available >>> right now." -- an Internet equivalent of the "busy signal" - if the >>> current network transit times were too high to provide a good user >>> experience. 
>>> >>> Within the world of gateways, the differing needs of TCP and UDP >>> motivated different behaviors. That motivated the inclusion of the TOS >>> - Type Of Service - field in the IP datagram header. Perhaps UDP >>> packets would receive higher priority, being placed at the head of >>> queues so they got transmitted sooner. Perhaps they would be discarded >>> immediately if the gateway knew, based on its routing mechanisms, that >>> the datagram would never get delivered in time. Perhaps UDP would be >>> routed differently, using a terrestrial but low-bandwidth network, while >>> TCP traffic was directed over a high-bandwidth but long-delay satellite >>> path. A gateway mesh might have two or more independent routing >>> mechanisms, each using a "shortest path" approach, but with different >>> metrics for determining "short" - e.g., UDP using the shortest time >>> route, while some TCP traffic travelled a route with least ("shortest") >>> usage at the time. >>> >>> We couldn't do all that either. We needed Time, hardware, algorithms, >>> protocols, etc. But the placeholders were there, in the TCP, IP, and >>> UDP formats, ready for experimentation to figure all that stuff out. >>> >>> ----- >>> >>> When Time was implemented, there could be much needed experimentation to >>> figure out the right answers. Meanwhile, we had to keep the Internet >>> working. By the early 1980s, the ARPANET had been in operation for more >>> than a decade, and lots of operational experience had accrued. We knew, >>> for example, that things could "go wrong" and generate a crisis for the >>> network operators to quickly fix. TTL, even as just a hop count, was >>> one mechanism to suppress problems. We knew that "routing loops" could >>> occur. TTL would at least prevent situations where datagrams >>> circulated forever, orbiting inside the Internet until someone >>> discovered and fixed whatever was causing a routing loop to keep those >>> datagrams speeding around. 
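The "two or more independent routing mechanisms" idea above amounts to running the same shortest-path computation with different cost metrics. A sketch using Dijkstra's algorithm over an invented four-node mesh (all node names and link costs are made up for illustration):

```python
import heapq

def shortest_path(graph, src, dst, metric):
    """Dijkstra's algorithm; `metric` selects which link cost to minimize."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, link in graph[node].items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + link[metric], nbr, path + [nbr]))
    return None

# Each link carries two costs: transit time (for voice/UDP-style traffic)
# and current usage (for bulk TCP-style traffic).
net = {
    "A": {"B": {"time": 5, "usage": 1}, "C": {"time": 1, "usage": 9}},
    "B": {"A": {"time": 5, "usage": 1}, "D": {"time": 5, "usage": 1}},
    "C": {"A": {"time": 1, "usage": 9}, "D": {"time": 1, "usage": 9}},
    "D": {"B": {"time": 5, "usage": 1}, "C": {"time": 1, "usage": 9}},
}
```

With these numbers, "shortest time" routing sends A-to-D traffic via C while "least usage" routing sends it via B: same mesh, two different answers, one per type of service.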
>>> >>> Since the Internet was an Experiment, there were mechanisms put in place >>> to help run experiments. IIRC, in general things were put in the IP >>> headers when we thought they were important and would be needed long >>> after the experimental phase was over - things like TTL, SQ, TOS. >>> >>> Essentially every field in the IP header, and every type of datagram, >>> was there for some good reason, even though its initial implementation >>> was known to be inadequate. The Internet was built on Placeholders.... >>> >>> Other mechanisms were put into the "Options" mechanism of the IP >>> format. A lot of that was targeted towards supporting experiments, or >>> as occasional tools to be used to debug problems in crises during >>> Internet operations. >>> >>> E.g., all of the "Source Routing" mechanisms might be used to route >>> traffic in particular paths that the current gateways wouldn't otherwise >>> use. An example would be routing voice traffic over specific paths, >>> which the normal gateway routing wouldn't use. The Voice experimenters >>> could use those mechanisms to try out their ideas in a controlled >>> experiment. >>> >>> Similarly, Source Routing might be used to debug network problems. A >>> network analyst might use Source Routing to probe a particular remote >>> computer interface, where the regular gateway mechanisms would avoid >>> that path. >>> >>> So a general rule was that IP headers contained important mechanisms, >>> often just as placeholders, while Options contained things useful only >>> in particular circumstances. >>> >>> But all of these "original ideas" needed Time. We knew Dave was "on >>> it".... >>> >>> ----- >>> >>> Hopefully this helps... I (and many others) probably should have >>> written these "original ideas" down 40 years ago. We did, but I >>> suspect all in the form of emails which have now been lost. Sorry >>> about that. There was always so much code to write. 
And we didn't >>> have the answers yet to motivate creating RFCs which were viewed as more >>> permanent repositories of the solved problems. >>> >>> Sorry about that..... >>> >>> Jack Haverty >>> >>> >>> >>> On 10/2/22 07:45, Alejandro Acosta via Internet-history wrote: >>>> Hello Jack, >>>> >>>> Thanks a lot for sharing this, as usual, I enjoy this kind of >>>> stories :-) >>>> >>>> Jack/group, just a question regarding this topic. When you mentioned: >>>> >>>> "This caused a lot of concern about protocol elements such as >>>> Time-To-Live, which were temporarily to be implemented purely as "hop >>>> counts" >>>> >>>> >>>> Do you mean, the original idea was to really drop the packet at >>>> certain time, a *real* Time-To-Live concept?. >>>> >>>> >>>> Thanks, >>>> >>>> P.S. That's why it was important to change the field's name to hop >>>> count in v6 :-) >>>> >>>> >>>> >>>> On 2/10/22 12:35 AM, Jack Haverty via Internet-history wrote: >>>>> On 10/1/22 16:30, vinton cerf via Internet-history wrote: >>>>>> in the New Yorker >>>>>> >>>>>> https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time >>>>>> >>>>>> >>>>>> v >>>>> >>>>> Agree, nice story. Dave did a *lot* of good work. Reading the >>>>> article reminded me of the genesis of NTP. >>>>> >>>>> IIRC.... >>>>> >>>>> Back in the early days circa 1980, Dave was the unabashed tinkerer, >>>>> experimenter, and scientist. Like all good scientists, he wanted to >>>>> run experiments to explore what the newfangled Internet was doing and >>>>> test his theories. To do that required measurements and data. >>>>> >>>>> At the time, BBN was responsible for the "core gateways" that >>>>> provided most of the long-haul Internet connectivity, e.g., between >>>>> US west and east coasts and Europe. 
There were lots of ideas about >>>>> how to do things - e.g., strategies for TCP retransmissions, >>>>> techniques for maintaining dynamic tables of routing information, >>>>> algorithms for dealing with limited bandwidth and memory, and other >>>>> such stuff that was all intentionally very loosely defined within the >>>>> protocols. The Internet was an Experiment. >>>>> >>>>> I remember talking with Dave back at the early Internet meetings, and >>>>> his fervor to try things out, and his disappointment at the lack of >>>>> the core gateway's ability to measure much of anything. In >>>>> particular, it was difficult to measure how long things took in the >>>>> Internet, since the gateways didn't even have real-time clocks. This >>>>> caused a lot of concern about protocol elements such as Time-To-Live, >>>>> which were temporarily to be implemented purely as "hop counts", >>>>> pending the introduction of some mechanism for measuring Time into >>>>> the gateways. (AFAIK, we're still waiting....) >>>>> >>>>> Curiously, in the pre-Internet days of the ARPANET, the ARPANET IMPs >>>>> did have a pretty good mechanism for measuring time, at least between >>>>> pairs of IMPs at either end of a communications circuit, because such >>>>> circuits ran at specific speeds. So one IMP could tell how long it >>>>> was taking to communicate with one of its neighbors, and used such >>>>> data to drive the ARPANET internal routing mechanisms. >>>>> >>>>> In the Internet, gateways couldn't tell how long it took to send a >>>>> datagram over one of its attached networks. The networks of the day >>>>> simply didn't make such information available to its "users" (e.g., a >>>>> gateway). >>>>> >>>>> But experiments require data, and labs require instruments to collect >>>>> that data, and Dave wanted to test out lots of ideas, and we (BBN) >>>>> couldn't offer any hope of such instrumentation in the core gateways >>>>> any time soon. >>>>> >>>>> So Dave built it. 
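What Dave built rests on a calculation that can be stated in a few lines: from four timestamps per exchange, a client can estimate both the round-trip delay and its clock's offset from the server, under the assumption that path delay is roughly symmetric. This is the standard NTP on-wire arithmetic in sketch form; real NTP then filters many such samples, which is part of how it copes with large, variable Internet delays:

```python
# t1: client transmit, t2: server receive, t3: server transmit,
# t4: client receive (t1, t4 on the client clock; t2, t3 on the server clock).

def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2  # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)         # round trip, excluding server hold time
    return offset, delay
```

For example, with the client 7 seconds behind the server and 0.2 seconds of delay each way, the exchange (100.0, 107.2, 107.3, 100.5) recovers an offset of 7.0 and a delay of 0.4.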
>>>>> >>>>> And that's how NTP got started. IIRC, the rest of us were all just >>>>> trying to get the Internet to work at all. Dave was interested in >>>>> understanding how and why it worked. So while he built NTP, that >>>>> didn't really affect any other projects. Plus most (at least me) >>>>> didn't understand how it was possible to get such accurate >>>>> synchronization when the delays through the Internet mesh were so >>>>> large and variable. (I still don't). But Dave thought it was >>>>> possible, and that's why your computer, phone, laptop, or whatever >>>>> know what time it is today. >>>>> >>>>> Dave was responsible for another long-lived element of the >>>>> Internet. Dave's experiments were sometimes disruptive to the >>>>> "core" Internet that we were tasked to make a reliable 24x7 service. >>>>> Where Dave The Scientist would say "I wonder what happens when I do >>>>> this..." We The Engineers would say "Don't do that!" >>>>> >>>>> That was the original motivation for creating the notion of >>>>> "Autonomous Systems" and EGP - a way to insulate the "core" of the >>>>> Internet from the antics of the Fuzzballs. I corralled Eric Rosen >>>>> after one such Fuzzball-triggered incident and we sat down and >>>>> created ASes, so that we could keep "our" AS running reliably. It >>>>> was intended as an interim mechanism until all the experimentation >>>>> revealed what should be the best algorithms and protocol features to >>>>> put in the next generation, and the Internet Experiment advanced into >>>>> a production network service. We defined ASes and EGP to protect >>>>> the Internet from Dave's Fuzzball mania. >>>>> >>>>> AFAIK, that hasn't happened yet ... and from that article, Dave is >>>>> still Experimenting..... and The Internet is still an Experiment. 
>>>>> >>>>> Fun times, >>>>> Jack Haverty >>>>> >>> > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From tte at cs.fau.de Tue Oct 4 16:47:37 2022 From: tte at cs.fau.de (Toerless Eckert) Date: Wed, 5 Oct 2022 01:47:37 +0200 Subject: [ih] The Importance of Time in the Internet In-Reply-To: References: <522b0724-8d6f-b15b-142b-9b5dca6aaad5@3kitty.org> Message-ID: Time (synchronization) is just a marketing term for clock (synchronization). Are there actually any good war stories about intentional misdirection of "internet" "NTP time"? I guess there must be, but I never stumbled across any. I am only aware of GPS time misdirection, but that is of course so old that it even got used in James Bond decades ago. To answer your question: A trusted source of time (clock) is in every good heist movie a bunch of conspirators in a room simultaneously setting their clocks to noon. I thought the keys for DNS root zone security were managed in the same way. Bootstrapping into any cryptographic system of trust without that overhead and relying on certificates is just a sham given how certificates illogically were designed to have time-based trust conditions. I am just wiggling through that in rfc8994 section 6.2.3.1 because an IETF rfc is (alas) not the right place for a rant against certificates. Oh well. Still doesn't take away anything from Dave and his co-conspirators' achievements, but those who didn't understand the difference between time and clock partially voided those achievements by relying on the marketing term (such as PKI systems). Cheers Toerless On Tue, Oct 04, 2022 at 05:51:35PM -0400, John Lowry via Internet-history wrote: > Jack, > As an "adversarial architect", I agree. But that is my point. Physics rules. Please PLEASE give me a variable like time to target as a critical asset. I will destroy you. What is the trusted source for time?
Countdown is harder to manipulate. If I want to control the outcome, and time is what you used, then control of time will rule. I don't care about the domain. Take a look at phasor requirements and why they refuse to rely on "the internet" for synchronizing. You're better off with "indeterminacies" like countdowns and physical sensors. > Remember that we live in a physical universe. > > Sent from my iPad > > > On Oct 4, 2022, at 5:26 PM, Jack Haverty via Internet-history wrote: > > > > Brian asked: "To be blunt, why? " - from -- [ih] nice story about dave mills and NTP > > > > OK, I'll try to explain why I believed Time was so important to The Internet back in the 1980s. Or at least what I remember.... changing the subject line to be more relevant. > > > > Basically, the "why" is "to provide the network services that the Users need." In other words, to keep the customers happy. That's the short answer. Here's the longer story: > > > > --------------------------- > > > > As far as I can remember, there wasn't any "specifications" document of The Internet back in the early 80s when IPV4 et al were congealing. Nothing like a "Requirements" document that you'd typically find for major government projects that detailed what the resultant system had to be able to do. > > > > Yes, there have been lots of documents, e.g., RFCs, detailing the formats, protocols, algorithms, and myriad technical details of the evolving design. But I can't remember any document specifying what The Internet was expected to provide as services to its Users. IIRC, even the seminal 1974 Cerf/Kahn paper on "A Protocol for Packet Network Interconnection" that created TCP says nothing about what such an aggregate of networks would provide as services to its users' attached computers. In other words, what should "customers" of The Internet expect to be able to do with it? > > > > That's understandable for a research environment.
But to actually build the early Internet, we had to have some idea of what the thing being built should do, in order to figure out what's still missing, what might or might not work, what someone should think about for the future, and so on. > > > > I believe ARPA's strategy, at least in the 80s, was to define what The Internet had to be able to do by using a handful of "scenarios" of how The Internet might be used in a real-world (customer) situation. In addition, it was important to have concrete physical demonstrations in order to show that the ideas actually worked. Such demonstrations showed how the technology might actually be useful in the real world, and that theory and research had connections to practice and real-world situations. > > > > The "customer" of the early Internet was the government(s) - largely the US, but several countries in Europe were also involved. Specifically, the military world was the customer. Keeping the customer happy, by seeing working demonstrations that related to real-world situations, was crucial to keeping the funding flowing. Generals and government VIPs care about what they can envision using. Generals don't read RFCs. But they do open their wallets when they see something that will be useful to them. > > > > At the early Internet meetings, and especially at the ICCB (now IAB) initial meetings, I remember Vint often describing one such scenario, which we used to drive thought experiments to imagine how some technical idea would behave in the real world. It was of course a military scenario, in which a battlefield commander is in contact with the chain of command up to the President, as well as with diverse military elements in the air, on ships, in moving vehicles on the ground, in intelligence centers, and everything else you can imagine is used in a military scenario. Allies too. That's what the customer wanted to do. 
> > > > In that 1980s scenario, a "command and control" conference is being held, using The Internet to connect the widely scattered participants. A general might be using a shared multimedia display (think of a static graphical map with a cursor/pointer - no thought of interactive video in the 80s...) to understand what was happening "in the field", consult with advisors and other command staffs, and order appropriate actions. While pointing at the map, the orders are given. > > > > Soldier in a Jeep: "The enemy supply depot is here, and a large body of infantry is here" > > ... > > ... > > General: "OK, send the third Division here, and have that bomber squadron hit here." > > > > While speaking, the field commanders and General draw a cursor on their screen, indicating the various locations. Everyone else sees a similar screen. Questions and clarifications happen quickly, in a conversational manner familiar to military members from their long experience using radios. But it's all online, through the Internet. > > > > So what can go wrong? > > > > Most obvious is that the datagrams supporting the interactive conversations need to get to their destinations in time to be useful in delivering the audio, graphics, etc., to all the members of the conversation, and properly synchronized. That need related directly to lots of mechanisms we put into the Internet IPV4 technology - TTL, TOS, Multicast, etc. If the data doesn't arrive soon enough, the conversation will be painful and prone to errors and misinterpretation. > > > > But there was also a need to be able to synchronize diverse data streams, so that the content delivered by a voice transmission, perhaps flowing over UDP, was "in sync" with graphical information carried by a TCP connection. Those applications needed to know how The Internet was handling their datagrams, and how long it was taking for them to get delivered through whatever path of networks was still functioning at the time. 
Does this speech fragment coincide in time with that graphics update - that kind of situation. > > > > In the scenario, it was crucial that the field reports and General's commands were in sync with the cursor movements on the shared graphics screens. Otherwise very bad things could happen. (think about it...) > > > > Time was important. > > > > Within the physical Internet of the 80s, there were enough implementations of the pieces to demonstrate such capabilities. The ARPANET provided connectivity among fixed locations in the US and some other places, including governmental sites such as the Pentagon. SATNET provided transatlantic connectivity. A clone of SATNET, called MATNET, was deployed by the Navy. One MATNET node was on an aircraft carrier (USS Carl Vinson), which could have been where that squadron of bombers in the Scenario came from. Army personnel were moving around a battlefield in Jeeps and helicopters, in field exercises with Packet Radios in their vehicles. They could move quickly wherever the orders told them to go, and the Packet Radio networks would keep them in contact with all the other players in a demo of that Scenario. > > > > Networks were slow in those days, with 56 kilobits/second considered "fast". ARPA had deployed a "Wideband Net" using satellite technology, that used a 3 megabits/second channel. That could obviously carry much more traffic than other networks. But the Wideband Net (aka WBNET) was connected only to the ARPANET. Like the ARPANET, the WBNET spanned the continental US, able to carry perhaps 10 times the traffic that the ARPANET could support. But how to actually use the WBNET - that was the problem. > > > > Since routing in the 1980s Internet was effectively based on "hop count", despite the name given to the TTL field, the gateways, and the "host" computers on the ARPANET, would never send any traffic towards the WBNET. 
Such traffic would always be two "hops" longer through a WBNET path than if it travelled directly through the ARPANET. The WBNET was never going to be the chosen route from anywhere to anywhere else in The Internet. > > > > In the scenario, if the WBNET was somehow effectively utilized, perhaps it would be possible to convey much more detailed maps and other graphics. Maybe even video. > > > > But there was no way to use WBNET. So we put "Source Routing" mechanisms into the IPV4 headers, as a way for experimenters to force traffic over the WBNET, despite the gateways' belief that such a path was never the best way to go. In effect, the "host" computers were making their own decision about how their traffic should be carried through the Internet, likely contradicting the decision made by the routing mechanisms in the Gateways. There was even a term for the necessary algorithms and code in those "host" computers - they had to act as "Half Gateways". To make decisions about where to send their datagrams, the hosts had to somehow participate in the exchange of routing information with the networks' Gateways. At the time that was only done by hand, configuring the host code to send appropriate packets with Source Routing to perform particular experiments. No design of a "Half Gateway" was developed AFAIK. > > > > In the ICCB's list of "Things that need to be done", this was part of the "Expressway Routing" issue. The analogy we used was from everyone's familiarity driving in urban areas. Even though you can get from point A to point B by using just the city streets "network", it's often better and faster to head for the nearest freeway entrance, even though it involves going a short distance in the "wrong direction". The route may be longer with three hops through Streets/Freeway/Streets, but it's the fastest way to get there, much better than just travelling on Streets.
Datagrams have needs just like travellers in cars; their passengers need to get to the destination before the event starts. Time matters. So does achievable bandwidth, to get enough information delivered so that good decisions can be made. You can't always count on getting both. > > > > We thought gateways should be smart about Expressway Routing, and offer different types of service for different user needs, but didn't know how to do it. Meanwhile, I don't know the details, but I believe there was quite a lot of such experimentation using the WBNET. The expectation was that such experiments could work out how to best transport voice, graphical, and other such "non traditional" network traffic. Later the gateways would know how to better use all the available resources and match their routes to the particular traffic's needs, and Source Routing would no longer be needed (at least for that situation). > > > > All of what I just wrote happened almost 40 years ago, so things have changed. A lot. Maybe Time is no longer important, and notions such as TOS are no longer needed. But today, in 2022, I see the talking heads on TV interviewing reporters, experts, or random people "out there" somewhere in the world. The Internet seems to be everywhere (even active battlefields!) and it's used a lot. I've been amazed at how well it works -- usually. But you still sometimes see video breaking up, fragments of conversations being lost, and sometimes it gets bad enough that the anchor person apologizes for the "technical difficulties" and promises to get the interviewee back as soon as they can. > > > > Perhaps that's caused by a loose cable somewhere. Or perhaps it's caused by "buffer bloat" somewhere, which may have disappeared if you try later. Perhaps it would work better if the Internet had TTL, TOS, and other such stuff that was envisioned in the 80s. 
Meanwhile, the Users (like me) have just become used to the fact that such things happen, you have to expect them, and just try again. > > > > The General would not be happy. > > > > I hope I'm wrong, but I fear "technical difficulties" has become a de facto feature of the Internet technology, now baked into the technical design. Anyway, I hope I've explained why I (still) think Time is important. It's all about The Internet providing the services that the customers need to do what they need to do. > > > > ------- > > > > One last thing while I'm remembering it, just to capture a bit more of the 80s Internet history for the historians. At the time, we had some ideas about how to solve these "Time" problems. One idea was somewhat heretical. I don't remember who was in the "We" group of heretics who were pursuing that idea. But I admit to being such a heretic. > > > > The gist of the Idea was "Packet Switching is Not Always the Right Answer!" > > > > Pure Heresy! in the 1980s' Internet Community. > > > > The core observation was that if you had a fairly consistent flow of data (bits, not packets) between point A and point B, the best way to carry that traffic was to simply have an appropriately sized circuit between A and B. If you had some traffic that needed low-latency service, you'd route it over that circuit. Other traffic, that wouldn't "fit" in the circuit could be routed over longer paths using classic packet switching. Clever routing algorithms could make such decisions, selecting paths appropriate for each type of traffic using the settings conveyed in the TOS and TTL fields. A heavy flow of traffic between two points might even utilize several distinct pathways through the Internet, and achieve throughput from A to B greater than what any single "best route" could accomplish. > > > > In the ICCB, this was called the "Multipath Routing" issue. 
It wasn't a new issue; the same situation existed in the ARPANET and solutions were being researched for introduction into the IMP software. There was quite a lot of such research going on, exploring how to improve the behavior of the ARPANET and its clones (the DDN, Defense Data Network, being a prime example of where new techniques would be very useful). > > > > In the ARPANET, ten years of operations had led to the development of machinery to change the topology of the network as traffic patterns changed. Analysts would look at traffic statistics, and at network performance data such as packet transit times, and run mathematical models to decide where it would be appropriate to have telephone circuits between pairs of IMPs. Collecting such data, doing the analysis, and "provisioning" the circuits (getting the appropriate phone company to install them) took time - months at least, perhaps sometimes even years. > > > > In the telephony network, there were even more years of experience using Circuit Switches - the technology of traditional phone calls, where the network switches allocated a specific quantity of bandwidth along circuits between switching centers, dedicating some bandwidth to each call and patching them all together in series so the end users thought that they had a simple wire connecting the two ends of the call. Packet switching provided Virtual Circuits and would try its best to handle whatever the Users gave it. Circuit Switching provided real Circuits with stable bandwidth and delay, or told you it couldn't ("busy signal"). > > > > In the 80s ARPANET, we had experimented with faster ways to add or subtract bandwidth, by simply using dial-up modems. An IMP could "add a circuit" to another IMP by using the dial-up telephony network to "make a call" to the other IMP, and the routing mechanisms would notice that that circuit had "come up", and simply incorporate it into the traffic flows. 
Such mechanisms were manually triggered, since the IMP software didn't know how to make decisions about such "dynamic topology". We used it successfully to enable a new IMP to join an existing network by simply "dialing in" to a modem on some IMP already running in that network. The new IMP would quickly become just another operating node in the existing network, and its attached host computers could then make connections to other sites on the network. > > > > The heretical idea in the Internet arena was that a similar "dynamic topology" mechanism could be introduced, where bandwidth between points A and B could be added and subtracted on the fly between pairs of Gateways, as some human operator, or very clever algorithm, determined it was appropriate. > > > > With such a mechanism, (we hoped that) different types of service could be supported on the Internet. Gateways might determine that there was need for a low-latency pathway between points A and B, and that it was unable to provide such service with the current number of "hops" (more specifically Time) involved in the current best route. So it could "dial up" more bandwidth directly between A and B, thereby eliminating multiple hops through intermediate gateways and associated packet transmission delays, buffering, etc. > > > > So, Packet Switching was not always the right answer. When you need a Circuit, you should use Circuit Switching.... Heresy! > > > > There were all sorts of ideas floating around about how that might work. One example I remember was called something like "Cut Through Routing". The basic idea was that a Gateway, when it started to receive a datagram, could look at the header and identify that datagram as being high priority, and associated with an ongoing traffic flow that needed low latency. The gateway could then start transmitting that same datagram on the way to its next outbound destination -- even before the datagram had been completely received from the incoming circuit. 
This would reduce transit time through that node to possibly just a handful of "bit times", rather than however long it would take to receive and then retransmit the entire datagram. But there were problems with such a scheme - what do you do about checksums? > > > > Obviously such a system would require a lot of new work. In the interim, to gain experience from operations and hopefully figure out what those clever routing algorithms should do, we envisioned a network in which a "node" contained two separate pieces of equipment - a typical Gateway (now called a Router), and a typical Circuit Switch (as you would find in an 80s telephony network). Until the algorithms were figured out, a human operator/analyst would make the decisions about how to use the packet and circuit capabilities, much as the dial-up modems were being used, and hopefully figure out how such things should work so it could be transformed into algorithms, protocols, and code. > > > > At BBN, we actually proposed such a network project to one client (not ARPA), using off-the-shelf IMPs, Circuit Switches, and Gateways to create each network node. The Circuit network would provide circuits to be used by the Packet Network, and such Circuits could be reconfigured on demand as needed. If two Gateways really needed a circuit connecting them, it could be "provisioned" by simply issuing commands to the Circuit Switches. The Gateways would (eventually) realize that they had a new circuit available, and it would become the shortest route between A and B. > > > > BBN even bought a small company that had been making Circuit Switches for the Telephony market. AFAIK, that project didn't happen. I suspect the client realized that there was a bit too much "research" that still needed to be done before such a system would be ready for production use. > > > > Anyway, I thought this recollection of 1980s networking might be of historical interest. After 40 years, things have no doubt changed a lot. 
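The latency argument behind Cut Through Routing is simple arithmetic: store-and-forward pays the full serialization delay of the datagram at every hop, while cut-through pays it once plus only a header's worth per intermediate node. A small sketch of the comparison (all numbers are illustrative, though 56 kb/s was a typical trunk speed of the era):

```python
def store_and_forward_delay(hops, pkt_bits, link_bps):
    # Each node must receive the whole datagram before retransmitting it.
    return hops * (pkt_bits / link_bps)

def cut_through_delay(hops, pkt_bits, hdr_bits, link_bps):
    # Intermediate nodes begin forwarding as soon as the header arrives,
    # so only the header's serialization cost is paid per extra hop; the
    # full datagram is serialized just once.
    return (hops - 1) * (hdr_bits / link_bps) + pkt_bits / link_bps

# 5 hops, 1000-byte datagram, 20-byte header, 56 kb/s trunks (invented):
sf = store_and_forward_delay(5, 8000, 56000)   # ~0.71 s
ct = cut_through_delay(5, 8000, 160, 56000)    # ~0.15 s
```

The catch the paragraph above notes still bites: the header-checksum can't be verified until the whole datagram has arrived, by which time the front of it is already on the next wire.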
I don't know much about how modern network nodes actually work. Perhaps they now do use a hybrid of packet and circuit switching and use dynamic topology? Perhaps it's all now in silicon deep inside where the fiber light is transformed back and forth into electrons. Perhaps it's all done optically using some kind of quantum technique...? Or perhaps they just have added more memory everywhere and hoped that lots of buffering would be enough to meet the Users' needs. Memory is cheaper to get than new algorithms and protocols. > > > > In any event, I hope this explains why I think Time was, and is still, important to The Internet. It's not an easy problem. And my own empirical and anecdotal observation, as just a User now, is that bad things still seem to happen far too frequently to explain away as technical difficulties. > > > > Although many people use The Internet today, there are some communities that find it unusable. Serious Gamers I've talked with struggle to find places to plug in to The Internet where they can enjoy their games. I also wonder, as we watch the news from "the front", wherever that is today, whether today's military actually uses The Internet as that 1980s scenario envisioned. Or perhaps they have their own private internet now, tuned to do what they need it to do? > > > > Hope this helps some Historians. Someone should have written it down 40 years ago, in a form more permanent than emails. Sorry about that.... > > > > Thanks for getting this far, > > Jack Haverty > > > > > >> On 10/2/22 12:50, Brian E Carpenter wrote: > >> Jack, > >>> On 03-Oct-22 06:55, Jack Haverty via Internet-history wrote: > >>> The short answer is "Yes". The Time-To-Live field was intended to count > >>> down actual transit time as a datagram proceeded through the Internet. > >>> A datagram was to be discarded as soon as some algorithm determined it > >>> wasn't going to get to its destination before its TTL ran to zero. 
But > >>> we didn't have the means to measure time, so hop-counts were the > >>> placeholder. > >>> > >>> I wasn't involved in the IPV6 work, but I suspect the change of the > >>> field to "hop count" reflected the reality of what the field actually > >>> was. But it would have been better to have actually made Time work. > >> > >> To be blunt, why? > >> > >> There was no promise of guaranteed latency in those days, was there? > >> As soon as queueing theory entered the game, that wasn't an option. > >> So it wasn't just the absence of precise time, it was the presence of > >> random delays that made a hop count the right answer, not just the > >> convenient answer. > >> > >> I think that's why IPv6 never even considered anything but a hop count. > >> The same lies behind the original TOS bits and their rebranding as > >> the Differentiated Services Code Point many years later. My motto > >> during the diffserv debates was "You can't beat queueing theory." > >> > >> There are people in the IETF working hard on Detnet ("deterministic > >> networking") today. Maybe they have worked out how to beat queueing > >> theory, but I doubt it. What I learned from working on real-time > >> control systems is that you can't guarantee timing outside a very > >> limited and tightly managed set of resources, where unbounded > >> queues cannot occur. > >> > >> Brian > >> > >>> > >>> Much of these "original ideas" probably weren't ever written down in > >>> persistent media. Most discussions in the 1980 time frame were done > >>> either in person or more extensively in email. Disk space was scarce > >>> and expensive, so much of such email was probably never archived - > >>> especially email not on the more "formal" mailing lists of the day. > >>> > >>> As I recall, Time was considered very important, for a number of > >>> reasons. So here's what I remember... 
> >>> ----- > >>> > >>> Like every project using computers, the Internet was constrained by too > >>> little memory, too slow processors, and too limited bandwidth. A > >>> typical, and expensive, system might have a few dozen kilobytes of > >>> memory, a processor running at perhaps 1 MHz, and "high speed" > >>> communications circuits carrying 56 kilobits per second. So there was > >>> strong incentive not to waste resources. > >>> > >>> At the time, the ARPANET had been running for about ten years, and quite > >>> a lot of experience had been gained through its operation and crises. > >>> Over that time, a lot of mechanisms had been put in place, internally in > >>> the IMP algorithms and hardware, to "protect" the network and keep it > >>> running despite what the user computers tried to do. So, for example, > >>> an IMP could regulate the flow of traffic from any of its "host" > >>> computers, and even shut it off completely if needed. (Google "ARPANET > >>> RFNM counting" if curious). > >>> > >>> In the Internet, the gateways had no such mechanisms available. We were > >>> especially concerned about the "impedance mismatch" that would occur at > >>> a gateway connecting a LAN to a much slower and "skinnier" long-haul > >>> network. All of the "flow control" mechanisms that were implemented > >>> inside an ARPANET IMP would be instead implemented inside TCP software > >>> in users' host computers. > >>> > >>> We didn't know how that would work. But something had to be in the > >>> code.... So the principle was that IP datagrams could be simply > >>> discarded when necessary, wherever necessary, and TCP would retransmit > >>> them so they would eventually get delivered. > >>> > >>> We envisioned that approach could easily lead to "runaway" scenarios, > >>> with the Internet full of duplicate datagrams being dropped at any > >>> "impedance mismatch" point along the way. 
In fact, we saw exactly that > >>> at a gateway between ARPANET and SATNET - IIRC in one of Dave's > >>> transatlantic experiments ("Don't do that!!!") > >>> > >>> So, Source Quench was invented, as a way of telling some host to "slow > >>> down", and the gateways sent an SQ back to the source of any datagram it > >>> had to drop. Many of us didn't think that would work very well (e.g., a > >>> host might send one datagram and get back an SQ - what should it do to > >>> "slow down"...?). I recall that Dave knew exactly what to do. Since > >>> his machine's datagram had been dropped, it meant he should immediately > >>> retransmit it. Another "Don't do that!" moment.... > >>> > >>> But SQ was a placeholder too -- to be replaced by some "real" flow > >>> control mechanism as soon as the experimentation revealed what that > >>> should be. > >>> > >>> ----- > >>> > >>> TCP retransmissions were based on Time. If a TCP didn't receive a > >>> timely acknowledgement that data had been received, it could assume that > >>> someone along the way had dropped the datagram and it should retransmit > >>> it. SQ datagrams were also of course not guaranteed to get to their > >>> destination, so you couldn't count on them as a signal to retransmit. > >>> So Time was the only answer. > >>> > >>> But how to set the Timer in your TCP - that was subject to > >>> experimentation, with lots of ideas. If you sent a copy of your data > >>> too soon, it would just overload everything along the path through the > >>> Internet with superfluous data consuming those scarce resources. If you > >>> waited too long, your end-users would complain that the Internet was too > >>> slow. So the answer was to have each TCP estimate how long it was > >>> taking for a datagram to get to its destination, and set its own > >>> "retransmission timer" to slightly longer than that value. > >>> > >>> Of course, such a technique requires instrumentation and data. 
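The estimate-and-pad timer scheme described in the quoted paragraph is essentially what RFC 793 later codified as the smoothed round-trip-time estimator. A minimal sketch (the ALPHA and BETA constants are the RFC's suggested values; the sample RTTs are invented):

```python
ALPHA, BETA = 0.9, 2.0  # RFC 793's suggested smoothing gain and safety factor

def update_rto(srtt, measured_rtt):
    """One step of the RFC 793 retransmission-timeout estimator: blend the
    new round-trip sample into the smoothed estimate, then set the timer
    'slightly longer' than the estimate by the factor BETA."""
    srtt = ALPHA * srtt + (1 - ALPHA) * measured_rtt
    return srtt, BETA * srtt

# Start from the flat 3-second guess mentioned later in this message,
# then feed in some invented ~1-second measurements:
srtt, rto = 3.0, 6.0
for sample in [1.2, 1.4, 0.9, 1.1]:
    srtt, rto = update_rto(srtt, sample)
```

Each measurement pulls the timer toward observed reality, which is exactly why the scheme "requires instrumentation and data."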
Also, > >>> since the delays might depend on the direction of a datagram's travel, > >>> you needed synchronized clocks at the two endpoints of a TCP connection, > >>> so they could accurately measure one-way transit times. > >>> > >>> Meanwhile, inside the gateways, there were ideas about how to do even > >>> better by using Time. For example, if the routing protocols were > >>> actually based on Time (shortest transit time) rather than Hops (number > >>> of gateways between here and destination), the Internet would provide > >>> better user performance and be more efficient. Even better - if a > >>> gateway could "know" that a particular datagram wouldn't get to its > >>> destination before its TTL ran out, it could discard that datagram > >>> immediately, even though it still had time to live. No point in wasting > >>> network resources carrying a datagram already sentenced to death. > >>> > >>> We couldn't do all that. Didn't have the hardware, didn't have the > >>> algorithms, didn't have the protocols. So in the meantime, any computer > >>> handling an IP datagram should simply decrement the TTL value, and if it > >>> reached zero the datagram should be discarded. TTL effectively became a > >>> "hop count". > >>> > >>> When Dave got NTP running, and enough Time Servers were online and > >>> reliable, and the gateways and hosts had the needed hardware, Time could > >>> be measured, TTL could be set based on Time, and the Internet would be > >>> better. > >>> > >>> In the meanwhile, all of us TCP implementers just picked some value for > >>> our retransmission timers. I think I set mine to 3 seconds. No > >>> exhaustive analysis or sophisticated mathematics involved. It just felt > >>> right.....there was a lot of that going on in the early Internet. > >>> > >>> ----- > >>> > >>> While all the TCP work was going on, other uses were emerging. 
We knew > >>> that there was more to networking than just logging in to distant > >>> computers or transferring files between them - uses that had been common > >>> for years in the ARPANET. But the next "killer app" hadn't appeared > >>> yet, although there were lots of people trying to create one. > >>> > >>> In particular, "Packet Voice" was popular, with a contingent of > >>> researchers figuring out how to do that on the fledgling Internet. There > >>> were visions that someday it might even be possible to do Video. In > >>> particular, *interactive* voice was the goal, i.e., the ability to have > >>> a conversation by voice over the Internet (I don't recall when the term > >>> VOIP emerged, probably much later). > >>> > >>> In a resource-constrained network, you don't want to waste resources on > >>> datagrams that aren't useful. In conversational voice, a datagram that > >>> arrives too late isn't useful. A fragment of audio that should have > >>> gone to the speaker 500 milliseconds ago can only be discarded. It > >>> would be better that it hadn't been sent at all, but at least discarding > >>> it along the way, as soon as it's known to be too late to arrive, would > >>> be appropriate. > >>> > >>> Of course, that needs Time. UDP was created as an adjunct to TCP, > >>> providing a different kind of network service. Where TCP got all of > >>> the data to its destination, no matter how long it took, UDP would get > >>> as much data as possible to the destination, as long as it got there in > >>> time to be useful. Time was important. > >>> > >>> UDP implementations, in host computers, didn't have to worry about > >>> retransmissions. But they did still have to worry about how long it > >>> would take for a datagram to get to its destination. With that > >>> knowledge, they could set their datagrams' TTL values to something > >>> appropriate for the network conditions at the time. 
Perhaps they might > >>> even tell their human users "Sorry, conversational use not available > >>> right now." -- an Internet equivalent of the "busy signal" - if the > >>> current network transit times were too high to provide a good user > >>> experience. > >>> > >>> Within the world of gateways, the differing needs of TCP and UDP > >>> motivated different behaviors. That motivated the inclusion of the TOS > >>> - Type Of Service - field in the IP datagram header. Perhaps UDP > >>> packets would receive higher priority, being placed at the head of > >>> queues so they got transmitted sooner. Perhaps they would be discarded > >>> immediately if the gateway knew, based on its routing mechanisms, that > >>> the datagram would never get delivered in time. Perhaps UDP would be > >>> routed differently, using a terrestrial but low-bandwidth network, while > >>> TCP traffic was directed over a high-bandwidth but long-delay satellite > >>> path. A gateway mesh might have two or more independent routing > >>> mechanisms, each using a "shortest path" approach, but with different > >>> metrics for determining "short" - e.g., UDP using the shortest time > >>> route, while some TCP traffic travelled a route with least ("shortest") > >>> usage at the time. > >>> > >>> We couldn't do all that either. We needed Time, hardware, algorithms, > >>> protocols, etc. But the placeholders were there, in the TCP, IP, and > >>> UDP formats, ready for experimentation to figure all that stuff out. > >>> > >>> ----- > >>> > >>> When Time was implemented, there could be much needed experimentation to > >>> figure out the right answers. Meanwhile, we had to keep the Internet > >>> working. By the early 1980s, the ARPANET had been in operation for more > >>> than a decade, and lots of operational experience had accrued. We knew, > >>> for example, that things could "go wrong" and generate a crisis for the > >>> network operators to quickly fix. 
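The "different metrics for determining short" idea quoted above is easy to make concrete: the same shortest-path search returns different routes depending on whether a link costs one hop or its measured transit time. A small sketch with an invented topology:

```python
import heapq

def shortest_path_cost(graph, src, dst):
    """Dijkstra over an adjacency map {node: [(neighbor, cost), ...]}."""
    best = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > best.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < best.get(nbr, float("inf")):
                best[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

# Invented topology: A-B-D is the fewest hops but its trunks are slow;
# A-C-E-D is a hop longer yet far faster end to end.
delay_ms = {"A": [("B", 90), ("C", 20)],
            "B": [("D", 90)],
            "C": [("E", 20)],
            "E": [("D", 20)]}
hop_graph = {n: [(m, 1) for m, _ in nbrs] for n, nbrs in delay_ms.items()}
```

A hop-count metric picks A-B-D (2 hops, 180 ms) while a transit-time metric picks A-C-E-D (3 hops, 60 ms); a gateway mesh running both metrics at once could route UDP and TCP differently, exactly as the quoted paragraph imagines.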
TTL, even as just a hop count, was > >>> one mechanism to suppress problems. We knew that "routing loops" could > >>> occur. TTL would at least prevent situations where datagrams > >>> circulated forever, orbiting inside the Internet until someone > >>> discovered and fixed whatever was causing a routing loop to keep those > >>> datagrams speeding around. > >>> > >>> Since the Internet was an Experiment, there were mechanisms put in place > >>> to help run experiments. IIRC, in general things were put in the IP > >>> headers when we thought they were important and would be needed long > >>> after the experimental phase was over - things like TTL, SQ, TOS. > >>> > >>> Essentially every field in the IP header, and every type of datagram, > >>> was there for some good reason, even though its initial implementation > >>> was known to be inadequate. The Internet was built on Placeholders.... > >>> > >>> Other mechanisms were put into the "Options" mechanism of the IP > >>> format. A lot of that was targeted towards supporting experiments, or > >>> as occasional tools to be used to debug problems in crises during > >>> Internet operations. > >>> > >>> E.g., all of the "Source Routing" mechanisms might be used to route > >>> traffic in particular paths that the current gateways wouldn't otherwise > >>> use. An example would be routing voice traffic over specific paths, > >>> which the normal gateway routing wouldn't use. The Voice experimenters > >>> could use those mechanisms to try out their ideas in a controlled > >>> experiment. > >>> > >>> Similarly, Source Routing might be used to debug network problems. A > >>> network analyst might use Source Routing to probe a particular remote > >>> computer interface, where the regular gateway mechanisms would avoid > >>> that path. > >>> > >>> So a general rule was that IP headers contained important mechanisms, > >>> often just as placeholders, while Options contained things useful only > >>> in particular circumstances. 
> >>> > >>> But all of these "original ideas" needed Time. We knew Dave was "on > >>> it".... > >>> > >>> ----- > >>> > >>> Hopefully this helps... I (and many others) probably should have > >>> written these "original ideas" down 40 years ago. We did, but I > >>> suspect all in the form of emails which have now been lost. Sorry > >>> about that. There was always so much code to write. And we didn't > >>> have the answers yet to motivate creating RFCs which were viewed as more > >>> permanent repositories of the solved problems. > >>> > >>> Sorry about that..... > >>> > >>> Jack Haverty > >>> > >>> > >>> > >>> On 10/2/22 07:45, Alejandro Acosta via Internet-history wrote: > >>>> Hello Jack, > >>>> > >>>> Thanks a lot for sharing this, as usual, I enjoy this kind of > >>>> stories :-) > >>>> > >>>> Jack/group, just a question regarding this topic. When you mentioned: > >>>> > >>>> "This caused a lot of concern about protocol elements such as > >>>> Time-To-Live, which were temporarily to be implemented purely as "hop > >>>> counts" > >>>> > >>>> > >>>> Do you mean, the original idea was to really drop the packet at > >>>> certain time, a *real* Time-To-Live concept?. > >>>> > >>>> > >>>> Thanks, > >>>> > >>>> P.S. That's why it was important to change the field's name to hop > >>>> count in v6 :-) > >>>> > >>>> > >>>> > >>>> On 2/10/22 12:35 AM, Jack Haverty via Internet-history wrote: > >>>>> On 10/1/22 16:30, vinton cerf via Internet-history wrote: > >>>>>> in the New Yorker > >>>>>> > >>>>>> https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time > >>>>>> > >>>>>> > >>>>>> v > >>>>> > >>>>> Agree, nice story. Dave did a *lot* of good work. Reading the > >>>>> article reminded me of the genesis of NTP. > >>>>> > >>>>> IIRC.... > >>>>> > >>>>> Back in the early days circa 1980, Dave was the unabashed tinkerer, > >>>>> experimenter, and scientist. 
Like all good scientists, he wanted to > >>>>> run experiments to explore what the newfangled Internet was doing and > >>>>> test his theories. To do that required measurements and data. > >>>>> > >>>>> At the time, BBN was responsible for the "core gateways" that > >>>>> provided most of the long-haul Internet connectivity, e.g., between > >>>>> US west and east coasts and Europe. There were lots of ideas about > >>>>> how to do things - e.g., strategies for TCP retransmissions, > >>>>> techniques for maintaining dynamic tables of routing information, > >>>>> algorithms for dealing with limited bandwidth and memory, and other > >>>>> such stuff that was all intentionally very loosely defined within the > >>>>> protocols. The Internet was an Experiment. > >>>>> > >>>>> I remember talking with Dave back at the early Internet meetings, and > >>>>> his fervor to try things out, and his disappointment at the lack of > >>>>> the core gateway's ability to measure much of anything. In > >>>>> particular, it was difficult to measure how long things took in the > >>>>> Internet, since the gateways didn't even have real-time clocks. This > >>>>> caused a lot of concern about protocol elements such as Time-To-Live, > >>>>> which were temporarily to be implemented purely as "hop counts", > >>>>> pending the introduction of some mechanism for measuring Time into > >>>>> the gateways. (AFAIK, we're still waiting....) > >>>>> > >>>>> Curiously, in the pre-Internet days of the ARPANET, the ARPANET IMPs > >>>>> did have a pretty good mechanism for measuring time, at least between > >>>>> pairs of IMPs at either end of a communications circuit, because such > >>>>> circuits ran at specific speeds. So one IMP could tell how long it > >>>>> was taking to communicate with one of its neighbors, and used such > >>>>> data to drive the ARPANET internal routing mechanisms. 
> >>>>> > >>>>> In the Internet, gateways couldn't tell how long it took to send a > >>>>> datagram over one of its attached networks. The networks of the day > >>>>> simply didn't make such information available to its "users" (e.g., a > >>>>> gateway). > >>>>> > >>>>> But experiments require data, and labs require instruments to collect > >>>>> that data, and Dave wanted to test out lots of ideas, and we (BBN) > >>>>> couldn't offer any hope of such instrumentation in the core gateways > >>>>> any time soon. > >>>>> > >>>>> So Dave built it. > >>>>> > >>>>> And that's how NTP got started. IIRC, the rest of us were all just > >>>>> trying to get the Internet to work at all. Dave was interested in > >>>>> understanding how and why it worked. So while he built NTP, that > >>>>> didn't really affect any other projects. Plus most (at least me) > >>>>> didn't understand how it was possible to get such accurate > >>>>> synchronization when the delays through the Internet mesh were so > >>>>> large and variable. (I still don't). But Dave thought it was > >>>>> possible, and that's why your computer, phone, laptop, or whatever > >>>>> know what time it is today. > >>>>> > >>>>> Dave was responsible for another long-lived element of the > >>>>> Internet. Dave's experiments were sometimes disruptive to the > >>>>> "core" Internet that we were tasked to make a reliable 24x7 service. > >>>>> Where Dave The Scientist would say "I wonder what happens when I do > >>>>> this..." We The Engineers would say "Don't do that!" > >>>>> > >>>>> That was the original motivation for creating the notion of > >>>>> "Autonomous Systems" and EGP - a way to insulate the "core" of the > >>>>> Internet from the antics of the Fuzzballs. I corralled Eric Rosen > >>>>> after one such Fuzzball-triggered incident and we sat down and > >>>>> created ASes, so that we could keep "our" AS running reliably. 
It > >>>>> was intended as an interim mechanism until all the experimentation > >>>>> revealed what should be the best algorithms and protocol features to > >>>>> put in the next generation, and the Internet Experiment advanced into > >>>>> a production network service. We defined ASes and EGP to protect > >>>>> the Internet from Dave's Fuzzball mania. > >>>>> > >>>>> AFAIK, that hasn't happened yet ... and from that article, Dave is > >>>>> still Experimenting..... and The Internet is still an Experiment. > >>>>> > >>>>> Fun times, > >>>>> Jack Haverty > >>>>> > >>> > > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history -- --- tte at cs.fau.de From dave.taht at gmail.com Tue Oct 4 18:10:27 2022 From: dave.taht at gmail.com (Dave Taht) Date: Tue, 4 Oct 2022 18:10:27 -0700 Subject: [ih] On queueing from len In-Reply-To: <5A45C606-E464-4286-97E3-8A5DBF7C4A2B@cs.ucla.edu> References: <5A45C606-E464-4286-97E3-8A5DBF7C4A2B@cs.ucla.edu> Message-ID: ---------- Forwarded message --------- From: Leonard Kleinrock Date: Tue, Oct 4, 2022, 5:01 PM Subject: Re: since you indicated you wanted in To: Dave Taht Cc: Leonard Kleinrock Much thanks for connecting me, Dave. I am adding below my updated comments. Feel free to post: _____________________________________ Some of these guys have happily adopted my "unusual" spelling of "queueing" with the extra "e" since I loved the idea of it being the only word in English with 5 vowels in a row (if you spell it the British way, which is why I chose the British spelling). So I guess they read my book. If you don't know what a RFNM is, I'll be happy to tell you. 
A real takeaway from the email you sent is that flow control was never really thought out carefully, but was patchwork on guesswork time after time and that has helped bring on the mess we have today. I have been warning about how difficult flow control is for many decades. You should also note that I pointed out in my Volume II how poorly designed were the early Arpanet flow control mechanisms; for example, they exhibited the need for multiple kinds of tokens to get a flow started, and these different tokens came from different places in the Arpanet architecture and protocols. Another point is that my good, but now deceased, friend, Danny Cohen was pushing Network Voice Protocol in the late 1970s and supported getting TCP to split into TCP/IP so he could run what amounted to UDP over IP. Another point: As you know, at UCLA we were designated as the Network Measurement Center from day 1 of the Arpanet. It was our job to push the network to its limits and try to break it - and break it we could and then we determined how to fix the fault and prevent it in the future; of course, BBN was not happy with us doing that (just as Jack Haverty quotes re Dave Mills in your email). We were responsible for the multi-faceted set of measurement tools in each IMP (in this paper of mine and Naylor, in the section "Measurement Tools" you will see what we added to the IMP tools). BUT, in 1975, our role was taken over by DCA and as far as I know, they did little, if any, further experimentation and so we lost track of what was going on in the net. Of course, we continue to pay the price of not really knowing how the network is performing. I very much agree that the Arpanet was, indeed, an experiment and allowed for lots of research and tinkering for very good purpose. I see that Virtual Cut-Through was mentioned recently. My paper with Kermani was the source paper on that technology, and that paper was "P. Kermani and L. 
Kleinrock, "Virtual Cut-through: A New Computer Communication Switching Technique," Computer Networks, vol. 3, no. 4, pp. 267-286, September 1979." Jack makes the good point that direct connections (in his case, circuit-switched channels) may sometimes be of value. Actually, I referred to that in my 1962 PhD dissertation where I say, "...even for small values of rho the introduction of some direct links (between the nodes which carry the bulk of the traffic) is required." Best, Len ______________________________________________________ On Oct 4, 2022, at 7:11 AM, Dave Taht wrote: I just subscribed you. There will be a link to confirm... I am at a wispa conference and not on my laptop so editing your nice email is beyond me. That thread continued here: https://elists.isoc.org/pipermail/internet-history/2022-October/thread.html -- This song goes out to all the folk that thought Stadia would work: https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz Dave Täht CEO, TekLibre, LLC From louie at transsys.com Tue Oct 4 18:35:33 2022 From: louie at transsys.com (Louis Mamakos) Date: Tue, 04 Oct 2022 21:35:33 -0400 Subject: [ih] The Importance of Time in the Internet In-Reply-To: References: <522b0724-8d6f-b15b-142b-9b5dca6aaad5@3kitty.org> Message-ID: On 4 Oct 2022, at 19:47, Toerless Eckert via Internet-history wrote: > Time (synchronization) is just a marketing term for clock > (synchronization). A good part of what an NTP implementation does is computing the frequency and phase errors. Most people think of synchronizing clocks as making the phase error as close to zero as possible (e.g., close to the same "time".) NTP also tries to drive the frequency error close to zero (rate at which time advances) so if you lose connectivity to the reference clock, at least your local clock will continue to advance at the same rate.
When we did the first UNIX NTP implementation from the NTP spec at the University of Maryland, it was fun to watch the computed frequency error and relate it to the temperature of the room the (at the time NeXT) workstation was in. Looking at that change hour to hour and day by day revealed that the HVAC in the campus office building was a bit more "relaxed" over the weekends. One fun anecdote I recall from that time was using NTP to discover that NOAA had repositioned a satellite. At the time, there was a GOES satellite clock that was used as an NTP reference. It decoded a signal from one of the NOAA weather satellites in geosynchronous orbit. When you installed the clock, you had to configure what the path delay was to the spacecraft in orbit, based on your position on the ground. At some point, it was noticed that the GOES clock offset error had increased relative to others, and later found out that NOAA had repositioned the spacecraft in orbit to cover for a failed satellite covering the western part of the US. This of course changed the path delay. Other fun was watching the clock offset on a fuzzball using a 60 Hz power-line based clock (rather than some local quartz crystal oscillator.) You could watch the error and see the AC line frequency sag as the load increased, and then later at night, speed up to take up the slack. So that all those wall clocks with AC line-driven synchronous motors wouldn't "lose time" - just the right number of 60 Hz AC line cycles per day. It was all great fun, especially explaining to my wife how, on New Year's Eve we couldn't rush off to some party because I needed to be at home at (UTC) midnight to see if the leap-second code was going to do the right thing.
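The phase-versus-frequency distinction described above is easy to see in a toy simulation. This is not the real NTP discipline algorithm (RFC 5905's is far more elaborate); the gain, poll interval, and oscillator error below are invented purely for illustration:

```python
# Toy sketch of disciplining a clock's rate as well as its offset.
# NOT the real NTP algorithm; all constants are invented for illustration.

def discipline(freq_error=50e-6, poll=64.0, gain=0.5, steps=20):
    """Learn a rate correction for a clock running 50 ppm fast,
    from one offset measurement per poll interval."""
    rate_corr = 0.0                                # learned correction, s/s
    for _ in range(steps):
        offset = poll * (freq_error - rate_corr)   # phase error accumulated
        rate_corr += gain * (offset / poll)        # steer the clock rate
        # (the residual phase error is also stepped out each interval)
    return rate_corr

print(round(discipline() * 1e6, 3))  # prints 50.0, the true ppm error
```

Because the rate itself is corrected, the clock keeps advancing at nearly the right speed even when the reference disappears, which is the holdover behavior described above.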
Louis Mamakos From jack at 3kitty.org Wed Oct 5 09:38:05 2022 From: jack at 3kitty.org (Jack Haverty) Date: Wed, 5 Oct 2022 09:38:05 -0700 Subject: [ih] On queueing from len In-Reply-To: References: <5A45C606-E464-4286-97E3-8A5DBF7C4A2B@cs.ucla.edu> Message-ID: On 10/4/22 18:10, Dave Taht via Internet-history wrote: (apparently actually from Len Kleinrock) > BUT, in 1975, our role was taken over by DCA and > as far as I know, they did little, if any, further experimentation and so we > lost track of what was going on in the net. Of course, we continue to pay > the price of not really knowing how the network is performing. There was extensive experimentation and data collected during the many years that the ARPANET was in operation by DCA. The NOC at BBN collected data about network operations, which was used by the NOC Operators to handle everyday issues. Long-term data, showing operation over days to months, was also collected and analyzed by mathematicians in the BBN "Network Analysis" group. Such data was used to make decisions about reconfiguring the ARPANET, both to accommodate its growth and to modify the topology to reflect changes observed in the user traffic loads and network performance. The same data was used to motivate changes in the algorithms, which were designed, implemented, and tested experimentally before enabling them for operational use in the entire ARPANET. For example, one such change was simply called "Routing Algorithm Improvements", and introduced new mechanisms and techniques for routing as well as congestion control and other such important issues. All of that was based on the data collected from years of observing network behavior during live operations. That work was documented, but IIRC largely in the form of technical reports submitted to DCA as "deliverables" for our contracts.
I believe DCA made many such reports public, but I don't recall that there were many publications of the work in traditional technical journals, student theses, or other such channels. So the experimentation, results, and changes made were possibly not very visible in academic environments, unless students or their librarians figured out how to access the government reports, e.g., through the Defense Technical Information Center (DTIC). Many of those old reports appear to be online now at the DTIC site. Here's one example, one of the volumes of a rather large report documenting some of the work done in one of the ARPANET Operations contracts in the early 1980s, to "improve the routing algorithm": https://apps.dtic.mil/sti/citations/ADA121350 There are probably many more such documents, especially in the Quarterly Technical Reports where lots of detail was often captured from the operating ARPANET. There's lots of information collected by Dave Walden (one of the original IMP programmers) online at https://walden-family.com/impcode/ . That site includes links to listings of the 1970s IMP code itself, which was resurrected about 10 years ago and the original 4-node ARPANET recreated, all operating within a simulator of the Honeywell 316 computer that was the IMP hardware. So even today, it would be possible to run experiments using the old IMP code and its embedded algorithms and techniques. So actually BBN, and DCA, did know a lot about how the ARPANET was performing through its lifetime, and used that data to drive changes in the underlying technical mechanisms to better provide service to ARPANET users. IMHO, it's likely that operational experience was an important factor in the selection of a "clone" of ARPANET to create the DDN - Defense Data Network, which served all military users, and had to "just work."
The ARPANET research progressed from ARPA to DCA to pervasive use within all of DoD, with constant data collection employed to keep it running well and evolving. During the 80s, the ARPANET was a vehicle for trying out new ideas, observing their performance in an operational environment, and only migrating the new technology to the more critical elements of the DDN (MILNET et al.), when they had been proven successful in the ARPANET which served as a testbed for such experimentation. Research was motivated by data collected during operational experience, and only solutions proven by field experimentation, first in simulation, then in small laboratory networks, and then in the ARPANET were permitted to be deployed into the wider DDN environments. I've often wondered how such technical evolution happens now in The Internet. Jack Haverty (BBN 1977-1990) From bill.n1vux at gmail.com Wed Oct 5 13:00:56 2022 From: bill.n1vux at gmail.com (Bill Ricker) Date: Wed, 5 Oct 2022 16:00:56 -0400 Subject: [ih] On queueing from len In-Reply-To: References: <5A45C606-E464-4286-97E3-8A5DBF7C4A2B@cs.ucla.edu> Message-ID: > Some of these guys have happily adopted my "unusual" spelling of "queueing" > with the extra "e" since I loved the idea of it being the only word in > English with 5 vowels in a row (if you spell it the British way, which is > why I chose the British spelling). > I am unreasonably pleased that this was intentional Britishism for this especially nerdy purpose ! (Among my harmless sins is using 'perl' extended regular expressions to cheat at word puzzles.) I suspect my mentor MAP having been a STEM-humanities-STEM double-cross-over would have appreciated also. So I guess they read my book > I for one did. At my first full-time job, we had a weekly brown-bag seminar working through the LK QT 2-volume "book". (The proofs felt like probability/statistics proofs to me. That's not a bad thing, that's more "flavor".)
Our purposes were more for Simulations and Mathematical Modeling of physical/social systems than for [IH]-topical reasons; while we were just down the street from Project Mac and MIT LCS, we at DOT TSC* were un-networked. I had to walk over to MIT to use an ITS Guest Account to read SF-Lovers and the like. (Sneakernet!) We didn't even have local email on the TSC PDP-10 running stock DECsystem 10 then. (It may have been an option that Systems group hadn't installed? Or not shared with the great unwashed of applications programmers?) (Hence i hacked up a text-skeuomorphic messaging system using System 1022 DBMS.) The networks (in the more general sense of the word) that we were interested in better simulating were mostly automotive commuter traffic jams. (And potentially airport takeoff and landing queues and rail etc., but the roadways were more likely to exhibit the most surprising, seemingly paradoxical theorems on networks, e.g. Braess's Paradox [1] , mechanically simulated by Steve Mould [2] .) * (now Volpe Center; i was with SDC, A Burroughs Co, onsite contract staff, 1980-81; yes that SDC) [1] https://en.wikipedia.org/wiki/Braess%27s_paradox [2] https://www.youtube.com/watch?v=Cg73j3QYRJc From steve at shinkuro.com Wed Oct 5 13:21:55 2022 From: steve at shinkuro.com (Steve Crocker) Date: Wed, 5 Oct 2022 16:21:55 -0400 Subject: [ih] On queueing from len In-Reply-To: References: <5A45C606-E464-4286-97E3-8A5DBF7C4A2B@cs.ucla.edu> Message-ID: Miaouing, a variant of meowing, has similar structure. Several decades ago, Vint and I playfully created a small algorithm for compressing English words in a way that approximated actual abbreviations. The rules were: - Always retain first and last letter - Delete a, e, i, o and u except if they're in the first or last position - Delete r and n if they're preceded by a vowel and followed by a consonant The above text becomes Svrl dcds ago, Vt ad I plyflly crtd a smll algrithm fr cmprssg Eglsh wds.
The rls wre: - Alwys rtn fst ad lst lttr - Dlte a, e, i, o ad u excpt if thy're in the frst or lst pstn - Dlte r ad n if thy're prcdd by a vwl ad fllwd by a csnt It was natural to ask which words compressed the most. The metric we used was (l+1)/(L+1), where l is the length after compression and L is the length before compression. The "+1" counted the space after a word. "Queueing" came immediately to mind. My girlfriend's mother quickly supplied "miaouing." Steve On Wed, Oct 5, 2022 at 4:01 PM Bill Ricker via Internet-history < internet-history at elists.isoc.org> wrote: > > Some of these guys have happily adopted my "unusual" spelling of > "queueing" > > with the extra "e" since I loved the idea of it being the only word in > > English with 5 vowels in a row (if you spell it the British way, which is > > why I chose the British spelling). > > > > I am unreasonably pleased that this was intentional Britishism for this > especially nerdy purpose ! > (Among my harmless sins is using 'perl' extended regular expressions to > cheat at word puzzles.) > > I suspect my mentor MAP having been a STEM-humanities-STEM > double-cross-over would have appreciated also. > > So I guess they read my book > > > > I for one did. > > At my first full-time job, we had a weekly brown-bag seminar working > through the LK QT 2-volume "book". > (The proofs felt like probability/statistics proofs to me. That's not a bad > thing, that's more "flavor".) > > Our purposes were more for Simulations and Mathematical Modeling of > physical/social systems than for [IH]-topical reasons; while we were just > down the street from Project Mac and MIT LCS, we at DOT TSC* were > un-networked. I had to walk over to MIT to use an ITS Guest Account to read > SF-Lovers and the like. (Sneakernet!) We didn't even have local email on > the TSC PDP-10 running stock DECsystem 10 then. > (It may have been an option that Systems group hadn't installed?
Or not > shared with the great unwashed of applications programmers?) > (Hence i hacked up a text-skeuomorphic messaging system using System 1022 > DBMS.) > > The networks (in the more general sense of the word) that we were > interested in better simulating were mostly automotive commuter traffic > jams. > (And potentially airport takeoff and landing queues and rail etc., but the > roadways were more likely to exhibit the most surprising, seemingly > paradoxical theorems on networks, e.g. Braess's Paradox > > > [1] , mechanically simulated by Steve Mould > [2] .) > > * (now Volpe Center; i was with SDC, A Burroughs Co, onsite contract staff, > 1980-81; yes that SDC) > [1] https://en.wikipedia.org/wiki/Braess%27s_paradox > [2] https://www.youtube.com/watch?v=Cg73j3QYRJc > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From bernie at fantasyfarm.com Wed Oct 5 13:29:54 2022 From: bernie at fantasyfarm.com (Bernie Cosell) Date: Wed, 05 Oct 2022 16:29:54 -0400 Subject: [ih] On queueing from len In-Reply-To: References: <5A45C606-E464-4286-97E3-8A5DBF7C4A2B@cs.ucla.edu> Message-ID: <183a9d729d0.27fc.742cd0bcba90c1f7f640db99bf6503c5@fantasyfarm.com> On October 5, 2022 16:01:29 Bill Ricker via Internet-history wrote: >> Some of these guys have happily adopted my "unusual" spelling of "queueing" >> with the extra "e" since I loved the idea of it being the only word in >> English with 5 vowels in a row (if you spell it the British way, which is >> why I chose the British spelling). > > I am unreasonably pleased that this was intentional Britishism for this > especially nerdy purpose ! > (Among my harmless sins is using 'perl' extended regular expressions to > cheat at word puzzles.) how odd, and i apologize since this is not a forum for this kind of quibbling, but what is "british" about queue?
it came from the french {yes, via england since there was no "america" when it was borrowed} but there's no other spelling of "queue" that i know of. and no other word that means queue /b\ Bernie Cosell bernie at fantasyfarm.com --- Too many people, too few sheep --- From john.g.linn at gmail.com Wed Oct 5 13:31:33 2022 From: john.g.linn at gmail.com (John Linn) Date: Wed, 5 Oct 2022 16:31:33 -0400 Subject: [ih] On queueing from len In-Reply-To: References: Message-ID: <394038C8-C129-44CD-B262-221B3CDF3D02@gmail.com> I recall the ad for a stenography or shorthand school that ran in the NY subway ca. 1970: if u cn rd this msg, u cn gt a gd jb wth hi py. I may not have that exactly, but you get the idea. Might work well for a texting school today, were there to be such a thing. --Sent from JL's mobile > On Oct 5, 2022, at 16:22, Steve Crocker via Internet-history wrote: > > Miaouing, a variant of meowing, has similar structure. > > Several decades ago, Vint and I playfully created a small algorithm for > compressing English words in a way that approximated actual abbreviations. > The rules were: > > - Always retain first and last letter > - Delete a, e, i, o and u except if they're in the first or last position > - Delete r and n if they're preceded by a vowel and followed by a > consonant > > The above text becomes > > Svrl dcds ago, Vt ad I plyflly crtd a smll algrithm fr cmprssg Eglsh wds. > The rls wre: > > - Alwys rtn fst ad lst lttr > - Dlte a, e, i, o ad u excpt if thy're in the frst or lst pstn > - Dlte r ad n if thy're prcdd by a vwl ad fllwd by a csnt > > It was natural to ask which words compressed the most. The metric we used > was (l+1)/(L+1), where l is the length after compression and L is the > length before compression. The "+1" counted the space after a word. > > "Queueing" came immediately to mind. My girlfriend's mother quickly > supplied "miaouing."
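The abbreviation rules quoted above are concrete enough to execute. A minimal Python sketch, assuming (as the worked example mostly bears out) that the r/n test looks at a letter's neighbors in the original spelling:

```python
def compress(word):
    """Crocker/Cerf word compression: keep the first and last letter,
    drop interior vowels, and drop r or n when preceded by a vowel
    and followed by a consonant (neighbors from the original word)."""
    vowels = set("aeiou")
    keep = []
    for i, ch in enumerate(word):
        if i == 0 or i == len(word) - 1:
            keep.append(ch)                  # always retain first and last
        elif ch.lower() in vowels:
            continue                         # delete interior a, e, i, o, u
        elif (ch.lower() in "rn"
              and word[i - 1].lower() in vowels
              and word[i + 1].lower() not in vowels):
            continue                         # delete vowel-r/n-consonant
        else:
            keep.append(ch)
    return "".join(keep)

def score(word):
    """The (l+1)/(L+1) metric; lower means better compression."""
    return (len(compress(word)) + 1) / (len(word) + 1)

print(compress("queueing"), compress("miaouing"))  # qg mg
```

Both champion words collapse to two letters, scoring 3/9; the sample sentence's "algrithm" suggests the humans applied the rules a little more loosely than a program would.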
> > Steve > > On Wed, Oct 5, 2022 at 4:01 PM Bill Ricker via Internet-history < > internet-history at elists.isoc.org> wrote: > >>> Some of these guys have happily adopted my "unusual" spelling of >> "queueing" >>> with the extra "e" since I loved the idea of it being the only word in >>> English with 5 vowels in a row (if you spell it the British way, which is >>> why I chose the British spelling). >>> >> >> I am unreasonably pleased that this was intentional Britishism for this >> especially nerdy purpose ! >> (Among my harmless sins is using 'perl' extended regular expressions to >> cheat at word puzzles.) >> >> I suspect my mentor MAP having been a STEM-humanities-STEM >> double-cross-over would have appreciated also. >> >> So I guess they read my book >>> >> >> I for one did. >> >> At my first full-time job, we had a weekly brown-bag seminar working >> through the LK QT 2-volume "book". >> (The proofs felt like probability/statistics proofs to me. That's not a bad >> thing, that's more "flavor".) >> >> Our purposes were more for Simulations and Mathematical Modeling of >> physical/social systems than for [IH]-topical reasons; while we were just >> down the street from Project Mac and MIT LCS, we at DOT TSC* were >> un-networked. I had to walk over to MIT to use an ITS Guest Account to read >> SF-Lovers and the like. (Sneakernet!) We didn't even have local email on >> the TSC PDP-10 running stock DECsystem 10 then. >> (It may have been an option that Systems group hadn't installed? Or not >> shared with the great unwashed of applications programmers?) >> (Hence i hacked up a text-skeuomorphic messaging system using System 1022 >> DBMS.) >> >> The networks (in the more general sense of the word) that we were >> interested in better simulating were mostly automotive commuter traffic >> jams.
>> (And potentially airport takeoff and landing queues and rail etc., but the >> roadways were more likely to exhibit the most surprising, seemingly >> paradoxical theorems on networks, e.g. Braess's Paradox >> >> >> [1] , mechanically simulated by Steve Mould >> [2] .) >> >> * (now Volpe Center; i was with SDC, A Burroughs Co, onsite contract staff, >> 1980-81; yes that SDC) >> [1] https://en.wikipedia.org/wiki/Braess%27s_paradox >> [2] https://www.youtube.com/watch?v=Cg73j3QYRJc >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From bill.n1vux at gmail.com Wed Oct 5 14:08:27 2022 From: bill.n1vux at gmail.com (Bill Ricker) Date: Wed, 5 Oct 2022 17:08:27 -0400 Subject: [ih] On queueing from len In-Reply-To: References: <5A45C606-E464-4286-97E3-8A5DBF7C4A2B@cs.ucla.edu> Message-ID: On Wed, Oct 5, 2022 at 4:22 PM Steve Crocker wrote: > Miaouing, a variant of meowing, has similar structure. > Yes, longer, larger (less prescriptive?) dictionaries remove the uniqueness (provided one has a flexible enough definition of "English"). $ v=[aeiou]; ack "$v{5}" -h | sort | uniq -c 1 aeaean 1 cadiueio 1 chaouia 2 cooeeing 1 euouae 1 guauaenok 2 miaoued 2 miaouing 2 queueing (prescriptively, i prefer Meowed/Meowing, but i accept descriptive dictionaries reporting English-as-it-is-(ab)used. Cooeeing is Australian, so not *proper* English either ;-) ) and one of those gets to 6 vowels, no consonants at all, but is at best questionably English. *Euouae* or Evovae is an abbreviation used as a mnemonic in Latin psalters > and other liturgical books of the Roman Rite to indicate the distribution > of syllables in the differentia or variable melodic endings of the standard > psalm tones of Gregorian chant. (Wikipedia) > > Abbreviation/Mnemonic, Latin, Medieval.
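The shell one-liner above translates directly into a few lines of Python. The word list here is a stand-in; a real run would read a system dictionary such as /usr/share/dict/words, whose location and contents vary by system:

```python
import re

def vowel_runs(words, n=5):
    """Return the words containing n or more consecutive vowels
    (y not counted, matching the [aeiou] character class)."""
    pat = re.compile(rf"[aeiou]{{{n}}}", re.IGNORECASE)
    return [w for w in words if pat.search(w)]

sample = ["queueing", "miaouing", "euouae", "meowing", "cooeeing"]
print(vowel_runs(sample))        # ['queueing', 'miaouing', 'euouae', 'cooeeing']
print(vowel_runs(sample, n=6))   # ['euouae']
```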
I wonder if it's allowed in Scrabble. I wouldn't count it as English! -- Bill Ricker bill.n1vux at gmail.com https://www.linkedin.com/in/n1vux From tte at cs.fau.de Wed Oct 5 16:54:13 2022 From: tte at cs.fau.de (Toerless Eckert) Date: Thu, 6 Oct 2022 01:54:13 +0200 Subject: [ih] On queueing from len In-Reply-To: <183a9d729d0.27fc.742cd0bcba90c1f7f640db99bf6503c5@fantasyfarm.com> References: <5A45C606-E464-4286-97E3-8A5DBF7C4A2B@cs.ucla.edu> <183a9d729d0.27fc.742cd0bcba90c1f7f640db99bf6503c5@fantasyfarm.com> Message-ID: American routers have no queues but only lines. On Wed, Oct 05, 2022 at 04:29:54PM -0400, Bernie Cosell via Internet-history wrote: > On October 5, 2022 16:01:29 Bill Ricker via Internet-history > wrote: > > > > Some of these guys have happily adopted my "unusual" spelling of "queueing" > > > with the extra "e" since I loved the idea of it being the only word in > > > English with 5 vowels in a row (if you spell it the British way, which is > > > why I chose the British spelling). > > > > I am unreasonably pleased that this was intentional Britishism for this > > especially nerdy purpose ! > > (Among my harmless sins is using 'perl' extended regular expressions to > > cheat at word puzzles.) > > how odd, and i apologize since this is not a forum for this kind > of quibbling, but what is "british" about queue? it came from the > french {yes, via england since there was no "america" when it was > borrowed} but there's no other spelling of "queue" that i know of. > and no other word that means queue From brian.e.carpenter at gmail.com Wed Oct 5 19:06:38 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Thu, 6 Oct 2022 15:06:38 +1300 Subject: [ih] technical evolution [was: On queueing from len] In-Reply-To: References: <5A45C606-E464-4286-97E3-8A5DBF7C4A2B@cs.ucla.edu> Message-ID: <59e86f5c-29b4-ed2c-2e0f-871e8b112388@gmail.com> On 06-Oct-22 05:38, Jack Haverty via Internet-history wrote: ...
> I've often wondered how such technical evolution happens now in The > Internet. Oh dear. I could spend the rest of the day trying to answer that and even then, probably nobody would agree. It isn't a managed process with a single locus of control. You could do a case study of QUIC and a case study of SRv6 and get quite different answers, for example. There's academia where the main focus is probably still SIGCOMM. A lot of useful measurement results come from academia. There's the world of operators where (IMHO) the main foci are RIPE and APNIC; the other regional registries seem less involved in operational issues. There are the *big* service providers who are proactive in solving their own problems. There's the IRTF. There's the IETF. And then there are the hardware and software vendors who decide what will actually be available for deployment. Not to mention open source programmers - what happens to end up in the Linux distros is determinant in many cases. Here's another possible case study: making use of IPv6 extension headers. Right now the nearest thing to a focal point is indeed in the IETF. But who's in charge? Nobody, except the people who design, configure and operate firewalls, because they decide which extension headers survive a trip across the Internet. Brian From dhc at dcrocker.net Wed Oct 5 19:30:06 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Wed, 5 Oct 2022 19:30:06 -0700 Subject: [ih] On queueing from len In-Reply-To: References: <5A45C606-E464-4286-97E3-8A5DBF7C4A2B@cs.ucla.edu> Message-ID: On 10/5/2022 9:38 AM, Jack Haverty via Internet-history wrote: > I've often wondered how such technical evolution happens now in The > Internet. Some random folk spontaneously decide they want to evolve things -- modify existing capabilities, create new ones, whatever. They form what is effectively a cabal. (Not the nicest of terms, but one that gets applied, correctly or not.) Open or closed, they are the locus of control.
They decide on goals and means and write specs and code. At some point, they take steps to recruit more folk to the effort. Trade associations, journal articles, IETF, whatever. At some point, the technology reaches sufficient stability AND sufficient community support to hold sway. And maybe even formal status. To quote an early sage, all the rest is commentary. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From christinehaughneydb at gmail.com Tue Oct 11 14:01:19 2022 From: christinehaughneydb at gmail.com (Christine Dare-Bryan) Date: Tue, 11 Oct 2022 17:01:19 -0400 Subject: [ih] Looking for anyone who worked on the Arpanet with my father Major Joseph Haughney: Message-ID: My name is Christine Haughney Dare-Bryan. I am a long time journalist and the daughter of Major Joseph Haughney who worked for DCA on the Arpanet project from 1979 through 1981. Here is a link to the final letter he sent before retiring: https://www.rfc-editor.org/rfc/museum/ddn-news/ddn-news.n8.1 I'm trying to find anyone who may have worked with him during that time. He has mentioned Jake Feinler, Vinton Cerf and Bob Kahn. But would anyone in this group know anyone else who may have worked with him? Thank you so much! Christine From brian.e.carpenter at gmail.com Thu Oct 20 14:57:35 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 21 Oct 2022 10:57:35 +1300 Subject: [ih] IPng history [was: Notification to list from IETF Moderators team] In-Reply-To: <89096BA9-FF0F-4501-9104-2868616375A8@sobco.com> References: <4545ee83-1c56-4633-05f0-0576ac297884@ietf.org> <9fff11cc-d48c-8ad6-05b9-9f3709edf0b6@necom830.hpcl.titech.ac.jp> <89096BA9-FF0F-4501-9104-2868616375A8@sobco.com> Message-ID: <56ddf70b-009d-7a16-c1d6-eeac904271cb@gmail.com> Hi, A rather provocative message over at the IETF has been bugging me for the last week, and I thought that here might be a better place to correct the record.
Of course, Scott's comment is correct, and there's a whole book about it (ISBN 9780201633955). I naturally don't mind people who have deep objections to the design of IPv6, but we do need historical accuracy. On 15-Oct-22 02:28, Scott Bradner wrote: > > >> On Oct 14, 2022, at 8:47 AM, Masataka Ohta wrote: > ... > > there are a number of inaccuracies in the text below - see RFC 1752 for a more detailed description of the > process > > Scott > >> An important point of the thread is why IPv6 address is >> so lengthy. And the history around it recognized by the >> thread with my understanding is that: >> >> 1) There was a L3(?) protocol called XNS, which use L2 >> address as lower part of L3 address, which is layer >> violation, which disappeared and IPv4 won the >> battle at L3. In fact I believe the direct inspirations for the original form of IPv6 interface identifier based on MAC addresses were Novell Netware and DECnet Phase IV**, both of which were very widely and successfully deployed in 1994. IPv6 generalised the concept by defining that part of the address as an interface identifier, with the MAC address model as the first (and now deprecated) format. See RFC7136 and RFC8064. This was not a layer violation. Address resolution in IPv6 is a function that dynamically discovers the layer 2 address corresponding to a given layer 3 address. ** DECnet IV did it backwards - it set the MAC address to match the DECnet layer 3 address, not the other way round. >> >> 2) Though IAB tried to force IETF to accept CLNP >> (developed by OSI) as an alternative for IPv4, True, and that was the end of the old IAB. >> it was >> denied by democratic process in IETF I'd say it was a meritocratic process and was based on practical experience of trying and failing to deploy CLNP. >> and a project to >> develop IPng, which should be different from CLNP, was >> initiated in IETF. No, CLNP (i.e. TUBA) was one of the three main contenders for adoption as IPng. 
>> 3) the project resulted to choose SIP, which has 8B >> address, as the primary candidate for IPng though >> some attempt to merge it with other proposals There were a whole bunch of proposals and attempts to combine ideas. A very complex story, which is why there's a whole book about it as well as RFC1752. It is correct that SIP started with 64 bit addresses that did not include an interface identifier. But the latter was added during the design process. >> (though such mergers usually result in worse results >> than originals). >> >> 4) then, all of a sudden, a closed committed of >> IPng directorates Yes, following an inconclusive BOF at IETF 27, the IESG decided to convene an ad hoc Directorate. But all the drafts we considered were public, as far as I know, although the I-D mechanism was clunky in those days and not every draft has survived. Scott was too modest above to mention his excellent archive: https://www.sobco.com/ipng/archive/ >> decided that address should be >> 16B long to revive an abandoned, with reasons, >> address structure of XNS, which is not a >> democratic process. The idea of adding an explicit interface identifier (originally 48 bits, soon expanded to 64 bits) was not a clone of XNS, Netware, or DECnet IV. It was actually an architectural innovation, more so than we realised at the time. We could perhaps have gone further by making the locator/identifier split even stronger, but we didn't. >> >> 5) we, including me, was not aware that 16B address >> is so painful to operate, partly because I hoped >> most initial bit can be all zero. But... I can't interpret that statement. There was certainly no intention that the high order bits would be "all zero". We did not design IPv6 as IPv4 with bigger addresses. >> >> That is the recently recognized history of IPv6 and most, if >> not all. of my points in it can be confirmed by the link for >> a mail from Bill Simpson. It's true that Ohta-san and Bill Simpson were dissenters. 
So, in a sense, were the proponents of TUBA and CATNIP. There could only be one consensus, so some ideas were inevitably rejected. >> >> It should also be noted that unnecessarily lengthy address >> of IPv6 may be motivated to revive CLNP addressing against >> the democratic process. See rfc1888 for such a proposal. That's absurd. I can tell you the exact reason we did RFC 1888. I drafted the guts of it sitting on a park bench on University Avenue in Toronto shortly after the IPv6 proposal was announced in plenary at IETF 30. This was 1994, when OSI was still very much alive politically (although not much in reality) and we needed to avoid a political row with various government funding agencies. US GOSIP was still a (theoretical) requirement for many agencies. As the RFC says: This recommendation is addressed to network implementors who have already planned or deployed an OSI NSAP addressing plan for the usage of OSI CLNP [IS8473] according to the OSI network layer addressing plan [IS8348] using ES-IS and IS-IS routing [IS9542, IS10589]. It recommends how they should adapt their addressing plan for use with IPv6 [RFC1883]. That's all, and it was an Experimental RFC, obsoleted in 2005, by which time its political purpose had gone away. Regards, Brian >> >> Masataka Ohta >> > > .
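The MAC-derived interface identifier format discussed in this thread (the original, now-deprecated default; see RFC 7136 and RFC 8064) is the modified EUI-64 construction of RFC 4291, Appendix A. A sketch, with an invented example MAC address:

```python
def mac_to_iid(mac):
    """Modified EUI-64 interface identifier from a 48-bit MAC address:
    flip the universal/local bit of the first octet and splice ff:fe
    between the OUI and the device half (RFC 4291, Appendix A)."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                               # invert the u/l bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    return ":".join(f"{eui64[i] << 8 | eui64[i + 1]:04x}"
                    for i in range(0, 8, 2))

print(mac_to_iid("00:25:96:12:34:56"))  # 0225:96ff:fe12:3456
```

Embedding a stable hardware address this way is also what made the format a privacy concern, hence its eventual deprecation.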
From jeanjour at comcast.net Thu Oct 20 18:41:41 2022 From: jeanjour at comcast.net (John Day) Date: Thu, 20 Oct 2022 21:41:41 -0400 Subject: [ih] IPng history [was: Notification to list from IETF Moderators team] In-Reply-To: <56ddf70b-009d-7a16-c1d6-eeac904271cb@gmail.com> References: <4545ee83-1c56-4633-05f0-0576ac297884@ietf.org> <9fff11cc-d48c-8ad6-05b9-9f3709edf0b6@necom830.hpcl.titech.ac.jp> <89096BA9-FF0F-4501-9104-2868616375A8@sobco.com> <56ddf70b-009d-7a16-c1d6-eeac904271cb@gmail.com> Message-ID: > On Oct 20, 2022, at 17:57, Brian E Carpenter via Internet-history wrote: > > Hi, > > A rather provocative message over at the IETF has been > bugging me for the last week, and I thought that here > might be a better place to correct the record. Of course, > Scott's comment is correct, and there's a whole book about > it (ISBN 9780201633955). > > I naturally don't mind people who have deep objections > to the design of IPv6, but we do need historical accuracy. > On 15-Oct-22 02:28, Scott Bradner wrote: >>> On Oct 14, 2022, at 8:47 AM, Masataka Ohta wrote: >> ... >> there are a number of inaccuracies in the text below - see RFC 1752 for a more detailed description of the >> process >> Scott >>> An important point of the thread is why IPv6 address is >>> so lengthy. And the history around it recognized by the >>> thread with my understanding is that: >>> >>> 1) There was a L3(?) protocol called XNS, which use L2 >>> address as lower part of L3 address, which is layer >>> violation, which disappeared and IPv4 won the >>> battle at L3. > > In fact I believe the direct inspirations for the original form > of IPv6 interface identifier based on MAC addresses were Novell > Netware and DECnet Phase IV**, both of which were very widely and > successfully deployed in 1994. IPv6 generalised the concept by > defining that part of the address as an interface identifier, > with the MAC address model as the first (and now deprecated) > format. See RFC7136 and RFC8064. 
> > This was not a layer violation. Address resolution in IPv6 > is a function that dynamically discovers the layer 2 address > corresponding to a given layer 3 address. > > ** DECnet IV did it backwards - it set the MAC address to match > the DECnet layer 3 address, not the other way round. They were all wrong. It was well-known in the 70s that network addresses should be location-dependent (relative to the graph of the layer) and route-independent. It was realized in the early 80s that concatenating the lower layer address with the upper layer address made the address route-dependent. It determines the path. Addresses must be path independent. I know it seems like a natural thing to do. It is what we do for filenames and it works quite well. But remember what Multics called a filename: a pathname. And the separator was quite correctly '>' rather than '/'. Address spaces in different layers should be independent. There is a way to do what this is trying to do that maintains the route-independence while creating the location-dependence. The important thing for the IPng to have done was to route to where all of the points of attachment come together, i.e., the node. This had been realized as early as 1972 by multiple different groups. > >>> >>> 2) Though IAB tried to force IETF to accept CLNP >>> (developed by OSI) as an alternative for IPv4, > > True, and that was the end of the old IAB. > >>> it was >>> denied by democratic process in IETF > And that is the basis of why the US has had a representative democracy, rather than a direct democracy. Just to avoid bad decisions like that. From what I saw, it was more mob rule. A sustained flaming that was generating 70 very nasty emails an hour for days if not weeks. > I'd say it was a meritocratic process and was based on > practical experience of trying and failing to deploy > CLNP. I have been told by reputable sources that there was more CLNP deployed and operational in 1992 than IPv6 in 2014. 
> >>> and a project to >>> develop IPng, which should be different from CLNP, was >>> initiated in IETF. > > No, CLNP (i.e. TUBA) was one of the three main contenders > for adoption as IPng. > >>> 3) the project resulted to choose SIP, which has 8B >>> address, as the primary candidate for IPng though >>> some attempt to merge it with other proposals > > There were a whole bunch of proposals and attempts to > combine ideas. A very complex story, which is why there's > a whole book about it as well as RFC1752. > > It is correct that SIP started with 64 bit addresses that > did not include an interface identifier. But the latter > was added during the design process. > >>> (though such mergers usually result in worse results >>> than originals). >>> >>> 4) then, all of a sudden, a closed committed of >>> IPng directorates > > Yes, following an inconclusive BOF at IETF 27, the IESG > decided to convene an ad hoc Directorate. But all the drafts > we considered were public, as far as I know, although the > I-D mechanism was clunky in those days and not every draft > has survived. Scott was too modest above to mention his > excellent archive: https://www.sobco.com/ipng/archive/ > >>> decided that address should be >>> 16B long to revive an abandoned, with reasons, >>> address structure of XNS, which is not a >>> democratic process. > > The idea of adding an explicit interface identifier > (originally 48 bits, soon expanded to 64 bits) was not > a clone of XNS, Netware, or DECnet IV. It was actually > an architectural innovation, more so than we realised > at the time. We could perhaps have gone further by > making the locator/identifier split even stronger, but > we didn't. And a good thing too. The so-called locator/identifier distinction is a false distinction. See Saltzer's definition of 'resolve' a name in his 1977 paper, i.e., to locate an object in a given context given its name. You can't do one without the other. 
Generally, the graphs of our networks don't follow a nice regular pattern like Midwest cities (a grid), but they do exhibit levels of clustering down to some granularity. Interpretation of the address is much like interpreting an address on a letter. World subset to Country subset to State/Province subset to City, then it shifts to linear search for street and linear search for number. For networks, it is similar: Subsetting works up to a point (CIDR), then it shifts to exhaustive search 'locally'. In some cases for large networks, subsetting may continue 'locally.' There are basically 4 kinds of semantics for these identifiers: 1) local identifier if it is simply point-to-point 2) recognition, if it is a multi-access media, e.g. wireless or original Ethernet. 3) forwarding-id,* which may be flat for networks small enough that the tables are tractable, and 4) true addresses, which are assigned to be location-dependent and route-independent. IOW, inspection of two addresses can determine if they are 'near' each other for some definition of 'near.' * the traditional routing algorithms, e.g., link-state, distance vector, etc., do not use true addresses. Forwarding-ids are merely used to keep track of what nodes in the graph are being referred to in creating the solution. Encoding a 'nearness' property in the forwarding-id is not used. (which is why they are forwarding-ids and not true addresses). > >>> >>> 5) we, including me, was not aware that 16B address >>> is so painful to operate, partly because I hoped >>> most initial bit can be all zero. But... > > I can't interpret that statement. There was certainly no > intention that the high order bits would be "all zero". > We did not design IPv6 as IPv4 with bigger addresses. > >>> >>> That is the recently recognized history of IPv6 and most, if >>> not all. of my points in it can be confirmed by the link for >>> a mail from Bill Simpson. 
> So, in a sense, were the proponents of TUBA and CATNIP. > There could only be one consensus, so some ideas were > inevitably rejected. > >>> >>> It should also be noted that unnecessarily lengthy address >>> of IPv6 may be motivated to revive CLNP addressing against >>> the democratic process. See rfc1888 for such a proposal. > > That's absurd. I can tell you the exact reason we did RFC1888. > I drafted the guts of it sitting on a park bench on University > Avenue in Toronto shortly after the IPv6 proposal was announced in > plenary at IETF 30. This was 1994, when OSI was still very much > alive politically (although not much in reality) and we needed > to avoid a political row with various government funding agencies. > US GOSIP was still a (theoretical) requirement for many agencies. > As the RFC says: > > This recommendation is addressed to network implementors who have > already planned or deployed an OSI NSAP addressing plan for the usage > of OSI CLNP [IS8473] according to the OSI network layer addressing > plan [IS8348] using ES-IS and IS-IS routing [IS9542, IS10589]. It > recommends how they should adapt their addressing plan for use with IPv6 [RFC1883]. Sort of. IS8348 is the Network Layer Service Definition. It specifies guidelines for addresses and how different addressing plans are to be accommodated, but it is hardly an addressing plan itself. It does deftly cover up the error in the OSI Model forced on it by the ITU of exposing the address at the layer boundary, e.g., that an NSAP address and a Network-Entity-Title may be identified by the same string. Some detailed addressing plans were created for it. However, for the most part it was not yet recognized that the rules noted above, i.e., location-dependence (relative to the graph of the layer) and route-independence were not properties of some of these proposals. 
As we noted above, the upper levels of the hierarchy can be defined overall, but at some point subsequent levels become a regional or local matter. OSI did have the advantage, documented in IS8648, of what was essentially the network layer address (called Subnet Access, technology-specific) and an internet address (called Subnet Independent Convergence, which is technology-independent and was what CLNP carried).* It is interesting to note that this was the structure that INWG had adopted in 1975 and was independently arrived at by the group working out IS8648, almost 10 years later. IOW, that an internet model is a common overlay over the specific network technologies, rather than protocol conversion at the boundaries. Unfortunately, the Internet had lost the Internet Layer by the early 80s. * In the Saltzer paper of 1982, these are called point of attachment addresses and node addresses. As for IPv6, its current state of confusion speaks for itself. But that's okay, even if they had done it right, it still wouldn't be enough. Take care, John Day > > That's all, and it was an Experimental RFC, obsoleted in 2005, > by which time its political purpose had gone away. > > Regards, > Brian > >>> >>> Masataka Ohta >>> >> . 
> -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From brian.e.carpenter at gmail.com Thu Oct 20 20:23:23 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 21 Oct 2022 16:23:23 +1300 Subject: [ih] IPng history [was: Notification to list from IETF Moderators team] In-Reply-To: References: <4545ee83-1c56-4633-05f0-0576ac297884@ietf.org> <9fff11cc-d48c-8ad6-05b9-9f3709edf0b6@necom830.hpcl.titech.ac.jp> <89096BA9-FF0F-4501-9104-2868616375A8@sobco.com> <56ddf70b-009d-7a16-c1d6-eeac904271cb@gmail.com> Message-ID: On 21-Oct-22 14:41, John Day wrote: > > >> On Oct 20, 2022, at 17:57, Brian E Carpenter via Internet-history wrote: >> >> Hi, >> >> A rather provocative message over at the IETF has been >> bugging me for the last week, and I thought that here >> might be a better place to correct the record. Of course, >> Scott's comment is correct, and there's a whole book about >> it (ISBN 9780201633955). >> >> I naturally don't mind people who have deep objections >> to the design of IPv6, but we do need historical accuracy. >> On 15-Oct-22 02:28, Scott Bradner wrote: >>>> On Oct 14, 2022, at 8:47 AM, Masataka Ohta wrote: >>> ... >>> there are a number of inaccuracies in the text below - see RFC 1752 for a more detailed description of the >>> process >>> Scott >>>> An important point of the thread is why IPv6 address is >>>> so lengthy. And the history around it recognized by the >>>> thread with my understanding is that: >>>> >>>> 1) There was a L3(?) protocol called XNS, which use L2 >>>> address as lower part of L3 address, which is layer >>>> violation, which disappeared and IPv4 won the >>>> battle at L3. >> >> In fact I believe the direct inspirations for the original form >> of IPv6 interface identifier based on MAC addresses were Novell >> Netware and DECnet Phase IV**, both of which were very widely and >> successfully deployed in 1994. 
IPv6 generalised the concept by >> defining that part of the address as an interface identifier, >> with the MAC address model as the first (and now deprecated) >> format. See RFC7136 and RFC8064. >> >> This was not a layer violation. Address resolution in IPv6 >> is a function that dynamically discovers the layer 2 address >> corresponding to a given layer 3 address. >> >> ** DECnet IV did it backwards - it set the MAC address to match >> the DECnet layer 3 address, not the other way round. > > They were all wrong. It was well-known in the 70s that network addresses should be location-dependent (relative to the graph of the layer) and route-independent. It was realized in the early 80s that concatenating the lower layer address with the upper layer address made the address route-dependent. It determines the path. Addresses must be path independent. I know it seems like a natural thing to do. It is what we do for filenames and it works quite well. But remember what Multics called a filename: a pathname. And the separator was quite correctly '>' rather than '/'. Address spaces in different layers should be independent. Yes, and that's exactly the case for IPv6. The interface identifier is an opaque bit string whose only job is to be unique on the link. Unlike XNS, Netware and DECnet IV, it does not map by construction to a MAC address *even if it was formed from a MAC address*. There has always been an address resolution stage in IPv6 (it just isn't called ARP). > > There is a way to do what this is trying to do that maintains the route-independence while creating the location-dependence. > > The important thing for the IPng to have done was to route to where all of the points of attachment come together, i.e., the node. This had been realized as early as 1972 by multiple different groups. Well, the decision was to route to a specific interface rather than the node as a whole, and to allow a node to have as many addresses as it cares to. 
But that's only a small alteration to the IPv4 model. For clarity, I should add that IPv6 routing is CIDR all the way down; up to /128 if need be (BCP198/RFC7608). The 64-bit interface identifier is not carved in stone (only soft clay); it only applies on links that *need* an interface identifier. > >> >>>> >>>> 2) Though IAB tried to force IETF to accept CLNP >>>> (developed by OSI) as an alternative for IPv4, >> >> True, and that was the end of the old IAB. >> >>>> it was >>>> denied by democratic process in IETF >> > > And is the basis of why the US has had a representative democracy, rather than a democracy. Just to avoid bad decisions like that. From what I saw, it was more mob rule. A sustained flaming that was generating 70 very nasty emails an hour for days if not weeks. > >> I'd say it was a meritocratic process and was based on >> practical experience of trying and failing to deploy >> CLNP. > > I have been told by reputable sources that there was more CLNP deployed and operational in 1992 than IPv6 in 2014. The airline industry among others built a very large private network using CLNP. I don't know the dates for that. But in 1992, we were nowhere near getting the HEP/SPAN DECnets converted to DECnet Phase V/CLNP, because the software wasn't viable. In fact, by the time that was physically possible (1996 or thereabouts), the physicists and space scientists had all switched to TCP/IP. More importantly, there was never a significant public CLNP network. At the end of 2014, about 5% of Google users were connecting via IPv6. Are you saying that in 1992, there were that many active CLNP users? I'd like to see the raw data. > >> >>>> and a project to >>>> develop IPng, which should be different from CLNP, was >>>> initiated in IETF. >> >> No, CLNP (i.e. TUBA) was one of the three main contenders >> for adoption as IPng. 
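[Brian's remark that IPv6 routing is "CIDR all the way down; up to /128 if need be" amounts to plain longest-prefix matching over the full 128 bits, with no special cutoff at /64. A minimal sketch using Python's ipaddress module; the routing table below is hypothetical:]

```python
import ipaddress

def longest_match(dest, prefixes):
    """Return the most specific prefix covering dest, or None.
    Per BCP 198 / RFC 7608, any length up to /128 is a valid route."""
    addr = ipaddress.IPv6Address(dest)
    matches = [n for n in map(ipaddress.IPv6Network, prefixes) if addr in n]
    return str(max(matches, key=lambda n: n.prefixlen)) if matches else None

table = ["::/0", "2001:db8::/32", "2001:db8:1::/48", "2001:db8:1::7/128"]
print(longest_match("2001:db8:1::7", table))  # 2001:db8:1::7/128 (host route)
print(longest_match("2001:db8:1::8", table))  # 2001:db8:1::/48
```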
>> >>>> 3) the project resulted to choose SIP, which has 8B >>>> address, as the primary candidate for IPng though >>>> some attempt to merge it with other proposals >> >> There were a whole bunch of proposals and attempts to >> combine ideas. A very complex story, which is why there's >> a whole book about it as well as RFC1752. >> >> It is correct that SIP started with 64 bit addresses that >> did not include an interface identifier. But the latter >> was added during the design process. >> >>>> (though such mergers usually result in worse results >>>> than originals). >>>> >>>> 4) then, all of a sudden, a closed committed of >>>> IPng directorates >> >> Yes, following an inconclusive BOF at IETF 27, the IESG >> decided to convene an ad hoc Directorate. But all the drafts >> we considered were public, as far as I know, although the >> I-D mechanism was clunky in those days and not every draft >> has survived. Scott was too modest above to mention his >> excellent archive: https://www.sobco.com/ipng/archive/ >> >>>> decided that address should be >>>> 16B long to revive an abandoned, with reasons, >>>> address structure of XNS, which is not a >>>> democratic process. >> >> The idea of adding an explicit interface identifier >> (originally 48 bits, soon expanded to 64 bits) was not >> a clone of XNS, Netware, or DECnet IV. It was actually >> an architectural innovation, more so than we realised >> at the time. We could perhaps have gone further by >> making the locator/identifier split even stronger, but >> we didn't. > > And a good thing too. The so-called locator/identifier distinction is a false distinction. See Saltzer's definition of 'resolve' a name in his 1977 paper, i.e., to locate an object in a given context given its name. You can't do one without the other. Well, you'd better have that argument with the LISP people. RFC9299 through 9306 just came out a few hours ago. 
Brian > > Generally, the graphs of our networks don't follow a nice regular pattern like Midwest cities (a grid), but they do exhibit levels of clustering down to some granularity. Interpretation of the address is much like interpreting an address on a letter. World subset to Country subset to State/Province subset to City, then it shifts to linear search for street and linear search for number. For networks, it is similar: Subsetting works up to a point (CIDR), then it shifts to exhaustive search 'locally'. In some cases for large networks, subsetting may continue 'locally.' > > There are basically 4 kinds of semantics for these identifiers: > 1) local identifier if it is simply point-to-point > 2) recognition, if it is a multi-access media, e.g. wireless or original Ethernet. > 3) forwarding-id,* which may be flat for networks small enough that the tables are tractable, and > 4) true addresses, which are assigned to be location-dependent and route-independent. IOW, inspection of two addresses can determine if they are 'near' each other for some definition of 'near.' > > * the traditional routing algorithms, e.g., link-state, distance vector, etc., do not use true addresses. Forwarding-ids are merely used to keep track of what nodes in the graph are being referred to in creating the solution. Encoding a 'nearness' property in the forwarding-id is not used. (which is why they are forwarding-ids and not true addresses). > >> >>>> >>>> 5) we, including me, was not aware that 16B address >>>> is so painful to operate, partly because I hoped >>>> most initial bit can be all zero. But... >> >> I can't interpret that statement. There was certainly no >> intention that the high order bits would be "all zero". >> We did not design IPv6 as IPv4 with bigger addresses. >> >>>> >>>> That is the recently recognized history of IPv6 and most, if >>>> not all. of my points in it can be confirmed by the link for >>>> a mail from Bill Simpson. 
>> >> It's true that Ohta-san and Bill Simpson were dissenters. >> So, in a sense, were the proponents of TUBA and CATNIP. >> There could only be one consensus, so some ideas were >> inevitably rejected. >> >>>> >>>> It should also be noted that unnecessarily lengthy address >>>> of IPv6 may be motivated to revive CLNP addressing against >>>> the democratic process. See rfc1888 for such a proposal. >> >> That's absurd. I can tell you the exact reason we did RFC1888. >> I drafted the guts of it sitting on a park bench on University >> Avenue in Toronto shortly after the IPv6 proposal was announced in >> plenary at IETF 30. This was 1994, when OSI was still very much >> alive politically (although not much in reality) and we needed >> to avoid a political row with various government funding agencies. >> US GOSIP was still a (theoretical) requirement for many agencies. >> As the RFC says: >> >> This recommendation is addressed to network implementors who have >> already planned or deployed an OSI NSAP addressing plan for the usage >> of OSI CLNP [IS8473] according to the OSI network layer addressing >> plan [IS8348] using ES-IS and IS-IS routing [IS9542, IS10589]. It >> recommends how they should adapt their addressing plan for use with IPv6 [RFC1883]. > > Sort of. IS8348 is the Network Layer Service Definition. It specifies guidelines for addresses and how different addressing plans are to be accommodated, but it is hardly an addressing plan itself. It does deftly cover up the error in the OSI Model forced on it by the ITU of exposing the address at the layer boundary, e.g., that an NSAP address and a Network-Entity-Title may be identified by the same string. Some detailed addressing plans were created for it. However, for the most part it was not yet recognized that the rules noted above, i.e., location-dependence (relative to the graph of the layer) and route-independence were not properties of some of these proposals. 
The addressing plan needs to be an abstraction of the graph of the layer. As we noted above, the upper levels of the hierarchy can be defined overall, but at some point subsequent levels become a regional or local matter. > > OSI did have the advantage, documented in IS8648, of what was essentially the network layer address (called Subnet Access, technology-specific) and an internet address (called Subnet Independent Convergence, which is technology-independent and was what CLNP carried).* It is interesting to note that this was the structure that INWG had adopted in 1975 and was independently arrived at by the group working out IS8648, almost 10 years later. IOW, that an internet model is a common overlay over the specific network technologies, rather than protocol conversion at the boundaries. Unfortunately, the Internet had lost the Internet Layer by the early 80s. > > * In the Saltzer paper of 1982, these are called point of attachment addresses and node addresses. > > As for IPv6, its current state of confusion speaks for itself. But that's okay, even if they had done it right, it still wouldn't be enough. > > Take care, > John Day > >> >> That's all, and it was an Experimental RFC, obsoleted in 2005, >> by which time its political purpose had gone away. >> >> Regards, >> Brian >> >>>> >>>> Masataka Ohta >>>> >>> . 
>> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From bob.hinden at gmail.com Fri Oct 21 09:26:40 2022 From: bob.hinden at gmail.com (Bob Hinden) Date: Fri, 21 Oct 2022 09:26:40 -0700 Subject: [ih] IPng history [was: Notification to list from IETF Moderators team] In-Reply-To: <56ddf70b-009d-7a16-c1d6-eeac904271cb@gmail.com> References: <4545ee83-1c56-4633-05f0-0576ac297884@ietf.org> <9fff11cc-d48c-8ad6-05b9-9f3709edf0b6@necom830.hpcl.titech.ac.jp> <89096BA9-FF0F-4501-9104-2868616375A8@sobco.com> <56ddf70b-009d-7a16-c1d6-eeac904271cb@gmail.com> Message-ID: > > There were a whole bunch of proposals and attempts to > combine ideas. A very complex story, which is why there's > a whole book about it as well as RFC1752. > > It is correct that SIP started with 64 bit addresses that > did not include an interface identifier. But the latter > was added during the design process. SIP as submitted to the IETF IP next generation process was documented in RFC8507 https://www.rfc-editor.org/rfc/rfc8507.html Bob From touch at strayalpha.com Sat Oct 29 10:37:39 2022 From: touch at strayalpha.com (touch at strayalpha.com) Date: Sat, 29 Oct 2022 10:37:39 -0700 Subject: [ih] Test - please ignore Message-ID: Test post.